CN111123948A - Vehicle multidimensional perception fusion control method and system and automobile


Info

Publication number: CN111123948A (application CN201911406073.XA)
Granted version: CN111123948B
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 王志刚, 潘定海, 原诚寅
Assignee: Beijing New Energy Vehicle Technology Innovation Center Co., Ltd.
Priority/filing date: 2019-12-31
Publication date: 2020-05-08 (CN111123948A); grant date: 2023-04-28 (CN111123948B)
Legal status: active (granted)

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 ... with means for defining a desired trajectory
    • G05D1/0223 ... involving speed control of the vehicle
    • G05D1/0214 ... in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0276 ... using signals provided by a source external to the vehicle
    • G05D1/0278 ... using satellite positioning signals, e.g. GPS

Abstract

A vehicle multidimensional perception fusion control method, a system, and an automobile are disclosed. The method may comprise: determining near, medium, and far scenes of the vehicle according to path distance; collecting real-time data and planning data while the vehicle is driving; determining a plurality of virtual path control points and then the scene of each virtual path control point; and determining the driving speed and direction at each virtual path control point for the different scenes according to the real-time data and the planning data. The invention strengthens the perception range of assisted and automated driving through multiple dimensions such as navigation, single-vehicle intelligence, vehicle-road cooperation, and a big-data cloud platform; realizes blind-spot-free perception and prediction of future motion trends; and optimizes assisted- and automated-driving decisions. By dividing the path into scenes, progressively refined prediction of far, medium, and near scenes is achieved, reducing computational load while preserving prediction accuracy and improving functionality, safety, and comfort.

Description

Vehicle multidimensional perception fusion control method and system and automobile
Technical Field
The invention relates to the field of vehicle control, in particular to a vehicle multidimensional perception fusion control method and system and an automobile.
Background
At present, the application and verification scenarios of automated-driving vehicles are increasingly extensive: L2 and L3 vehicles have entered large-scale mass production, while L4 vehicles have begun special-scenario testing and trial-operation verification. Collision accidents still occur from time to time because of the perception limitations of single-vehicle intelligence in assisted/automated driving. How to break through single-vehicle perception limits in the complex traffic scenarios characteristic of China, and thereby improve the safety of assisted/automated driving, is increasingly important.
At the current stage, automated and assisted driving mostly rely on fusing single-vehicle perception sensors; however, such systems cannot recognize obstacles across intersections or behind occluding vehicles, which leads to accidents. High-precision maps can achieve centimeter-level accurate navigation but are limited in historical-data analysis. It is therefore necessary to develop a vehicle multidimensional perception fusion control method, system, and automobile.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention provides a vehicle multidimensional perception fusion control method, a vehicle multidimensional perception fusion control system, and an automobile, which strengthen the perception range of assisted and automated driving through multiple dimensions such as navigation, single-vehicle intelligence, vehicle-road cooperation, and a big-data cloud platform; realize blind-spot-free perception and prediction of future motion trends; and optimize assisted- and automated-driving decisions. By dividing the path into scenes, progressively refined prediction of far, medium, and near scenes is achieved, reducing computational load while preserving prediction accuracy and improving functionality, safety, and comfort.
According to one aspect of the invention, a vehicle multidimensional perception fusion control method is provided. The method may include: determining a near scene, a medium scene and a far scene of the vehicle according to the path distance; collecting real-time data and planning data when a vehicle runs; determining a plurality of virtual path control points, and further determining a scene of each virtual path control point; and determining the driving speed and direction of each virtual path control point according to the real-time data and the planning data and aiming at different scenes.
Preferably, the scene of a virtual path control point changes as the distance from the vehicle's actual position to that control point changes.
Preferably, the real-time data comprise: GPS data, single-vehicle sensor data, V2X data, and real-time traffic big data.
Preferably, the planning data comprises: TMC data, traffic history data, driving habit data.
Preferably, determining the driving speed and direction of each virtual path control point for different scenes according to the real-time data and the planning data comprises: determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, and the real-time traffic big data respectively, and then calculating the driving speed and direction for the near scene; determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, the real-time traffic big data, and the TMC data respectively, and then calculating the driving speed and direction for the medium scene; and determining corresponding driving speeds and directions from the TMC data, the traffic history data, and the driving habit data respectively, and then calculating the driving speed and direction for the far scene.
Preferably, the travel speed is calculated by formula (1):

v = \sum_{j=1}^{M} \beta_j Y_j        (1)

where v is the travel speed, M is the number of data sources participating in the calculation, Y_j is the travel speed determined from the j-th data source, and β_j is the weighting coefficient corresponding to the j-th data source.
Preferably, calculating the scene direction comprises: calculating the direction angle by formula (2):

\theta = \arccos\left( \sum_{i=1}^{N} \alpha_i X_i \right)        (2)

where θ is the direction angle, N is the number of data sources participating in the calculation, X_i is the cosine of the direction determined from the i-th data source, and α_i is the weighting coefficient corresponding to the i-th data source; the candidate direction having the smallest included angle with the direction angle is taken as the scene direction.
Preferably, the method further comprises: constructing a threat-coefficient table relating vehicle speed and the distance to the vehicle ahead, and determining the threat coefficient in real time; and controlling the vehicle's travel speed according to the threat coefficient.
In a second aspect, an embodiment of the present disclosure further provides a vehicle multidimensional perception fusion control system, the system comprising:
a memory storing executable instructions;
a processor executing the executable instructions in the memory to implement the vehicle multi-dimensional perception fusion control method.
In a third aspect, an embodiment of the present disclosure further provides an automobile, including a vehicle multidimensional perception fusion control system.
The beneficial effects are that:
(1) multidimensional fusion of perception information overcomes the limits of single-vehicle perception; combining vehicle-road cooperation with V2X information eliminates blind spots, helping to avoid traffic accidents;
(2) cloud traffic big data are combined with vehicle-monitoring big data, so that vehicle perception decisions are optimized in real time according to component operating conditions and overall traffic conditions, and predictive analysis of the combined big data is used to anticipate the motion situation ahead for comprehensive optimization;
(3) analysis of personal driving-habit big data allows the user's frequent places, routes, and driving habits (acceleration, deceleration, turning, etc.) to be customized, synchronizing automated driving with the user's own driving feel;
(4) by dividing the path into scenes, progressively refined prediction of far, medium, and near scenes is realized, preserving prediction accuracy while reducing computational load.
The method and apparatus of the present invention have other features and advantages which will be apparent from or are set forth in detail in the accompanying drawings and the following detailed description, which are incorporated herein, and which together serve to explain certain principles of the invention.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts.
Fig. 1 shows a flow chart of the steps of a vehicle multi-dimensional perception fusion control method according to the invention.
Detailed Description
The invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flow chart of the steps of a vehicle multi-dimensional perception fusion control method according to the invention.
In this embodiment, the vehicle multidimensional perception fusion control method according to the present invention may include: step 101, determining near, medium, and far scenes of vehicle driving according to the path distance; step 102, collecting real-time data and planning data while the vehicle is driving; step 103, determining a plurality of virtual path control points and then the scene of each virtual path control point; and step 104, determining the driving speed and direction of each virtual path control point for the different scenes according to the real-time data and the planning data.
In one example, the scene of a virtual path control point changes as the distance from the vehicle's actual position to that control point changes.
In one example, the real-time data include: GPS data, single-vehicle sensor data, V2X data, and real-time traffic big data.
In one example, the planning data includes: TMC data, traffic history data, driving habit data.
In one example, determining the travel speed and direction of each virtual path control point for different scenes from the real-time data and the planning data comprises: determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, and the real-time traffic big data respectively, and then calculating the driving speed and direction for the near scene; determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, the real-time traffic big data, and the TMC data respectively, and then calculating the driving speed and direction for the medium scene; and determining corresponding driving speeds and directions from the TMC data, the traffic history data, and the driving habit data respectively, and then calculating the driving speed and direction for the far scene.
In one example, the travel speed is calculated by formula (1):

v = \sum_{j=1}^{M} \beta_j Y_j        (1)

where v is the travel speed, M is the number of data sources participating in the calculation, Y_j is the travel speed determined from the j-th data source, and β_j is the weighting coefficient corresponding to the j-th data source.
In one example, calculating the scene direction includes: calculating the direction angle by formula (2):

\theta = \arccos\left( \sum_{i=1}^{N} \alpha_i X_i \right)        (2)

where θ is the direction angle, N is the number of data sources participating in the calculation, X_i is the cosine of the direction determined from the i-th data source, and α_i is the weighting coefficient corresponding to the i-th data source; the candidate direction having the smallest included angle with the direction angle is taken as the scene direction.
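To make the fused computation concrete, the following is a minimal Python sketch of formulas (1) and (2) as given above. The function names, the degree convention, and the assumption that the weights sum to 1 are ours, not the patent's.

```python
import math

def fused_speed(speeds, weights):
    # Formula (1): v = sum_j beta_j * Y_j, assuming the weights beta_j sum to 1.
    return sum(b * y for b, y in zip(weights, speeds))

def fused_direction_angle(direction_cosines, weights):
    # Formula (2): theta = arccos(sum_i alpha_i * X_i), where X_i is the
    # direction cosine from the i-th source and alpha_i its weight.
    s = sum(a * x for a, x in zip(weights, direction_cosines))
    s = max(-1.0, min(1.0, s))  # clamp against floating-point drift before arccos
    return math.degrees(math.acos(s))

def pick_scene_direction(direction_angle, candidate_angles):
    # The candidate direction with the smallest included angle to the
    # computed direction angle is taken as the scene direction.
    return min(candidate_angles, key=lambda c: abs(c - direction_angle))
```

The clamp before arccos is a practical safeguard: with weights that sum to 1 the weighted cosine stays in [-1, 1] mathematically, but rounding can push it just outside.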
In one example, the method further comprises: constructing a threat-coefficient table relating vehicle speed and the distance to the vehicle ahead, and determining the threat coefficient in real time; and controlling the vehicle's travel speed according to the threat coefficient.
Specifically, the vehicle multidimensional perception fusion control method according to the invention can comprise the following steps:
according to the path distance, a near scene, a middle scene and a far scene of the vehicle are determined, wherein the near scene is 0-200m, the middle scene is 200-500m, and the far scene is more than 500m, and a person skilled in the art can set a scene range according to specific conditions.
Real-time data and planning data are collected while the vehicle is driving. The real-time data include:
GPS data, which determine the real-time position of the vehicle and support driving-behavior planning at scales from centimeter level up to about 10 m;
single-vehicle sensor data, which combine the ego vehicle's on-board sensors, extending single-sensor information to fused analysis across all perception sensors, covering automated-driving levels L2-L5 and sensing the surrounding environment within about 100 m;
V2X data, which realize efficient V2X communication between vehicles, between vehicles and traffic infrastructure, and between vehicles and other road users, targeting perception of the surrounding environment within about 500 m;
the single-vehicle sensor data can be fused with the V2X data to output the type (vehicle, person, obstacle, etc.), distance, speed, and direction of each sensed target (a data-structure sketch follows this list);
pre-judging the movement trend for traffic real-time big data and traffic real-time data in a universe range, including intersection flow, traffic facility signals and the like;
the planning data includes:
TMC data, predicting congestion influence according to the motion situation of a travel path in the future 5 minutes including smooth traffic, slow traffic, congestion and the like by real-time dynamic traffic sensing, and sensing the information of a road in front of 5000 m;
traffic historical data, future congestion prediction based on the historical data in the universe range, and prejudging for the movement trend;
driving habit data, personal driving common data sites, routes, driving habits and the like, specifically comprising rapid acceleration, rapid deceleration and rapid turning; the shortest distance, the fastest speed, the minimum charging priority and the like can enable automatic driving and individual driving to be efficiently integrated, and private customized driving experience is achieved.
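The fused sensor/V2X output noted in the list above carries a target's type, distance, speed, and direction; a hypothetical container for one such output might look as follows (the class and field names are ours):

```python
from dataclasses import dataclass

@dataclass
class PerceivedTarget:
    # One fused single-vehicle-sensor / V2X perception output.
    target_type: str      # e.g. "vehicle", "person", "obstacle"
    distance_m: float     # distance to the target
    speed_mps: float      # target speed
    direction_deg: float  # target heading
```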
A plurality of virtual path control points is determined, and then the scene of each control point: the scene is determined from the distance between the vehicle's actual position and the control point, so a control point's scene changes as the vehicle advances. The number of virtual path control points directly affects the prediction result: too few points make the prediction insufficiently accurate, while too many make the computation excessive. A person skilled in the art can set the number of virtual path control points according to the specific situation.
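A sketch of this re-classification, reusing classify_scene from the earlier sketch; representing control points as cumulative path distances is our assumption:

```python
def update_control_point_scenes(ego_path_position_m, control_point_positions_m):
    # As the vehicle advances along the path, the remaining distance to each
    # virtual path control point shrinks, so a point classified as a far
    # scene gradually becomes a medium and then a near scene.
    return [classify_scene(cp - ego_path_position_m)
            for cp in control_point_positions_m]
```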
The driving speed and direction of each virtual path control point are determined for the different scenes according to the real-time data and the planning data:
corresponding driving speeds and directions are determined from the single-vehicle sensor data, the V2X data, and the real-time traffic big data respectively; the driving speed of the near scene is then calculated by formula (1) and the direction angle by formula (2), and the direction with the smallest included angle to the direction angle is taken as the direction of the near scene, where the weight of the single-vehicle sensor data > the weight of the V2X data > the weight of the real-time traffic big data;
corresponding driving speeds and directions are determined from the single-vehicle sensor data, the V2X data, the real-time traffic big data, and the TMC data respectively; the driving speed of the medium scene is then calculated by formula (1) and the direction angle by formula (2), and the direction with the smallest included angle to the direction angle is taken as the direction of the medium scene, where the weight of the V2X data > the weight of the single-vehicle sensor data > the weight of the real-time traffic big data > the weight of the TMC data;
corresponding driving speeds and directions are determined from the TMC data, the traffic history data, and the driving habit data respectively; the driving speed of the far scene is then calculated by formula (1) and the direction angle by formula (2), and the direction with the smallest included angle to the direction angle is taken as the direction of the far scene, where the weight of the TMC data > the weight of the traffic history data > the weight of the driving habit data.
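The three weight orderings above can be captured in a small configuration. The numeric values below are illustrative assumptions chosen only to respect the stated inequalities, and speed_for_scene applies formula (1) with them, reusing fused_speed from the earlier sketch; it assumes every source in the scene's weight set reports a speed.

```python
SCENE_WEIGHTS = {
    # near: single-vehicle sensor > V2X > real-time traffic big data
    "near":   {"sensor": 0.5, "v2x": 0.3, "traffic_rt": 0.2},
    # medium: V2X > single-vehicle sensor > real-time traffic > TMC
    "medium": {"v2x": 0.4, "sensor": 0.3, "traffic_rt": 0.2, "tmc": 0.1},
    # far: TMC > traffic history > driving habits
    "far":    {"tmc": 0.5, "traffic_hist": 0.3, "habits": 0.2},
}

def speed_for_scene(scene, per_source_speeds):
    # per_source_speeds maps a source key to the travel speed determined
    # from that source; formula (1) is applied with the scene's weights.
    w = SCENE_WEIGHTS[scene]
    return fused_speed([per_source_speeds[k] for k in w],
                       [w[k] for k in w])
```

For example, speed_for_scene("near", {"sensor": 13.9, "v2x": 14.2, "traffic_rt": 12.5}) returns the weighted near-scene speed of 13.71 m/s.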
the GPS data includes longitude, latitude, real-time speed and direction of the location of the vehicle, real-time adjustment data that can be a result of fusing the data.
The method also comprises monitoring the state of vehicle components in real time; if a component's state reaches a warning level, for example a damaged part or unresponsive brakes, the driver is warned and the vehicle is stopped nearby.
The method also comprises constructing a threat-coefficient table relating vehicle speed and the distance to the vehicle ahead, as shown in Table 1, and determining the threat coefficient in real time; the vehicle's travel speed is then controlled according to the threat coefficient: if the threat coefficient is high, the speed is reduced by 50%; if it is medium, the speed is reduced by 25%; and if it is low, the current speed is kept unchanged.
TABLE 1
(threat coefficient as a function of vehicle speed and distance to the vehicle ahead; the table image from the original publication is not reproduced here)
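Since the table image is not reproduced, the following sketch encodes only the speed-control policy described above, reading the translation as reductions by 50% and by 25% (an assumption); looking up the threat level itself would use Table 1.

```python
def adjust_speed(current_speed_mps: float, threat_level: str) -> float:
    # High threat: reduce speed by 50%; medium threat: by 25%; low: keep speed.
    factors = {"high": 0.50, "medium": 0.75, "low": 1.0}
    return current_speed_mps * factors[threat_level]
```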
In this way, the method strengthens the perception range of assisted and automated driving through multiple dimensions such as navigation, single-vehicle intelligence, vehicle-road cooperation, and a big-data cloud platform; realizes blind-spot-free perception and prediction of future motion trends; optimizes assisted- and automated-driving decisions; and, by dividing the path into scenes, achieves progressively refined prediction of far, medium, and near scenes, reducing computational load while preserving prediction accuracy and improving functionality, safety, and comfort.
According to an embodiment of the invention, a vehicle multidimensional perception fusion control system is provided, which comprises a memory and a processor.
The memory is configured to store non-transitory computer-readable instructions. In particular, the memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, etc.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions. In one embodiment of the disclosure, the processor is configured to execute the computer readable instructions stored in the memory.
Those skilled in the art will understand that, to provide a good user experience, the present embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures are likewise within the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The invention also provides an automobile comprising the vehicle multidimensional perception fusion control system.
Specifically, an automobile adopting the vehicle multidimensional perception fusion control system can strengthen the perception range of assisted and automated driving through multiple dimensions such as navigation, single-vehicle intelligence, vehicle-road cooperation, and a big-data cloud platform; realize blind-spot-free perception and prediction of future motion trends; optimize assisted- and automated-driving decisions; and, by dividing the path into scenes, achieve progressively refined prediction of far, medium, and near scenes, reducing computational load while preserving prediction accuracy and improving functionality, safety, and comfort.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Claims (10)

1. A vehicle multidimensional perception fusion control method is characterized by comprising the following steps:
determining a near scene, a medium scene and a far scene of the vehicle according to the path distance;
collecting real-time data and planning data when a vehicle runs;
determining a plurality of virtual path control points, and further determining a scene of each virtual path control point;
and determining the driving speed and direction of each virtual path control point according to the real-time data and the planning data and aiming at different scenes.
2. The vehicle multidimensional perception fusion control method according to claim 1, wherein a scene of the virtual path control point is changed according to a change in a distance between an actual position where the vehicle travels and the virtual path control point.
3. The vehicle multidimensional perception fusion control method according to claim 1, wherein the real-time data comprise: GPS data, single-vehicle sensor data, V2X data, and real-time traffic big data.
4. The vehicle multidimensional perception fusion control method according to claim 3, wherein the planning data includes: TMC data, traffic history data, driving habit data.
5. The vehicle multidimensional perception fusion control method according to claim 4, wherein determining the driving speed and direction of each virtual path control point for different scenes according to the real-time data and the planning data comprises:
determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, and the real-time traffic big data respectively, and then calculating the driving speed and direction of the near scene;
determining corresponding driving speeds and directions from the single-vehicle sensor data, the V2X data, the real-time traffic big data, and the TMC data respectively, and then calculating the driving speed and direction of the medium scene;
and determining corresponding driving speeds and directions from the TMC data, the traffic history data, and the driving habit data respectively, and then calculating the driving speed and direction of the far scene.
6. The vehicle multidimensional perception fusion control method according to claim 5, wherein the travel speed is calculated by formula (1):

v = \sum_{j=1}^{M} \beta_j Y_j        (1)

where v is the travel speed, M is the number of data sources participating in the calculation, Y_j is the travel speed determined from the j-th data source, and β_j is the weighting coefficient corresponding to the j-th data source.
7. The vehicle multidimensional perception fusion control method according to claim 5, wherein calculating the scene direction comprises:
calculating the direction angle by formula (2):

\theta = \arccos\left( \sum_{i=1}^{N} \alpha_i X_i \right)        (2)

where θ is the direction angle, N is the number of data sources participating in the calculation, X_i is the cosine of the direction determined from the i-th data source, and α_i is the weighting coefficient corresponding to the i-th data source;
and taking the direction having the smallest included angle with the direction angle as the scene direction.
8. The vehicle multidimensional perception fusion control method according to claim 1, further comprising:
constructing a threat-coefficient table relating vehicle speed and the distance to the vehicle ahead, and determining a threat coefficient in real time;
and controlling the running speed of the vehicle according to the threat coefficient.
9. A vehicle multi-dimensional perception fusion control system, the system comprising:
a memory storing executable instructions;
a processor executing the executable instructions in the memory to implement the vehicle multi-dimensional perception fusion control method of any of claims 1-8.
10. An automobile characterized by comprising the vehicle multi-dimensional perception fusion control system according to claim 9.