CN115311880B - Distributed visual field enhancement method for low-speed automatic driving vehicle - Google Patents
Info
- Publication number
- CN115311880B, CN202210929214.1A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- automatic driving
- vehicles
- sentry
- distributed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2209/00—Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
- H04L2209/84—Vehicles
Abstract
The invention discloses a distributed field-of-view enhancement method for low-speed autonomous vehicles, comprising the following steps: an autonomous vehicle discovers a complex scene and reports the complex-scene information to a cloud autonomous-driving management system; the cloud autonomous-driving management system calculates priorities and assignable sentry vehicles from the reported complex-scene information and dispatches sentry vehicles according to priority; after a vehicle executing a sentry task reaches the designated spot, it starts only its sensor-related modules, shuts down other irrelevant on-board computation, communicates with the cloud autonomous-driving management system, creates an identity-verification key and a communication protocol, and enters sentry mode; when other normally operating autonomous vehicles arrive at the complex scene, they choose whether to use the distributed field-of-view enhancement service provided by the sentry vehicles; after sentry mode is released, the sentry vehicles return. The invention can optimize the autonomous-driving effect without adding hardware or deployment cost.
Description
Technical Field
The invention relates to the technical field of autonomous driving, and in particular to a distributed field-of-view enhancement method for low-speed autonomous vehicles.
Background
Autonomous-driving technology has developed rapidly in recent years. Its most representative form is single-vehicle intelligence: in plain terms, various sensor devices are mounted on the autonomous vehicle, the on-board computing unit perceives surrounding obstacles and road conditions, and the same unit computes a feasible decision and planning result, thereby achieving autonomous driving.
In addition, the industry has begun building fixed V2X device modules along demonstration roads to compensate for the limited on-board hardware capability of a single vehicle. At present, autonomous-driving companies are all trying to resolve the contradiction between sensor and computing bottlenecks on the one hand and scene complexity on the other, either by developing higher-performance chips to raise single-vehicle capability or by adding fixed V2X equipment. Along the same line, the invention provides a distributed field-of-view enhancement system to resolve the contradiction between the limited sensor capability of a single autonomous vehicle and the complexity of the scene.
Existing technical solutions essentially follow two ideas: the first meets the perception requirements of complex scenes by continuously increasing the number and capability of the on-board sensors of a single vehicle; the second builds fixed V2X sensor equipment to help the single-vehicle autonomous-driving system perceive the surrounding scene and complement its sensor capability.
The complexity of road conditions is a long-standing problem for autonomous driving: while the vehicle drives, the scene changes continuously, and the demands on sensor capability change with it. In terms of cost, battery and endurance, the sensor suite of a single vehicle can hardly cover every complex scene.
In the prior art, under the first solution (single-vehicle chips and sensors), improving sensor capability after the vehicle leaves the factory requires upgrading or replacing hardware and software, so maintenance costs are high. The second solution, complementing sensor capability with V2X technology, can currently only be installed at fixed locations: installation cost is high, coverage is fixed, the equipment is usually mounted high so as not to obstruct normal traffic, and maintenance is also expensive.
Disclosure of Invention
The invention aims to provide a distributed field-of-view enhancement method for low-speed autonomous vehicles, addressing the problems in the prior art that the sensor capability of a single autonomous vehicle is insufficient for complex scenes and that existing solutions increase deployment and maintenance costs.
To achieve the above purpose, the invention is realized by the following technical solution: a distributed field-of-view enhancement method for a low-speed autonomous vehicle, comprising the following steps:
S1, discovering a complex scene: every normally operating autonomous vehicle has the ability to discover complex scenes; if, in a specific environment, an autonomous vehicle cannot pass or passes only with difficulty, or the cloud autonomous-driving management system judges that a richer sensor field of view is needed, the situation is treated as a complex scene requiring external assistance;
S2, reporting the complex scene: the autonomous vehicle that discovers the complex scene actively reports the complex-scene information to the cloud autonomous-driving management system;
S3, sentry assignment: the cloud autonomous-driving management system calculates priorities and assignable sentry vehicles from the reported complex-scene information and dispatches sentry vehicles according to priority;
S4, sentry mode: after a vehicle executing the sentry task reaches the designated spot, it starts only its sensor-related modules, shuts down other irrelevant on-board computation, communicates with the cloud autonomous-driving management system, creates an identity-verification key and a communication protocol, and enters sentry mode; several sentry vehicles together with the cloud autonomous-driving management system form a distributed field-of-view enhancement network;
S5, when another normally operating autonomous vehicle comes within a specified distance of the distributed field-of-view enhancement network and its task route passes through the service area of a sentry vehicle, the cloud autonomous-driving management system exchanges identity keys and protocols with that vehicle; the vehicle then chooses, according to the actual situation of the current scene, whether to use the distributed field-of-view enhancement service provided by the sentry vehicles; if the service is used, a secure vehicle-to-vehicle connection is established through identity-key verification and the field of view is enhanced;
S6, after the complex scene changes and all autonomous vehicles passing through the sentry service area judge that the shared-sensor service is no longer needed, a release suggestion is sent to the cloud autonomous-driving management system; the cloud autonomous-driving management system releases sentry mode according to the actual situation and the sentry vehicles return.
As a further improvement of the above solution, in step S2 the complex-scene information includes the position, time, road condition and type of the complex scene.
As a further improvement of the above solution, the complex scene includes any one of an intersection without traffic lights, an oversized intersection, a road with mixed motor and non-motor traffic, an irregularly constructed road, a temporarily controlled road, or a stop point.
As a further improvement of the above solution, in step S3, when sentry vehicles are assigned according to priority, the priority is set according to the distance between each autonomous vehicle and the complex scene, each vehicle's current task situation and its remaining battery, and the sentry vehicles that will execute the sentry task are determined accordingly.
As a further improvement of the above solution, in step S5 the distributed field-of-view enhancement network can serve several autonomous vehicles passing through the complex scene at the same time; these vehicles access the network simultaneously, and the sentry vehicles notify all of them of the field-of-view enhancement by broadcast.
As a further improvement of the above solution, in step S5 the sentry vehicles establish the sentry distributed field-of-view network through peer-to-peer communication between autonomous vehicles and notify the cloud autonomous-driving management system of the network's access method; according to the complexity of the road scene, the cloud autonomous-driving management system informs passing autonomous vehicles of the access information of the sentry distributed field-of-view enhancement network, and after receiving the information about the network of sentry vehicles on its path, a passing vehicle selectively accesses the network according to the actual scene; a vehicle that has joined the network can selectively subscribe to the field-of-view information of sentries at different positions in the network according to its own route; according to the actual needs of the connected vehicles, the sentry vehicles in the network actively transmit their current peripheral field-of-view information to the connected autonomous vehicles, providing richer perception and prediction results for the other passing autonomous vehicles and safeguarding their driving safety.
As a further improvement of the above solution, if the complex scene is an intersection, one sentry vehicle is assigned to each of the four corners of the intersection; a passing vehicle selects the sentry vehicles corresponding to its own route and receives the subscribed sentry information to enhance its field of view.
As a further improvement of the above solution, the field of view of the sentry vehicle is provided by hardware or software capable of perceiving the surroundings, including but not limited to lidar, cameras, GPS, IMU, millimeter-wave radar, ultrasonic radar or contact bumper strips.
As a further improvement of the above solution, field-of-view enhancement is achieved over a secure vehicle-to-vehicle link, with all data and control instructions transmitted through an encryption protocol.
The invention has the following positive effects: the distributed field-of-view enhancement method for low-speed autonomous vehicles requires no hardware modification or fixed deployment, which reduces modification, deployment and maintenance costs, while enlarging the field of view of existing autonomous vehicles and providing richer road-environment information, thereby optimizing the autonomous-driving effect in complex scenes. Compared with solutions based on V2X technology, the method is not limited to fixed areas and has a wider range of application.
Drawings
Fig. 1 is a schematic diagram of the operation of the distributed field-of-view enhancement method for low-speed autonomous vehicles according to the invention; the numbers in the figure indicate the execution order of the method.
Detailed Description
The technical solutions of the invention will be described clearly and completely through embodiments; obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
A distributed field-of-view enhancement method for a low-speed autonomous vehicle comprises the following steps:
S1, discovering a complex scene: every normally operating autonomous vehicle has the ability to discover complex scenes; if, in a specific environment, an autonomous vehicle cannot pass or passes only with difficulty, or the cloud autonomous-driving management system judges that a richer sensor field of view is needed, the situation is treated as a complex scene requiring external assistance. Typically, complex scenes include intersections without traffic lights, oversized intersections, roads with mixed motor and non-motor traffic, irregularly constructed roads, temporarily controlled roads, or stop points.
S2, reporting the complex scene: the autonomous vehicle that discovers the complex scene actively reports the complex-scene information to the cloud autonomous-driving management system; the reported information includes the position, time, road condition and type of the complex scene.
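As an illustration only, the following minimal sketch shows one possible shape of this report. The field names, the SceneType values and the cloud_endpoint.submit() call are assumptions; the patent only specifies that position, time, road condition and type are reported.

```python
from dataclasses import dataclass, field
from enum import Enum
import time


class SceneType(Enum):
    UNSIGNALIZED_INTERSECTION = "intersection_without_traffic_lights"
    OVERSIZED_INTERSECTION = "oversized_intersection"
    MIXED_TRAFFIC_ROAD = "motor_and_non_motor_mixed_road"
    CONSTRUCTION_ROAD = "irregularly_constructed_road"
    TEMPORARY_CONTROL = "temporarily_controlled_road"
    STOP_POINT = "stop_point"


@dataclass
class ComplexSceneReport:
    reporter_id: str                    # autonomous vehicle that discovered the scene
    position: tuple                     # (latitude, longitude) of the complex scene
    scene_type: SceneType               # one of the scene types listed in the embodiment
    road_condition: str                 # coded or free-text road-condition summary
    timestamp: float = field(default_factory=time.time)


def report_complex_scene(report: ComplexSceneReport, cloud_endpoint) -> None:
    """Actively push the report to the cloud autonomous-driving management system."""
    cloud_endpoint.submit(report)       # transport (HTTP, MQTT, ...) is not specified by the patent
```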
S3, sentry assignment: the cloud autonomous-driving management system calculates priorities and assignable sentry vehicles from the reported complex-scene information and dispatches sentry vehicles according to priority. The priority is set according to the distance between each autonomous vehicle and the complex scene, each vehicle's current task situation and its remaining battery, and the sentry vehicles that will execute the sentry task are determined accordingly. If the complex scene is an intersection, one sentry vehicle is assigned to each of the four corners of the intersection; a passing vehicle selects the sentry vehicles corresponding to its own route and receives the subscribed sentry information to enhance its field of view.
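A sketch of the sentry-assignment logic follows, assuming a simple weighted score over the three factors the patent names: distance to the complex scene, current task load, and remaining battery. The weights and the scoring formula are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VehicleState:
    vehicle_id: str
    distance_to_scene_m: float   # driving distance to the complex scene, in meters
    busy: bool                   # whether the vehicle currently has another task
    battery_pct: float           # remaining battery, 0..100


def sentry_priority(v: VehicleState) -> float:
    """Higher score = better candidate for the sentry task."""
    distance_score = 1.0 / (1.0 + v.distance_to_scene_m / 1000.0)   # closer is better
    task_score = 0.0 if v.busy else 1.0                             # idle vehicles preferred
    battery_score = v.battery_pct / 100.0                           # more charge is better
    return 0.5 * distance_score + 0.3 * task_score + 0.2 * battery_score


def assign_sentries(candidates: List[VehicleState], needed: int) -> List[VehicleState]:
    """Pick the `needed` highest-priority vehicles, e.g. 4 for an intersection."""
    ranked = sorted(candidates, key=sentry_priority, reverse=True)
    return ranked[:needed]
```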
S4, sentry mode: after a vehicle executing the sentry task reaches the designated spot, it starts only its sensor-related modules, shuts down other irrelevant on-board computation, communicates with the cloud autonomous-driving management system, creates an identity-verification key and a communication protocol, and enters sentry mode; several sentry vehicles together with the cloud autonomous-driving management system form a distributed field-of-view enhancement network. The field of view of the sentry vehicle is provided by hardware or software capable of perceiving the surroundings, including but not limited to lidar, cameras, GPS, IMU, millimeter-wave radar, ultrasonic radar or contact bumper strips.
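A minimal sketch of entering sentry mode, assuming a module registry and a register_sentry() call on the cloud side; these names and interfaces are illustrative assumptions, since the patent only states that sensor modules stay on, other computation is shut down, and an identity-verification key is created with the cloud system.

```python
import secrets


class SentryVehicle:
    def __init__(self, vehicle_id: str, modules: dict, cloud):
        self.vehicle_id = vehicle_id
        self.modules = modules            # e.g. {"perception": ..., "planning": ..., "control": ...}
        self.cloud = cloud                # handle to the cloud autonomous-driving management system

    def enter_sentry_mode(self, scene_id: str) -> bytes:
        # Keep only sensor-related modules running; shut down planning, control and
        # other computation that is irrelevant while the vehicle is posted as a sentry.
        for name, module in self.modules.items():
            if name == "perception":
                module.start()
            else:
                module.stop()

        # Create an identity-verification key and register as a node of the
        # distributed field-of-view enhancement network.
        identity_key = secrets.token_bytes(32)
        self.cloud.register_sentry(self.vehicle_id, scene_id, identity_key)
        return identity_key
```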
S5, when another normally operating autonomous vehicle comes within a specified distance of the distributed field-of-view enhancement network and its task route passes through the service area of a sentry vehicle, the cloud autonomous-driving management system exchanges identity keys and protocols with that vehicle; the vehicle then chooses, according to the actual situation of the current scene, whether to use the distributed field-of-view enhancement service provided by the sentry vehicles; if the service is used, a secure vehicle-to-vehicle connection is established through identity-key verification and the field of view is enhanced.
The sentry vehicles establish the sentry distributed field-of-view network through peer-to-peer communication between autonomous vehicles and notify the cloud autonomous-driving management system of the network's access method. Field-of-view enhancement is achieved over the secure vehicle-to-vehicle link, with all data and control instructions transmitted through an encryption protocol.
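The patent requires that all data and control instructions travel over an encrypted vehicle-to-vehicle link, but it does not name a cipher. As an assumption for illustration only, the sketch below uses symmetric encryption (Fernet from the third-party `cryptography` package) keyed from the identity-verification key created in step S4.

```python
import base64
import hashlib
import json

from cryptography.fernet import Fernet


def make_session_cipher(identity_key: bytes) -> Fernet:
    # Derive a url-safe base64-encoded 32-byte key from the identity-verification key.
    derived = hashlib.sha256(identity_key).digest()
    return Fernet(base64.urlsafe_b64encode(derived))


def send_view_frame(cipher: Fernet, obstacles: list) -> bytes:
    """Encrypt one frame of the sentry's peripheral field-of-view information."""
    payload = json.dumps({"obstacles": obstacles}).encode()
    return cipher.encrypt(payload)


def receive_view_frame(cipher: Fernet, token: bytes) -> dict:
    """Decrypt and parse a frame on the passing autonomous vehicle."""
    return json.loads(cipher.decrypt(token))
```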
According to the complexity of the road scene, the cloud autonomous-driving management system informs passing autonomous vehicles of the access information of the sentry distributed field-of-view enhancement network; after receiving the information about the network of sentry vehicles on its path, a passing vehicle selectively accesses the network according to the actual scene. A vehicle that has joined the network can selectively subscribe to the field-of-view information of sentries at different positions in the network according to its own route. According to the actual needs of the connected vehicles, the sentry vehicles in the network actively transmit their current peripheral field-of-view information to the connected autonomous vehicles, providing richer perception and prediction results for the other passing autonomous vehicles and safeguarding their driving safety. The distributed field-of-view enhancement network can serve several autonomous vehicles passing through the complex scene at the same time; these vehicles access the network simultaneously, and the sentry vehicles notify all of them of the field-of-view enhancement by broadcast.
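The subscription and broadcast behaviour can be pictured as a small publish/subscribe structure: a passing vehicle subscribes only to the sentries along its own route, and each sentry broadcasts its peripheral field of view to every subscribed vehicle. The interface below is an illustrative assumption, not the patent's implementation.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class DistributedViewNetwork:
    def __init__(self):
        # sentry_id -> callbacks of the passing vehicles subscribed to that sentry
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, sentry_id: str, on_view: Callable[[dict], None]) -> None:
        """A passing vehicle subscribes to a sentry covering its route."""
        self._subscribers[sentry_id].append(on_view)

    def broadcast(self, sentry_id: str, view_frame: dict) -> None:
        """A sentry pushes its current peripheral view to every subscribed vehicle."""
        for deliver in self._subscribers[sentry_id]:
            deliver(view_frame)


# Usage: a vehicle crossing an intersection subscribes only to the sentries on its
# side, then fuses their frames into its own perception result, e.g.
# network.subscribe("sentry_NE", my_vehicle.fuse_external_view)
```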
S6, after the complex scene changes and all autonomous vehicles passing through the sentry service area judge that the shared-sensor service is no longer needed, a release suggestion is sent to the cloud autonomous-driving management system; the cloud autonomous-driving management system releases sentry mode according to the actual situation and the sentry vehicles return.
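A minimal sketch of the release decision, assuming the cloud system simply checks that every vehicle that recently crossed the sentry service area suggested release; the concrete release policy is left open by the patent.

```python
def should_release_sentry_mode(recent_passing_vehicles: list) -> bool:
    """Release when every vehicle that passed through the sentry service area
    reports that the shared field-of-view service is no longer needed."""
    return len(recent_passing_vehicles) > 0 and all(
        v.get("suggests_release", False) for v in recent_passing_vehicles
    )
```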
In the distributed field-of-view enhancement method for low-speed autonomous vehicles, the cloud autonomous-driving management system schedules and manages autonomous vehicles and sentry vehicles in a unified way. When a complex scene is discovered, priorities are calculated automatically and the required number of sentry vehicles are dispatched to perform sentry tasks. Other normally operating vehicles passing through the complex scene can select sentry vehicles according to their own route, establish a communication connection, obtain the sentries' field of view and enhance their own, so that they pass through the complex scene smoothly; the dispatched sentry vehicles can enhance the field of view of several autonomous vehicles at once, which improves their utilization. When the state of the complex scene changes and all autonomous vehicles passing through it report to the cloud system that field-of-view enhancement is no longer needed, the sentry task can end; the cloud system then assigns the sentry vehicles new tasks, either returning them to normal operation or dispatching them to a new complex scene to start a new sentry task.
Although embodiments of the invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and alterations can be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.
Claims (9)
1. A distributed field-of-view enhancement method for a low-speed autonomous vehicle, comprising the following steps:
s1, discovering a complex scene: every normally operating autonomous vehicle has the ability to discover complex scenes; if, in a specific environment, an autonomous vehicle cannot pass or passes only with difficulty, or the cloud autonomous-driving management system judges that a richer sensor field of view is needed, the situation is treated as a complex scene requiring external assistance;
s2, reporting the complex scene: the autonomous vehicle that discovers the complex scene actively reports the complex-scene information to the cloud autonomous-driving management system;
s3, sentry assignment: the cloud autonomous-driving management system calculates priorities and assignable sentry vehicles from the reported complex-scene information and dispatches sentry vehicles according to priority;
s4, sentry mode: after a vehicle executing the sentry task reaches the designated spot, it starts only its sensor-related modules, shuts down other irrelevant on-board computation, communicates with the cloud autonomous-driving management system, creates an identity-verification key and a communication protocol, and enters sentry mode;
s5, distributed field-of-view enhancement network: when another normally operating autonomous vehicle comes within a specified distance of the network formed by the vehicles in sentry mode and its task route passes through the service area of a sentry vehicle, the cloud autonomous-driving management system exchanges identity keys and protocols with that vehicle; the vehicle then chooses, according to the actual situation of the current scene, whether to use the distributed field-of-view enhancement service provided by the sentry vehicles; if the service is used, a secure vehicle-to-vehicle connection is established through identity-key verification and the field of view is enhanced;
s6, after the complex scene changes and all autonomous vehicles passing through the sentry service area judge that the shared-sensor service is no longer needed, a release suggestion is sent to the cloud autonomous-driving management system; the cloud autonomous-driving management system releases sentry mode according to the actual situation and the sentry vehicles return.
2. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein in step S2 the complex-scene information includes the position, time, road condition and type of the complex scene.
3. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein the complex scene includes any one of an intersection without traffic lights, a road with mixed motor and non-motor traffic, an irregularly constructed road, a temporarily controlled road, or a stop point.
4. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein in step S3, when sentry vehicles are assigned according to priority, the priority is set according to the distance between each autonomous vehicle and the complex scene, each vehicle's current task situation and its remaining battery, and the sentry vehicles that will execute the sentry task are determined accordingly.
5. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein in step S5 the distributed field-of-view enhancement network can serve several autonomous vehicles passing through the complex scene at the same time; these vehicles access the network simultaneously, and the sentry vehicles notify all of them of the field-of-view enhancement by broadcast.
6. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein in step S5 the sentry vehicles establish the sentry distributed field-of-view network through peer-to-peer communication between autonomous vehicles and notify the cloud autonomous-driving management system of the network's access method; according to the complexity of the road scene, the cloud autonomous-driving management system informs passing autonomous vehicles of the access information of the sentry distributed field-of-view enhancement network, and after receiving the information about the network of sentry vehicles on its path, a passing vehicle selectively accesses the network according to the actual scene; a vehicle that has joined the network can selectively subscribe to the field-of-view information of sentries at different positions in the network according to its own route; according to the actual needs of the connected vehicles, the sentry vehicles in the network actively transmit their current peripheral field-of-view information to the connected autonomous vehicles, providing richer perception and prediction results for the other passing autonomous vehicles and safeguarding their driving safety.
7. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein, if the complex scene is an intersection, one sentry vehicle is assigned to each of the four corners of the intersection; a passing vehicle selects the sentry vehicles corresponding to its own route and receives the subscribed sentry information to enhance its field of view.
8. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein the field of view of the sentry vehicle is provided by hardware or software capable of perceiving the surroundings, including but not limited to lidar, cameras, GPS, IMU, millimeter-wave radar, ultrasonic radar or contact bumper strips.
9. The distributed field-of-view enhancement method for a low-speed autonomous vehicle of claim 1, wherein field-of-view enhancement is achieved over a secure vehicle-to-vehicle link, with all data and control instructions transmitted through an encryption protocol.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210929214.1A CN115311880B (en) | 2022-08-03 | 2022-08-03 | Distributed visual field enhancement method for low-speed automatic driving vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210929214.1A CN115311880B (en) | 2022-08-03 | 2022-08-03 | Distributed visual field enhancement method for low-speed automatic driving vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115311880A CN115311880A (en) | 2022-11-08 |
CN115311880B (en) | 2024-03-19 |
Family
ID=83859571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210929214.1A Active CN115311880B (en) | 2022-08-03 | 2022-08-03 | Distributed visual field enhancement method for low-speed automatic driving vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311880B (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101315466B1 (en) * | 2009-11-30 | 2013-10-04 | 한국전자통신연구원 | Apparatus and method for controlling vehicle based on infra sensor |
JP6372384B2 (en) * | 2015-02-09 | 2018-08-15 | 株式会社デンソー | Vehicle-to-vehicle management device and vehicle-to-vehicle management method |
JP7350517B2 (en) * | 2018-10-17 | 2023-09-26 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Information processing device, information processing method and program |
DE102018221740A1 (en) * | 2018-12-14 | 2020-06-18 | Volkswagen Aktiengesellschaft | Method, device and computer program for a vehicle |
US10796571B2 (en) * | 2019-01-31 | 2020-10-06 | StradVision, Inc. | Method and device for detecting emergency vehicles in real time and planning driving routes to cope with situations to be expected to be occurred by the emergency vehicles |
US11346682B2 (en) * | 2019-06-28 | 2022-05-31 | GM Cruise Holdings, LLC | Augmented 3D map |
KR20210077280A (en) * | 2019-12-17 | 2021-06-25 | 현대자동차주식회사 | System and method for controlling autonomous driving of vehicle using v2x communication |
- 2022-08-03: CN202210929214.1A filed in China; granted as CN115311880B (status: active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102015205806A1 (en) * | 2015-03-30 | 2016-10-06 | Zf Friedrichshafen Ag | Control method and control system for a motor vehicle |
KR20200058613A (en) * | 2018-11-13 | 2020-05-28 | 한국철도기술연구원 | Apparatus and method for controlling Autonomous vehicle using control system in intersection |
KR20200101517A (en) * | 2019-01-30 | 2020-08-28 | 한국자동차연구원 | Method for autonomous cooperative driving based on vehicle-road infrastructure information fusion and apparatus for the same |
WO2021243710A1 (en) * | 2020-06-05 | 2021-12-09 | 曹庆恒 | Intelligent transportation system-based automatic driving method and device, and intelligent transportation system |
WO2022000202A1 (en) * | 2020-06-29 | 2022-01-06 | 曹庆恒 | Smart transportation system-based vehicle joint driving method and system, and power-assisted vehicle |
CN113763694A (en) * | 2021-07-31 | 2021-12-07 | 重庆长安汽车股份有限公司 | Multi-user collaborative interactive navigation and emergency control system |
CN114510052A (en) * | 2022-02-17 | 2022-05-17 | 深圳海星智驾科技有限公司 | Cloud service platform, and collaborative scheduling method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN115311880A (en) | 2022-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hakak et al. | Autonomous Vehicles in 5G and beyond: A Survey | |
US11521496B2 (en) | Lane-borrowing vehicle driving method and control center | |
CN113965568B (en) | Edge computing system for urban road C-V2X network | |
He et al. | Cooperative connected autonomous vehicles (CAV): research, applications and challenges | |
Zheng et al. | Reliable and efficient autonomous driving: the need for heterogeneous vehicular networks | |
CN109003467B (en) | Method, device and system for preventing vehicle collision | |
CN112839320B (en) | Traffic information transmission method and device, storage medium and electronic equipment | |
DE102019217763A1 (en) | V2X SUPPORTED UNmanned Aircraft | |
CN112249034B (en) | Automobile brain system and vehicle driving control method | |
CN112466115A (en) | Bus intersection priority passing control system and method based on edge calculation | |
CN112927543A (en) | Vehicle-road cooperative automatic driving method and system and vehicle | |
US11350257B2 (en) | Proxy environmental perception | |
CN112261098B (en) | Vehicle speed control method, device and system for Internet of vehicles | |
CN112037553A (en) | Remote driving method, device, system, equipment and medium | |
Alhilal et al. | Distributed vehicular computing at the dawn of 5G: A survey | |
CN113593221B (en) | Information value evaluation type driving system, internet vehicle system and data transmission method | |
Khan et al. | A journey towards fully autonomous driving-fueled by a smart communication system | |
Jurczenia et al. | A survey of vehicular network systems for road traffic management | |
CN112230657A (en) | Intelligent vehicle-oriented regional collaborative driving intention scheduling method, system and medium | |
CN113911139B (en) | Vehicle control method and device and electronic equipment | |
CN115311880B (en) | Distributed visual field enhancement method for low-speed automatic driving vehicle | |
Xia et al. | Lane scheduling around crossroads for edge computing based autonomous driving | |
CN115311839B (en) | Method for low-speed automatic driving vehicle team to pass through complex scene | |
Xiao et al. | A hierarchical decision architecture for network-assisted automatic driving | |
Tang et al. | Cooperative connected smart road infrastructure and autonomous vehicles for safe driving |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||