WO2020057406A1 - Driving assistance method and system - Google Patents

Driving assistance method and system

Info

Publication number
WO2020057406A1
WO2020057406A1 (PCT/CN2019/105278)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driving
information
sensing device
road data
Prior art date
Application number
PCT/CN2019/105278
Other languages
English (en)
French (fr)
Inventor
吴栋磊
蔡岭
朱永盛
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2020057406A1 publication Critical patent/WO2020057406A1/zh

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of such parameters related to ambient conditions
    • B60W40/06: Road conditions
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/10: Estimation or calculation of such parameters related to vehicle motion
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • the present invention relates to the field of vehicle assisted driving, and in particular to the field of using road environmental data to assist vehicle driving.
  • sensors and computing units in the vehicle or its surroundings can provide increasingly powerful driving-related data and computing capabilities. These data and capabilities can assist in driving vehicles more effectively than before, making vehicle driving easier, smarter, and safer.
  • sensors on the vehicle are generally used to collect data during driving, such as the distance to the vehicle in front, the vehicle's own speed, and the vehicle's real-time position; the on-board computing unit then analyzes these data and provides driving assistance based on the analysis results.
  • this solution is limited by the relevant sensors installed on the vehicle, that is, the solution cannot be executed on a vehicle without the relevant sensors.
  • vehicle sensors can only sense data in a small area around the vehicle, and cannot provide information about the driving environment at a greater distance from the vehicle, which has obvious limitations.
  • Another existing solution is to use monitoring equipment installed along the road to provide information for the vehicle, but existing road monitoring equipment generally only provides functions such as measuring traffic flow, vehicle distance, and vehicle speed; it can only give drivers very one-sided hints about road traffic and likewise fails to effectively assist vehicle driving.
  • a new vehicle driving assistance scheme is therefore needed, one that can provide driving assistance functions without relying on the sensing capability of the vehicle itself and that can provide beyond-line-of-sight driving assistance, breaking through the limitations of existing driver assistance systems.
  • the present invention provides a new vehicle assisted driving solution in an attempt to solve or at least alleviate at least one of the problems existing above.
  • a driving assistance method includes the steps of: acquiring road data within a predetermined range, the road data including static and/or dynamic information of the objects within the predetermined range; identifying, based on the road data, one or more vehicles among the objects and their vehicle motion information; determining driving-related information for the identified vehicles based on the road data and the vehicle motion information; and sending the driving-related information to the identified vehicles through a predetermined communication method.
  • the step of acquiring road data within a predetermined range includes: acquiring pre-stored static information about the predetermined range; obtaining static and/or dynamic information of the objects within the predetermined range using the individual sensors of a roadside sensing device deployed in the predetermined range; and combining the pre-stored static information with the information obtained by the sensors to generate the road data.
  • the step of acquiring road data within a predetermined range further includes: receiving vehicle driving information sent, through a predetermined communication method, by vehicles within the predetermined range; and combining the pre-stored static information, the information obtained by the sensors, and the received vehicle driving information to generate the road data; an illustrative sketch of assembling such road data is given below.
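By way of illustration only, the following minimal Python sketch shows how the three sources named above (pre-stored static information, roadside sensor observations, and vehicle-reported driving information) might be combined into one road-data record. The data structures and field names are hypothetical assumptions, not part of the disclosure.

```python
# Minimal sketch: assembling "road data" from pre-stored static information,
# per-sensor observations, and vehicle-reported driving information.
# All structures and field names here are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class ObjectInfo:
    obj_id: str
    obj_type: str                    # e.g. "vehicle", "pedestrian", "debris"
    position: tuple                  # (x, y) in a local road frame, metres
    speed: Optional[float] = None    # m/s, None for static objects
    source: str = "sensor"           # "static_map", "sensor" or "vehicle_report"


@dataclass
class RoadData:
    static_info: Dict[str, object] = field(default_factory=dict)   # lanes, width, curvature...
    objects: List[ObjectInfo] = field(default_factory=list)


def build_road_data(static_info: Dict[str, object],
                    sensor_objects: List[ObjectInfo],
                    vehicle_reports: List[ObjectInfo]) -> RoadData:
    """Combine the three sources described above into a single road-data record."""
    road = RoadData(static_info=dict(static_info))
    road.objects.extend(sensor_objects)
    # Vehicle-reported information refines/extends what the roadside sensors saw.
    road.objects.extend(vehicle_reports)
    return road


if __name__ == "__main__":
    static = {"lanes": 3, "width_m": 11.25, "curve_radius_m": 800}
    sensed = [ObjectInfo("obj-1", "vehicle", (120.0, 3.5), 22.0),
              ObjectInfo("obj-2", "pedestrian", (140.0, -2.0), 1.2)]
    reported = [ObjectInfo("veh-A", "vehicle", (119.8, 3.6), 22.3, source="vehicle_report")]
    print(build_road_data(static, sensed, reported))
```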
  • the step of acquiring static information on a predetermined range includes: determining a geographic position of the roadside sensing device; and acquiring static information within a predetermined range of the geographical position from a server.
  • the step of identifying one or more vehicles and vehicle motion information among the objects based on the road data includes: determining, based on the motion characteristics of each object, which objects are vehicle objects and their motion information; and identifying an identifier for each vehicle object.
  • the communication mode includes one or more of the following: V2X, 5G, 4G, and 3G communication.
  • the objects include one or more of the following: lane lines, guardrails, barriers, vehicles, pedestrians, and scattered objects (road debris); the static and/or dynamic information includes one or more of the following: position, distance, speed, angular velocity, license plate, type, and size.
  • the sensors in the roadside sensing device include one or more of the following: millimeter wave radar, lidar, camera, and infrared probe.
  • the vehicle running information includes one or more of the following: current time, size, speed, acceleration, angular velocity, and position.
  • the driving-related information includes a potential collision risk, and the step of determining the driving-related information of the identified vehicle based on the road data and vehicle motion information includes: determining, through modeling or deep learning, the potential collision risk of the identified vehicle based on the road data and the vehicle motion information.
  • the step of determining the driving-related information of the identified vehicle based on the road data includes: receiving a scene request sent by a vehicle within the predetermined range; and determining, based on the road data, driving-related information corresponding to the requested scene.
  • the driving assistance method according to the present invention is adapted to be performed in a roadside sensing device deployed in the predetermined range or on a cloud server coupled to the roadside sensing device.
  • a driving assistance method performed in a vehicle is provided.
  • the vehicle runs on a road on which a roadside sensing device is deployed.
  • the method includes the steps of: receiving driving-related information from the roadside sensing device through a predetermined communication method, the driving-related information being generated by the roadside sensing device based on road data within its predetermined range; and outputting the received driving-related information in the vehicle.
  • a roadside sensing device includes: a sensor group adapted to obtain static and/or dynamic information of the objects within its predetermined range; a storage unit adapted to store the road data, the road data including static and/or dynamic information of the objects within the predetermined range; and a computing unit adapted to execute the driving assistance method according to the present invention.
  • a driving assistance system includes: a plurality of the above roadside sensing devices deployed at roadside positions; and vehicles that travel on the road and execute the driving assistance method according to the present invention.
  • a computing device includes at least one processor and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for performing the driving assistance method described above.
  • a readable storage medium storing program instructions, and when the program instructions are read and executed by a computing device, the computing device is caused to execute the driving assistance method described above.
  • the sensing capability of the roadside sensing device is fully utilized, thereby significantly reducing the requirements for the on-board sensors. This makes it possible to obtain various types of assisted driving capabilities even if no additional sensors are installed in the vehicle.
  • FIG. 1 shows a schematic diagram of a driving assistance system according to an embodiment of the present invention
  • FIG. 2 shows a schematic diagram of a roadside sensing device according to an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of a driving assistance method according to an embodiment of the present invention.
  • FIG. 4 shows a schematic diagram of a driving assistance method according to another embodiment of the present invention.
  • FIG. 1 shows a schematic diagram of a driving assistance system 100 according to an embodiment of the present invention.
  • the driving assistance system 100 includes a vehicle 110 and a roadside sensing device 200.
  • the vehicle 110 is traveling on a road 140.
  • the road 140 includes a plurality of lanes 150. While traveling on the road 140, the vehicle 110 can switch between different lanes 150 according to road conditions and its driving target.
  • the roadside sensing device 200 is deployed around the road and uses various sensors to collect various information within a predetermined range around the roadside sensing device 200, especially road data related to the road.
  • the roadside sensing device 200 has a predetermined coverage area. According to the coverage of each roadside sensing device 200 and the road conditions, a sufficient number of roadside sensing devices 200 can be deployed on both sides of the road to achieve full coverage of the entire road. Of course, according to one embodiment, instead of covering the entire road, the roadside sensing devices 200 can be deployed at the characteristic points of each road (turns, intersections, forks) to obtain data about those characteristic locations of the road.
  • the invention is not limited to the specific number of roadside sensing devices 200 and the coverage of the road.
  • the position of the sensing device 200 to be deployed is first calculated according to the coverage area of a single roadside sensing device 200 and the condition of the road 140.
  • the coverage area of the roadside sensing device 200 depends at least on the arrangement height of the sensing device 200 and the effective distance sensed by the sensors in the sensing device 200.
  • the condition of the road 140 includes the length of the road, the number of lanes 150, the curvature and slope of the road, and the like.
  • the deployment position of the sensing device 200 may be calculated in any manner in the art.
  • the roadside sensing device 200 is then deployed at the determined location. Since the data to be sensed by the roadside sensing device 200 includes motion data for a large number of objects, the roadside sensing devices 200 are clock-synchronized, i.e., the time of each sensing device 200 is kept consistent with the time of the vehicles 110 and of the cloud platform.
  • the position of each deployed roadside sensing device 200 is then determined. Since the sensing device 200 is to provide driving assistance for vehicles 110 traveling at high speed on the road 140, its position must be determined with high accuracy and serves as the absolute position of the sensing device. The high-accuracy absolute position of the sensing device 200 can be computed in many ways; according to one embodiment, a global navigation satellite system (GNSS) may be used to determine the high-precision position.
  • the roadside sensing device 200 uses its sensors to collect and perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within its coverage area, and fuses the data sensed by the different sensors to form road data for that road segment.
  • the road data includes static and dynamic information of all objects in the area covered by the sensing device 200, especially in road-related areas.
  • the roadside sensing device 200 can then calculate driving-related information for each vehicle based on the road data, such as whether the vehicle faces a potential collision risk, or traffic conditions outside the vehicle's field of view (for example, road conditions beyond a curve, or the road situation ahead of the vehicle in front).
  • the vehicle 110 entering the coverage area of one roadside sensing device 200 can communicate with the roadside sensing device 200.
  • a typical communication method is the V2X communication method.
  • mobile communication methods such as 5G, 4G, and 3G can be used to communicate with the roadside sensing device 200 through a mobile internet network provided by a mobile communication service provider.
  • considering that vehicles travel at relatively high speed and the communication delay should therefore be as short as possible, the V2X communication method is adopted in the general embodiment of the present invention.
  • any communication method that can meet the time delay requirements required by the present invention is within the protection scope of the present invention.
  • the vehicle 110 may receive driving-related information related to the vehicle 110 from the roadside sensing device 200 and use the driving-related information to assist the driving of the vehicle.
  • the driving assistance system 100 further includes a server 160.
  • the server 160 may be a cloud service platform composed of multiple servers.
  • Each roadside sensing device 200 sends the road data it senses to the server 160.
  • the server 160 may combine the road data based on the position of each roadside sensing device 200 to form road data for the entire road.
  • the server 160 may further perform processing on the road data of the road to form driving-related information, such as the traffic status of the entire road, unexpected road sections, expected transit time, and the like.
  • the server 160 may send the combined road data and driving-related information for the entire road to each roadside sensing device 200, or it may send to a given roadside sensing device 200 only the road data and driving-related information for the section of road covered by that device and its neighboring roadside sensing devices 200. In this way, the vehicle 110 can obtain driving-related information for a wider range from the roadside sensing device 200. Of course, the vehicle 110 may also obtain driving-related information and road data directly from the server 160 without going through a roadside sensing device 200; a sketch of combining per-device road data on the server is given below.
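As a rough illustration of how a server might combine the road data reported by several roadside sensing devices into whole-road data, the following Python sketch assumes each device reports object positions in its own local frame together with its known position along the road; the record layout and field names are assumptions made for the example.

```python
# Minimal sketch: a server combining road data reported by several roadside
# sensing devices (ordered by their position along the road) into a single
# whole-road view. Device/record layouts are hypothetical.
from typing import Dict, List


def combine_road_data(per_device: Dict[str, dict],
                      device_position_m: Dict[str, float]) -> List[dict]:
    """Merge per-device object lists, shifting local coordinates to road coordinates."""
    whole_road = []
    for device_id, data in sorted(per_device.items(),
                                  key=lambda kv: device_position_m[kv[0]]):
        offset = device_position_m[device_id]
        for obj in data["objects"]:
            merged = dict(obj)
            merged["road_position_m"] = offset + obj["local_position_m"]
            merged["seen_by"] = device_id
            whole_road.append(merged)
    return whole_road


if __name__ == "__main__":
    per_device = {
        "rsu-1": {"objects": [{"id": "v1", "local_position_m": 80.0, "speed": 25.0}]},
        "rsu-2": {"objects": [{"id": "v2", "local_position_m": 10.0, "speed": 18.0}]},
    }
    positions = {"rsu-1": 0.0, "rsu-2": 200.0}   # where each device sits along the road
    for obj in combine_road_data(per_device, positions):
        print(obj)
```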
  • if roadside sensing devices 200 are deployed on all roads in an area and all of them send their road data to the server 160, navigation instructions for road traffic in that area can be formed at the server 160.
  • the vehicle 110 may receive the navigation instruction from the server 160 and perform navigation accordingly.
  • FIG. 2 shows a schematic diagram of a roadside sensing device 200 according to an embodiment of the present invention.
  • the roadside sensing device 200 includes a communication unit 210, a sensor group 220, a storage unit 230, and a calculation unit 240.
  • the roadside sensing device 200 communicates with each vehicle 110 entering its coverage area in order to provide driving related information to the vehicle 110 and receive vehicle driving information of the vehicle from the vehicle 110. At the same time, the roadside sensing device 200 also needs to communicate with the server 160.
  • the communication unit 210 provides a communication function for the roadside sensing device 200.
  • the communication unit 210 may adopt various communication methods, including but not limited to Ethernet, V2X, 5G, 4G, and 3G mobile communication, as long as these communication methods can complete data communication with a minimum time delay.
  • the roadside sensing device 200 may use V2X to communicate with the vehicle 110 entering its coverage area, and the roadside sensing device 200 may communicate with the server 160 using, for example, a high-speed Internet.
  • the sensor group 220 includes various sensors, for example radar sensors such as a millimeter-wave radar 222 and a lidar 224, and image sensors such as a camera 226 with a supplementary light function and an infrared probe 228.
  • various sensors can obtain different attributes of the object. For example, radar sensors can measure object speed and acceleration, while image sensors can obtain object shape, relative angle, and so on.
  • the sensor group 220 uses its sensors to collect and perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within the coverage area, and the data collected and perceived by each sensor is stored in the storage unit 230.
  • the computing unit 240 fuses the data sensed by the individual sensors to form road data for the road segment and also stores the road data in the storage unit 230.
  • the calculation unit 240 may further perform data analysis on the basis of road data, identify one or more vehicles and vehicle motion information therein, and further determine driving related information for the vehicle 110. These data and information may be stored in the storage unit 230 so as to be transmitted to the vehicle 110 or the server 160 via the communication unit 210.
  • the storage unit 230 may also store various calculation models, such as a collision detection model and a license plate recognition model. These calculation models may be used by the calculation unit 240 to implement the corresponding steps in the method 300 described below with reference to FIG. 3.
  • FIG. 3 shows a schematic diagram of a driving assistance method 300 according to an embodiment of the present invention.
  • the driving assistance method 300 is suitable for execution in the roadside sensing device 200 shown in FIG. 2, and is also suitable for execution in the server 160 of FIG. 1.
  • all relevant data acquired by the roadside sensing device 200 may be sent to the server 160 for execution in the server 160.
  • the driving assistance method 300 starts at step S310.
  • in step S310, road data within a predetermined range of the road position is acquired.
  • the roadside sensing device 200 is usually fixedly deployed near a certain road, and therefore has a corresponding road position.
  • the roadside sensing device 200 has a predetermined coverage area, at least depending on the arrangement height of the sensing device 200, the effective distance for sensing by the sensors in the sensing device 200, and the like.
  • the roadside sensing device 200 uses its various sensors to collect and/or perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within its coverage area, obtaining and storing the corresponding sensor data.
  • the roadside sensing device 200 includes various sensors, for example radar sensors such as a millimeter-wave radar 222 and a lidar 224, and image sensors such as a camera 226 with a supplementary light function and an infrared probe 228.
  • various sensors can obtain different attributes of the object.
  • radar sensors can measure object speed and acceleration, while image sensors can obtain object shape and relative angle.
  • in step S310, the various raw sensor data obtained may be processed and fused to form unified road data.
  • in one embodiment, step S310 may further include a sub-step S312.
  • in step S312, pre-stored static information about the predetermined range of the road position is acquired. Once the roadside sensing device has been deployed at a given location on the road, the road range covered by the sensing device is fixed, and static information for that range can be obtained, such as the road width, the number of lanes, and turning radii within the range. There are many ways to obtain such static information about a road.
  • the static information may be stored in the sensing device in advance when the sensing device is deployed.
  • the location information of the sensing device may be obtained first, and then a request containing the location information is sent to the server 160 so that the server 160 returns static information of the relevant road range according to the request.
  • in step S314, the raw sensor data is processed per sensor to form perception data such as distance measurements, speed measurements, and type and size recognition.
  • in step S316, based on the static road data obtained in step S312, different sensor data is used as the reference in different situations and calibrated against the other sensor data, finally forming unified road data (see the fusion sketch below).
  • steps S312-S316 describe one way to obtain road data.
  • the present invention is not limited to a specific way of fusing the data from the various sensors to form road data; as long as the road data contains static and dynamic information of the objects within the predetermined range of the road position, the approach falls within the protection scope of the present invention.
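The following Python sketch illustrates one possible reading of the per-sensor processing and reference-based calibration described in steps S314 and S316; the choice of the lidar as reference and the constant-bias calibration model are assumptions made for the example, not requirements of the method.

```python
# Minimal sketch: per-sensor processing followed by fusion in which one sensor
# is taken as the reference and the others are calibrated against it.
from statistics import mean
from typing import Dict, List


def fuse_range_measurements(measurements: Dict[str, List[float]],
                            reference: str = "lidar") -> List[float]:
    """Calibrate each sensor's ranges to the reference sensor, then average."""
    ref = measurements[reference]
    calibrated = {reference: ref}
    for sensor, values in measurements.items():
        if sensor == reference:
            continue
        # Estimate a constant bias of this sensor relative to the reference
        # over the commonly observed objects, and remove it.
        bias = mean(v - r for v, r in zip(values, ref))
        calibrated[sensor] = [v - bias for v in values]
    # Fused estimate: average of the calibrated sensors, per object.
    return [mean(vals) for vals in zip(*calibrated.values())]


if __name__ == "__main__":
    ranges = {
        "lidar": [50.2, 80.1, 120.7],          # metres to three objects
        "mmwave_radar": [50.9, 80.8, 121.5],   # small constant offset
        "camera": [48.0, 78.2, 118.4],         # larger offset from mono depth
    }
    print(fuse_range_measurements(ranges))
```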
  • each vehicle 110 that enters the coverage area of the roadside sensing device 200 will actively communicate with the sensing device 200 through various communication methods (such as V2X). Therefore, as described in step S318, the vehicle 110 sends vehicle driving information of the vehicle to the sensing device 200.
  • the vehicle driving information includes the vehicle's operating information while driving, for example the current time at which the information was generated and the vehicle's size, speed, acceleration, angular velocity, and position.
  • step S310 further includes a step S319, in which the vehicle driving information obtained in step S318 is further fused with the road data formed in step S316 to form new road data.
  • in step S320, based on the road data obtained in step S310, one or more vehicles within the coverage of the sensing device and their motion information are identified.
  • the identification in step S320 includes two aspects of identification.
  • One aspect is vehicle detection, i.e., identifying which objects in the road data are vehicle objects. Vehicle objects have distinctive motion characteristics, such as relatively high speed and traveling along a lane in one direction, and they generally do not collide with other objects.
  • a traditional classification/detection model or a deep-learning-based model can be constructed from these motion features and applied to the road data to determine the vehicle objects in the road data and motion features such as their trajectories; a minimal classifier sketch is given below.
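As an illustration of a motion-feature-based vehicle classifier, the following Python sketch uses simple hand-written rules in place of the trained classification or deep-learning model mentioned above; the thresholds and track fields are hypothetical.

```python
# Minimal sketch: a rule-based stand-in for a classification model that
# separates vehicle objects from other objects using motion features
# (speed, heading stability, lane-following). Thresholds are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Track:
    obj_id: str
    speeds: List[float]        # m/s over the last few frames
    headings: List[float]      # degrees over the last few frames
    on_lane: bool              # whether the trajectory stays within a lane


def is_vehicle(track: Track,
               min_speed: float = 3.0,
               max_heading_spread_deg: float = 15.0) -> bool:
    avg_speed = sum(track.speeds) / len(track.speeds)
    heading_spread = max(track.headings) - min(track.headings)
    # Vehicles: relatively fast, heading roughly constant, staying in a lane.
    return avg_speed >= min_speed and heading_spread <= max_heading_spread_deg and track.on_lane


if __name__ == "__main__":
    car = Track("obj-1", [21.8, 22.1, 22.4], [90.0, 90.5, 91.0], on_lane=True)
    walker = Track("obj-2", [1.1, 1.3, 1.2], [40.0, 75.0, 120.0], on_lane=False)
    for t in (car, walker):
        print(t.obj_id, "vehicle" if is_vehicle(t) else "non-vehicle")
```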
  • The other aspect is identifying the vehicle identifier: for each detected vehicle object, its vehicle identifier is further determined.
  • One way to determine the vehicle identification is to determine the unique license plate of the vehicle, for example, through image recognition.
  • when the license plate cannot be recognized, another way to determine the vehicle identifier is to generate a unique mark for the vehicle by combining the vehicle object's size, type, position information, and driving speed.
  • this vehicle identifier is the unique identifier of the vehicle object within this road segment and is used to distinguish it from other vehicle objects. It is used in subsequent data transmission and is passed between the different roadside sensing devices on this road to facilitate overall analysis (see the identifier sketch below).
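A minimal sketch of such an identifier scheme might look as follows, preferring the recognized license plate and otherwise hashing coarse attributes of the vehicle object; the exact fallback encoding is an assumption made for illustration.

```python
# Minimal sketch: generating a per-road-segment vehicle identifier, preferring
# the recognised license plate and otherwise hashing size/type/position/speed
# (the fallback scheme shown here is an illustrative assumption).
import hashlib
from typing import Optional


def vehicle_identifier(plate: Optional[str],
                       length_m: float,
                       vehicle_type: str,
                       position: tuple,
                       speed_mps: float) -> str:
    if plate:                       # unique plate recognised from images
        return f"plate:{plate}"
    # Fallback: deterministic mark built from coarse attributes. Position and
    # speed are rounded so that small measurement noise maps to the same key.
    key = f"{vehicle_type}|{length_m:.1f}|{round(position[0])},{round(position[1])}|{round(speed_mps)}"
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]
    return f"anon:{digest}"


if __name__ == "__main__":
    print(vehicle_identifier("浙A12345", 4.8, "car", (120.3, 3.5), 22.1))
    print(vehicle_identifier(None, 4.8, "car", (120.3, 3.5), 22.1))
```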
  • in step S330, data analysis is performed, based on the road data obtained in step S310 and the vehicle objects and motion information identified in step S320, to determine the driving-related information of the vehicles.
  • the present invention includes a variety of driving-related information, and thus has a plurality of analysis models for driving-related information analysis.
  • according to one embodiment, data analysis is performed proactively in step S330 to determine the driving-related information. In this case, when the driving-related information is a potential collision possibility, step S330 detects vehicles on the road that have a potential collision possibility.
  • the collision may include forward collision, overtaking collision, lane change collision, and the like.
  • Potential collision detection can be performed in various ways. One way is to apply a collision detection model to the road data to detect vehicles that are likely to collide (a simple time-to-collision sketch is given below). Another way is to analyze a large number of actual road collision examples and use deep learning to determine which vehicles have a possibility of collision.
  • the invention is not limited to a specific way of detecting a potential collision possibility.
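As a concrete, simplified stand-in for a collision detection model, the following Python sketch flags a forward-collision risk when the time-to-collision between same-lane vehicles drops below a threshold; the one-dimensional formulation and the 3-second threshold are assumptions, not part of the disclosure.

```python
# Minimal sketch: flag a forward-collision risk when the time-to-collision
# (TTC) between a vehicle and the object ahead of it drops below a threshold.
from typing import List, Optional


def time_to_collision(gap_m: float, rear_speed: float, front_speed: float) -> Optional[float]:
    closing = rear_speed - front_speed
    if closing <= 0:            # not closing in, no collision course
        return None
    return gap_m / closing


def flag_forward_collisions(vehicles: List[dict], ttc_threshold_s: float = 3.0) -> List[tuple]:
    """vehicles: same-lane vehicles with 'id', 'position_m' and 'speed_mps'."""
    ordered = sorted(vehicles, key=lambda v: v["position_m"])
    risks = []
    for rear, front in zip(ordered, ordered[1:]):
        ttc = time_to_collision(front["position_m"] - rear["position_m"],
                                rear["speed_mps"], front["speed_mps"])
        if ttc is not None and ttc < ttc_threshold_s:
            risks.append((rear["id"], front["id"], round(ttc, 2)))
    return risks


if __name__ == "__main__":
    lane = [{"id": "v1", "position_m": 0.0, "speed_mps": 30.0},
            {"id": "v2", "position_m": 45.0, "speed_mps": 12.0}]
    print(flag_forward_collisions(lane))   # [('v1', 'v2', 2.5)]
```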
  • according to another embodiment, in step S330 the data analysis may be performed in response to a request from the vehicle 110 to determine that vehicle's driving-related information.
  • the driving related information is, for example, scene information related to a scene requested by the vehicle.
  • Each scene, and the information corresponding to it, can be defined in advance. For example, when the scene is night-vision assistance, the driving-related information includes road and vehicle information within a certain range in front of the vehicle; when the scene is a 360-degree panoramic view, the driving-related information includes all information within a certain range around the vehicle; and when the scene is beyond-line-of-sight perception, the driving-related information includes all information within the blocked portion of the vehicle's line of sight (see the scene-serving sketch below).
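The following Python sketch illustrates how such scene requests might be served by filtering the road data around the requesting vehicle; the scene names, ranges, and the occlusion flag are illustrative assumptions.

```python
# Minimal sketch: answering scene requests by filtering road data around the
# requesting vehicle. Scene names and ranges are illustrative assumptions.
import math
from typing import List


def serve_scene_request(scene: str, ego: dict, objects: List[dict]) -> List[dict]:
    def distance(o):
        return math.hypot(o["x"] - ego["x"], o["y"] - ego["y"])

    def ahead(o):
        return o["x"] > ego["x"]          # assumes the road frame points "forward" in +x

    if scene == "night_vision":           # road/vehicle info in a range ahead of the vehicle
        return [o for o in objects if ahead(o) and distance(o) <= 150.0]
    if scene == "panoramic_360":          # everything within a radius around the vehicle
        return [o for o in objects if distance(o) <= 50.0]
    if scene == "beyond_line_of_sight":   # objects the vehicle cannot see itself
        return [o for o in objects if o.get("occluded_from_ego", False)]
    raise ValueError(f"unknown scene: {scene}")


if __name__ == "__main__":
    ego = {"x": 0.0, "y": 0.0}
    objs = [{"id": "v2", "x": 120.0, "y": 3.5},
            {"id": "p1", "x": 20.0, "y": -2.0, "occluded_from_ego": True}]
    print(serve_scene_request("night_vision", ego, objs))
    print(serve_scene_request("beyond_line_of_sight", ego, objs))
```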
  • step S330 may further include steps S332 and S334.
  • in step S332, a scene request sent by the vehicle 110 is received, and in step S334, driving-related information corresponding to the scene is determined based on the road data and on the identifier and motion information of the vehicle 110. Since the identifier of the vehicle 110 is known, the dynamic and static information of the other vehicles and of the environment around the vehicle 110 can be determined from the road data, so that driving-related information corresponding to the requested scene can be provided.
  • whether the data analysis in step S330 is performed proactively or passively in response to a vehicle request, vehicle matching must be performed to determine which vehicle object or objects within the coverage area of the current sensing device 200 should receive the driving analysis result. For example, if potential collision detection is performed in step S330, then after determining a vehicle with a high collision probability, the matching vehicle identifier and the corresponding communication channel must be determined. If the driving-related information is scene-related information, then after receiving a scene request from the vehicle 110, the requesting vehicle must be matched against the vehicles in the coverage area to determine which vehicle issued the scene request, so that the data analysis can be performed for that vehicle.
  • Vehicle matching can be performed through a variety of matching methods, such as license plate matching, driving speed and type matching, and fuzzy location information matching.
  • according to one embodiment, the vehicle 110 can bind its license plate information through V2X or application-level verification, and this license plate information can then be matched against the vehicle data for the corresponding license plate in the roadside sensing device and the server, thereby achieving license plate matching (see the matching sketch below).
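A minimal matching sketch, assuming hypothetical record fields, could prefer an exact license-plate match and fall back to fuzzy matching on type, speed, and position; the tolerances shown are assumptions for illustration.

```python
# Minimal sketch: matching a requesting vehicle against the vehicle objects in
# the roadside device's coverage area (plate match first, fuzzy match second).
from typing import List, Optional


def match_vehicle(request: dict, tracked: List[dict]) -> Optional[dict]:
    # 1. Exact license-plate match (plate bound via V2X / application verification).
    if request.get("plate"):
        for obj in tracked:
            if obj.get("plate") == request["plate"]:
                return obj
    # 2. Fuzzy match on type, speed and reported position.
    candidates = [
        obj for obj in tracked
        if obj["type"] == request["type"]
        and abs(obj["speed_mps"] - request["speed_mps"]) <= 2.0
        and abs(obj["position_m"] - request["position_m"]) <= 10.0
    ]
    # Prefer the candidate closest to the reported position.
    return min(candidates, key=lambda o: abs(o["position_m"] - request["position_m"]),
               default=None)


if __name__ == "__main__":
    tracked = [{"id": "obj-7", "plate": None, "type": "car", "speed_mps": 21.0, "position_m": 312.0},
               {"id": "obj-9", "plate": "沪B98765", "type": "car", "speed_mps": 24.0, "position_m": 355.0}]
    req = {"plate": None, "type": "car", "speed_mps": 22.0, "position_m": 310.0}
    print(match_vehicle(req, tracked))
```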
  • after the driving-related information has been determined in step S330, it is sent to the corresponding vehicle 110 through a predetermined communication method in step S340.
  • a communication method associated with the matching vehicle 110 is determined, and driving-related information is transmitted to the corresponding vehicle using the determined communication method.
  • the communication method is usually a mobile communication method such as V2X, 5G, 4G, or 3G.
  • after receiving the driving-related information, the vehicle 110 may process it differently according to its attributes. For example, if it is scene-related data, the driving-related information is displayed, according to the scene definition, on a display or application such as the vehicle's central control screen, smart dashboard, or navigation software.
  • if it is warning information such as a collision warning, the warning can be presented to the driver in a variety of ways, such as display, voice, alarm, or vibration, according to the type and urgency of the warning (see the dispatch sketch below).
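As one possible illustration of presenting warnings by type and urgency, the following Python sketch maps an assumed urgency scale to output modalities; the urgency table itself is not defined by the disclosure.

```python
# Minimal sketch: choosing how a warning is presented to the driver based on
# its type and urgency (display, voice, alarm, vibration).
WARNING_URGENCY = {
    "forward_collision": 3,   # highest urgency
    "lane_change": 2,
    "blind_spot": 2,
    "overtaking": 1,
    "traffic_info": 0,
}


def presentation_modalities(warning_type: str) -> list:
    urgency = WARNING_URGENCY.get(warning_type, 0)
    modalities = ["display"]                 # everything is at least shown on screen
    if urgency >= 1:
        modalities.append("voice")
    if urgency >= 2:
        modalities.append("vibration")
    if urgency >= 3:
        modalities.append("alarm")           # audible alarm only for the most urgent
    return modalities


if __name__ == "__main__":
    for w in ("forward_collision", "overtaking", "traffic_info"):
        print(w, "->", presentation_modalities(w))
```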
  • FIG. 4 shows a schematic diagram of a driving assistance method 400 according to another embodiment of the present invention.
  • the driving assistance method 400 is adapted to be performed in a vehicle 110 that is traveling on a road on which a roadside sensing device 200 is deployed.
  • the method 400 includes step S410.
  • in step S410, driving-related information from the roadside sensing device 200 is received through a predetermined communication method.
  • Step S410 corresponds to step S340 in the method 300 described above with reference to FIG. 3, and therefore, the driving-related information is generated by the roadside sensing device based on road data within a predetermined range of its road position.
  • the processing in S410 is not repeated here.
  • the received driving-related information is then output in the vehicle 110 in step S420.
  • the output mode may be determined according to the attributes of the driving-related information.
  • the method 400 may further include step S430, in which the driver or owner of the vehicle is notified of the potential collision hazard, for example through display, voice, alarm, vibration, or other means according to the type and urgency of the warning.
  • the method 400 may further include step S440, which may convert the warning information into control of the vehicle, directly controlling the vehicle's driving, or provide various driving assistance capabilities including forward collision warning, overtaking warning, lane-change warning, blind-spot warning, and rear-vehicle protection, so as to reduce the possibility of a collision and thereby provide more efficient, safer, and more direct driving assistance (see the vehicle-side sketch below).
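The following Python sketch gives one hypothetical reading of step S440 on the vehicle side, turning a received warning into either an advisory or a control suggestion; the action names, the 1.5-second threshold, and the deceleration value are assumptions for illustration, and an actual implementation would go through the vehicle's own control stack.

```python
# Minimal sketch: converting received warning information into a vehicle-side
# reaction, either an advisory or a direct control suggestion.
from dataclasses import dataclass


@dataclass
class DrivingWarning:
    kind: str          # "forward_collision", "lane_change", "blind_spot", ...
    ttc_s: float       # time-to-collision estimated by the roadside device


def react_to_warning(w: DrivingWarning) -> dict:
    if w.kind == "forward_collision" and w.ttc_s < 1.5:
        # Imminent risk: suggest a direct control intervention.
        return {"action": "decelerate", "target_decel_mps2": 3.0, "notify": ["alarm", "display"]}
    if w.kind in ("forward_collision", "lane_change", "blind_spot", "overtaking", "rear_vehicle"):
        # Enough margin: advisory only.
        return {"action": "advise_driver", "notify": ["display", "voice"]}
    return {"action": "ignore", "notify": []}


if __name__ == "__main__":
    print(react_to_warning(DrivingWarning("forward_collision", ttc_s=1.1)))
    print(react_to_warning(DrivingWarning("blind_spot", ttc_s=4.0)))
```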
  • the method 400 may further include a corresponding step S450.
  • in step S450, a scene request is sent to the roadside sensing device through a predetermined communication method, and in step S420 the driving-related information is displayed, according to the scene definition, on a display or application such as the central control screen, smart dashboard, or navigation software.
  • the method 400 may further include step S460, in which the vehicle driving information is sent to the roadside sensing device through a predetermined communication method.
  • the processing in step S460 corresponds to step S318, and details are not described herein again.
  • the sensing capability of the roadside unit can be fully utilized, and the perceived data is further analyzed and processed before being provided to the vehicle, thereby providing efficient assisted driving performance.
  • the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples.
  • the modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
  • modules in the device in the embodiment can be adaptively changed and set in one or more devices different from the embodiment.
  • the modules, units, or components in an embodiment may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A driving assistance method and system. The method includes the steps of: acquiring road data within a predetermined range; identifying, based on the road data, one or more vehicles among the objects and their vehicle motion information; determining driving-related information for the identified vehicles based on the road data and the vehicle motion information; and sending the driving-related information to the identified vehicles through a predetermined communication method. A corresponding roadside sensing device and driving assistance system are also provided.

Description

Driving assistance method and system
This application claims priority to Chinese patent application No. 201811108869.2, entitled "Driving assistance method and system" and filed on September 21, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of vehicle driving assistance, and in particular to assisting vehicle driving with road environment data.
Background
As the automotive industry enters the Internet and intelligent era, sensors and computing units in or around the vehicle can provide increasingly powerful driving-related data and computing capability. These data and capabilities can assist vehicle driving more effectively than before, making driving simpler, smarter, and safer.
For vehicle driving, safety and convenience are problems a driver must constantly consider. In existing on-board driver assistance solutions, sensors on the vehicle are generally used to collect data during driving, such as the distance to the vehicle in front, the vehicle's own speed, and the vehicle's real-time position; an on-board computing unit then analyzes these data and provides driving assistance based on the analysis results. On the one hand, such a solution is limited by the sensors installed on the vehicle, that is, it cannot be executed on a vehicle without those sensors. On the other hand, vehicle sensors can only sense data within a small area around the vehicle and cannot provide information about the driving environment farther away, which is an obvious limitation.
Another existing solution is to use monitoring equipment installed along the road to provide information for vehicles, but existing road monitoring equipment generally only provides functions such as measuring traffic flow, vehicle distance, and vehicle speed; it can only give drivers very one-sided hints about road traffic and likewise cannot effectively assist vehicle driving.
With the development of V2X (vehicle-to-everything) technology, cooperative environment perception systems have emerged. Such a system can use data from the vehicle and from its surrounding environment together to assist driving. However, how to construct the environment data and how to fuse the vehicle's own data with the environment data are problems faced by cooperative environment perception systems.
Therefore, a new vehicle driving assistance scheme is needed, one that can provide driving assistance functions without relying on the sensing capability of the vehicle itself and that can offer beyond-line-of-sight driving assistance, breaking through the limitations of existing driver assistance systems.
Summary
To this end, the present invention provides a new vehicle driving assistance solution in an attempt to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the present invention, a driving assistance method is provided. The method includes the steps of: acquiring road data within a predetermined range, the road data including static and/or dynamic information of the objects within the predetermined range; identifying, based on the road data, one or more vehicles among the objects and their vehicle motion information; determining driving-related information for the identified vehicles based on the road data and the vehicle motion information; and sending the driving-related information to the identified vehicles through a predetermined communication method.
Optionally, in the driving assistance method according to the present invention, the step of acquiring road data within a predetermined range includes: acquiring pre-stored static information about the predetermined range; obtaining static and/or dynamic information of the objects within the predetermined range using the individual sensors of a roadside sensing device deployed in the predetermined range; and combining the pre-stored static information with the information obtained by the sensors to generate the road data.
Optionally, in the driving assistance method according to the present invention, the step of acquiring road data within a predetermined range further includes: receiving vehicle driving information sent, through a predetermined communication method, by vehicles within the predetermined range; and combining the pre-stored static information, the information obtained by the sensors, and the received vehicle driving information to generate the road data.
Optionally, in the driving assistance method according to the present invention, the step of acquiring static information about the predetermined range includes: determining the geographic position of the roadside sensing device; and acquiring, from a server, static information for the predetermined range around that geographic position.
Optionally, in the driving assistance method according to the present invention, the step of identifying one or more vehicles and vehicle motion information among the objects based on the road data includes: determining, based on the motion characteristics of each object, which objects are vehicle objects and their motion information; and identifying an identifier for each vehicle object.
Optionally, in the driving assistance method according to the present invention, the communication method includes one or more of the following: V2X, 5G, 4G, and 3G communication.
Optionally, in the driving assistance method according to the present invention, the objects include one or more of the following: lane lines, guardrails, barriers, vehicles, pedestrians, and scattered objects; the static and/or dynamic information includes one or more of the following: position, distance, speed, angular velocity, license plate, type, size, and the like.
Optionally, in the driving assistance method according to the present invention, the sensors in the roadside sensing device include one or more of the following: millimeter-wave radar, lidar, camera, and infrared probe.
Optionally, in the driving assistance method according to the present invention, the vehicle driving information includes one or more of the following: current time, size, speed, acceleration, angular velocity, and position.
Optionally, in the driving assistance method according to the present invention, the driving-related information includes a potential collision risk, and the step of determining the driving-related information of the identified vehicle based on the road data and vehicle motion information includes: determining, through modeling or deep learning, the potential collision risk of the identified vehicle based on the road data and the vehicle motion information.
Optionally, in the driving assistance method according to the present invention, the step of determining the driving-related information of the identified vehicle based on the road data includes: receiving a scene request sent by a vehicle within the predetermined range; and determining, based on the road data, driving-related information corresponding to the scene.
Optionally, the driving assistance method according to the present invention is adapted to be executed in a roadside sensing device deployed in the predetermined range or on a cloud server coupled to the roadside sensing device.
According to another aspect of the present invention, a driving assistance method executed in a vehicle is provided, the vehicle traveling on a road on which roadside sensing devices are deployed. The method includes the steps of: receiving driving-related information from a roadside sensing device through a predetermined communication method, the driving-related information being generated by the roadside sensing device based on road data within its predetermined range; and outputting the received driving-related information in the vehicle.
According to yet another aspect of the present invention, a roadside sensing device is provided, including: a sensor group adapted to obtain static and/or dynamic information of the objects within its predetermined range; a storage unit adapted to store the road data, the road data including static and/or dynamic information of the objects within the predetermined range; and a computing unit adapted to execute the driving assistance method according to the present invention.
According to yet another aspect of the present invention, a driving assistance system is provided, including: a plurality of the above roadside sensing devices deployed at roadside positions; and vehicles that travel on the road and execute the driving assistance method according to the present invention.
According to yet another aspect of the present invention, a computing device is also provided. The computing device includes at least one processor and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for executing the above driving assistance method.
According to still another aspect of the present invention, a readable storage medium storing program instructions is also provided; when the program instructions are read and executed by a computing device, the computing device is caused to execute the above driving assistance method.
The driving assistance solution according to the present invention makes full use of the sensing capability of the roadside sensing devices, thereby significantly reducing the requirements on on-board sensors, so that various driving assistance capabilities can be obtained even if no additional sensors are installed on the vehicle.
In addition, by analyzing the sensed data to obtain various kinds of driving-related information and sending this information to the vehicles, more efficient and safer driving assistance can be provided to the vehicles, breaking through the limitations of existing driver assistance systems.
Brief Description of the Drawings
To achieve the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the drawings. Throughout the disclosure, the same reference numerals generally refer to the same components or elements.
FIG. 1 shows a schematic diagram of a driving assistance system according to an embodiment of the present invention;
FIG. 2 shows a schematic diagram of a roadside sensing device according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of a driving assistance method according to an embodiment of the present invention; and
FIG. 4 shows a schematic diagram of a driving assistance method according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope will be conveyed fully to those skilled in the art.
FIG. 1 shows a schematic diagram of a driving assistance system 100 according to an embodiment of the present invention. As shown in FIG. 1, the driving assistance system 100 includes a vehicle 110 and a roadside sensing device 200. The vehicle 110 travels on a road 140. The road 140 includes a plurality of lanes 150. While traveling on the road 140, the vehicle 110 can switch between different lanes 150 according to road conditions and its driving target. The roadside sensing device 200 is deployed along the road and uses its various sensors to collect various information within a predetermined range around it, in particular road data related to the road.
The roadside sensing device 200 has a predetermined coverage area. According to the coverage of each roadside sensing device 200 and the road conditions, a sufficient number of roadside sensing devices 200 can be deployed on both sides of the road to achieve full coverage of the entire road. Of course, according to one embodiment, instead of covering the entire road, the roadside sensing devices 200 can be deployed at characteristic points of each road (turns, intersections, forks) to obtain data about those characteristic locations. The present invention is not limited to a specific number of roadside sensing devices 200 or a specific coverage of the road.
When deploying the roadside sensing devices 200, the positions of the sensing devices 200 to be deployed are first calculated according to the coverage area of a single roadside sensing device 200 and the condition of the road 140. The coverage area of a roadside sensing device 200 depends at least on the mounting height of the sensing device 200 and on the effective sensing distance of the sensors in the sensing device 200, while the condition of the road 140 includes the road length, the number of lanes 150, the road curvature and slope, and the like. The deployment positions of the sensing devices 200 can be calculated in any manner known in the art.
After the deployment positions have been determined, the roadside sensing devices 200 are deployed at the determined positions. Since the data to be sensed by the roadside sensing devices 200 includes motion data for a large number of objects, the roadside sensing devices 200 are clock-synchronized, i.e., the time of each sensing device 200 is kept consistent with the time of the vehicles 110 and of the cloud platform.
Subsequently, the position of each deployed roadside sensing device 200 is determined. Since the sensing devices 200 are to provide driving assistance for vehicles 110 traveling at high speed on the road 140, the position of each sensing device 200 must be determined with high accuracy and serves as the absolute position of the sensing device. The high-accuracy absolute position of the sensing device 200 can be computed in many ways; according to one embodiment, a global navigation satellite system (GNSS) can be used to determine the high-precision position.
The roadside sensing device 200 uses its sensors to collect and perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within its coverage area, and fuses the data sensed by the different sensors to form road data for that road segment. The road data includes static and dynamic information of all objects within the range covered by the sensing device 200, in particular within the road-related area. The roadside sensing device 200 can then calculate driving-related information for each vehicle based on the road data, such as whether the vehicle faces a potential collision risk, or traffic conditions outside the vehicle's field of view (for example, road conditions beyond a curve, or the road situation ahead of the vehicle in front).
A vehicle 110 entering the coverage area of a roadside sensing device 200 can communicate with that roadside sensing device 200. A typical communication method is V2X. Of course, mobile communication methods such as 5G, 4G, and 3G can also be used to communicate with the roadside sensing device 200 over a mobile Internet network provided by a mobile communication operator. Since vehicles travel at high speed and the communication delay should be as short as possible, the V2X communication method is adopted in the general embodiment of the present invention. However, any communication method that can meet the delay requirements of the present invention falls within the protection scope of the present invention.
The vehicle 110 can receive from the roadside sensing device 200 driving-related information relevant to the vehicle 110 and use this information to assist driving.
Optionally, the driving assistance system 100 further includes a server 160. Although only one server 160 is shown in FIG. 1, it should be understood that the server 160 may be a cloud service platform composed of multiple servers. Each roadside sensing device 200 sends the road data it senses to the server 160. The server 160 can combine the road data based on the position of each roadside sensing device 200 to form road data for the entire road. The server 160 can further process the road data of the road to form driving-related information, such as the traffic status of the entire road, road sections with incidents, expected transit time, and the like.
The server 160 can send the combined road data and driving-related information for the entire road to each roadside sensing device 200, or it can send to a given roadside sensing device 200 only the road data and driving-related information for the section of road covered by that device and its neighboring roadside sensing devices 200. In this way, the vehicle 110 can obtain driving-related information for a wider range from the roadside sensing device 200. Of course, the vehicle 110 can also obtain driving-related information and road data directly from the server 160 without going through a roadside sensing device 200.
If roadside sensing devices 200 are deployed on all roads in an area and all of them send their road data to the server 160, navigation instructions for road traffic in that area can be formed at the server 160. The vehicle 110 can receive these navigation instructions from the server 160 and navigate accordingly.
FIG. 2 shows a schematic diagram of a roadside sensing device 200 according to an embodiment of the present invention. As shown in FIG. 2, the roadside sensing device 200 includes a communication unit 210, a sensor group 220, a storage unit 230, and a computing unit 240.
The roadside sensing device 200 communicates with each vehicle 110 entering its coverage area in order to provide driving-related information to the vehicle 110 and to receive the vehicle's driving information from the vehicle 110; at the same time, the roadside sensing device 200 also needs to communicate with the server 160. The communication unit 210 provides these communication functions for the roadside sensing device 200. The communication unit 210 can use various communication methods, including but not limited to Ethernet, V2X, and 5G, 4G, and 3G mobile communication, as long as these methods can complete the data communication with as small a delay as possible. In one embodiment, the roadside sensing device 200 can use V2X to communicate with the vehicles 110 entering its coverage area and can communicate with the server 160 over, for example, a high-speed Internet connection.
The sensor group 220 includes various sensors, for example radar sensors such as a millimeter-wave radar 222 and a lidar 224, and image sensors such as a camera 226 with a supplementary light function and an infrared probe 228. For the same object, different sensors can obtain different attributes: radar sensors can measure the object's speed and acceleration, while image sensors can obtain the object's shape, relative angle, and so on.
The sensor group 220 uses its sensors to collect and perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within the coverage area, and stores the data collected and perceived by each sensor in the storage unit 230.
The computing unit 240 fuses the data sensed by the individual sensors to form road data for the road segment and also stores the road data in the storage unit 230. In addition, the computing unit 240 can continue to perform data analysis on the basis of the road data, identify one or more vehicles and their motion information, and further determine driving-related information for the vehicles 110. These data and information can be stored in the storage unit 230 so as to be sent to the vehicles 110 or the server 160 via the communication unit 210.
In addition, the storage unit 230 can also store various computation models, such as a collision detection model and a license plate recognition model. These models can be used by the computing unit 240 to implement the corresponding steps of the method 300 described below with reference to FIG. 3.
FIG. 3 shows a schematic diagram of a driving assistance method 300 according to an embodiment of the present invention. The driving assistance method 300 is suitable for execution in the roadside sensing device 200 shown in FIG. 2 and also in the server 160 of FIG. 1. When it is executed in the server 160, all relevant data acquired by the roadside sensing device 200 can be sent to the server 160 so that the method is executed there.
As shown in FIG. 3, the driving assistance method 300 starts at step S310.
In step S310, road data within a predetermined range of the road position is acquired. As described above with reference to FIG. 1, the roadside sensing device 200 is usually deployed at a fixed position near a road and therefore has a corresponding road position. In addition, the roadside sensing device 200 has a predetermined coverage area that depends at least on the mounting height of the sensing device 200 and on the effective sensing distance of its sensors. Once the roadside sensing device 200 has been deployed at a roadside position, the predetermined road range it can cover is determined by the specific position and height of the sensing device and by its effective sensing distance.
The roadside sensing device 200 uses its various sensors to collect and/or perceive the static conditions (lane lines 120, guardrails, barriers, etc.) and dynamic conditions (moving vehicles 110, pedestrians 130, and scattered objects) of the road within its coverage area, obtaining and storing the corresponding sensor data.
As described above, the roadside sensing device 200 includes various sensors, for example radar sensors such as a millimeter-wave radar 222 and a lidar 224, and image sensors such as a camera 226 with a supplementary light function and an infrared probe 228. For the same object, different sensors can obtain different attributes: radar sensors can measure the object's speed and acceleration, while image sensors can obtain the object's shape, relative angle, and so on.
In step S310, the various raw sensor data obtained can be processed and fused to form unified road data. In one embodiment, step S310 can further include a sub-step S312. In step S312, pre-stored static information about the predetermined range of the road position is acquired. Once the roadside sensing device has been deployed at a given location on the road, the road range covered by the sensing device is fixed, and static information for that range can be obtained, such as the road width, the number of lanes, and turning radii within the range. There are many ways to obtain such static information about a road. In one embodiment, the static information can be stored in the sensing device in advance when the device is deployed. In another embodiment, the position information of the sensing device can be obtained first, and a request containing that position information is then sent to the server 160 so that the server 160 returns static information for the relevant road range according to the request.
Subsequently, in step S314, the raw sensor data is processed per sensor to form perception data such as distance measurements, speed measurements, and type and size recognition. Next, in step S316, based on the static road data obtained in step S312, different sensor data is used as the reference in different situations and calibrated against the other sensor data, finally forming unified road data.
Steps S312-S316 describe one way to obtain road data. The present invention is not limited to a specific way of fusing the data of the individual sensors to form road data; as long as the road data contains static and dynamic information of the various objects within the predetermined range of the road position, the approach falls within the protection scope of the present invention.
According to one embodiment, each vehicle 110 that enters the coverage area of the roadside sensing device 200 actively communicates with the sensing device 200 through various communication methods (such as V2X). Therefore, as described in step S318, the vehicle 110 sends its vehicle driving information to the sensing device 200. The vehicle driving information includes the vehicle's operating information while driving, for example the current time at which the information was generated and the vehicle's size, speed, acceleration, angular velocity, and position. Step S310 further includes a step S319, in which the vehicle driving information obtained in step S318 is further fused with the road data formed in step S316 to form new road data.
Next, in step S320, based on the road data obtained in step S310, one or more vehicles within the coverage of the sensing device and their motion information are identified. The identification in step S320 has two aspects. One aspect is vehicle detection, i.e., identifying which objects in the road data are vehicle objects. Vehicle objects have distinctive motion characteristics, such as relatively high speed and traveling along a lane in one direction, and they generally do not collide with other objects. A traditional classification/detection model or a deep-learning-based model can be constructed from these motion features and applied to the road data to determine the vehicle objects in the road data and motion features such as their trajectories.
The other aspect is identifying the vehicle identifier. For each detected vehicle object, its vehicle identifier is further determined. One way to determine the vehicle identifier is to determine the vehicle's unique license plate, for example through image recognition. When the license plate cannot be recognized, another way is to generate a unique mark for the vehicle by combining the vehicle object's size, type, position information, and driving speed. This vehicle identifier is the unique identifier of the vehicle object within this road segment and is used to distinguish it from other vehicle objects. It is used in subsequent data transmission and is passed between the different roadside sensing devices on this road to facilitate overall analysis.
Subsequently, in step S330, data analysis is performed based on the road data obtained in step S310 and the vehicle objects and motion information identified in step S320, to determine the driving-related information of the vehicles.
The present invention covers multiple kinds of driving-related information and therefore has multiple analysis models for driving-related information analysis.
According to one embodiment, data analysis is performed proactively in step S330 to determine the driving-related information. In this case, for example when the driving-related information is a potential collision possibility, step S330 detects vehicles on the road with a potential collision possibility. Collisions can include forward collisions, overtaking collisions, lane-change collisions, and the like. Potential collision detection can be performed in various ways. One way is to apply a collision detection model to the road data to detect vehicles that are likely to collide. Another way is to analyze a large number of actual road collision examples and use deep learning to determine which vehicles have a possibility of collision. The present invention is not limited to a specific way of detecting a potential collision possibility.
According to another embodiment, in step S330 the data analysis can be performed in response to a request from a vehicle 110 to determine that vehicle's driving-related information. In this case, the driving-related information is, for example, scene information related to the scene requested by the vehicle.
Each scene, and the information corresponding to it, can be defined in advance. For example, when the scene is night-vision assistance, the driving-related information includes road and vehicle information within a certain range in front of the vehicle; when the scene is a 360-degree panoramic view, the driving-related information includes all information within a certain range around the vehicle; and when the scene is beyond-line-of-sight perception, the driving-related information includes all information within the blocked portion of the vehicle's line of sight.
To this end, step S330 can further include steps S332 and S334. In step S332, a scene request sent by the vehicle 110 is received, and in step S334, driving-related information corresponding to the scene is determined based on the road data and on the identifier and motion information of the vehicle 110. Since the identifier of the vehicle 110 is known, the dynamic and static information of the other vehicles and the environment around the vehicle 110 can be determined from the road data, so that driving-related information corresponding to the requested scene can be provided.
Whether the data analysis in step S330 is performed proactively or passively in response to a vehicle request, vehicle matching must be performed to determine which vehicle object or objects within the coverage area of the current sensing device 200 should receive the driving analysis result. For example, if potential collision detection is performed in step S330, then after a vehicle with a high collision probability has been determined, the matching vehicle identifier and the corresponding communication channel must be determined. If the driving-related information is scene-related information, then after a scene request from the vehicle 110 has been received, the requesting vehicle must be matched against the vehicles in the coverage area to determine which vehicle issued the scene request, so that the data analysis can be performed for that vehicle.
Vehicle matching can be performed through a variety of matching methods, alone or in combination, such as license plate matching, driving speed and type matching, and fuzzy position matching. According to one embodiment, the vehicle 110 can bind its license plate information through V2X or application-level verification, and this license plate information can then be matched against the vehicle data for the corresponding license plate in the roadside sensing device and the server, thereby achieving license plate matching.
After the driving-related information has been determined in step S330, it is sent to the corresponding vehicle 110 through a predetermined communication method in step S340. In step S340, the communication method associated with the matched vehicle 110 is determined, and the driving-related information is sent to the corresponding vehicle using that communication method. Optionally, the communication method is typically a mobile communication method such as V2X, 5G, 4G, or 3G.
After receiving the driving-related information, the vehicle 110 can process it differently according to its attributes. For example, if it is scene-related data, the driving-related information is displayed, according to the scene definition, on a display or application such as the vehicle's central control screen, smart dashboard, or navigation software.
If it is warning information such as a collision warning, the warning can be presented to the driver in a variety of ways, such as display, voice, alarm, or vibration, according to the type and urgency of the warning.
FIG. 4 shows a schematic diagram of a driving assistance method 400 according to another embodiment of the present invention. The driving assistance method 400 is suitable for execution in a vehicle 110 traveling on a road on which roadside sensing devices 200 are deployed. The method 400 includes step S410. In step S410, driving-related information from the roadside sensing device 200 is received through a predetermined communication method. Step S410 corresponds to step S340 of the method 300 described above with reference to FIG. 3; the driving-related information is therefore generated by the roadside sensing device based on road data within the predetermined range of its road position, and the processing of S410 is not repeated here.
Subsequently, in step S420, the received driving-related information is output in the vehicle 110. In step S420, the output mode can be determined according to the attributes of the driving-related information.
If the driving-related information is warning information such as a collision warning, then in addition to presenting the warning in the vehicle in a conventional way, the method 400 can further include step S430, in which the driver or owner of the vehicle is notified of the potential collision hazard, for example through display, voice, alarm, vibration, or other means according to the type and urgency of the warning. In addition, the method 400 can further include step S440, which can convert the warning information into control of the vehicle, directly controlling the vehicle's driving, or provide various driving assistance capabilities including forward collision warning, overtaking warning, lane-change warning, blind-spot warning, and rear-vehicle protection, so as to reduce the possibility of a collision and thereby provide more efficient, safer, and more direct driving assistance.
If the driving-related information is scene-related data, then as described above with reference to FIG. 3, the method 400 can further include a corresponding step S450. In step S450, a scene request is sent to the roadside sensing device through a predetermined communication method, and in step S420 the driving-related information is displayed, according to the scene definition, on a display or application such as the central control screen, smart dashboard, or navigation software.
In addition, optionally, in order to better construct the road data, the method 400 can further include step S460, in which the vehicle driving information is sent to the roadside sensing device through a predetermined communication method. The processing in step S460 corresponds to step S318 and is not repeated here.
With the driving assistance solution according to the present invention, the sensing capability of the roadside units can be fully utilized, and the sensed data is further analyzed and processed before being provided to the vehicles, so that efficient driving assistance can be provided.
It should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following this detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art should understand that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
In addition, some of the embodiments are described herein as methods or combinations of method elements that can be implemented by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for implementing such a method or method element therefore forms a means for implementing the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given order, whether temporally, spatially, in ranking, or in any other manner.
Although the present invention has been described in terms of a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Moreover, it should be noted that the language used in this specification has been selected principally for readability and instructional purposes and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative rather than restrictive with respect to the scope of the invention, which is defined by the appended claims.

Claims (22)

  1. A driving assistance method, the method comprising the steps of:
    acquiring road data within a predetermined range, the road data comprising static and/or dynamic information of objects within the predetermined range;
    identifying, based on the road data, one or more vehicles among the objects and vehicle motion information;
    determining driving-related information of the identified vehicles based on the road data and the vehicle motion information; and
    sending the driving-related information to the identified vehicles through a predetermined communication method.
  2. The driving assistance method according to claim 1, wherein the step of acquiring road data within a predetermined range comprises:
    acquiring pre-stored static information about the predetermined range;
    obtaining static and/or dynamic information of the objects within the predetermined range using individual sensors of a roadside sensing device deployed in the predetermined range; and
    combining the pre-stored static information and the information obtained by the sensors to generate the road data.
  3. The driving assistance method according to claim 2, wherein the step of acquiring road data within a predetermined range comprises:
    receiving vehicle driving information sent, through a predetermined communication method, by vehicles within the predetermined range; and
    combining the pre-stored static information, the information obtained by the sensors, and the received vehicle driving information to generate the road data.
  4. The driving assistance method according to claim 2 or 3, wherein the step of acquiring the pre-stored static information about the predetermined range comprises:
    determining a geographic position of the roadside sensing device; and
    acquiring, from a server, static information within the predetermined range of the geographic position.
  5. The driving assistance method according to any one of claims 1-4, wherein the step of identifying, based on the road data, one or more vehicles among the objects and vehicle motion information comprises:
    determining, based on motion characteristics of each object, vehicle objects belonging to vehicles and their motion information; and
    identifying an identifier of each vehicle object.
  6. The driving assistance method according to claim 5, wherein the communication method comprises one or more of the following:
    V2X, 5G, 4G, and 3G communication.
  7. The driving assistance method according to any one of claims 1-6, wherein the objects comprise one or more of the following: lane lines, guardrails, barriers, vehicles, pedestrians, and scattered objects;
    and the static and/or dynamic information comprises one or more of the following: position, distance, speed, angular velocity, license plate, type, size, and the like.
  8. The driving assistance method according to any one of claims 2-7, wherein the sensors in the roadside sensing device comprise one or more of the following:
    millimeter-wave radar, lidar, camera, and infrared probe.
  9. The driving assistance method according to any one of claims 3-8, wherein the vehicle driving information comprises one or more of the following:
    current time, size, speed, acceleration, angular velocity, and position.
  10. The driving assistance method according to any one of claims 1-9, wherein the driving-related information comprises a potential collision risk, and the step of determining the driving-related information of the identified vehicles based on the road data and vehicle motion information comprises:
    determining, through modeling or deep learning, the potential collision risk of the identified vehicles based on the road data and the vehicle motion information.
  11. The driving assistance method according to any one of claims 1-9, wherein the step of determining the driving-related information of the identified vehicles based on the road data comprises:
    receiving a scene request sent by a vehicle within the predetermined range; and
    determining, based on the road data, driving-related information corresponding to the scene.
  12. The driving assistance method according to any one of claims 1-11, wherein the method is adapted to be executed in a roadside sensing device deployed in the predetermined range or on a cloud server coupled to the roadside sensing device.
  13. A driving assistance method executed in a vehicle, the vehicle traveling on a road on which a roadside sensing device is deployed, the method comprising the steps of:
    receiving driving-related information through a predetermined communication method, the driving-related information being generated by the roadside sensing device based on road data within its predetermined range; and
    outputting the received driving-related information in the vehicle.
  14. The driving assistance method according to claim 13, further comprising the step of:
    sending vehicle driving information to the roadside sensing device through a predetermined communication method.
  15. The driving assistance method according to claim 13 or 14, wherein the driving-related information comprises a potential collision risk, the method further comprising:
    notifying a driver of the vehicle of the potential collision risk.
  16. The driving assistance method according to claim 15, further comprising:
    controlling the driving of the vehicle based on the received driving-related information so as to reduce the potential collision risk.
  17. The driving assistance method according to any one of claims 13-16, further comprising:
    sending a scene request to the roadside sensing device through a predetermined communication method; and
    wherein the driving-related information comprises driving-related information corresponding to the scene.
  18. A roadside sensing device, comprising:
    individual sensors adapted to obtain static and dynamic information of objects within its predetermined range;
    a storage unit adapted to store road data, the road data comprising static and/or dynamic information of the objects within the predetermined range; and
    a computing unit adapted to execute the method according to any one of claims 1-12.
  19. A driving assistance system, comprising:
    a plurality of roadside sensing devices according to claim 18, deployed at roadside positions; and
    vehicles traveling on the road and executing the driving assistance method according to any one of claims 13-17.
  20. The driving assistance system according to claim 19, further comprising:
    a cloud server adapted to receive the road data of the roadside sensing devices and to combine the road data based on the deployment position of each roadside sensing device so as to generate road data for the entire road.
  21. A computing device, comprising:
    at least one processor; and
    a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for executing the method according to any one of claims 1-17.
  22. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to execute the method according to any one of claims 1-17.
PCT/CN2019/105278 2018-09-21 2019-09-11 Driving assistance method and system WO2020057406A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811108869.2 2018-09-21
CN201811108869.2A CN110936960A (zh) Driving assistance method and system

Publications (1)

Publication Number Publication Date
WO2020057406A1 true WO2020057406A1 (zh) 2020-03-26

Family

ID=69888290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105278 WO2020057406A1 (zh) 2018-09-21 2019-09-11 一种辅助驾驶方法和系统

Country Status (3)

Country Link
CN (1) CN110936960A (zh)
TW (1) TW202031538A (zh)
WO (1) WO2020057406A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540224A (zh) * 2020-06-12 2020-08-14 深圳市元征科技股份有限公司 一种道路数据处理方法及相关设备
CN111899625A (zh) * 2020-07-16 2020-11-06 北京理工大学 智能辅助驾驶开发装置
CN114071417A (zh) * 2020-08-06 2022-02-18 索尼公司 用于无线通信的电子设备和方法、计算机可读存储介质
CN113066289B (zh) * 2021-04-30 2024-03-15 腾讯科技(深圳)有限公司 驾驶辅助处理方法、装置、计算机可读介质及电子设备
CN113538913A (zh) * 2021-07-16 2021-10-22 河南理工大学 一种高速公路超车提示及心理干预装置、方法
CN114368388B (zh) * 2022-01-28 2024-05-03 中国第一汽车股份有限公司 一种驾驶行为分析方法、装置、设备以及存储介质
CN114724366B (zh) * 2022-03-29 2023-06-20 北京万集科技股份有限公司 辅助驾驶方法、装置、设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007103180A2 (en) * 2006-03-03 2007-09-13 Inrix, Inc. Assessing road traffic conditions using data from mobile data sources
US9494940B1 (en) * 2015-11-04 2016-11-15 Zoox, Inc. Quadrant configuration of robotic vehicles
US9507346B1 (en) * 2015-11-04 2016-11-29 Zoox, Inc. Teleoperation system and method for trajectory modification of autonomous vehicles
CN107719363A (zh) * 2016-08-11 2018-02-23 Trw汽车股份有限公司 用于沿着路径引导机动车辆的控制系统和控制方法
EP3315359A1 (en) * 2016-10-28 2018-05-02 Volvo Car Corporation Road vehicle turn signal assist system and method
CN108399792A (zh) * 2018-01-25 2018-08-14 北京墨丘科技有限公司 一种无人驾驶车辆避让方法、装置和电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102431556B (zh) * 2011-11-15 2015-09-30 武汉理工大学 基于车路协同的驾驶员一体化预警装置
CN108417087B (zh) * 2018-02-27 2021-09-14 浙江吉利汽车研究院有限公司 一种车辆安全通行系统及方法
CN108447291B (zh) * 2018-04-03 2020-08-14 南京锦和佳鑫信息科技有限公司 一种智能道路设施系统及控制方法
CN108765982A (zh) * 2018-05-04 2018-11-06 东南大学 车路协同环境下信号控制交叉口车速引导系统及引导方法


Also Published As

Publication number Publication date
TW202031538A (zh) 2020-09-01
CN110936960A (zh) 2020-03-31

Similar Documents

Publication Publication Date Title
WO2020057406A1 (zh) Driving assistance method and system
US9922565B2 (en) Sensor fusion of camera and V2V data for vehicles
CN111731101B (zh) AR-HUD display method and system fusing V2X information
WO2020057407A1 (zh) Vehicle navigation assistance method and system
EP3745376B1 (en) Method and system for determining driving assisting data
CN110796007B (zh) Scene recognition method and computing device
CN111915915A (zh) Driving scene reconstruction method, apparatus, system, vehicle, device, and storage medium
CN111429739A (zh) Driving assistance method and system
CN111354182A (zh) Driving assistance method and system
US8543290B2 (en) Vehicle information providing device
CN111354214B (zh) Parking assistance method and system
CN111354222A (zh) Driving assistance method and system
CN111508276B (zh) High-precision-map-based V2X wrong-way overtaking warning method, system, and medium
JP7362733B2 (ja) Automated crowdsourcing of road environment information
US20200294432A1 (en) Advertisement display device, vehicle, and advertisement display method
US20200005562A1 (en) Method for ascertaining illegal driving behavior by a vehicle
EP4102323B1 (en) Vehicle remote control device, vehicle remote control system, vehicle remote control method, and vehicle remote control program
US12030512B2 (en) Collision warning system for a motor vehicle having an augmented reality head up display
CN112950995B (zh) Parking assistance device, assistance device, corresponding methods, and vehicle and server
CN110763244B (zh) Electronic map generation system and method
JP2022056153A (ja) Temporary-stop detection device, temporary-stop detection system, and temporary-stop detection program
EP4357944A1 (en) Identification of unknown traffic objects
CN113415289B (zh) Identification device and method for unmanned vehicles
CN117470254B (zh) Radar-service-based in-vehicle navigation system and method
US20240232715A9 (en) Lane-assignment for traffic objects on a road

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19861854

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2101001563

Country of ref document: TH

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19861854

Country of ref document: EP

Kind code of ref document: A1