WO2021212379A1 - Lane line detection method and device - Google Patents

Lane line detection method and device

Info

Publication number
WO2021212379A1
WO2021212379A1 · PCT/CN2020/086196 · CN2020086196W
Authority
WO
WIPO (PCT)
Prior art keywords
lane
vehicle
surrounding vehicle
probability
car
Prior art date
Application number
PCT/CN2020/086196
Other languages
English (en)
French (fr)
Inventor
徐建锋
郭姣
张晓洪
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN202080005027.3A priority Critical patent/CN112703506B/zh
Priority to PCT/CN2020/086196 priority patent/WO2021212379A1/zh
Publication of WO2021212379A1 publication Critical patent/WO2021212379A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • This application relates to the field of automatic driving, and in particular to a lane line detection method and device.
  • The lane line is used to guide the path planning of the autonomous vehicle, ensuring the safety, comfort, and intelligence of the vehicle during autonomous driving.
  • an autonomous driving vehicle can use real-time vision to detect the lane line on the road where it is located, so as to perform automatic driving path planning based on the detected lane line.
  • The real-time-updated locations of vehicles around the ego vehicle, that is, the locations of the surrounding vehicles, can be obtained.
  • In one prior-art approach, the lateral distance is used to translate the trajectories of the surrounding vehicles, and the translated trajectories are then averaged to obtain the lane line.
  • Obstacle information around the ego vehicle is then used to further bound the lane line, and the lane range of the ego vehicle, that is, the lane line of the lane in which the ego vehicle is located, is output.
  • In another prior-art approach, a surrounding vehicle's trajectory is taken as the center line of that vehicle's lane; using this trajectory and the preset lane width, the center line of the surrounding vehicle's lane is translated and averaged to obtain the center line of the lane in which the ego vehicle is located, and finally that center line is translated by the preset lane width to determine the range of the ego vehicle's lane.
  • The lane lines detected by the above prior art may deviate from the true lane, affecting vehicle safety. Moreover, neither technique distinguishes whether surrounding traffic is changing lanes, which may compromise safety when the vehicle passes through an intersection. The above techniques are therefore unsuitable for more complex intersection scenes, such as an intersection with two exit lanes and three entrance lanes.
  • Alternatively, observation data obtained from various sensors can be fused to obtain the lane line of the lane in which the ego vehicle is located.
  • However, the resulting lane line may be unstable, affecting the safety of the vehicle when passing through the intersection scene.
  • In view of this, the present application provides a lane line detection method and device. The lane-changing intention of each surrounding vehicle is recognized from its lane-change probability, so that the surrounding vehicles can be filtered; then, based on the association probability between each surrounding vehicle and the center line of the ego vehicle's virtual lane, the lane line of the ego vehicle's lane is determined from the trajectory of an appropriate surrounding vehicle and a preset width, improving the accuracy of lane line detection and ensuring vehicle safety.
  • the present application provides a lane line detection method used in the field of automatic driving.
  • The method includes: determining, according to the movement trajectory of a surrounding vehicle, the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane.
  • A surrounding vehicle is a vehicle whose distance from the ego vehicle is less than a preset distance. The changing trend of the distance between the surrounding vehicle and the center line of the virtual lane, together with the surrounding vehicle's lane-change probability at the previous moment, is then analyzed to obtain the surrounding vehicle's lane-change probability at the current moment.
  • The lane-change probability is the probability that a vehicle changes the lane it is in.
  • The preset transfer coefficient is the conversion coefficient used when converting the association probability between a surrounding vehicle and the center line of the virtual lane at the previous moment into the association probability at the current moment. The association probability represents the distance between the surrounding vehicle and the center line of the virtual lane, and the similarity between the surrounding vehicle's trajectory and that center line.
  • Finally, the lane line of the lane in which the ego vehicle is located is determined according to the trajectory of the target vehicle and the preset width.
  • The target vehicle is the surrounding vehicle with the highest association probability with the center line of the virtual lane.
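The final step above, turning the target vehicle's trajectory and the preset width into the two lane lines, can be sketched as follows. The point representation, the local-heading estimate, and the function name are illustrative assumptions; the claim only specifies that the lane line is determined from the trajectory and the preset width.

```python
# Hypothetical sketch: derive left/right lane lines by offsetting the
# target vehicle's trajectory by half the preset lane width.
import math

def lane_lines_from_trajectory(trajectory, lane_width):
    """trajectory: list of (x, y) points ordered by travel direction.
    Returns (left_line, right_line), each a list of (x, y) points."""
    half = lane_width / 2.0
    left, right = [], []
    for i, (x, y) in enumerate(trajectory):
        # Approximate the local heading from the neighbouring points.
        x0, y0 = trajectory[max(i - 1, 0)]
        x1, y1 = trajectory[min(i + 1, len(trajectory) - 1)]
        heading = math.atan2(y1 - y0, x1 - x0)
        # Unit normal pointing to the left of the travel direction.
        nx, ny = -math.sin(heading), math.cos(heading)
        left.append((x + half * nx, y + half * ny))
        right.append((x - half * nx, y - half * ny))
    return left, right
```

For a straight trajectory along the x-axis with a 3.5 m lane width, the sketch yields parallel lines offset by 1.75 m on each side.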
  • In this way, the application can determine the changing trend of the distance between a surrounding vehicle and the center line of the virtual lane from the surrounding vehicle's trajectory, and then, from that trend and the surrounding vehicle's lane-change probability at the previous moment, determine its lane-change probability at the current moment, thereby identifying the surrounding vehicle's current lane-changing intention.
  • Further, the application can determine the association probability between a surrounding vehicle and the center line of the virtual lane at the current moment, based on the surrounding vehicle's lane-change probability at the current moment, its association probability with the center line at the previous moment, its current position relative to the center line, and the preset transfer coefficient.
  • In a possible implementation, the virtual lane is parallel to the line connecting the front and rear of the ego vehicle, the width of the virtual lane is the preset width, and the ego vehicle is located on the center line of the virtual lane.
  • In a possible implementation, the trajectory of a surrounding vehicle is a line connecting multiple position points, where the longitudinal distance between adjacent position points equals the preset distance interval, and the position points include the point corresponding to each surrounding vehicle's current position.
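The trajectory definition above can be sketched as a simple resampling step. Working on a one-dimensional longitudinal coordinate and the function name are assumptions for illustration; the claim only requires equal longitudinal spacing and inclusion of the current position point.

```python
# Illustrative sketch: resample a raw track so that consecutive points are
# a fixed longitudinal interval apart, keeping the newest (current) position
# as the final point, as the trajectory definition requires.
def resample_track(points, interval):
    """points: list of longitudinal positions (floats), oldest first.
    Returns positions spaced `interval` apart, ending at the newest point."""
    newest = points[-1]
    oldest = points[0]
    resampled = []
    s = newest
    # Walk backwards from the newest point in steps of `interval`.
    while s >= oldest:
        resampled.append(s)
        s -= interval
    resampled.reverse()
    return resampled
```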
  • In a possible implementation, analyzing the changing trend of the distance and the previous lane-change probability to obtain the current lane-change probability includes: analyzing the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane to obtain the surrounding vehicle's first observation probability at the current moment, which is a predicted value of the vehicle's lane-change probability; then performing a weighted summation of the first observation probability at the current moment and the lane-change probability at the previous moment to determine the lane-change probability at the current moment.
  • In this way, the application analyzes the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane to obtain the predicted value of the surrounding vehicle's lane-change probability at the current moment, that is, the first observation probability, and then performs a weighted summation of the first observation probability and the lane-change probability at the previous moment to obtain the lane-change probability at the current moment.
  • Because the lane-change probability at the previous moment is also taken into account, the resulting lane-change probability at the current moment is more accurate, allowing the surrounding vehicle's lane-changing intention to be identified more precisely.
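The recursive update above, a weighted sum of the first observation probability and the previous moment's lane-change probability, can be sketched as follows. The mapping from distance trend to observation probability and the weight `alpha` are illustrative assumptions; the claim only specifies a weighted summation.

```python
# Minimal sketch of the recursive lane-change probability update.
def lane_change_probability(prev_prob, distance_trend, alpha=0.6):
    """prev_prob: lane-change probability at the previous moment.
    distance_trend: signed rate at which the surrounding vehicle's distance
    to the virtual lane center line is changing (positive = moving away).
    Returns the lane-change probability at the current moment."""
    # First observation probability: map the distance trend into [0, 1].
    # A vehicle drifting away from the center line is more likely changing lane.
    obs = min(max(0.5 + distance_trend, 0.0), 1.0)
    # Weighted sum of the observation and the previous probability.
    return alpha * obs + (1.0 - alpha) * prev_prob
```

A stationary trend leaves a neutral probability unchanged, while a growing distance pushes the probability towards 1.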
  • In a possible implementation, the lane-change probability before a surrounding vehicle enters the preset area is a preset lane-change probability, where the preset area is an intersection.
  • In a possible implementation, using the preset transfer coefficient to determine the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment includes: first determining the first target surrounding vehicle according to a preset lane-change probability threshold and the surrounding vehicle's lane-change probability at the current moment.
  • Then, according to the position of the first target vehicle relative to the center line of the virtual lane at the current moment, the lateral distance between the first target vehicle and the center line, and the angle between the first target vehicle's heading and the center line, are determined. A weighted summation of this angle and lateral distance determines the second observation probability of the first target vehicle at the current moment, that is, the predicted value of the association probability between the vehicle and the center line of the virtual lane.
  • Finally, the second observation probability of the first target vehicle at the current moment, the preset transfer coefficient, and the association probability between the first target vehicle and the center line of the virtual lane at the previous moment are normalized to determine the association probability of the first target vehicle and the center line of the virtual lane at the current moment.
  • In this way, the application can identify each surrounding vehicle's lane-changing intention at the current moment from the preset lane-change probability threshold and its current lane-change probability, and screen out surrounding vehicles accordingly to determine the first target vehicles, improving the accuracy of the association probabilities determined at the current moment.
  • The application also takes into account the association probability between the first target vehicle and the center line of the virtual lane at the previous moment, further improving the accuracy of the current association probability, and thus the accuracy of the lane line determined for the ego vehicle's lane, ensuring vehicle safety.
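The association-probability update can be sketched as follows: a second observation probability per vehicle derived from its heading angle and lateral distance to the virtual lane center line, combined with the preset transfer coefficient and the previous association probabilities, then normalized. The weights, the distance-to-score mapping, and the way the transfer coefficient enters the combination are illustrative assumptions.

```python
# Hedged sketch of the association-probability update and normalization.
def update_association(prev_probs, angles, lateral_dists,
                       transfer_coeff=0.8, w_angle=0.5, w_dist=0.5):
    """prev_probs[i]: association probability of vehicle i at the previous
    moment. angles[i]: |heading angle to center line| in radians.
    lateral_dists[i]: |lateral distance to center line| in metres.
    Returns normalized association probabilities at the current moment."""
    unnorm = []
    for p, a, d in zip(prev_probs, angles, lateral_dists):
        # Second observation probability: closer and better aligned => higher.
        obs = w_angle / (1.0 + a) + w_dist / (1.0 + d)
        # Combine with the previous probability via the transfer coefficient.
        unnorm.append(obs * (transfer_coeff * p + (1.0 - transfer_coeff)))
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

The target vehicle would then be the index with the highest resulting probability.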
  • In a possible implementation, determining the first target surrounding vehicle according to the preset lane-change probability threshold and the surrounding vehicles' lane-change probabilities at the current moment includes: if the current lane-change probabilities of all the surrounding vehicles are greater than or equal to the preset threshold, determining that all of them are first target vehicles; if some surrounding vehicles have current lane-change probabilities greater than or equal to the threshold while others are below it, removing the vehicles at or above the threshold and determining the vehicles below the threshold as the first target vehicles.
  • If the current lane-change probabilities of all surrounding vehicles are greater than or equal to the preset threshold, that is, all surrounding vehicles have lane-changing intentions, it can be determined that all vehicles need to change lanes in the current scene.
  • Otherwise, the vehicles intending to change lanes are removed when determining the first target vehicles, reducing the number of vehicles whose association probability must be determined and improving the efficiency of determining the lane line of the ego vehicle's lane.
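The screening rule above can be sketched directly; the function name and the dictionary representation are assumptions. If every surrounding vehicle exceeds the lane-change threshold, all are kept; otherwise the vehicles intending to change lanes are discarded.

```python
# Sketch of the first-target-vehicle screening rule.
def select_first_targets(lane_change_probs, threshold):
    """lane_change_probs: {vehicle_id: lane-change probability at the
    current moment}. Returns the ids selected as first target vehicles."""
    keeping = [vid for vid, p in lane_change_probs.items() if p < threshold]
    if not keeping:
        # All vehicles intend to change lane: the whole scene is a
        # lane-change scene, so all are kept as first target vehicles.
        return list(lane_change_probs)
    return keeping
```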
  • In a possible implementation, the method further includes: first determining the lane in which the ego vehicle is located after leaving the preset area as the target lane, where the preset area is an intersection; then determining the ego vehicle's target point on the target lane according to its speed and heading at the first position point, where the first position point is the position point corresponding to the ego vehicle's location before it leaves the preset area.
  • If the target point lies on the center line of the target lane, the line connecting the target point and the first position point is determined as the center line of the ego vehicle's lane before it leaves the preset area; if the target point does not lie on the center line of the target lane, the line connecting the foot of the perpendicular from the target point to the center line of the target lane and the first position point is determined as the center line of the ego vehicle's lane before it leaves the preset area.
  • In this way, before the ego vehicle leaves the preset area, that is, the intersection, when the vehicle is at the first position point, the application can determine the center line of its lane from the center line of the target lane and the ego vehicle's speed and heading, and then obtain a smoother lane line for that lane from the preset width and this center line, reducing lateral jitter when the vehicle leaves the preset area and improving driving safety.
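The smoothing step above can be sketched as projecting the predicted target point onto the target lane's center line (foot of the perpendicular) and connecting it to the first position point. The geometry helpers, the straight-line center line, and the lookahead-time parameter are assumptions for illustration.

```python
# Illustrative sketch of determining the pre-exit center line.
import math

def centerline_before_exit(first_point, speed, heading, lane_p0, lane_p1,
                           lookahead=1.0):
    """first_point: (x, y) of the ego vehicle before leaving the intersection.
    speed, heading: ego speed (m/s) and heading (radians).
    lane_p0, lane_p1: two points defining the target lane's center line.
    Returns (start, end) of the center line segment before exit."""
    # Predict the target point from the vehicle's speed and heading.
    tx = first_point[0] + speed * lookahead * math.cos(heading)
    ty = first_point[1] + speed * lookahead * math.sin(heading)
    # Foot of the perpendicular from (tx, ty) onto the center line.
    dx, dy = lane_p1[0] - lane_p0[0], lane_p1[1] - lane_p0[1]
    t = ((tx - lane_p0[0]) * dx + (ty - lane_p0[1]) * dy) / (dx * dx + dy * dy)
    foot = (lane_p0[0] + t * dx, lane_p0[1] + t * dy)
    # If the target point already lies on the line, the foot equals it.
    return first_point, foot
```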
  • the present application provides a lane line detection device used in the field of automatic driving.
  • The device includes a determining unit and an analyzing unit. The determining unit is used to determine, according to the trajectory of a surrounding vehicle, the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane, where a surrounding vehicle is a vehicle whose distance from the ego vehicle is less than a preset distance.
  • The analyzing unit is used to analyze the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane, together with the surrounding vehicle's lane-change probability at the previous moment, to obtain the surrounding vehicle's lane-change probability at the current moment; the lane-change probability is the probability that the vehicle changes the lane it is in.
  • The determining unit is also used to determine the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment, according to the surrounding vehicle's lane-change probability at the current moment, its association probability with the center line at the previous moment, its current position relative to the center line, and the preset transfer coefficient.
  • The preset transfer coefficient is the conversion coefficient used when converting the association probability between a surrounding vehicle and the center line of the virtual lane at the previous moment into the association probability at the current moment. The association probability represents the distance between the surrounding vehicle and the center line of the virtual lane, and the similarity between the surrounding vehicle's trajectory and that center line.
  • The determining unit is also used to determine the lane line of the lane in which the ego vehicle is located according to the trajectory of the target vehicle and the preset width, where the target vehicle is the surrounding vehicle with the highest association probability with the center line of the virtual lane at the current moment.
  • In a possible implementation, the virtual lane is parallel to the line connecting the front and rear of the ego vehicle, the width of the virtual lane is the preset width, and the ego vehicle is located on the center line of the virtual lane.
  • In a possible implementation, the trajectory of a surrounding vehicle is a line connecting multiple position points, where the longitudinal distance between adjacent position points equals the preset distance interval, and the position points include the point corresponding to each surrounding vehicle's current position.
  • In a possible implementation, the analyzing unit is used to analyze the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane to obtain the surrounding vehicle's first observation probability at the current moment, that is, the predicted value of the vehicle's lane-change probability. The analyzing unit is also used to perform a weighted summation of the first observation probability at the current moment and the lane-change probability at the previous moment to determine the lane-change probability at the current moment.
  • In a possible implementation, the lane-change probability before a surrounding vehicle enters the preset area is a preset lane-change probability, where the preset area is an intersection.
  • In a possible implementation, the determining unit is configured to determine the first target surrounding vehicle according to the preset lane-change probability threshold and the surrounding vehicle's lane-change probability at the current moment. The determining unit is also used to determine, according to the position of the first target vehicle relative to the center line of the virtual lane at the current moment, the lateral distance between the first target vehicle and the center line, and the angle between the first target vehicle's heading and the center line.
  • The determining unit is also used to perform a weighted summation of this angle and lateral distance to determine the second observation probability of the first target vehicle at the current moment, that is, the predicted value of the association probability between the vehicle and the center line of the virtual lane.
  • Finally, the determining unit is also used to normalize the second observation probability of the first target vehicle at the current moment, the preset transfer coefficient, and the association probability between the first target vehicle and the center line of the virtual lane at the previous moment, to determine the association probability of the first target vehicle and the center line of the virtual lane at the current moment.
  • In a possible implementation, the determining unit is configured to determine the first target surrounding vehicle according to the preset lane-change probability threshold and the surrounding vehicles' lane-change probabilities at the current moment, including: if the current lane-change probabilities of multiple surrounding vehicles are all greater than or equal to the preset threshold, determining that the multiple surrounding vehicles are all first target vehicles.
  • In a possible implementation, the determining unit is further configured to determine the lane in which the ego vehicle is located after leaving the preset area as the target lane, where the preset area is an intersection. The determining unit is also used to determine the ego vehicle's target point on the target lane according to its speed and heading at the first position point, where the first position point is the position point corresponding to the ego vehicle's location before it leaves the preset area. If the target point lies on the center line of the target lane, the determining unit determines the line connecting the target point and the first position point as the center line of the ego vehicle's lane before it leaves the preset area.
  • If the target point does not lie on the center line of the target lane, the determining unit determines the line connecting the foot of the perpendicular from the target point to the center line of the target lane and the first position point as the center line of the ego vehicle's lane before it leaves the preset area.
  • the present application provides a lane line detection device, which includes: a processor and a memory; wherein the memory is used to store computer program instructions, and the processor runs the computer program instructions to make the lane line detection device execute the first aspect The lane line detection method.
  • the present application provides a computer-readable storage medium including computer instructions, which when executed by a processor, cause the lane line detection device to execute the lane line detection method as described in the first aspect.
  • the present application provides a computer program product, which is characterized in that when the computer program product runs on a processor, it causes the lane line detection device to execute the lane line detection method as described in the first aspect.
  • FIG. 1 is a first structural diagram of an automatic driving vehicle provided by an embodiment of the application
  • FIG. 2 is a second structural diagram of an automatic driving vehicle provided by an embodiment of the application.
  • FIG. 3 is a schematic structural diagram of a computer system provided by an embodiment of this application.
  • FIG. 4 is a schematic structural diagram of a chip system provided by an embodiment of the application.
  • FIG. 5 is a first schematic diagram of a cloud-side device commanding an automatic driving vehicle according to an embodiment of this application;
  • FIG. 6 is a second schematic diagram of a cloud-side device commanding an automatic driving vehicle according to an embodiment of this application.
  • FIG. 7 is a schematic structural diagram of a computer program product provided by an embodiment of this application.
  • FIG. 8 is a schematic flowchart of a lane line detection method provided by an embodiment of the application.
  • FIG. 9(a) is a schematic diagram of a virtual lane provided by an embodiment of this application.
  • Figure 9(b) is a schematic diagram of a surrounding vehicle provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of a preset distance interval provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of the position of a surrounding vehicle provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of a Markov probability model provided by an embodiment of the application.
  • FIG. 13 is a first schematic diagram of a motion trajectory of a self-vehicle provided by an embodiment of the application.
  • FIG. 14 is a second schematic diagram of the motion trajectory of a self-vehicle provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of a preset lane changing probability and a preset associated probability provided by an embodiment of this application;
  • FIG. 16(a) is a third schematic diagram of the motion trajectory of a self-vehicle provided by an embodiment of this application;
  • FIG. 16(b) is a fourth schematic diagram of the motion trajectory of a self-vehicle provided by an embodiment of this application.
  • FIG. 17 is a first schematic diagram of a lane line detection device provided by an embodiment of the application.
  • FIG. 18 is a second schematic diagram of a lane line detection device provided by an embodiment of the application.
  • the embodiment of the present application provides a lane line detection method, which is applied to a vehicle, or applied to other devices (such as a cloud server, a mobile phone terminal, etc.) having a function of controlling a vehicle.
  • the vehicle may be an automatic driving vehicle
  • The automatic driving vehicle may be a vehicle with partial automatic driving functions or a vehicle with full automatic driving functions; that is to say, the automatic driving level of the vehicle may refer to the classification of the Society of Automotive Engineers (SAE) of the United States.
  • the vehicle or other equipment can implement the lane line detection method provided in the embodiments of the present application through the components (including hardware and software) included in the vehicle or other equipment.
  • In the embodiments of the present application, the trajectory of each surrounding vehicle, that is, each vehicle whose distance from the ego vehicle is less than the preset distance, is obtained.
  • The changing trend of the distance between the surrounding vehicle and the center line of the virtual lane is determined, and this trend, together with the surrounding vehicle's lane-change probability at the previous moment, is analyzed to determine its lane-change probability at the current moment.
  • The lane-change probability of each surrounding vehicle at the current moment is then used to determine its association probability with the center line of the virtual lane at the current moment, and the target vehicle is determined from these association probabilities.
  • Finally, the lane line of the lane in which the ego vehicle is located is determined from the trajectory of the target vehicle, that is, the surrounding vehicle with the highest association probability with the center line of the virtual lane at the current moment, and the preset width, improving the accuracy of the detected lane line and ensuring the safety of the vehicle.
  • FIG. 1 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of the application.
  • the vehicle 100 is configured in a fully or partially automatic driving mode.
  • For example, the vehicle 100 can determine, based on the trajectories of the surrounding vehicles, each surrounding vehicle's lane-change probability at the previous moment, its position at the current moment, and its association probability with the center line of the virtual lane at the previous moment, the surrounding vehicle's lane-change probability and association probability at the current moment, and thereby determine the lane line of the lane in which the vehicle 100 is located.
  • the vehicle 100 may include various subsystems, such as a travel system 110, a sensor system 120, a control system 130, one or more peripheral devices 140 and a power supply 150, a computer system 160, and a user interface 170.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each of the subsystems and elements of the vehicle 100 may be wired or wirelessly interconnected.
  • the travel system 110 may include components that provide power movement for the vehicle 100.
  • the travel system 110 may include an engine 111, a transmission 112, an energy source 113 and wheels 114.
  • the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 111 converts the energy source 113 into mechanical energy.
  • Examples of the energy source 113 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 113 may also provide energy for other systems of the vehicle 100.
  • the transmission device 112 can transmit the mechanical power from the engine 111 to the wheels 114.
  • the transmission 112 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 112 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that may be coupled to one or more wheels 114.
  • the sensor system 120 may include several sensors that sense information about the environment around the vehicle 100.
  • the sensor system 120 may include a positioning system 121 (the positioning system may be a global positioning system (GPS), a Beidou system or other positioning systems), an inertial measurement unit (IMU) 122, and a radar 123, Lidar 124, and camera 125.
  • The sensor system 120 may also include sensors that monitor internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.).
  • Sensor data collected by one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key to the safe automated operation of the vehicle 100.
  • the positioning system 121 can be used to estimate the geographic location of the vehicle 100.
  • the IMU 122 is used to sense the position and orientation change of the vehicle 100 based on the inertial acceleration.
  • the IMU 122 may be a combination of an accelerometer and a gyroscope.
  • the radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing the object, the radar 123 may also be used to sense the speed and/or the forward direction of the object.
  • the lidar 124 can use laser light to sense objects in the environment where the vehicle 100 is located.
  • the lidar 124 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100 and multiple images in the cockpit of the vehicle.
  • the camera 125 may be a still camera or a video camera.
  • the control system 130 may control the operation of the vehicle 100 and its components.
  • the control system 130 may include various components, including a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
  • the steering system 131 is operable to adjust the forward direction of the vehicle 100.
  • it may be a steering wheel system.
  • the throttle 132 is used to control the operating speed of the engine 111, thereby controlling the speed of the vehicle 100.
  • the braking unit 133 is used to control the vehicle 100 to decelerate.
  • the braking unit 133 may use friction to slow down the wheels 114.
  • the braking unit 133 may also convert the kinetic energy of the wheels 114 into electric current.
  • the braking unit 133 may also take other forms to slow down the rotation speed of the wheels 114 to control the speed of the vehicle 100.
  • the computer vision system 134 can process and analyze the images captured by the camera 125 to identify objects and/or features in the surrounding environment of the vehicle 100 and the driver's physical features and facial features in the cockpit of the vehicle.
  • the objects and/or features may include traffic signals, road conditions, and obstacles, and the driver's physical and facial features include the driver's behavior, line of sight, expression, and the like.
  • the computer vision system 134 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 134 can also be used to map the environment, track objects, estimate the speed of objects, determine driver behavior, recognize faces, and so on.
  • the route control system 135 is used to determine the travel route of the vehicle 100.
  • the route control system 135 may combine data from sensors, the positioning system 121, and one or more predetermined maps to determine a travel route for the vehicle 100.
  • the obstacle avoidance system 136 is used to identify, evaluate and avoid or otherwise cross over potential obstacles in the environment of the vehicle 100.
  • the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral device 140.
  • the peripheral device 140 may include a wireless communication system 141, an onboard computer 142, a microphone 143, and/or a speaker 144.
  • the peripheral device 140 provides a means for the user of the vehicle 100 to interact with the user interface 170.
  • the onboard computer 142 may provide information to the user of the vehicle 100.
  • the user interface 170 can also operate the on-board computer 142 to receive user input.
  • the on-board computer 142 can be operated through a touch screen.
  • the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the microphone 143 may receive audio (eg, voice commands or other audio input) from the user of the vehicle 100.
  • the speaker 144 may output audio to the user of the vehicle 100.
  • the wireless communication system 141 may wirelessly communicate with one or more devices directly or via a communication network.
  • the wireless communication system 141 may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication.
  • the wireless communication system 141 may use WiFi to communicate with a wireless local area network (WLAN).
  • the wireless communication system 141 may directly communicate with the device using an infrared link, Bluetooth, or ZigBee.
  • the wireless communication system 141 may also use other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices.
  • the power supply 150 may provide power to various components of the vehicle 100.
  • the power source 150 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 150 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
  • the computer system 160 may include at least one processor 161 that executes instructions 1621 stored in a non-transitory computer readable medium such as a data storage device 162.
  • the computer system 160 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 161 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • although FIG. 1 functionally illustrates the processor, the memory, and other elements in the same physical enclosure, those of ordinary skill in the art should understand that the processor, computer system, or memory may actually comprise multiple processors, computer systems, or memories that may or may not be housed in the same physical enclosure.
  • the memory may be a hard drive, or other storage medium located in a different physical enclosure.
  • a reference to a processor or computer system will be understood to include a reference to a collection of processors or computer systems or memories that may operate in parallel, or a reference to a collection of processors or computer systems or memories that may not operate in parallel.
  • some components, such as steering components and deceleration components, may each have their own processor that performs only calculations related to component-specific functions.
  • the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
  • some of the processes described herein are executed on a processor disposed in the vehicle, while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
  • the data storage device 162 may include instructions 1621 (eg, program logic), which may be executed by the processor 161 to perform various functions of the vehicle 100, including those described above.
  • the data storage device 162 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the traveling system 110, the sensor system 120, the control system 130, and the peripheral device 140.
  • the data storage device 162 may also store data, such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 160 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the data storage device 162 may obtain driver information and surrounding environmental information from the sensor system 120 or other components of the vehicle 100.
  • the environmental information may be the road category of the road where the vehicle is currently located, the current weather, the current time, and so on.
  • the driver information may be the identity of the driver, the driving experience of the driver, the current physical condition of the driver, and so on.
  • the data storage device 162 may also store the state information of the vehicle itself and the state information of other surrounding vehicles or equipment whose distance from the vehicle is less than a preset distance.
  • the status information includes, but is not limited to, the vehicle's speed, acceleration, heading angle, position, and movement trajectory.
  • for example, the vehicle obtains its own speed as well as the speeds and movement trajectories of other vehicles.
  • the processor 161 can process the information acquired from the data storage device 162 according to a preset algorithm to determine, for each surrounding vehicle, the lane-changing probability at the current moment and the probability that the vehicle's trajectory is associated with the centerline of the virtual lane at the current moment; determine the centerline of the lane where the own vehicle is located according to the trajectory of the vehicle with the highest association probability; determine the lane lines of that lane according to the centerline and a preset lane width; and then determine a driving strategy suitable for the own vehicle according to the lane lines, controlling the autonomous vehicle so that it passes smoothly through the intersection.
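The centerline-plus-preset-width step can be sketched geometrically: take the trajectory with the highest association probability as the centerline and offset each point by half the lane width, perpendicular to the local direction of travel. This is an illustrative reading of the step, not the claimed implementation; all names and the offset geometry are assumptions:

```python
import math

def lane_lines_from_trajectories(trajectories, assoc_probs, lane_width):
    """Pick the trajectory with the highest association probability as the
    lane centerline, then offset each point by +/- half the preset lane
    width, perpendicular to the local heading along the trajectory."""
    center = max(zip(trajectories, assoc_probs), key=lambda t: t[1])[0]
    left, right = [], []
    for i, (x, y) in enumerate(center):
        # Estimate the local heading from neighbouring trajectory points.
        nx, ny = center[min(i + 1, len(center) - 1)]
        px, py = center[max(i - 1, 0)]
        theta = math.atan2(ny - py, nx - px)
        ox = -math.sin(theta) * lane_width / 2   # perpendicular offset, x
        oy = math.cos(theta) * lane_width / 2    # perpendicular offset, y
        left.append((x + ox, y + oy))
        right.append((x - ox, y - oy))
    return center, left, right

# Straight northbound trajectory wins (probability 0.9); 3.5 m lane width
# puts the lane lines 1.75 m to either side of the centerline.
traj_a = [(0.0, 0.0), (0.0, 5.0), (0.0, 10.0)]
traj_b = [(3.0, 0.0), (3.5, 5.0), (4.5, 10.0)]
center, left, right = lane_lines_from_trajectories([traj_a, traj_b], [0.9, 0.4], 3.5)
```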
  • the user interface 170 is used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 170 may include an interface for interacting and exchanging information with one or more input/output devices in the set of peripheral devices 140, where the one or more input/output devices may be, for example, one or more of the wireless communication system 141, the onboard computer 142, the microphone 143, and the speaker 144.
  • the computer system 160 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the traveling system 110, the sensor system 120, and the control system 130) and from the user interface 170. For example, the computer system 160 may use input from the control system 130 to control the steering system 131 to avoid obstacles detected by the sensor system 120 and the obstacle avoidance system 136. In some embodiments, the computer system 160 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the data storage device 162 may exist partially or completely separately from the vehicle 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation to the embodiments of the present application.
  • a self-driving vehicle traveling on the road can identify its surrounding environment, its own position within that environment, and the speeds and orientations of surrounding vehicles, so as to determine the surrounding vehicle whose state is most similar to that of the vehicle 100.
  • the characteristics of each vehicle in the surrounding environment whose distance from the vehicle 100 does not exceed a preset distance can be considered independently; based on the lane-changing probability of each surrounding vehicle at the current moment, or the probability that each surrounding vehicle is associated with the centerline of the virtual lane of the vehicle 100 at the current moment, the most suitable vehicle for the vehicle 100 to follow can be determined, and the movement trajectory of the vehicle 100 over the next period of time and the lane lines of the lane where the vehicle 100 is located can then be determined to support automatic driving.
  • the autonomous vehicle 100, or a computing device associated with the autonomous vehicle 100, may determine the lane lines of the vehicle's lane and the movement trajectory of the vehicle within a certain period of time based on the states of the vehicles in the surrounding environment (for example, their movement trajectories, positions, and speeds) and the centerline of the virtual lane where the vehicle is located. In this process, other factors may also be considered, such as the trajectory of the vehicle 100 when it previously passed through this road.
  • the computer device can also provide instructions to adjust the speed and steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (for example, cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, trolley, or the like; the embodiments of the present application do not particularly limit this.
  • the autonomous driving vehicle may also include a hardware structure and/or software module, which implements the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function among the above-mentioned functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • the vehicle may include the following modules:
  • the sensing module 201 is used to obtain video stream data, radar/laser point clouds, and other data through roadside sensors, vehicle-mounted sensors, and the like, and to process the raw video stream data and laser point cloud data collected by these sensors to obtain information such as the positions of obstacles in the surrounding environment of the vehicle and the positions of lane lines (for example, the positions of vehicles, pedestrians, traffic signs, and traffic lights, and the positions of sidewalks, stop lines, and the like).
  • the roadside sensor and the vehicle-mounted sensor can be laser radar, millimeter wave radar, vision sensor, and so on.
  • the sensing module 201 is also used to send the positions of obstacles in the surrounding environment of the vehicle, the positions of lane lines, and other information determined based on the data collected by all sensors, a certain type of sensor, or a single sensor, to the fusion module 202, the positioning module 203, the road structure recognition module 204, and the prediction module 205.
  • the fusion module 202 is used to obtain information such as the positions of obstacles in the surrounding environment of the own vehicle and the positions of lane lines from the perception module 201, analyze this information, and fuse the obstacles, lane lines, and the like in the surrounding environment to determine which detections belong to the same obstacle (such as the same vehicle) or the same lane line (such as the same parking space boundary), thereby further determining the type, position, size, and speed of each obstacle, and the type, position, shape, and other attributes of each lane line.
  • the fusion module 202 is also used to analyze the information obtained from the perception module 201, and determine information such as the area where the vehicle can pass, the location where the vehicle cannot pass, and so on.
  • the fusion module 202 is also used to send information such as the type, position, size, and speed of each obstacle, the type, position, and shape of each lane line, and the areas where the vehicle can and cannot pass, to the positioning module 203, the road structure recognition module 204, and the prediction module 205.
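The fusion step that groups detections belonging to the same obstacle can be illustrated with a simple distance-gated association. This is a hedged sketch of one possible approach only; the gate value, the greedy clustering, and all names are assumptions, not the patented method:

```python
def fuse_detections(detections, gate=2.0):
    """Greedy association: detections (e.g. from camera and radar) whose
    positions lie within `gate` metres of an existing cluster are treated
    as the same obstacle; the cluster position is the running mean."""
    clusters = []  # each cluster: [sum_x, sum_y, count]
    for x, y in detections:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (x - cx) ** 2 + (y - cy) ** 2 <= gate ** 2:
                c[0] += x; c[1] += y; c[2] += 1   # merge into this obstacle
                break
        else:
            clusters.append([x, y, 1])            # new obstacle
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]

# Camera and radar each report the same car at slightly different positions;
# a third detection far away is kept as a separate obstacle.
fused = fuse_detections([(10.0, 0.0), (10.4, 0.3), (50.0, 2.0)])
```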
  • the positioning module 203 is used to locate the vehicle based on the information received from the perception module 201 and the fusion module 202 and the map information obtained from the map interface, and determine the location of the vehicle (for example, the latitude and longitude of the vehicle).
  • the positioning module 203 is also used to send the location of the vehicle to the perception module 201, the fusion module 202, the road structure recognition module 204, the prediction module 205, and so on.
  • the road structure recognition module 204 is used to analyze, based on the lane lines in the surrounding environment of the vehicle, the traffic flow (that is, other vehicles), other surrounding obstacles, and the position of the own vehicle, the local road model of the surrounding environment where the own vehicle is located and the confidence level of that local road model.
  • the road structure recognition module 204 is also used to send information such as the local road model of the surrounding environment where the self-vehicle is located and the confidence level of the local road model to the prediction module 205 and the planning control module 206.
  • the prediction module 205 is used to make inferences based on the information obtained from the perception module 201, the fusion module 202, the positioning module 203, and the road structure recognition module 204, and to predict the intentions and movement trajectories of the vehicles or pedestrians in the surrounding environment where the own vehicle is located.
  • the prediction module 205 is also used to send the predicted intention or movement trajectory of the vehicle or pedestrian in the surrounding environment where the own vehicle is located to the planning control module 206.
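As an illustrative sketch of trajectory prediction (not the disclosed predictor, which may use learned models), a minimal constant-velocity extrapolation of a surrounding vehicle's observed positions might look like:

```python
def predict_trajectory(history, horizon, dt=0.1):
    """Constant-velocity extrapolation: estimate velocity from the last two
    observed positions and roll it forward `horizon` steps of `dt` seconds."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# Vehicle observed moving +1 m per 0.1 s along x: predict the next 3 steps.
pred = predict_trajectory([(0.0, 0.0), (1.0, 0.0)], horizon=3)
```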
  • the planning control module 206 is used to determine, based on the information obtained from the above modules, the road structure and the intentions and movement trajectories of obstacles (such as vehicles in the surrounding environment where the own vehicle is located).
  • the planning control module 206 is also used to plan the motion trajectory of the own vehicle according to the road structure and the intentions and movement trajectories of obstacles, plan the driving path of the vehicle, generate a driving strategy suitable for the own vehicle, and output the action commands or control quantities corresponding to that driving strategy, controlling the vehicle for automatic driving based on those commands or control quantities.
  • In-vehicle communication module 207 (not shown in Figure 2): used for information interaction between the own vehicle and other vehicles.
  • Storage component 208 (not shown in FIG. 2): used to store the executable codes of the above-mentioned modules, and running these executable codes can realize part or all of the method procedures of the embodiments of the present application.
  • the computer system 160 shown in FIG. 1 includes a processor 301 coupled to a system bus 302; the processor 301 may be one or more processors, each of which may include one or more processor cores.
  • the video adapter 303 can drive the display 324, and the display 324 is coupled to the system bus 302.
  • the system bus 302 is coupled to the input/output (I/O) bus 305 through the bus bridge 304, the I/O interface 306 is coupled to the I/O bus 305, and the I/O interface 306 communicates with various I/O devices, for example, an input device 307 (such as a keyboard, a mouse, or a touch screen) and a media tray 308 (such as a CD-ROM or a multimedia interface).
  • the transceiver 309 can send and/or receive radio communication signals.
  • the camera 310 can capture static and dynamic digital video images.
  • the interface connected to the I/O interface 306 may be a USB interface.
  • the processor 301 may be any traditional processor, including a Reduced Instruction Set Computer (RISC) processor, a Complex Instruction Set Computer (CISC) processor, or a combination of the foregoing.
  • the processor 301 may also be a dedicated device such as an application specific integrated circuit (ASIC).
  • the processor 301 may also be a neural network processor or a combination of a neural network processor and the foregoing traditional processors.
  • the computer system 160 may be located far away from the autonomous driving vehicle and wirelessly communicate with the autonomous driving vehicle 100.
  • some of the processes described in this application may be configured to be executed on a processor in an autonomous vehicle, and other processes may be executed by a remote processor, including taking actions required to perform a single manipulation.
  • the computer system 160 can communicate with a software deployment server (deploying server) 313 through a network interface 312.
  • the network interface 312 may be a hardware network interface, such as a network card.
  • the network 314 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN).
  • the network 314 may also be a wireless network, such as a WiFi network, a cellular network, and so on.
  • the hard disk drive interface 315 and the system bus 302 are coupled.
  • the hard disk drive interface 315 and the hard disk drive 316 are connected.
  • the system memory 317 and the system bus 302 are coupled.
  • the data running in the system memory 317 may include an operating system (OS) 318 and application programs 319 of the computer system 160.
  • the operating system (OS) 318 includes but is not limited to Shell 320 and kernel 321.
  • Shell 320 is an interface between the user and the kernel 321 of the operating system 318.
  • Shell 320 is the outermost layer of operating system 318. The shell manages the interaction between the user and the operating system 318: waiting for the user's input, interpreting the user's input to the operating system 318, and processing various output results of the operating system 318.
  • the kernel 321 is composed of parts of the operating system 318 for managing memory, files, peripherals, and system resources, and directly interacts with the hardware.
  • the kernel 321 of the operating system 318 generally runs processes, provides communication between processes, and provides functions such as CPU time slice management, interruption, memory management, and IO management.
  • Application programs 319 include programs 323 related to autonomous driving, such as programs that manage the interaction between the autonomous vehicle and road obstacles, programs that control the driving route or speed of the autonomous vehicle, and programs that control interaction between the autonomous vehicle and other vehicles on the road.
  • the application program 319 also exists on the system of the deploying server 313. In one embodiment, when the application program 319 needs to be executed, the computer system 160 may download the application program 319 from the deploying server 313.
  • the application program 319 may be an application program that controls the vehicle to determine the driving strategy according to the lane line of the vehicle and the traditional control module.
  • the processor 301 of the computer system 160 calls the application 319 to obtain the driving strategy.
  • the sensor 322 is associated with the computer system 160.
  • the sensor 322 is used to detect the environment around the computer system 160.
  • the sensor 322 can detect animals, cars, obstacles, and/or pedestrian crossings.
  • the sensor 322 can also detect the environment around the aforementioned objects such as animals, cars, obstacles and/or pedestrian crossings.
  • the environment around the animal includes, for example, other animals that appear around the animal, weather conditions, and the brightness of the environment around the animal.
  • the sensor 322 may be at least one of a camera, an infrared sensor, a chemical detector, a microphone, and other devices.
  • the lane line detection method of the embodiments of the present application may also be executed by a chip system.
  • FIG. 4 is a structural diagram of a chip system provided by an embodiment of the present application.
  • a neural network processor (neural-network processing unit, NPU) 40 is mounted on a host CPU (host CPU) as a coprocessor, and the host CPU assigns tasks to the NPU 40.
  • the core part of the NPU 40 is the arithmetic circuit 403.
  • the arithmetic circuit 403 is controlled by the controller 404 so that the arithmetic circuit 403 uses matrix data extracted from the memory to perform multiplication operations.
  • the arithmetic circuit 403 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 403 is a two-dimensional systolic array. Optionally, the arithmetic circuit 403 may also be a one-dimensional systolic array, or other electronic circuits capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 403 is a general-purpose matrix processor.
  • the arithmetic circuit 403 obtains the data corresponding to the weight matrix B from the weight memory 402 and caches it on each PE in the arithmetic circuit 403.
  • the arithmetic circuit 403 also obtains the corresponding data of the input matrix A from the input memory 401, and then performs a matrix operation according to the input matrix A and the weight matrix B, and stores partial or final results of the matrix operation in an accumulator 408.
  • the arithmetic circuit 403 can be used to implement a feature extraction model (such as a convolutional neural network model), and input image data into the convolutional neural network model, and the features of the image can be obtained through the operation of the model. Furthermore, the image features are output to the classifier, and the classifier outputs the classification probability of the object in the image.
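The matrix operation described above (input matrix A multiplied by weight matrix B, with partial sums held in the accumulator 408) corresponds, in software terms, to an ordinary multiply-accumulate loop. The following is purely an illustrative sketch of the mathematics, not a description of the hardware:

```python
def matmul_with_accumulator(A, B):
    """Software analogue of the arithmetic circuit: multiply input matrix A
    by weight matrix B, accumulating partial products per output element
    the way the accumulator 408 would."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0                      # accumulator for one output element
            for t in range(k):
                acc += A[i][t] * B[t][j]   # one multiply-accumulate per step
            C[i][j] = acc
    return C

C = matmul_with_accumulator([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]])
```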
  • the unified memory 406 is used to store input data and output data.
  • the weight data in the external memory is directly sent to the weight memory 402 through a direct memory access controller (DMAC) 405.
  • the input data in the external memory can be transferred to the unified memory 406 through the DMAC, or transferred to the input memory 401.
  • the bus interface unit (BIU) 410 is used for interaction between the advanced extensible interface (AXI) bus and both the DMAC 405 and the instruction fetch buffer 409. It is also used for the instruction fetch buffer 409 to obtain instructions from an external memory, and for the DMAC 405 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
  • the DMAC is mainly used to transfer the input data in the external memory (DDR) to the unified memory 406, or to transfer the weight data to the weight memory 402, or to transfer the input data to the input memory 401.
  • the input data may be the input data of the DQN model, that is, lidar point cloud data and other information about targets in the surrounding environment of the vehicle, such as other vehicles that interact with the vehicle.
  • the output data is the output data of the DQN model, that is, the target category, target shape information and target tracking information of the target in the surrounding environment of the vehicle.
  • the vector calculation unit 407 may include a plurality of operation processing units. It is used to perform further processing on the output of the arithmetic circuit 403 when needed, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison and so on. Mainly used for non-convolution/FC layer network calculations in neural networks, such as pooling, batch normalization, local response normalization, etc.
  • the vector calculation unit 407 stores the processed output vector in the unified memory 406.
  • the vector calculation unit 407 may apply a nonlinear function to the output of the arithmetic circuit 403, such as a vector of accumulated values, to generate the activation value.
  • the vector calculation unit 407 generates a normalized value, a combined value, or both.
  • the processed output vector can also be used as an activation input of the arithmetic circuit 403, for example, for use in a subsequent layer in a neural network.
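The vector calculation unit's post-processing (a nonlinear activation applied to the accumulated outputs, followed by normalization) can be illustrated as below. ReLU and sum-normalization are assumed stand-ins for whichever functions the hardware actually applies:

```python
def vector_postprocess(v):
    """Sketch of the vector calculation unit's role: apply a nonlinear
    activation (here ReLU) to the accumulated outputs, then normalize the
    result so the values sum to 1 for a downstream layer."""
    activated = [x if x > 0 else 0.0 for x in v]   # nonlinear function -> activation values
    total = sum(activated) or 1.0                  # guard against an all-zero vector
    return [x / total for x in activated]          # normalized values

out = vector_postprocess([2.0, -1.0, 2.0])
```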
  • the controller 404 is connected to an instruction fetch buffer 409, and the instructions used by the controller 404 can be stored in the instruction fetch buffer 409.
  • the unified memory 406, the input memory 401, the weight memory 402, and the instruction fetch buffer 409 are all on-chip memories.
  • the external memory is memory external to the NPU 40 hardware architecture.
  • the main CPU and the NPU 40 work together to implement the algorithms corresponding to the functions required by the vehicle 100 in FIG. 1, the functions required by the vehicle shown in FIG. 2, or the functions required by the computer system 160 shown in FIG. 3.
  • the computer system 160 may also receive information from other computer systems or transfer information to other computer systems.
  • the sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer, and the data can be processed by the other computer.
  • the data from the computer system 160 may be transmitted to the computer system 510 on the cloud side via the network for further processing.
  • the network and intermediate nodes can include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using one or more companies' proprietary communication protocols, Ethernet, WiFi, and HTTP, as well as various combinations of the foregoing. Such communication can be performed by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
  • the computer system 510 may include a server with multiple computers, such as a load balancing server group.
  • the server 520 exchanges information with different nodes of the network.
  • the computer system 510 may have a configuration similar to that of the computer system 160, and have a processor 530, a memory 540, instructions 550, and data 560.
  • the data 560 of the server 520 may include weather-related information.
  • the server 520 may receive, monitor, store, update, and transmit various information related to target objects in the surrounding environment.
  • the information may include, for example, target category, target shape information, and target tracking information in a report form, radar information form, forecast form, etc.
  • the cloud service center may receive information (such as data collected by vehicle sensors or other information) from the autonomous vehicles 613 and 612 in its environment 600 via a network 611 such as a wireless communication network.
  • the cloud service center 620 runs its stored programs related to controlling auto-driving of automobiles to control the autonomous vehicles 613 and 612.
  • Programs related to controlling auto-driving cars can be: programs that manage the interaction between autonomous vehicles and road obstacles, or programs that control the route or speed of autonomous vehicles, or programs that control interaction between autonomous vehicles and other autonomous vehicles on the road.
  • the cloud service center 620 may provide a part of the map to the vehicles 613 and 612 through the network 611.
  • operations can be divided between different locations.
  • multiple cloud service centers can receive, confirm, combine, and/or send information reports.
  • information reports and/or sensor data can also be sent between vehicles.
  • Other configurations are also possible.
  • the cloud service center 620 sends the autonomous vehicle a suggested solution for possible driving situations in the environment (for example, informing it of an obstacle ahead and how to circumvent it). For example, the cloud service center 620 may assist the vehicle in determining how to proceed when facing a specific obstacle in the environment.
  • the cloud service center 620 sends a response to the autonomous vehicle indicating how the vehicle should travel in a given scene.
  • the cloud service center 620 can confirm the existence of a temporary stop sign in front of the road based on the collected sensor data. For example, based on the "lane closed" sign and the sensor data of construction vehicles, it can be determined that the lane is closed due to construction.
  • the cloud service center 620 sends a recommended operation mode for the vehicle to pass through the obstacle (for example, instructing the vehicle to change lanes on another road).
  • the operation steps used for the autonomous driving vehicle can be added to the driving information map.
  • this information can be sent to other vehicles in the area that may encounter the same obstacle, so as to assist other vehicles not only to recognize the closed lanes but also to know how to pass.
  • the disclosed methods may be implemented as computer program instructions in a machine-readable format, encoded on a computer-readable storage medium, or encoded on other non-transitory media or articles.
  • Figure 7 schematically illustrates a conceptual partial view of an example computer program product arranged in accordance with at least some of the embodiments shown herein, the example computer program product including a computer program for executing a computer process on a computing device.
  • the example computer program product 700 is provided using a signal bearing medium 701.
  • the signal-bearing medium 701 may include one or more program instructions 702 which, when run by one or more processors, can provide all or part of the functions described above with respect to FIGS. 1 to 7, or all or part of the functions described in subsequent embodiments.
  • program instructions 702 in FIG. 7 also describe example instructions.
  • the signal-bearing medium 701 may include a computer-readable medium 703, such as, but not limited to, a hard disk drive, a compact disk (CD), a digital video disk (DVD), a digital tape, memory, read-only memory (ROM), or random access memory (RAM), etc.
  • the signal bearing medium 701 may include a computer recordable medium 704, such as, but not limited to, memory, read/write (R/W) CD, R/W DVD, and so on.
  • the signal-bearing medium 701 may include a communication medium 705, such as, but not limited to, digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • the signal bearing medium 701 may be transmitted by a wireless communication medium 705 (for example, a wireless communication medium that complies with the IEEE 802.11 standard or other transmission protocols).
  • the one or more program instructions 702 may be, for example, computer-executable instructions or logic-implemented instructions.
  • a computing device such as that described with respect to FIGS. 1-7 may be configured to provide various operations, functions, or actions in response to the program instructions 702 conveyed by one or more of the computer-readable medium 703, the computer-recordable medium 704, and/or the communication medium 705. It should be understood that the arrangement described here is for illustrative purposes only.
  • the embodiment of the present application provides a lane line detection method. The execution subject of the method is a vehicle with an automatic driving function, another device with the function of controlling a vehicle, or a processor in such a vehicle or device, such as the processor 161, the processor 301, or the processor 530 mentioned in Figures 1 to 7 above.
  • the method for detecting the lane line of the vehicle includes steps S801-S805:
  • S801 Determine the lane line of the virtual lane according to the location of the own vehicle.
  • the position of the own vehicle is taken as a point on the centerline of the virtual lane, the centerline is parallel to the line connecting the front and rear of the vehicle, and the virtual lane lines of the lane where the vehicle is located are established with a preset width.
  • the preset width is the lane width set by the vehicle or the user.
  • a and c in the figure are the lane lines of a virtual lane constructed according to the position of the own vehicle, b is the centerline of the virtual lane, and the distance d between a and c is the preset width.
  • the arrow beside the own vehicle in Figure 9(a) indicates the direction of travel of the front of the vehicle; a, b, and c are all parallel to the line connecting the front and rear of the own vehicle, and the own vehicle is located on the centerline b of the virtual lane.
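The construction of the virtual lane described above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function name, the coordinate convention (planar x/y with heading in radians), and the centerline length are assumptions.

```python
import math

def build_virtual_lane(ego_x, ego_y, heading_rad, preset_width, length=50.0):
    """Build a virtual lane whose centerline passes through the ego position,
    parallel to the line connecting the front and rear of the vehicle, with
    the preset width between the lane lines a and c."""
    ux, uy = math.cos(heading_rad), math.sin(heading_rad)   # along the heading
    nx, ny = -uy, ux                                        # left-hand normal
    half = preset_width / 2.0
    centerline_b = [(ego_x, ego_y), (ego_x + length * ux, ego_y + length * uy)]
    lane_line_a = [(x + half * nx, y + half * ny) for x, y in centerline_b]
    lane_line_c = [(x - half * nx, y - half * ny) for x, y in centerline_b]
    return lane_line_a, centerline_b, lane_line_c
```

With heading 0 and a preset width of 4, the two lane lines sit 2 units on either side of the centerline through the ego position, matching the d between a and c in Figure 9(a).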
  • a surrounding vehicle is a vehicle whose distance from the own vehicle is less than a preset distance, where the preset distance is determined by the vehicle or the user. As the location of the own vehicle changes, or as the current moment changes, the number and location of vehicles whose distance from the own vehicle is less than the preset distance may change.
  • the surrounding vehicles can be classified as the front, front-left, front-right, right, left, rear, rear-left, or rear-right vehicle of the own vehicle.
  • taking the virtual lane where the own vehicle is located as shown in Figure 9(a) as an example, take the location of the own vehicle as the center and divide the virtual lane and the area within the preset distance from the own vehicle to obtain 9 regions, as shown in Figure 9(b). These 9 areas are area A, area B, area C, area D, area E, area F, area G, area H, and area I, respectively. The own vehicle is located in area I, and the surrounding vehicles in areas A to H are respectively the front-left, front, front-right, left, right, rear-left, rear, and rear-right vehicles of the own vehicle.
  • vehicle A1 is the front left vehicle of the own vehicle
  • vehicle B1 is the front vehicle of the own vehicle
  • vehicle C1 is the front right vehicle of the own vehicle
  • vehicle D1 is the left vehicle of the own vehicle
  • vehicle E1 is the right vehicle of the own vehicle.
  • the vehicle F1 is the rear left vehicle of the own vehicle
  • the vehicle G1 is the rear vehicle of the own vehicle
  • the vehicle H1 is the right rear vehicle of the own vehicle.
  • S802: Use sensors to determine the surrounding vehicles whose distance to the own vehicle does not exceed the preset distance, and the movement trajectory of each surrounding vehicle. For each surrounding vehicle, there are multiple position points on its motion trajectory, and the trajectory is composed of smooth lines connecting these position points.
  • the multiple position points on the movement trajectory of a surrounding vehicle include the position point corresponding to its current position.
  • the multiple position points on a surrounding vehicle's trajectory are determined randomly, or at a preset time interval, or such that the value of the longitudinal distance between adjacent position points is the same as the value of a preset distance interval.
  • the surrounding vehicles whose distance from the own vehicle is less than the preset distance include vehicle a, vehicle b, and vehicle c.
  • the trajectory of vehicle a includes 3 position points
  • a1 and a2 are the longitudinal distances between adjacent position points among these 3 position points
  • the trajectory of vehicle b includes 4 position points
  • b1, b2, and b3 are the longitudinal distances between adjacent position points among these 4 position points
  • the trajectory of vehicle c includes 5 position points
  • c1, c2, c3, and c4 are the longitudinal distances between adjacent position points among these 5 position points
  • d is the value of the preset distance interval.
  • the preset distance interval is determined by the user or the vehicle.
  • the user or the vehicle determines the value range of the preset distance interval, and the value of the longitudinal distance between multiple position points on the trajectory of a surrounding vehicle lies within that value range.
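One simple way to obtain position points whose longitudinal spacing matches the preset distance interval is to filter a raw, densely sampled trajectory, always keeping the point for the current position as required above. A sketch under the assumption that the first coordinate of each point is the longitudinal one; the function and parameter names are illustrative.

```python
def sample_position_points(raw_trajectory, preset_gap):
    """Keep trajectory points so that adjacent kept points are at least
    `preset_gap` apart longitudinally; the last point (the vehicle's
    current position) is always included."""
    kept = [raw_trajectory[0]]
    for point in raw_trajectory[1:]:
        if abs(point[0] - kept[-1][0]) >= preset_gap:
            kept.append(point)
    if kept[-1] != raw_trajectory[-1]:   # current position must be kept
        kept.append(raw_trajectory[-1])
    return kept
```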
  • a plurality of position points are determined first, and then the smooth curve formed by these position points is determined as the trajectory of the surrounding vehicle.
  • after determining the movement trajectory of a surrounding vehicle, determine from the trajectory the lateral distance between the surrounding vehicle and the centerline of the virtual lane at each of its position points, and record these lateral distances to obtain the change trend of the distance between the surrounding vehicle and the centerline of the virtual lane.
  • the vehicle 1 has three position points on its trajectory, and the three position points are a, b, and c respectively.
  • the lateral distances between vehicle 1 and the centerline of the virtual lane at these three position points are 5, 7, and 6 respectively; that is, the change trend of the distance between vehicle 1 and the centerline of the virtual lane is 5, 7, 6.
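The change trend in the example above (5, 7, 6) can be reproduced by projecting each trajectory point onto the normal of the virtual-lane centerline. This is a hedged sketch; representing the centerline by a point on it and a unit direction vector is an assumption, not the patent's stated data structure.

```python
def lateral_distance_trend(track_points, line_point, line_dir):
    """Lateral distance of each trajectory point from the virtual-lane
    centerline (a line through `line_point` with unit direction
    `line_dir`), recorded in order to give the change trend."""
    px, py = line_point
    dx, dy = line_dir
    nx, ny = -dy, dx                    # unit normal of the centerline
    return [abs((x - px) * nx + (y - py) * ny) for x, y in track_points]
```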
  • the vehicle can give its driving authority to the user, so that the user can drive the vehicle safely according to his driving experience and surrounding environment.
  • if the surrounding vehicles include at least one of the left, right, rear-left, or rear-right vehicles, the own vehicle will decelerate so that the left, right, rear-right, and rear-left vehicles become front-left, front-right, or front vehicles, and the movement trajectories of the front-left, front-right, and front vehicles at that time are obtained.
  • S803: Analyze the change trend of the distance between each surrounding vehicle and the centerline of the virtual lane, together with the surrounding vehicle's lane-changing probability at the previous moment, to determine its lane-changing probability at the current moment.
  • perform a weighted sum of the first observation probability of the surrounding vehicle at the current moment and its lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment.
  • the weight of the first observation probability and the weight of the lane-changing probability at the previous moment are preset by the vehicle or the user. Traverse the surrounding vehicles of the own vehicle to obtain the lane-changing probability of each surrounding vehicle.
  • ΔclosestL represents the change trend of the distance between the surrounding vehicle and the centerline of the virtual lane, and P_Observation represents the first observation probability of the surrounding vehicle at the current moment
  • P_LaneChange = α·P_LaneChange′ + β·P_Observation
  • P_LaneChange represents the lane-changing probability of the surrounding vehicle at the current moment
  • P_LaneChange′ represents the lane-changing probability of the surrounding vehicle at the previous moment
  • α represents the preset weight of the surrounding vehicle's lane-changing probability at the previous moment, and β represents the preset weight of its first observation probability
  • α and β are determined by the vehicle itself, or preset by the user according to his needs
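The weighted-sum update above, P_LaneChange = α·P_LaneChange′ + β·P_Observation, can be sketched directly. The default weights here are only illustrative, since the text leaves α and β to the vehicle or the user.

```python
def lane_change_probability(prev_prob, observation_prob, alpha=0.5, beta=0.5):
    """P_LaneChange = alpha * P_LaneChange' + beta * P_Observation.
    alpha weights the lane-changing probability at the previous moment,
    beta weights the first observation probability at the current moment."""
    return alpha * prev_prob + beta * observation_prob
```

Traversing the surrounding vehicles then amounts to calling this once per vehicle at each time step, carrying each vehicle's previous result forward.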
  • the lane-changing probability of a surrounding vehicle when it enters the preset area is a preset lane-changing probability.
  • the preset lane-changing probability is determined by the vehicle itself, or preset by the user according to his needs.
  • S804: According to the lane-changing probability of the surrounding vehicle at the current moment, the association probability of the surrounding vehicle with the centerline of the virtual lane at the previous moment, the position of the surrounding vehicle relative to the centerline at the current moment, and a preset transfer coefficient, determine the association probability of the surrounding vehicle with the centerline of the virtual lane at the current moment. Traverse the surrounding vehicles to determine each one's association probability with the centerline of the virtual lane at the current moment.
  • the position of the surrounding vehicle relative to the centerline of the virtual lane at the current moment includes its lateral distance from the centerline and the angle between its heading and the centerline of the virtual lane.
  • the preset transfer coefficient is the conversion coefficient used when converting the association probability of the surrounding vehicle with the centerline of the virtual lane at the previous moment into its association probability with the centerline at the current moment.
  • the preset transfer coefficient is determined by the vehicle itself, or may be predetermined by the user according to his needs.
  • the association probability is used to indicate the distance between the surrounding vehicle and the centerline of the virtual lane, and the similarity between the surrounding vehicle's motion trajectory and the centerline of the virtual lane.
  • vehicle 1 represents one of the own vehicle's surrounding vehicles
  • d represents the lateral distance between vehicle 1 and the centerline of the virtual lane
  • θ represents the angle between the head of vehicle 1 and the centerline of the virtual lane.
  • the direction shown by the single arrow in the figure is the front of the vehicle 1.
  • the first target surrounding vehicle is determined according to the lane-changing probability of each surrounding vehicle at the current moment and a preset lane-changing probability threshold.
  • the preset area is an intersection with construction ahead, or one where the number of lanes at the exit differs from that at the entrance, so all vehicles passing through the preset area need to change lanes to pass the intersection safely and smoothly. Therefore, if there are multiple surrounding vehicles, when determining the first target surrounding vehicle it is necessary to consider whether all surrounding vehicles need to change lanes at the current moment, that is, whether the lane-changing probability of every surrounding vehicle at the current moment is greater than or equal to the preset lane-changing probability threshold. If the lane-changing probabilities of all of the multiple surrounding vehicles at the current moment are greater than or equal to the preset lane-changing probability threshold, all of them are determined to be first target surrounding vehicles.
  • otherwise, the surrounding vehicle whose lane-changing probability at the current moment is less than the preset lane-changing probability threshold is determined to be the first target surrounding vehicle.
  • the surrounding vehicle whose lane-changing probability at the current moment is greater than or equal to the preset lane-changing probability threshold is removed, reducing the number of vehicles for which the association probability with the centerline of the virtual lane at the current moment must be determined, thereby improving the efficiency of detecting the lane line of the lane where the own vehicle is located.
  • the lateral distance between the first target surrounding vehicle and the centerline of the virtual lane, and the angle between the heading of the first target surrounding vehicle and the centerline of the virtual lane, are determined according to the position of the first target surrounding vehicle relative to the virtual lane at the current moment.
  • the lateral distance and the angle are weighted and summed to determine the second observation probability of the first target surrounding vehicle at the current moment.
  • the second observation probability is the predicted value of the association probability between the first target surrounding vehicle and the centerline of the virtual lane.
  • the first target surrounding vehicle is vehicle 1; its lateral distance from the centerline of the virtual lane is d, and the angle between the heading of vehicle 1 and the centerline of the virtual lane is θ.
  • the weights used in the weighted sum are preset; they are determined by the vehicle itself, or determined by the user according to their needs.
  • the Markov probability model shown in Fig. 12 is used to determine the association probability of each first target surrounding vehicle with the centerline of the virtual lane at the current moment. That is, for each first target surrounding vehicle, the second observation probability of the vehicle at the current moment, the preset transfer coefficient, and the association probability of the vehicle with the centerline of the virtual lane at the previous moment are normalized, so as to obtain the association probability of the first target surrounding vehicle with the centerline of the virtual lane at the current moment.
  • vehicle 1 is the first target surrounding vehicle; the association probability of vehicle 1 with the centerline of the virtual lane of the own vehicle at the current moment is obtained by inputting the association probability at the previous moment, the second observation probability of vehicle 1 at the current moment, and the preset transfer coefficient into the model
  • P′ represents the second observation probability of the vehicle 1 at the current moment
  • P′′ represents the preset transition coefficient
  • P′′′ represents the association probability of vehicle 1 with the centerline of the virtual lane at the previous moment.
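One plausible reading of the normalization step is a two-hypothesis Markov/Bayesian update, in which the previous association probability P′′′ is propagated by the transfer coefficient P′′ and fused with the second observation probability P′, then normalized against the complementary (not-associated) hypothesis. The patent does not spell out the exact formula, so this sketch is an assumption.

```python
def association_probability(prev_assoc, transfer, observation):
    """Normalize the product of the previous association probability (P'''),
    the preset transfer coefficient (P''), and the second observation
    probability (P') against the complementary hypothesis."""
    associated = prev_assoc * transfer * observation
    not_associated = (1 - prev_assoc) * (1 - transfer) * (1 - observation)
    return associated / (associated + not_associated)
```

Under this form, consistent evidence (previous association, transfer coefficient, and observation all above 0.5) pushes the association probability toward 1, matching the intent that the vehicle most similar to the centerline accumulates the highest probability.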
  • the association probability between the first target surrounding vehicle and the centerline of the virtual lane is initially the preset association probability.
  • the preset association probability is determined by the vehicle itself, or is preset by the user according to his needs.
  • S805: Determine the lane line of the lane where the own vehicle is located according to the trajectory of the target vehicle and the preset width.
  • the target vehicle is the first target surrounding vehicle with the highest association probability with the centerline of the virtual lane at the current moment.
  • the vehicle can follow the trajectory of the target vehicle to travel.
  • the association probability of a surrounding vehicle with the centerline of the virtual lane indicates the similarity between the surrounding vehicle's trajectory and the centerline of the virtual lane, and the distance between the surrounding vehicle and the centerline. Therefore, the greater the association probability of the first target surrounding vehicle with the centerline of the virtual lane at the current moment, the greater the similarity between the first target surrounding vehicle's trajectory and the centerline, and the smaller the distance between the first target surrounding vehicle and the centerline of the virtual lane.
  • the movement trajectory of the target vehicle is translated to obtain the movement trajectory of the own vehicle.
  • the target vehicle may also change as the own vehicle's surrounding vehicles change, and as the lane-changing probabilities of the surrounding vehicles and their association probabilities with the centerline of the virtual lane change.
  • this application also needs to judge whether the motion trajectory of the own vehicle meets the needs of the user. If the motion trajectory does not change lanes but the vehicle needs to change lanes to reach the destination, the own vehicle can give its driving authority to the user, so that the user can drive the vehicle safely according to his driving experience and the surrounding environment and reach the destination smoothly.
  • the motion trajectory of the own vehicle determined in this step is the motion trajectory of the own vehicle from the current time (for example, time t) to a certain time after the current time (for example, from time t to time t+2).
  • the trajectory of the target vehicle is used as the center line of the lane where the vehicle is located, and the preset width is used as the lane width to determine the lane where the vehicle is located and the lane line of the lane.
  • the movement trajectory of the target vehicle is translated, and the translated movement trajectory, which is the movement trajectory of the vehicle in a period of time, is used as the center line of the lane where the vehicle is located, with a preset width For the lane width, determine the lane where the vehicle is located and the lane line of the lane.
  • take the movement trajectory of the own vehicle obtained by translating the movement trajectory of the target vehicle in FIG. 13, that is, curve c and curve e, as an example
  • curve c and curve e are taken as the centerline of the lane where the own vehicle is located
  • with the preset width f as the lane width, the lane where the own vehicle is located and the lane lines g and h of that lane can be determined
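The final step, offsetting the (translated) target trajectory by half the preset width f on each side to obtain lane lines g and h, might look like the following polyline-offset sketch. The local-normal estimation from neighboring points is an assumed implementation detail.

```python
import math

def lane_lines_from_trajectory(centerline, preset_width):
    """Offset each centerline point by +/- half the preset width along
    the local normal to obtain the two lane lines g and h."""
    half = preset_width / 2.0
    g, h = [], []
    for i, (x, y) in enumerate(centerline):
        # Local direction: from the previous point to the next point.
        j = min(i + 1, len(centerline) - 1)
        k = max(i - 1, 0)
        dx = centerline[j][0] - centerline[k][0]
        dy = centerline[j][1] - centerline[k][1]
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm   # unit normal at this point
        g.append((x + half * nx, y + half * ny))
        h.append((x - half * nx, y - half * ny))
    return g, h
```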
  • this application can determine the change trend of the distance between a surrounding vehicle and the centerline of the virtual lane according to the surrounding vehicle's trajectory, and then combine this change trend with the surrounding vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment, thereby identifying its lane-changing intention.
  • this application can further determine the association probability of each surrounding vehicle with the centerline of the virtual lane at the current moment based on the surrounding vehicle's lane-changing probability at the current moment, its association probability with the centerline at the previous moment, its position relative to the centerline at the current moment, and the preset transfer coefficient.
  • the lane line of the lane where the own vehicle is located is determined according to the trajectory of the surrounding vehicle with the highest association probability at the current moment and the preset width, thereby improving the accuracy of lane line detection for the lane where the own vehicle is located and ensuring vehicle safety.
  • the preset lane-changing probabilities of the vehicles at the initial position points in the multiple lanes may be the same or may be different.
  • take as an example a vehicle entering the preset area, that is, an intersection with 3 lanes before it.
  • when the vehicle is at initial position point a before entering the intersection, its lane-changing probability is the preset lane-changing probability a1, and its association probability with the centerline of the virtual lane is the preset association probability a2.
  • when the vehicle is at initial position point b before entering the intersection, its lane-changing probability is the preset lane-changing probability b1, and its association probability with the centerline of the virtual lane is the preset association probability b2.
  • when the vehicle is at initial position point c before entering the preset area, that is, before the intersection, its lane-changing probability is the preset lane-changing probability c1, and its association probability with the centerline of the virtual lane is the preset association probability c2.
  • the values of a1, b1, and c1 may be the same or different, and the values of a2, b2, and c2 may be the same or different.
  • the historical movement trajectory of the vehicle before entering the intersection and the movement trajectory of the vehicle just entering the intersection determined according to the above steps S801-S805 are smoothed, so that the self-vehicle can safely and stably enter the intersection area .
  • smooth processing is performed on the movement trajectory of the self-vehicle determined according to different target vehicles in the foregoing steps S801-S805, so that the self-vehicle can pass the intersection safely and smoothly.
  • the lane where the vehicle is located after leaving the intersection is determined as the target lane. Then, according to the speed and heading of the vehicle at the first position point (that is, the position before the vehicle leaves the intersection), the target point of the vehicle on the target lane is determined. If the target point of the vehicle on the target lane lies on the centerline of the target lane, the smooth connecting line between the target point and the first position point is determined as the vehicle's movement trajectory before it leaves the intersection.
  • on the contrary, that is, if the target point of the vehicle on the target lane does not lie on the centerline of the target lane, the smooth connecting line between the first position point and the foot of the perpendicular from the target point to the centerline of the target lane is determined as the vehicle's trajectory before it leaves the intersection.
  • one or more position points are determined between the first position point and the target point, and the smooth line through the first position point, the target point of the vehicle on the target lane (or its perpendicular foot on the centerline of the target lane), and the one or more intermediate position points is determined as the trajectory of the vehicle when it leaves the intersection.
  • a represents the first position point corresponding to the position before the vehicle leaves the intersection
  • b1 represents the target point of the vehicle on the target lane, determined according to the heading and speed of the vehicle.
  • if b1 is located on the centerline of the target lane, the position point d between a and b1 is determined according to the centerline of the target lane, with d located on the extension line of the centerline of the target lane; according to the heading and speed of the vehicle, the position point c between a and b1 is determined.
  • a Bezier curve is used to connect a, b1, c, and d to obtain the trajectory of the vehicle when it leaves the intersection, namely curve e (the dashed line in the figure).
  • the longitudinal distance between b1 and a is f
  • the longitudinal distance between d and b1 is f/4
  • the longitudinal distance between c and a is f/4.
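The smooth curve e through a, c, d, and b1 can be generated with a standard cubic Bezier. Using a, c, d, b1 as the four control points in that order (start, two intermediate controls, end) is an assumption; the text only states that a Bezier curve connects these points, with c at f/4 after a and d at f/4 before b1.

```python
def cubic_bezier(p0, p1, p2, p3, samples=20):
    """Sample a cubic Bezier curve with control points p0..p3; here
    p0=a, p1=c, p2=d, p3=b1 gives the exit trajectory, curve e."""
    curve = []
    for i in range(samples + 1):
        t = i / samples
        s = 1.0 - t
        x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
        y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
        curve.append((x, y))
    return curve
```

Because a cubic Bezier leaves p0 tangent to p0→p1 and arrives at p3 tangent to p2→p3, placing c along the vehicle's heading and d on the extension of the target-lane centerline makes the curve depart along the current heading and merge smoothly onto the target lane.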
  • a represents the first position point corresponding to the position before the vehicle leaves the intersection
  • b2 represents the target point of the vehicle on the target lane, determined according to the heading and speed of the vehicle.
  • if b2 is not located on the centerline of the target lane, draw a perpendicular from b2 to the centerline of the target lane, and determine the intersection of this perpendicular with the centerline as b3; that is, the foot of the perpendicular from b2 on the centerline of the target lane is b3.
  • the position point d1 between a and b3 is determined, and d1 is located on the extension line of the center line of the target lane.
  • the position point c1 between a and b3 is determined.
  • curve e is a dashed line.
  • the longitudinal distance between b3 and a is f
  • the longitudinal distance between d1 and b3 is f/4
  • the longitudinal distance between c1 and a is f/4.
  • connection lines between the position point and the target point and other points are fitted by a Bezier curve, so as to obtain a smooth curve.
  • the present application can determine the smooth motion trajectory of the own vehicle at the entrance and exit of, and within, the intersection area, thereby determining a smooth lane line for the lane where the own vehicle is located when driving there, so that the own vehicle can pass through the intersection safely and smoothly, the lateral jitter of the own vehicle when passing the intersection is reduced, and the riding experience of the occupants when driving through the intersection is improved.
  • the first target surrounding vehicles are further screened so that they include only at least one of the front, front-right, or front-left vehicles of the own vehicle, thereby reducing the influence of vehicles in other lanes on determining the own vehicle's lane, and improving the efficiency and accuracy of detecting the lane line of the lane where the own vehicle is located.
  • the surrounding vehicles are first screened so that only the preceding vehicle is included among them.
  • the first target surrounding vehicles are further screened so that they include only the preceding vehicle of the own vehicle.
  • the embodiment of this application can also record the motion trajectory and lane line of the own vehicle when passing through the intersection, so that they can be used for following or controlling the vehicle when there are no surrounding vehicles.
  • FIG. 17 shows a lane line detection device involved in the foregoing embodiments.
  • the device includes a determining unit 1701 and an analyzing unit 1702.
  • the lane line detection device may also include other modules, or the lane line detection device may include fewer modules.
  • the determining unit 1701 is configured to determine the change trend of the distance between a surrounding vehicle and the centerline of the virtual lane according to the surrounding vehicle's trajectory.
  • the surrounding vehicle is a vehicle whose distance from the vehicle is less than a preset distance
  • the virtual lane is parallel to the line connecting the front and rear of the vehicle
  • the width of the virtual lane is the preset width
  • the own vehicle is located on the centerline of the virtual lane.
  • the trajectory of a surrounding vehicle is a line connecting multiple position points, and the value of the longitudinal distance between adjacent position points equals the value of the preset distance interval.
  • the multiple position points include the position point corresponding to the current location of each surrounding vehicle.
  • the analysis unit 1702 is used to analyze the change trend of the distance between the surrounding vehicle and the centerline of the virtual lane, together with the surrounding vehicle's lane-changing probability at the previous moment, to obtain the surrounding vehicle's lane-changing probability at the current moment; the lane-changing probability is the probability that the surrounding vehicle changes its lane.
  • the analysis unit 1702 is used to analyze the change trend of the distance between the surrounding vehicle and the centerline of the virtual lane to obtain the first observation probability of the surrounding vehicle at the current moment; the first observation probability is the predicted value of the vehicle's lane-changing probability. The analysis unit 1702 is also used to perform a weighted sum of the first observation probability of the surrounding vehicle at the current moment and its lane-changing probability at the previous moment to determine its lane-changing probability at the current moment.
  • the determining unit 1701 is further configured to determine the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment according to the surrounding vehicle's lane-changing probability at the current moment, the association probability between the surrounding vehicle and the center line of the virtual lane at the previous moment, the surrounding vehicle's position relative to the center line of the virtual lane at the current moment, and a preset transfer coefficient.
  • the preset transfer coefficient is the conversion coefficient used when the association probability between the surrounding vehicle and the center line of the virtual lane at the previous moment is converted into the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment; the association probability represents the magnitude of the distance between the surrounding vehicle and the center line of the virtual lane, and the similarity between the surrounding vehicle's trajectory and the center line of the virtual lane.
  • the lane-changing probability of a surrounding vehicle before it enters the preset area is a preset lane-changing probability, where the preset area is an intersection.
  • the determining unit 1701 is configured to determine the first target surrounding vehicle according to a preset lane-changing probability threshold and the surrounding vehicle's lane-changing probability at the current moment. The determining unit 1701 is further configured to determine, according to the position of the first target surrounding vehicle relative to the center line of the virtual lane at the current moment, the lateral distance between the first target surrounding vehicle and the center line of the virtual lane, and the angle between the heading of the first target surrounding vehicle and the center line of the virtual lane.
  • the determining unit 1701 is further configured to perform a weighted summation of that angle and the lateral distance between the first target surrounding vehicle and the center line of the virtual lane to determine the first target surrounding vehicle's second observation probability at the current moment; the second observation probability is a predicted value of the association probability between the vehicle and the center line of the virtual lane.
  • finally, the determining unit 1701 is further configured to normalize the first target surrounding vehicle's second observation probability at the current moment, the preset transfer coefficient, and the association probability between the first target surrounding vehicle and the center line of the virtual lane at the previous moment, to determine the association probability between the first target surrounding vehicle and the center line of the virtual lane at the current moment.
  • if there are multiple surrounding vehicles, the determining unit 1701 is further configured to: if the lane-changing probabilities of the multiple surrounding vehicles at the current moment are all greater than or equal to the preset lane-changing probability threshold, determine that all of the multiple surrounding vehicles are first target surrounding vehicles.
  • the determining unit 1701 is further configured to: if among the multiple surrounding vehicles there are vehicles whose lane-changing probability at the current moment is greater than or equal to the preset lane-changing probability threshold, and also vehicles whose lane-changing probability at the current moment is less than the preset lane-changing probability threshold, delete the vehicles whose lane-changing probability at the current moment is greater than or equal to the threshold, and determine the vehicles whose lane-changing probability at the current moment is less than the threshold as the first target surrounding vehicles.
  • the determining unit 1701 is further configured to determine the lane line of the lane the ego vehicle is in according to the trajectory of the target vehicle and the preset width, where the target vehicle is the surrounding vehicle with the largest association probability with the center line of the virtual lane at the current moment.
  • the determining unit 1701 is further configured to determine that the lane the ego vehicle is in after leaving the preset area is the target lane, where the preset area is an intersection. The determining unit 1701 is further configured to determine the ego vehicle's target point on the target lane according to the ego vehicle's speed and heading at a first position point, where the first position point is the position point corresponding to the ego vehicle's position before it leaves the preset area. The determining unit 1701 is further configured to, if the ego vehicle's target point on the target lane lies on the center line of the target lane, determine that the line connecting that target point and the first position point is the center line of the lane the ego vehicle was in before leaving the preset area.
  • the determining unit 1701 is further configured to, if the ego vehicle's target point on the target lane does not lie on the center line of the target lane, determine that the line connecting the foot of the perpendicular from the target point onto the center line of the target lane and the first position point is the center line of the lane the ego vehicle was in before leaving the preset area.
  • the present application also provides a lane line detection device, which includes a memory 1801, a processor 1802, a communication interface 1803 and a bus 1804.
  • the processor 1802 is used to manage and control the actions of the device, and/or to perform other processes of the technology described herein.
  • the communication interface 1803 is used to support communication between the device and other network entities.
  • the memory 1801 is used to store program codes and data of the device.
  • the above-mentioned processor 1802 may implement or execute various exemplary logical blocks, unit modules, and circuits described in conjunction with the disclosure of this application.
  • the processor or controller may be a central processing unit, a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It can implement or execute various exemplary logical blocks, unit modules and circuits described in conjunction with the disclosure of this application.
  • the processor 1802 may also be a combination for realizing calculation functions, for example, including a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
  • the communication interface 1803 may be a transceiver circuit.
  • the memory 1801 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk or a solid-state drive; the memory may also include a combination of the above types of memory.
  • the bus 1804 may be an extended industry standard architecture (EISA) bus or the like.
  • the bus 1804 can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is used in Fig. 18, but it does not mean that there is only one bus or one type of bus.
  • An embodiment of the present application provides a computer-readable storage medium storing one or more programs.
  • the one or more programs include instructions that, when executed by a computer, cause the computer to execute steps S801-S805 of the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product containing instructions, which when the instructions run on a computer, cause the computer to execute the lane line detection method executed in steps S801-S805 of the foregoing embodiment.
  • An embodiment of the present application provides a lane line detection device, including a processor and a memory; the memory is used to store computer program instructions, and the processor is used to run the computer program instructions so that the lane line detection device executes the lane line detection method implemented in steps S801-S805 of the foregoing embodiment.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is only a logical function division.
  • in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate.
  • the parts displayed as a unit may be one physical unit or multiple physical units; that is, they may be located in one place, or they may be distributed across many different places. In actual application, some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium.
  • the storage medium includes several instructions to cause a device (which may be a personal computer, a server, a network device, a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Abstract

A lane line detection method and apparatus, relating to the field of autonomous driving, used to screen surrounding vehicles according to their lane-changing probabilities and to determine the lane line of the ego vehicle's lane according to the trajectory of the surrounding vehicle with the largest association probability, so as to improve the accuracy of lane line detection and ensure vehicle safety. The method includes: determining the changing trend of the distance between a surrounding vehicle and the center line of a virtual lane according to the surrounding vehicle's trajectory; analyzing that changing trend together with the surrounding vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment; determining the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment according to the lane-changing probability at the current moment, the association probability between the surrounding vehicle and the center line of the virtual lane at the previous moment, the surrounding vehicle's position relative to the center line of the virtual lane at the current moment, and a preset transfer coefficient; and determining the lane line of the ego vehicle's lane according to the trajectory of the target vehicle and a preset width.

Description

Lane Line Detection Method and Apparatus — Technical Field
This application relates to the field of autonomous driving, and in particular to a lane line detection method and apparatus.
Background
As a constituent element of the road, lane lines are used to guide the path planning of an autonomous vehicle so as to ensure safety, comfort and intelligence during autonomous driving. Generally, an autonomous vehicle can use real-time vision to detect the lane lines on the road it is on, and plan its autonomous-driving path according to the detected lane lines.
In the prior art, the positions of the vehicles around the ego vehicle (the surrounding vehicles), updated in real time, are usually fitted and interpolated to determine the trajectories of the surrounding vehicles; a surrounding vehicle's trajectory is then translated according to the lateral distance between it and the ego vehicle, the translated trajectory is equalized to obtain lane lines, and obstacle information around the ego vehicle is finally used to further bound the lane lines and output the lane range of the ego vehicle's lane, i.e. the lane lines of the ego vehicle's lane. Alternatively, after fitting and interpolating the real-time surrounding-vehicle positions to obtain a trajectory, that trajectory is taken as the center line of the surrounding vehicle's lane; the vehicle's trajectory and a preset lane width are used to translate and equalize that center line to obtain the center line of the ego vehicle's lane, which is finally translated by the preset lane width to determine the lane range of the ego vehicle's lane. Lane lines detected by the above prior art may be offset from the true lane, which affects vehicle safety; moreover, neither of the two techniques distinguishes whether a surrounding vehicle is changing lanes, which may affect the safety of passing through an intersection, so they are not suitable for more complex intersection scenarios, for example an intersection whose exit has two lanes and whose entry has three lanes. Alternatively, in the prior art, taking a Tesla autonomous vehicle as an example, multiple kinds of observation data acquired by sensors can be fused to obtain the lane lines of the ego vehicle's lane. With this technique, because too many kinds of observation data are used to determine the lane lines, the lane lines may be unstable, which affects the safety of passing through an intersection.
Summary
This application provides a lane line detection method and apparatus, which identify a surrounding vehicle's lane-changing intention according to its lane-changing probability so as to screen the surrounding vehicles, and then, according to the association probability between a surrounding vehicle and the center line of the ego vehicle's virtual lane, determine the lane line of the ego vehicle's lane from a suitable surrounding vehicle's trajectory and a preset width, thereby improving the accuracy of lane line detection for the ego vehicle's lane and ensuring vehicle safety.
To achieve the above purpose, this application adopts the following technical solutions:
In a first aspect, this application provides a lane line detection method for the field of autonomous driving. The method includes: determining, according to the trajectory of a surrounding vehicle, the changing trend of the distance between the surrounding vehicle and the center line of a virtual lane, where the surrounding vehicle is a vehicle whose distance from the ego vehicle is less than a preset distance; analyzing that changing trend together with the surrounding vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment, where the lane-changing probability is the probability that a vehicle changes the lane it is in; determining the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment according to the surrounding vehicle's lane-changing probability at the current moment, the association probability between the surrounding vehicle and the center line of the virtual lane at the previous moment, the surrounding vehicle's position relative to the center line of the virtual lane at the current moment, and a preset transfer coefficient, where the preset transfer coefficient is the conversion coefficient used when the association probability at the previous moment is converted into the association probability at the current moment, and the association probability represents the magnitude of the distance between the surrounding vehicle and the center line of the virtual lane and the similarity between the surrounding vehicle's trajectory and that center line; and finally, determining the lane line of the ego vehicle's lane according to the trajectory of the target vehicle and the preset width, where the target vehicle is the surrounding vehicle with the largest association probability with the virtual lane.
In the above process, this application first determines, from a surrounding vehicle's trajectory, the changing trend of the distance between that vehicle and the center line of the virtual lane, and then determines the vehicle's lane-changing probability at the current moment from that trend and its lane-changing probability at the previous moment, thereby identifying its lane-changing intention at the current moment. Next, the association probability between the vehicle and the center line of the virtual lane at the current moment is determined from the lane-changing probability at the current moment, the association probability at the previous moment, the vehicle's position relative to the center line at the current moment, and the preset transfer coefficient. Finally, the lane line of the ego vehicle's lane is determined from the trajectory of the surrounding vehicle with the largest association probability at the current moment together with the preset width, which improves the accuracy of lane line detection for the ego vehicle's lane and ensures vehicle safety.
In a possible implementation, the virtual lane is parallel to the line connecting the front and the rear of the ego vehicle, the width of the virtual lane is the preset width, and the ego vehicle is located on the center line of the virtual lane.
In a possible implementation, the trajectory of a surrounding vehicle is a line connecting multiple position points, where the longitudinal distance between the position points equals the preset distance interval, and the multiple position points include the position point corresponding to each surrounding vehicle's current position.
In a possible implementation, analyzing the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane together with the surrounding vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment includes: analyzing the changing trend of that distance to obtain the surrounding vehicle's first observation probability at the current moment, which is a predicted value of the vehicle's lane-changing probability; and then performing a weighted summation of the first observation probability at the current moment and the lane-changing probability at the previous moment to determine the lane-changing probability at the current moment.
In the above process, this application can analyze the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane to obtain a predicted value of the vehicle's lane-changing probability at the current moment, i.e. its first observation probability at the current moment, and then perform a weighted summation of that first observation probability and the vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment. Because, in addition to the first observation probability at the current moment, the lane-changing probability at the previous moment is also taken into account, the resulting lane-changing probability at the current moment is more accurate, so that the vehicle's lane-changing intention can be identified more accurately.
In a possible implementation, the lane-changing probability of a surrounding vehicle before it enters a preset area is a preset lane-changing probability, where the preset area is an intersection.
In a possible implementation, determining the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment includes: first determining the first target surrounding vehicle according to a preset lane-changing probability threshold and the surrounding vehicle's lane-changing probability at the current moment; then determining, from the first target surrounding vehicle's position relative to the center line of the virtual lane at the current moment, the lateral distance between it and the center line and the angle between its heading and the center line; then performing a weighted summation of that angle and that lateral distance to determine the first target surrounding vehicle's second observation probability at the current moment, i.e. a predicted value of the association probability between the vehicle and the center line of the virtual lane; and finally normalizing the second observation probability at the current moment, the preset transfer coefficient, and the association probability between the first target surrounding vehicle and the center line of the virtual lane at the previous moment, to determine the association probability between the first target surrounding vehicle and the center line of the virtual lane at the current moment.
In the above process, this application can identify a surrounding vehicle's lane-changing intention at the current moment from the preset lane-changing probability threshold and its lane-changing probability at the current moment, and screen and cull the surrounding vehicles accordingly to determine the first target surrounding vehicle, improving the accuracy of the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment. In addition, when determining the association probability of the first target surrounding vehicle at the current moment, this application also takes into account the association probability at the previous moment, further improving its accuracy and therefore the accuracy of the lane line of the ego vehicle's lane, ensuring vehicle safety.
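The association-probability update described above can be sketched in code. This is a minimal illustration, not the claimed implementation: the function names, the monotone mapping from (distance, angle) score to a probability, and the equal weights are all assumptions, since the application leaves the exact forms of the weighted summation and normalization open.

```python
def second_observation_probability(lateral_distance, heading_angle, w_d=0.5, w_a=0.5):
    """Weighted sum of the lateral distance to the virtual-lane center line and
    the angle between the vehicle heading and that center line, mapped so that a
    smaller distance/angle yields a larger probability (assumed mapping)."""
    score = w_d * lateral_distance + w_a * heading_angle
    return 1.0 / (1.0 + score)

def association_probability(p_obs, transfer_coeff, prev_assoc, candidates):
    """Normalize observation * transfer coefficient * prior association over all
    first target surrounding vehicles, so the current-moment association
    probabilities sum to 1 (assumed normalization scheme)."""
    raw = {vid: p_obs[vid] * transfer_coeff * prev_assoc[vid] for vid in candidates}
    total = sum(raw.values())
    return {vid: v / total for vid, v in raw.items()}

# Two hypothetical first target vehicles "b1" and "c1":
probs = association_probability(
    p_obs={"b1": 0.8, "c1": 0.4},
    transfer_coeff=0.9,
    prev_assoc={"b1": 0.6, "c1": 0.4},
    candidates=["b1", "c1"],
)
```

The target vehicle would then be the candidate with the largest value in `probs`.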
In a possible implementation, if there are multiple surrounding vehicles, determining the first target surrounding vehicle according to the preset lane-changing probability threshold and the surrounding vehicles' lane-changing probabilities at the current moment includes: if the lane-changing probabilities of the multiple surrounding vehicles at the current moment are all greater than or equal to the threshold, determining that all of them are first target surrounding vehicles; if among them there are vehicles whose lane-changing probability at the current moment is greater than or equal to the threshold, and also vehicles whose lane-changing probability at the current moment is less than the threshold, deleting the former and determining the latter as the first target surrounding vehicles.
In the above process, if the lane-changing probabilities of all surrounding vehicles at the current moment are greater than or equal to the preset threshold, i.e. all surrounding vehicles intend to change lane, it can be determined that in the current scenario every vehicle must change lane to pass, and all surrounding vehicles are determined as first target surrounding vehicles. If some surrounding vehicles' probabilities at the current moment are greater than or equal to the threshold while others' are below it, it can be determined that in the current scenario a vehicle may or may not change lane; the vehicles with lane-changing intent are culled to determine the first target surrounding vehicles, reducing the number of vehicles whose association probability must be determined and improving the efficiency of determining the lane line of the ego vehicle's lane.
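The screening rule above can be sketched as a small helper. The function name is illustrative only; the two-branch logic follows the text directly.

```python
def select_first_targets(lane_change_probs, threshold):
    """Screen surrounding vehicles by lane-changing intent.

    If every vehicle's lane-changing probability is >= threshold, all vehicles
    must change lane in this scenario, so all are kept as first target vehicles;
    otherwise the vehicles with lane-changing intent are removed and only those
    below the threshold are kept."""
    if all(p >= threshold for p in lane_change_probs.values()):
        return set(lane_change_probs)
    return {vid for vid, p in lane_change_probs.items() if p < threshold}
```

For example, with probabilities `{"a": 0.9, "b": 0.3}` and threshold `0.7`, only vehicle `"b"` remains a first target surrounding vehicle.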
In a possible implementation, the method further includes: first determining that the lane the ego vehicle is in after leaving the preset area is the target lane, where the preset area is an intersection; then determining the ego vehicle's target point on the target lane according to its speed and heading at a first position point, where the first position point is the position point corresponding to the ego vehicle's position before it leaves the preset area. If the ego vehicle's target point on the target lane lies on the center line of the target lane, the line connecting the target point and the first position point is determined to be the center line of the lane the ego vehicle was in before leaving the preset area; if the target point does not lie on the center line of the target lane, the line connecting the foot of the perpendicular from the target point onto the center line of the target lane and the first position point is determined to be the center line of the lane the ego vehicle was in before leaving the preset area.
In the above process, before the ego vehicle leaves the preset area, i.e. the intersection, in other words while the ego vehicle is at the first position point, the center line of the lane it was in before leaving the preset area can be determined from the center line of the target lane and the ego vehicle's speed and heading, so that a relatively smooth lane line of that lane is obtained from the preset width and that center line, reducing lateral jitter when the vehicle leaves the preset area and improving driving safety.
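The geometry of the perpendicular-foot step above can be sketched as follows. This is a simplified 2D illustration under assumed representations: the target lane's center line is modeled as an infinite line through a point `center_p0` with unit direction `center_dir`, and all names are hypothetical.

```python
def pre_intersection_centerline(first_point, target_point, center_p0, center_dir):
    """Return the segment taken as the center line of the lane the ego vehicle
    was in before leaving the intersection.

    If the predicted target point lies on the target lane's center line, connect
    it to the first position point directly; otherwise connect the foot of the
    perpendicular from the target point onto the center line to the first
    position point."""
    px, py = target_point[0] - center_p0[0], target_point[1] - center_p0[1]
    t = px * center_dir[0] + py * center_dir[1]          # projection length
    foot = (center_p0[0] + t * center_dir[0], center_p0[1] + t * center_dir[1])
    on_line = abs(target_point[0] - foot[0]) < 1e-9 and abs(target_point[1] - foot[1]) < 1e-9
    end = target_point if on_line else foot
    return first_point, end

# Target lane center line along the x-axis; target point off the line at (3, 2):
start, end = pre_intersection_centerline((0.0, 0.0), (3.0, 2.0), (0.0, 0.0), (1.0, 0.0))
```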
In a second aspect, this application provides a lane line detection apparatus for the field of autonomous driving. The apparatus includes a determining unit and an analysis unit. The determining unit is configured to determine, according to the trajectory of a surrounding vehicle, the changing trend of the distance between the surrounding vehicle and the center line of a virtual lane, where the surrounding vehicle is a vehicle whose distance from the ego vehicle is less than a preset distance. The analysis unit is configured to analyze that changing trend together with the surrounding vehicle's lane-changing probability at the previous moment to obtain its lane-changing probability at the current moment, where the lane-changing probability is the probability that a vehicle changes the lane it is in. The determining unit is further configured to determine the association probability between the surrounding vehicle and the center line of the virtual lane at the current moment according to the surrounding vehicle's lane-changing probability at the current moment, the association probability at the previous moment, the surrounding vehicle's position relative to the center line at the current moment, and a preset transfer coefficient, where the preset transfer coefficient is the conversion coefficient used when the association probability at the previous moment is converted into the association probability at the current moment, and the association probability represents the magnitude of the distance between the surrounding vehicle and the center line of the virtual lane and the similarity between the surrounding vehicle's trajectory and that center line. The determining unit is further configured to determine the lane line of the ego vehicle's lane according to the trajectory of the target vehicle and the preset width, where the target vehicle is the surrounding vehicle with the largest association probability with the center line of the virtual lane at the current moment.
In a possible implementation, the virtual lane is parallel to the line connecting the front and the rear of the ego vehicle, the width of the virtual lane is the preset width, and the ego vehicle is located on the center line of the virtual lane.
In a possible implementation, the trajectory of a surrounding vehicle is a line connecting multiple position points, the longitudinal distance between adjacent position points equals the preset distance interval, and the position points include the position point corresponding to each surrounding vehicle's current position.
In a possible implementation, the analysis unit is configured to analyze the changing trend of the distance between the surrounding vehicle and the center line of the virtual lane to obtain the surrounding vehicle's first observation probability at the current moment, i.e. a predicted value of the vehicle's lane-changing probability, and is further configured to perform a weighted summation of the first observation probability at the current moment and the lane-changing probability at the previous moment to determine the lane-changing probability at the current moment.
In a possible implementation, the lane-changing probability of a surrounding vehicle before it enters a preset area is a preset lane-changing probability, where the preset area is an intersection.
In a possible implementation, the determining unit is configured to determine the first target surrounding vehicle according to a preset lane-changing probability threshold and the surrounding vehicle's lane-changing probability at the current moment; is further configured to determine, from the first target surrounding vehicle's position relative to the center line of the virtual lane at the current moment, the lateral distance between it and the center line and the angle between its heading and the center line; is further configured to perform a weighted summation of that angle and that lateral distance to determine the first target surrounding vehicle's second observation probability at the current moment, i.e. a predicted value of the association probability between the vehicle and the center line of the virtual lane; and is finally configured to normalize the second observation probability at the current moment, the preset transfer coefficient and the association probability at the previous moment to determine the association probability between the first target surrounding vehicle and the center line of the virtual lane at the current moment.
In a possible implementation, if there are multiple surrounding vehicles, the determining unit is configured to determine the first target surrounding vehicle according to the preset lane-changing probability threshold and the surrounding vehicles' lane-changing probabilities at the current moment, including: if the lane-changing probabilities of the multiple surrounding vehicles at the current moment are all greater than or equal to the threshold, determining that all of them are first target surrounding vehicles; if among them there are vehicles whose lane-changing probability is greater than or equal to the threshold and vehicles whose lane-changing probability is less than the threshold, deleting the former and determining the latter as the first target surrounding vehicles.
In a possible implementation, the determining unit is further configured to determine that the lane the ego vehicle is in after leaving the preset area is the target lane, where the preset area is an intersection; is further configured to determine the ego vehicle's target point on the target lane according to its speed and heading at a first position point, where the first position point is the position point corresponding to the ego vehicle's position before it leaves the preset area; is further configured to, if the target point lies on the center line of the target lane, determine that the line connecting the target point and the first position point is the center line of the lane the ego vehicle was in before leaving the preset area; and is further configured to, if the target point does not lie on that center line, determine that the line connecting the foot of the perpendicular from the target point onto the center line of the target lane and the first position point is the center line of the lane the ego vehicle was in before leaving the preset area.
In a third aspect, this application provides a lane line detection apparatus, including a processor and a memory, where the memory is used to store computer program instructions and the processor runs the computer program instructions so that the lane line detection apparatus executes the lane line detection method of the first aspect.
In a fourth aspect, this application provides a computer-readable storage medium, including computer instructions that, when run by a processor, cause a lane line detection apparatus to execute the lane line detection method of the first aspect.
In a fifth aspect, this application provides a computer program product that, when run on a processor, causes a lane line detection apparatus to execute the lane line detection method of the first aspect.
Brief Description of the Drawings
Fig. 1 is a first schematic structural diagram of an autonomous vehicle provided by an embodiment of this application;
Fig. 2 is a second schematic structural diagram of an autonomous vehicle provided by an embodiment of this application;
Fig. 3 is a schematic structural diagram of a computer system provided by an embodiment of this application;
Fig. 4 is a schematic structural diagram of a chip system provided by an embodiment of this application;
Fig. 5 is a first schematic application diagram of a cloud side instructing an autonomous vehicle, provided by an embodiment of this application;
Fig. 6 is a second schematic application diagram of a cloud side instructing an autonomous vehicle, provided by an embodiment of this application;
Fig. 7 is a schematic structural diagram of a computer program product provided by an embodiment of this application;
Fig. 8 is a schematic flowchart of the lane line detection method provided by an embodiment of this application;
Fig. 9(a) is a schematic diagram of a virtual lane provided by an embodiment of this application;
Fig. 9(b) is a schematic diagram of surrounding vehicles provided by an embodiment of this application;
Fig. 10 is a schematic diagram of a preset distance interval provided by an embodiment of this application;
Fig. 11 is a schematic diagram of the position of a surrounding vehicle provided by an embodiment of this application;
Fig. 12 is a schematic diagram of a Markov probability model provided by an embodiment of this application;
Fig. 13 is a first schematic diagram of the trajectory of the ego vehicle provided by an embodiment of this application;
Fig. 14 is a second schematic diagram of the trajectory of the ego vehicle provided by an embodiment of this application;
Fig. 15 is a schematic diagram of a preset lane-changing probability and a preset association probability provided by an embodiment of this application;
Fig. 16(a) is a third schematic diagram of the trajectory of the ego vehicle provided by an embodiment of this application;
Fig. 16(b) is a fourth schematic diagram of the trajectory of the ego vehicle provided by an embodiment of this application;
Fig. 17 is a first schematic diagram of a lane line detection apparatus provided by an embodiment of this application;
Fig. 18 is a second schematic diagram of a lane line detection apparatus provided by an embodiment of this application.
Detailed Description
An embodiment of this application provides a lane line detection method, applied in a vehicle or in another device with the function of controlling a vehicle (such as a cloud server or a mobile terminal). The vehicle may be an autonomous vehicle with partial or full autonomous-driving capability; that is, with reference to the classification standard of the Society of Automotive Engineers (SAE), its automation level may be no automation (L0), driver assistance (L1), partial automation (L2), conditional automation (L3), high automation (L4) or full automation (L5). The vehicle or other device implements the lane line detection method of the embodiments of this application through its components (hardware and software). First, a virtual lane is determined that is parallel to the front-rear axis of the ego vehicle and whose width is a preset width, with the ego vehicle on its center line. Then, from the trajectory of a surrounding vehicle, i.e. a vehicle whose distance from the ego vehicle is less than a preset distance, the changing trend of the distance between that vehicle and the center line of the virtual lane is determined, and that trend is analyzed together with the vehicle's lane-changing probability at the previous moment to determine its lane-changing probability at the current moment. Next, the association probability between the vehicle and the center line of the virtual lane at the current moment is determined from the lane-changing probability at the current moment, the association probability at the previous moment, the vehicle's current position relative to the center line and the preset transfer coefficient, and the target vehicle is determined from these association probabilities. Finally, the lane line of the ego vehicle's lane is determined from the trajectory of the target vehicle, i.e. the surrounding vehicle with the largest association probability with the center line of the virtual lane at the current moment, together with the preset width, improving the accuracy of determining the lane line of the ego vehicle's lane and ensuring vehicle safety.
Fig. 1 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of this application. In one embodiment, the vehicle 100 is configured in a fully or partially autonomous driving mode. For example, when the vehicle 100 is in a partially autonomous mode (L2) or in L3, L4 or L5, it can determine a surrounding vehicle's lane-changing probability at the current moment and the association probability between that vehicle and the center line of the virtual lane at the current moment from the surrounding vehicle's trajectory, its lane-changing probability at the previous moment, its current position relative to the center line of the virtual lane, the association probability at the previous moment, and the preset transfer coefficient, and then determine the lane line of the ego vehicle's lane.
The vehicle 100 may include various subsystems, such as a travel system 110, a sensor system 120, a control system 130, one or more peripheral devices 140, a power supply 150, a computer system 160 and a user interface 170. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the vehicle 100 may be interconnected in a wired or wireless manner.
The travel system 110 may include components that provide powered motion for the vehicle 100. In one embodiment, the travel system 110 may include an engine 111, a transmission 112, an energy source 113 and wheels 114. The engine 111 may be an internal-combustion engine, an electric motor, an air-compression engine or a combination of engine types, such as a hybrid engine composed of a gasoline engine and an electric motor, or of an internal-combustion engine and an air-compression engine. The engine 111 converts the energy source 113 into mechanical energy.
Examples of the energy source 113 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries and other sources of electric power. The energy source 113 may also provide energy for other systems of the vehicle 100.
The transmission 112 may transmit mechanical power from the engine 111 to the wheels 114. The transmission 112 may include a gearbox, a differential and a drive shaft; in one embodiment it may also include other devices, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 114.
The sensor system 120 may include several sensors that sense information about the environment around the vehicle 100. For example, the sensor system 120 may include a positioning system 121 (which may be the global positioning system (GPS), the BeiDou system or another positioning system), an inertial measurement unit (IMU) 122, a radar 123, a lidar 124 and a camera 125. The sensor system 120 may also include sensors that monitor internal systems of the vehicle 100 (e.g. an in-vehicle air-quality monitor, a fuel gauge, an oil-temperature gauge). Sensor data collected by one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.); such detection and recognition is key to the safe operation of autonomous driving by the vehicle 100.
The positioning system 121 can be used to estimate the geographic position of the vehicle 100. The IMU 122 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU 122 may be a combination of an accelerometer and a gyroscope.
The radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, the radar 123 may also be used to sense their speed and/or heading.
The lidar 124 may use laser light to sense objects in the environment of the vehicle 100. In some embodiments, the lidar 124 may include one or more laser sources, a laser scanner and one or more detectors, as well as other system components.
The camera 125 may be used to capture multiple images of the surroundings of the vehicle 100 as well as multiple images inside the vehicle cabin. The camera 125 may be a still camera or a video camera.
The control system 130 may control the operation of the vehicle 100 and its components. The control system 130 may include various elements, including a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135 and an obstacle avoidance system 136.
The steering system 131 is operable to adjust the heading of the vehicle 100; for example, in one embodiment it may be a steering-wheel system.
The throttle 132 is used to control the operating speed of the engine 111 and thereby the speed of the vehicle 100.
The braking unit 133 is used to decelerate the vehicle 100. The braking unit 133 may use friction to slow the wheels 114. In other embodiments, the braking unit 133 may also convert the kinetic energy of the wheels 114 into electric current, or take other forms to slow the rotation of the wheels 114 and thus control the speed of the vehicle 100.
The computer vision system 134 can process and analyze images captured by the camera 125 to recognize objects and/or features in the surroundings of the vehicle 100 as well as the body and facial features of the driver in the cabin. The objects and/or features may include traffic signals, road conditions and obstacles; the driver's body and facial features include the driver's behavior, gaze, expression, etc. The computer vision system 134 may use object recognition algorithms, structure-from-motion (SFM) algorithms, video tracking and other computer vision techniques. In some embodiments, the computer vision system 134 may also be used to map the environment, track objects, estimate object speeds, determine driver behavior, perform face recognition, and so on.
The route control system 135 is used to determine the driving route of the vehicle 100. In some embodiments, the route control system 135 may combine data from sensors, the positioning system 121 and one or more predetermined maps to determine the driving route for the vehicle 100.
The obstacle avoidance system 136 is used to identify, evaluate and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 100.
Of course, in one example, the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The vehicle 100 interacts with external sensors, other vehicles, other computer systems or users through the peripheral devices 140. The peripheral devices 140 may include a wireless communication system 141, an on-board computer 142, a microphone 143 and/or a speaker 144.
In some embodiments, the peripheral devices 140 provide a means for a user of the vehicle 100 to interact with the user interface 170. For example, the on-board computer 142 may provide information to the user of the vehicle 100, and the user interface 170 may also operate the on-board computer 142 to receive user input; the on-board computer 142 may be operated through a touch screen. In other cases, the peripheral devices 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle. For example, the microphone 143 may receive audio (e.g. voice commands or other audio input) from the user of the vehicle 100; similarly, the speaker 144 may output audio to the user of the vehicle 100.
The wireless communication system 141 may communicate wirelessly with one or more devices, directly or via a communication network. For example, the wireless communication system 141 may use 3G cellular communication such as CDMA, EVDO or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 141 may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system 141 may communicate directly with devices using an infrared link, Bluetooth or ZigBee, or use other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices.
The power supply 150 may provide power to the various components of the vehicle 100. In one embodiment, the power supply 150 may be a rechargeable lithium-ion or lead-acid battery; one or more battery packs of such batteries may be configured as the power supply to provide power to the various components of the vehicle 100. In some embodiments, the power supply 150 and the energy source 113 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the vehicle 100 are controlled by the computer system 160. The computer system 160 may include at least one processor 161 that executes instructions 1621 stored in a non-transitory computer-readable medium such as a data storage device 162. The computer system 160 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
The processor 161 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although Fig. 1 functionally illustrates the processor, the memory and other elements in the same physical housing, a person of ordinary skill in the art will understand that the processor, computer system or memory may in fact comprise multiple processors, computer systems or memories that may or may not be stored in the same physical housing; for example, the memory may be a hard disk drive or another storage medium located in a different housing. Therefore, a reference to a processor or computer system will be understood to include a reference to a collection of processors, computer systems or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to the function specific to that component.
In the various aspects described here, the processor may be located remote from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described here are executed on a processor arranged within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the data storage device 162 may contain instructions 1621 (e.g. program logic) that can be executed by the processor 161 to perform various functions of the vehicle 100, including those described above. The data storage device 162 may also contain additional instructions, including instructions to send data to, receive data from, interact with and/or control one or more of the travel system 110, the sensor system 120, the control system 130 and the peripheral devices 140.
In addition to the instructions 1621, the data storage device 162 may also store data, such as road maps, route information, the vehicle's position, direction, speed and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 160 while the vehicle 100 operates in autonomous, semi-autonomous and/or manual modes.
For example, in the embodiments of this application, the data storage device 162 may acquire driver information and surrounding environment information from the sensor system 120 or other components of the vehicle 100. The environment information may be, for example, the type of road the vehicle is currently on, the current weather or the current time; the driver information may be the driver's identity, driving experience or current physical condition. The data storage device 162 may also store state information of the vehicle itself and of the other surrounding vehicles or devices whose distance from the vehicle is less than a preset distance. State information includes, but is not limited to, speed, acceleration, heading angle, position and trajectory. For example, based on the speed- and distance-measuring functions of the radar 123 and the lidar 124, the vehicle obtains its own speed, the speeds of other vehicles, the trajectories of other vehicles, and so on. In this way, the processor 161 can process the information it obtains from the data storage device 162 according to preset algorithms to determine the other vehicles' lane-changing probabilities at the current moment and their association probabilities with the center line of the virtual lane at the current moment, determine the center line of the ego vehicle's lane from the trajectory of the vehicle with the largest association probability, determine the lane line of the ego vehicle's lane from that center line and the preset width, and then determine an autonomous-driving strategy suitable for the ego vehicle according to that lane line and control the ego vehicle to drive autonomously so that it passes through the intersection smoothly.
The user interface 170 is used to provide information to or receive information from a user of the vehicle 100. Optionally, the user interface 170 may include an interface for interacting and exchanging information with one or more input/output devices in the set of peripheral devices 140, where those input/output devices may be, for example, one or more of the wireless communication system 141, the on-board computer 142, the microphone 143 and the speaker 144.
The computer system 160 may control the functions of the vehicle 100 based on inputs received from the various subsystems (e.g. the travel system 110, the sensor system 120 and the control system 130) and from the user interface 170. For example, the computer system 160 may use input from the control system 130 to control the steering system 131 so as to avoid obstacles detected by the sensor system 120 and the obstacle avoidance system 136. In some embodiments, the computer system 160 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of the above components may be installed or associated separately from the vehicle 100. For example, the data storage device 162 may exist partly or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and Fig. 1 should not be construed as limiting the embodiments of this application.
An autonomous vehicle traveling on a road, such as the vehicle 100 above, can recognize its own environment, the positions of the vehicles in the surrounding environment, and state information such as their speed and heading, to determine the surrounding vehicle most similar to the current state of the vehicle 100. In some examples, the characteristics of each vehicle in the surrounding environment whose distance from the vehicle 100 does not exceed the preset distance can be considered independently, and based on the center line of the virtual lane of the vehicle 100 and each surrounding vehicle's lane-changing probability at the current moment, or the association probability between the surrounding vehicle and the center line of the virtual lane of the vehicle 100 at the current moment, the most suitable vehicle for the vehicle 100 to follow can be determined, and hence the trajectory of the vehicle 100 over the following period and the lane lines of the lane the vehicle 100 is in, to support the vehicle's autonomous driving.
Optionally, the autonomous vehicle 100 or a computing device associated with it (such as the computer system 160, the computer vision system 134 or the data storage device 162 of Fig. 1) may determine the lane line of the ego vehicle's lane over a certain period, and the ego vehicle's trajectory, based on the states of the vehicles in the surrounding environment (e.g. trajectory, position, speed, etc.) and the center line of the ego vehicle's virtual lane. In this process, other factors may also be considered to determine the lane line of the ego vehicle's lane, such as the trajectory of the vehicle 100 when it previously passed through this section of road.
In addition to providing the lane line of the ego vehicle's lane, the computing device may also provide instructions to adjust the speed and steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (e.g. cars in adjacent lanes on the road).
The vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement-park vehicle, construction equipment, tram, golf cart, train, cart or the like; the embodiments of this application impose no particular limitation.
In other embodiments of this application, the autonomous vehicle may also include a hardware structure and/or software modules, implementing the above functions in the form of a hardware structure, software modules, or a hardware structure plus software modules. Whether a given function is performed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
Referring to Fig. 2, by way of example, the vehicle may include the following modules:
A perception module 201, used to acquire data such as video stream data and radar laser point clouds through roadside sensors and on-board sensors, process the raw video stream data, radar laser point clouds and other data collected by the sensors, and obtain information such as the positions of obstacles and lane lines in the environment around the ego vehicle (e.g. the positions of vehicles, pedestrians, traffic signs and traffic lights, and of crosswalks and stop lines). The roadside and on-board sensors may be lidar, millimeter-wave radar, vision sensors, etc. The perception module 201 is also used to send the positions of obstacles and lane lines in the environment around the ego vehicle, determined from the data collected by all sensors, a certain category of sensors or a certain sensor, to the fusion module 202, the positioning module 203, the road structure cognition module 204 and the prediction module 205.
A fusion module 202, used to obtain from the perception module 201 information such as the positions of obstacles and lane lines in the environment around the ego vehicle, analyze this information, fuse the obstacles and lane lines in the ego vehicle's surroundings, determine which information belongs to the same obstacle (e.g. the same vehicle) or the same lane line (e.g. the same parking-space boundary), and thereby further determine each obstacle's type, position, size and speed, and each lane line's type, position and shape. The fusion module 202 is also used to analyze the information obtained from the perception module 201 to determine the regions the ego vehicle can pass through and the positions it cannot. The fusion module 202 is also used to send information such as each obstacle's type, position, size and speed, each lane line's type, position and shape, and the regions the ego vehicle can and cannot pass through, to the positioning module 203, the road structure cognition module 204 and the prediction module 205.
A positioning module 203, used to locate the ego vehicle according to the information received from the perception module 201 and the fusion module 202 and the map information obtained from a map interface, and determine the ego vehicle's position (e.g. its longitude and latitude). The positioning module 203 is also used to send the ego vehicle's position to the perception module 201, the fusion module 202, the road structure cognition module 204, the prediction module 205, etc.
A road structure cognition module 204, used to determine, from the information obtained from the perception module 201, the fusion module 202 and the positioning module 203, information such as the lane lines in the ego vehicle's surroundings, the traffic flow (i.e. the other vehicles in the ego vehicle's surroundings), other surrounding obstacles and the ego vehicle's position, and to further derive a local road model of the surroundings of the vehicle's position and that model's confidence. The road structure cognition module 204 is also used to send the local road model of the surroundings of the ego vehicle's position and its confidence to the prediction module 205 and the planning and control module 206.
A prediction module 205, used to reason over the information obtained from the perception module 201, the fusion module 202, the positioning module 203 and the road structure cognition module 204, and to predict the intentions and trajectories of vehicles, pedestrians and the like in the surroundings of the ego vehicle's position. The prediction module 205 is also used to send these predicted intentions and trajectories to the planning and control module 206.
A planning and control module 206, used to determine the road structure and the intentions and trajectories of obstacles (e.g. vehicles in the surrounding environment) from the information it obtains from the perception module 201, the fusion module 202, the positioning module 203, the road structure cognition module 204 and the prediction module 205. The planning and control module 206 is also used to plan the ego vehicle's trajectory according to the road structure and the obstacles' intentions and trajectories, plan the vehicle's driving path, generate a driving strategy suitable for the ego vehicle, output action instructions or control quantities corresponding to that driving strategy, and control the vehicle to drive autonomously according to those instructions or control quantities.
An on-board communication module 207 (not shown in Fig. 2): used for information exchange between the ego vehicle and other vehicles.
A storage component 208 (not shown in Fig. 2): used to store the executable code of the above modules; running this executable code implements part or all of the method flows of the embodiments of this application.
In a possible implementation of the embodiments of this application, as shown in Fig. 3, the computer system 160 of Fig. 1 includes a processor 301 coupled to a system bus 302. The processor 301 may be one or more processors, each of which may include one or more processor cores. A video adapter 303 may drive a display 324, which is coupled to the system bus 302. The system bus 302 is coupled to an input/output (I/O) bus 305 through a bus bridge 304; an I/O interface 306 is coupled to the I/O bus 305 and communicates with a variety of I/O devices, such as an input device 307 (e.g. keyboard, mouse, touch screen), a media tray 308 (e.g. CD-ROM, multimedia interface), a transceiver 309 (which can send and/or receive radio communication signals), a camera 310 (which can capture static and dynamic digital video images) and an external universal serial bus (USB) port 311. Optionally, the interface connected to the I/O interface 306 may be a USB interface.
The processor 301 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor or a combination thereof. Optionally, the processor 301 may also be a dedicated device such as an application-specific integrated circuit (ASIC). Optionally, the processor 301 may also be a neural-network processor, or a combination of a neural-network processor and the above conventional processors.
Optionally, in the various embodiments described in this application, the computer system 160 may be located remote from the autonomous vehicle and communicate wirelessly with the autonomous vehicle 100. In other aspects, some of the processes described in this application may be executed on a processor within the autonomous vehicle, while others are executed by a remote processor, including taking the actions required to perform a single maneuver.
The computer system 160 can communicate with a deploying server 313 through a network interface 312. Optionally, the network interface 312 may be a hardware network interface, such as a network card. The network 314 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN); optionally, the network 314 may also be a wireless network, such as a WiFi network or a cellular network.
A hard disk drive interface 315 is coupled to the system bus 302 and connected to a hard disk drive 316. A system memory 317 is coupled to the system bus 302. Data running in the system memory 317 may include the operating system (OS) 318 and application programs 319 of the computer system 160.
The operating system (OS) 318 includes, but is not limited to, a shell 320 and a kernel 321. The shell 320 is an interface between the user and the kernel 321 of the operating system 318, and is the outermost layer of the operating system 318. The shell manages the interaction between the user and the operating system 318: it waits for the user's input, interprets the user's input for the operating system 318, and handles the various output results of the operating system 318.
The kernel 321 consists of the parts of the operating system 318 that manage memory, files, peripherals and system resources, and interacts directly with the hardware. The kernel 321 of the operating system 318 usually runs processes, provides inter-process communication, and provides functions such as CPU time-slice management, interrupts, memory management and I/O management.
The application programs 319 include programs 323 related to autonomous driving, for example a program managing the interaction between an autonomous car and obstacles on the road, a program controlling the route or speed of an autonomous car, and a program controlling the interaction between an autonomous car and other cars/autonomous cars on the road. The application programs 319 also exist on the system of the deploying server 313; in one embodiment, the computer system 160 can download an application program 319 from the deploying server 313 when it needs to execute it.
As another example, an application program 319 may be one that controls the vehicle to determine a driving strategy according to the vehicle's lane lines described above and a conventional control module. The processor 301 of the computer system 160 calls the application program 319 to obtain the driving strategy.
A sensor 322 is associated with the computer system 160 and is used to detect the environment around the computer system 160. For example, the sensor 322 can detect animals, cars, obstacles and/or crosswalks, and can further detect the environment around such objects, for instance the environment around an animal, such as other animals appearing around it, the weather conditions and the brightness of its surroundings. Optionally, if the computer system 160 is located on an autonomous car, the sensor 322 may be at least one of a camera, an infrared sensor, a chemical detector, a microphone or similar devices.
In other embodiments of this application, the lane line detection method of the embodiments of this application may also be executed by a chip system. Fig. 4 is a structural diagram of a chip system provided by an embodiment of this application.
A neural-network processing unit (NPU) 40 is mounted as a coprocessor on a host CPU, which allocates tasks to the NPU 40. The core part of the NPU 40 is an operation circuit 403. By way of example, a controller 404 controls the operation circuit 403 so that the operation circuit 403 performs multiplication using matrix data extracted from memory.
In some implementations, the operation circuit 403 internally includes multiple processing engines (PEs). In some implementations, the operation circuit 403 is a two-dimensional systolic array; optionally, it may also be a one-dimensional systolic array, or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, the operation circuit 403 is a general-purpose matrix processor.
For example, suppose there are an input matrix A, a weight matrix B and an output matrix C. The operation circuit 403 fetches the data corresponding to the weight matrix B from a weight memory 402 and caches it on each PE in the operation circuit 403. The operation circuit 403 also fetches the data corresponding to the input matrix A from an input memory 401, performs the matrix operation on the input matrix A and the weight matrix B, and stores partial or final results of the matrix operation in an accumulator 408.
As another example, the operation circuit 403 can be used to implement a feature extraction model (such as a convolutional neural network model): image data is input into the convolutional neural network model, the model's operations yield the image's features, and the image features are then output to a classifier, which outputs the classification probabilities of the objects in the image.
A unified memory 406 is used to store input data and output data. Weight data in external memory is transferred directly to the weight memory 402 via a direct memory access controller (DMAC) 405. Input data in external memory can be moved via the DMAC into the unified memory 406 or into the input memory 401.
A bus interface unit (BIU) 410 is used for the interaction between the advanced extensible interface (AXI) bus and the DMAC and the instruction fetch buffer 409. It is also used for the instruction fetch buffer 409 to fetch instructions from external memory, and for the memory access controller 405 to fetch the original data of the input matrix A or the weight matrix B from external memory.
The DMAC is mainly used to move input data from external memory (DDR) into the unified memory 406, move weight data into the weight memory 402, or move input data into the input memory 401.
By way of example, in the embodiments of this application, if a DQN model is used to compute data such as a target's class, shape information and tracking information, the input data may be the DQN model's input, i.e. information such as the lidar point-cloud data of target objects in the vehicle's surroundings (e.g. other vehicles interacting with the vehicle), and the output data are the DQN model's output, i.e. data such as the target class, target shape information and target tracking information of targets in the vehicle's surroundings.
A vector computation unit 407 may include multiple operation processing units. It is used to further process the output of the operation circuit 403 when needed, such as vector multiplication, vector addition, exponential operations, logarithmic operations and magnitude comparison. It is mainly used for non-convolution/non-FC layer computations in neural networks, such as pooling, batch normalization and local response normalization.
In some implementations, the vector computation unit 407 stores the processed output vector into the unified memory 406. For example, the vector computation unit 407 may apply a non-linear function to the output of the operation circuit 403, e.g. a vector of accumulated values, to generate activation values. In some implementations, the vector computation unit 407 generates normalized values, merged values, or both. In some implementations, the processed output vector can also be used as an activation input to the operation circuit 403, for example for use in subsequent layers of a neural network.
The controller 404 is connected to the instruction fetch buffer 409; the instructions used by the controller 404 can be stored in the instruction fetch buffer 409.
As a possible implementation, the unified memory 406, the input memory 401, the weight memory 402 and the instruction fetch buffer 409 are all on-chip memories. The external memory is private to the NPU 40 hardware architecture.
In combination with Figs. 1 to 3, the host CPU and the NPU 40 cooperate to implement the corresponding algorithms for the functions required by the vehicle 100 in Fig. 1, by the vehicle shown in Fig. 2, or by the computer system 160 shown in Fig. 3.
In other embodiments of this application, the computer system 160 may also receive information from, or transfer information to, other computer systems. Alternatively, sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer, which processes this data. As shown in Fig. 5, data from the computer system 160 may be transmitted via a network to a cloud-side computer system 510 for further processing. The network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using the proprietary communication protocols of one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. Such communication may be performed by any device capable of transmitting data to and from other computers, such as modems and wireless interfaces.
In one example, the computer system 510 may include a server with multiple computers, such as a load-balancing server farm. To receive, process and transmit data from the computer system 160, the server 520 exchanges information with different nodes of the network. The computer system 510 may have a configuration similar to that of the computer system 160, with a processor 530, a memory 540, instructions 550 and data 560.
In one example, the data 560 of the server 520 may include weather-related information. For example, the server 520 may receive, monitor, store, update and transmit various information related to target objects in the surrounding environment. This information may include, for example, target class, target shape information and target tracking information in the form of reports, radar information, forecasts, etc.
Fig. 6 shows an example of interaction between an autonomous vehicle and a cloud service center (cloud server). The cloud service center may receive information (such as data collected by vehicle sensors, or other information) from autonomous vehicles 613 and 612 within its environment 600 via a network 611, such as a wireless communication network.
Based on the received data, the cloud service center 620 runs its stored programs related to controlling autonomous driving of cars to control the autonomous vehicles 613 and 612. The programs related to controlling autonomous driving may be: a program managing the interaction between the autonomous car and obstacles on the road, a program controlling the route or speed of the autonomous car, or a program controlling the interaction between the autonomous car and other autonomous cars on the road.
By way of example, the cloud service center 620 may provide parts of a map to the vehicles 613 and 612 through the network 611. In other examples, operations may be divided among different locations; for example, multiple cloud service centers may receive, validate, combine and/or send information reports. In some examples, information reports and/or sensor data may also be sent between vehicles. Other configurations are also possible.
In some examples, the cloud service center 620 sends the autonomous vehicle suggested solutions for possible driving situations in the environment (e.g. informing it of an obstacle ahead and how to go around it). For example, the cloud service center 620 may assist the vehicle in determining how to proceed when facing a particular obstacle in the environment. The cloud service center 620 sends the autonomous vehicle a response indicating how the vehicle should proceed in a given scenario. For example, based on collected sensor data, the cloud service center 620 may confirm the presence of a temporary stop sign ahead on the road, or, based on a "lane closed" sign and sensor data of construction vehicles, determine that the lane is closed due to construction. Accordingly, the cloud service center 620 sends a suggested operating mode for the vehicle to pass the obstacle (e.g. instructing the vehicle to change lanes onto another road). When the cloud service center 620 observes the video stream within its operating environment 600 and has confirmed that the autonomous vehicle can safely and successfully pass the obstacle, the operating steps used for that autonomous vehicle may be added to a driving information map. Accordingly, this information can be sent to other vehicles in the area that may encounter the same obstacle, to help them not only recognize the closed lane but also know how to pass it.
In some embodiments, the disclosed method may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or on another non-transitory medium or article. Fig. 7 schematically shows a conceptual partial view of an example computer program product arranged according to at least some embodiments presented here; the example computer program product includes a computer program for executing a computer process on a computing device. In one embodiment, the example computer program product 700 is provided using a signal bearing medium 701. The signal bearing medium 701 may include one or more program instructions 702 that, when run by one or more processors, may provide all or part of the functions described above for Figs. 1 to 7, or all or part of the functions described in subsequent embodiments. For example, referring to the embodiment shown in Fig. 8, one or more features of S801 to S805 may be undertaken by one or more instructions associated with the signal bearing medium 701. In addition, the program instructions 702 in Fig. 7 also describe example instructions.
In some examples, the signal bearing medium 701 may include a computer-readable medium 703, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), digital tape, memory, read-only memory (ROM) or random access memory (RAM). In some implementations, the signal bearing medium 701 may include a computer-recordable medium 704, such as, but not limited to, memory, a read/write (R/W) CD or an R/W DVD. In some implementations, the signal bearing medium 701 may include a communication medium 705, such as, but not limited to, digital and/or analog communication media (e.g. fiber-optic cables, waveguides, wired communication links, wireless communication links). Thus, for example, the signal bearing medium 701 may be conveyed by a communication medium 705 in wireless form (e.g. a wireless communication medium complying with the IEEE 802.11 standard or another transmission protocol). The one or more program instructions 702 may be, for example, computer-executable instructions or logic-implementing instructions. In some examples, a computing device such as those described for Figs. 1 to 7 may be configured to provide various operations, functions or actions in response to program instructions 702 conveyed to the computing device through one or more of the computer-readable medium 703, the computer-recordable medium 704 and/or the communication medium 705. It should be understood that the arrangements described here are for example purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (e.g. machines, interfaces, functions, orders and groups of functions) can be used instead, and that some elements may be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that can be implemented as discrete or distributed components, or in combination with other components in any suitable combination and location.
In order to improve the accuracy of determining the lane line of the ego vehicle's lane, ensure vehicle safety, and enable the vehicle to pass through intersection scenarios safely, an embodiment of this application provides a lane line detection method. The execution subject of the method is a vehicle with autonomous-driving capability or another device with the function of controlling a vehicle, or a processor in such a vehicle or device, for example the processor 161, the processor 301 or the processor 530 mentioned in Figs. 1-7 above. As shown in Fig. 8, the lane line detection method includes steps S801-S805:
S801. Determine the lane lines of the virtual lane according to the position of the ego vehicle.
Optionally, taking the ego vehicle's position as a point on the center line of the virtual lane, the virtual lane lines of the ego vehicle's lane are constructed parallel to the line connecting the front and rear of the ego vehicle, with a preset width, where the preset width is a lane width set by the vehicle itself or by the user.
By way of example, as shown in Fig. 9(a), a and c in the figure are the lane lines of the virtual lane constructed from the ego vehicle's position, b is the center line of the virtual lane, and the value of the distance d between a and c is the preset width. The arrow beside the ego vehicle in Fig. 9(a) indicates the heading of the vehicle; a, b and c are all parallel to the line connecting the front and rear of the ego vehicle, and the ego vehicle lies on the center line b of the virtual lane.
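The construction of lines a, b and c can be sketched as follows. This is a minimal 2D illustration under assumed representations (each line is given as a point plus a shared heading angle in radians); the function name is hypothetical.

```python
import math

def virtual_lane(ego_position, heading, preset_width):
    """Build the virtual lane around the ego vehicle: the center line b passes
    through the ego position along the vehicle's front-rear axis (heading), and
    the lane lines a and c are offset by half the preset width on either side."""
    nx, ny = -math.sin(heading), math.cos(heading)   # left normal of the heading
    half = preset_width / 2.0
    center = (ego_position, heading)                                   # line b
    left = ((ego_position[0] + half * nx, ego_position[1] + half * ny), heading)   # line a
    right = ((ego_position[0] - half * nx, ego_position[1] - half * ny), heading)  # line c
    return left, center, right

# Ego at the origin heading along +x, with an assumed 3.5 m preset width:
a, b, c = virtual_lane((0.0, 0.0), 0.0, 3.5)
```

All three returned lines share the ego vehicle's heading, so they are parallel to the line connecting the front and rear of the ego vehicle, as required.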
S802、根据周车的运动轨迹,确定该周车与虚拟车道的中心线的距离的变化趋势。
其中,周车为与自车的距离小于预设距离的车辆,预设距离是由车辆或者用户自行确定的。随着自车所在位置的变化,或者随着当前时刻的变化,与自车的距离小于预设距离的车辆的数目、位置可能会发生变化。
在一种可能的实现方式中,根据周车相对于自车的位置的不同,可以将周车划分为自车的前车、左前车、右前车、右侧车、左侧车、后车、左后车或右后车。
示例性的,以图9的(a)所示的自车所在的虚拟车道为例,以自车所在位置为中心,对虚拟车道以及与自车的距离未超过预设距离内的区域进行划分,得到9个区域,图9的(b)所示。这9个区域分别为区域A、区域B、区域C、区域D、区域E、区域F、区域G、区域H以及区域I。其中,自车位于区域I中,位于区域A到区域H中周车分别为自车的左前车、前车、右前车、左侧车、右侧车、左后车、后车、右后车,即车辆A1为自车的左前车,车辆B1为自车的前车,车辆C1为自车的右前车,车辆D1为自车的左侧车,车辆E1为自车的右侧车,车辆F1为自车的左后车,车辆G1为自车的后车,车辆H1为自车的右后车。
可选的,先根据传感器检测到的信息,确定与自车的距离不超过预设距离的周车,以及每一周车的运动轨迹。对于每一车辆来说,车辆的运动轨迹上存在多个位置点,该车辆的运动轨迹由这多个位置点之间的平滑的连线构成。
在一种可能的实现方式中,车辆的运动轨迹上的多个位置点中包括车辆当前所在位置对应的位置点。
在一种可能的实现方式中,车辆的运动轨迹上的多个位置点是随机确定的,或者车辆的运动轨迹上的多个位置点是按预设时间间隔确定的,或者车辆的运动轨迹上的多个位置点之间的纵向距离的取值与预设距离间隔的取值相同。
示例性的,以车辆的运动轨迹上的多个位置点之间的纵向距离的取值与预设距离间隔的取值相同为例。如图10所示,与自车的距离小于预设距离的周车中包括车辆a、车辆b以及车辆c。车辆a的运动轨迹上包括3个位置点,a1和a2为这3个位置点中的相邻位置点之间的纵向距离,车辆b的运动轨迹上包括4个位置点,b1、b2和b3为这4个位置点中的相邻位置点之间的纵向距离,车辆c的运动轨迹上包括5个位置点,c1、c2、c3和c4为这5个位置点中的相邻位置点之间的纵向距离。其中,a1=a2=b1=b2=b3=c1=c2=c3=c4=d,d为预设距离间隔的取值。以下示例均以车辆的运动轨迹上的多个位置点之间的纵向距离的取值与预设距离间隔的取值相同为例,对本申请实施例进行说明。
在一种可能的实现方式中,预设距离间隔由用户或车辆自行确定。
在一种可能的实现方式中,由用户或车辆自行确定预设距离间隔的取值范围,车辆的运动轨迹上的多个位置点之间的纵向距离的取值位于预设距离间隔的取值范围之内。
示例性的,以预设距离间隔的取值范围为[8,16]为例,若车辆a的运动轨迹上包括3个位置点,这3个位置点之间的纵向距离分别为a1和a2,则8<=a1<=16,8<=a2<=16。
在一种可能的实现方式中,对于每一车辆,先确定多个位置点,再确定由这多个位置点构成的平滑的曲线为该车辆的运动轨迹。或者,也可以先确定车辆的运动轨迹,然后确定该运动轨迹上的多个位置点,再利用这多个位置点连接得到的平滑的曲线作为该车辆的运动轨迹。
可选的,在确定周车的运动轨迹后,根据该运动轨迹,确定该周车在其多个位置点时与虚拟车道的中心线的横向距离,并记录该横向距离,即得到该周车与虚拟车道的中心线的距离的变化趋势。
示例性的,车辆1在其运动轨迹上存在3个位置点,这3个位置点分别为a、b、c,车辆1在这3个位置点处与虚拟车道的中心线的横向距离分别为5、7、6,即车辆1与虚拟车道中心线的距离的变化趋势为5、7、6。
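上述距离的变化趋势的计算可以用如下片段进行示意(假设虚拟车道中心线为直线,用一点加方向向量表示;函数名与数据表示方式均为示例性假设):

```python
def lateral_distance_trend(track_points, center_p0, center_dir):
    """计算周车运动轨迹上各位置点到虚拟车道中心线的横向距离序列(带符号)。

    track_points: 运动轨迹上的位置点列表 [(x, y), ...]
    center_p0:    中心线上一点
    center_dir:   中心线方向的单位向量 (ux, uy)
    """
    ux, uy = center_dir
    # 中心线的左法向量
    nx, ny = -uy, ux
    trend = []
    for (px, py) in track_points:
        dx, dy = px - center_p0[0], py - center_p0[1]
        # 位置点相对中心线上一点的向量在法向上的投影,即横向距离
        trend.append(dx * nx + dy * ny)
    return trend
```

以正文示例中的车辆1为例,若其3个位置点到中心线的横向距离依次为5、7、6,则该函数返回的序列即为其与中心线的距离的变化趋势。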
需要说明的是,若在当前时刻无与自车的距离小于预设距离的车辆,则自车可将其驾驶权限交给用户,以使得用户根据其驾驶经验以及周围环境来驾驶车辆安全通过。若在当前时刻与自车的距离小于预设距离的周车中包括左侧车、右侧车、左后车或右后车中的至少一种周车,则自车减速,以使得左侧车、右侧车、右后车、左后车变化为左前车、右前车或前车,并获取其此时的左前车、右前车及前车的运动轨迹。
S803、对周车与虚拟车道的中心线的距离的变化趋势,以及该周车在上一时刻的换道概率进行分析,确定该周车在当前时刻的换道概率。
可选的,针对每一个周车,对周车与虚拟车道的中心线的距离的变化趋势,以及该周车在上一时刻的换道概率进行分析,确定该周车在当前时刻的换道概率。
可选的,先对周车与虚拟车道的中心线的距离的变化趋势进行分析,得到该周车在当前时刻的第一观测概率,也就是在当前时刻对该周车的换道概率的预测值。然后对该周车在当前时刻的第一观测概率和该周车在上一时刻的换道概率进行加权求和,得到该周车在当前时刻的换道概率。其中,第一观测概率的权重和上一时刻的换道概率的权重是由自车或者用户预先设定的。对自车的周车进行遍历,得到每一周车的换道概率。
示例性的,将周车与虚拟车道的中心线的距离的变化趋势输入到sigmoid函数中进行分析,输出该周车在当前时刻的第一观测概率,如公式P_Observation=sigmoid(ΔclosetL)。其中,P_Observation表示车辆在当前时刻的第一观测概率,ΔclosetL表示车辆与虚拟车道的中心线的距离的变化趋势。然后,根据公式P_LaneChange=α·P_LaneChange′+β·P_Observation确定车辆在当前时刻的换道概率。其中,P_LaneChange表示车辆在当前时刻的换道概率,P_LaneChange′表示车辆在上一时刻的换道概率,α表示车辆在上一时刻的换道概率的预设权重,β表示车辆的第一观测概率的预设权重。α与β是由车辆自行确定的,或者是用户根据其需求预先设定的。
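上述两个公式可以用如下Python片段进行示意(α、β的默认取值仅为示例性假设,实际由车辆或用户设定):

```python
import math

def sigmoid(x):
    """sigmoid函数,将输入映射到(0, 1)区间。"""
    return 1.0 / (1.0 + math.exp(-x))

def lane_change_probability(delta_closet_l, prev_p_lane_change, alpha=0.5, beta=0.5):
    """先由距离的变化趋势得到第一观测概率 P_Observation = sigmoid(ΔclosetL),
    再加权求和 P_LaneChange = α·P_LaneChange′ + β·P_Observation,
    得到当前时刻的换道概率。"""
    p_observation = sigmoid(delta_closet_l)
    return alpha * prev_p_lane_change + beta * p_observation
```

对自车的每一周车调用该函数并迭代更新,即可得到每一周车在各时刻的换道概率。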
在一种可能的实现方式中,周车在进入预设区域,即路口前的换道概率为预设换道概率。其中,该预设换道概率是由车辆自行确定的,或者是用户根据其需求预先设定的。
S804、根据周车在当前时刻的换道概率、该周车与虚拟车道的中心线在上一时刻的关联概率、该周车在当前时刻相对于虚拟车道的中心线的位置以及预设转移系数,确定周车与虚拟车道的中心线在当前时刻的关联概率。
可选的,根据周车在当前时刻的换道概率、该周车与虚拟车道的中心线在上一时刻的关联概率、该周车在当前时刻相对于虚拟车道的中心线的位置以及预设转移系数,确定该周车与虚拟车道的中心线在当前时刻的关联概率。对每一周车进行遍历,确定每一周车与虚拟车道的中心线在当前时刻的关联概率。其中,周车在当前时刻相对于虚拟车道的中心线的位置中包括其与该虚拟车道的中心线的横向距离,以及其车头朝向与虚拟车道的中心线的角度。预设转移系数为周车在上一时刻与虚拟车道的中心线的关联概率,转换为该周车在当前时刻与虚拟车道的中心线的关联概率时的转换系数,该预设转移系数是由车辆自行确定的,也可以是由用户根据其需求预先确定的。关联概率用于表示周车与虚拟车道的中心线的距离的大小,以及周车的运动轨迹与虚拟车道的中心线的相似度。
示例性的,以图9的(a)所示的虚拟车道为例,如图11所示。其中,在图11中,车辆1表示自车的一个周车,d表示车辆1与自车所在虚拟车道的中心线的横向距离,θ表示车辆1的车头朝向与虚拟车道的中心线的夹角,图中单箭头所示方向为车辆1的车头朝向。
可选的,根据周车在当前时刻的换道概率以及预设换道概率阈值,确定第一目标周车。
在一种可能的实现方式中,在预设区域为前方有施工等的十字路口,或者,出路口与进路口时的车道数量不同时,通过该预设区域的车辆均需要进行换道,以安全顺利地通过该路口。因此,若周车为多个,则在确定第一目标周车时,需要考虑到在当前时刻是否所有的周车均需要进行换道,即所有周车在当前时刻的换道概率是否均大于等于预设换道概率阈值。若多个周车中的所有车辆在当前时刻的换道概率均大于等于预设换道概率阈值,则确定该多个周车中的所有车辆均为第一目标周车。若多个周车中存在在当前时刻的换道概率大于等于预设换道概率阈值的周车,且存在在当前时刻的换道概率小于预设换道概率阈值的周车,则确定该多个周车中在当前时刻的换道概率小于预设换道概率阈值的周车为第一目标周车。可选的,将该多个周车中在当前时刻的换道概率大于等于预设换道概率阈值的周车删除,减少需要确定当前时刻与虚拟车道的中心线的关联概率的车辆,从而提高自车所在车道的车道线检测的效率。
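上述第一目标周车的筛选逻辑可以示意如下(以字典存储各周车的当前换道概率,该数据结构为示例性假设):

```python
def first_target_vehicles(lane_change_probs, threshold):
    """根据预设换道概率阈值筛选第一目标周车。

    若所有周车的换道概率均大于等于阈值,则全部周车均为第一目标周车;
    否则仅保留换道概率小于阈值的周车,其余周车被删除。
    """
    if all(p >= threshold for p in lane_change_probs.values()):
        return list(lane_change_probs)
    return [vid for vid, p in lane_change_probs.items() if p < threshold]
```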
可选地,在确定第一目标周车后,对于每一第一目标周车,根据第一目标周车在当前时刻相对于虚拟车道的位置,来确定该第一目标周车与虚拟车道的中心线的横向距离,以及该第一目标周车的车头朝向与虚拟车道的中心线的夹角。然后,针对每一第一目标周车,对其车头朝向与虚拟车道的中心线的夹角以及其与虚拟车道的中心线的横向距离进行加权求和,确定该第一目标周车在当前时刻的第二观测概率,第二观测概率即该第一目标周车与虚拟车道的中心线的关联概率的预测值。
示例性的,以图11为例,第一目标周车即车辆1,与虚拟车道的中心线的距离为d,车辆1的车头朝向与虚拟车道的中心线的夹角为θ。将d与θ输入公式dist=δ*fabs(d)+ε*θ中进行加权求和,其中,fabs(d)表示对d取绝对值,δ表示d的预设权重,ε表示θ的预设权重,δ和ε是车辆自行确定的,或者是用户根据其需求确定的。然后,将车辆1在不同位置点(包括其处于当前所在位置对应的位置点)时的dist输入sigmoid函数中,得到车辆1在当前时刻的第二观测概率P′,即P′=sigmoid(dist)。
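上述第二观测概率的计算可以用如下片段进行示意(δ、ε的默认取值仅为示例性假设,实际由车辆或用户设定):

```python
import math

def second_observation_probability(d, theta, delta=0.5, epsilon=0.5):
    """按 dist = δ·fabs(d) + ε·θ 对横向距离与夹角加权求和,
    再将 dist 输入sigmoid函数,得到第二观测概率 P′ = sigmoid(dist)。"""
    dist = delta * math.fabs(d) + epsilon * theta
    return 1.0 / (1.0 + math.exp(-dist))
```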
在一种可能的实现方式中,为了更准确地获取周车与虚拟车道的中心线在当前时刻的关联概率,可以在考虑该周车与虚拟车道的中心线在上一时刻的关联概率的基础上,利用图12所示的马尔科夫概率模型,确定每一第一目标周车与虚拟车道的中心线在当前时刻的关联概率。也就是说,针对每一第一目标周车,对该第一目标周车在当前时刻的第二观测概率、预设转移系数,以及该第一目标周车与虚拟车道的中心线在上一时刻的关联概率进行归一化处理,从而得到该第一目标周车与虚拟车道的中心线在当前时刻的关联概率。
示例性的,车辆1为第一目标周车,将车辆1与自车所在虚拟车道的中心线在上一时刻的关联概率、车辆1在当前时刻的第二观测概率,以及预设转移系数输入到公式P=P′*P″*P″′中,即进行归一化处理,得到车辆1与虚拟车道的中心线在当前时刻的关联概率P。其中,P′表示车辆1在当前时刻的第二观测概率,P″表示预设转移系数,P″′表示车辆1与虚拟车道的中心线在上一时刻的关联概率。
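按公式P=P′*P″*P″′对各第一目标周车计算并归一化的过程可以示意如下(对所有第一目标周车的乘积结果再做一次总和归一化,这一归一化方式为便于说明的示例性假设):

```python
def association_probabilities(obs2, transfer_coeff, prev_assoc):
    """由第二观测概率、预设转移系数与上一时刻关联概率,
    计算各第一目标周车与虚拟车道中心线在当前时刻的关联概率。

    obs2:       各车的第二观测概率 {车辆id: P′}
    prev_assoc: 各车上一时刻的关联概率 {车辆id: P″′}
    """
    # 先按 P = P′ · P″ · P″′ 计算各车的未归一化关联概率
    raw = {vid: obs2[vid] * transfer_coeff * prev_assoc[vid] for vid in obs2}
    total = sum(raw.values()) or 1.0
    # 再对所有车辆做总和归一化,使概率之和为1
    return {vid: p / total for vid, p in raw.items()}
```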
在一种可能的实现方式中,第一目标周车在进入预设区域时,即路口前,该第一目标周车与虚拟车道的中心线的关联概率为预设关联概率。其中,该预设关联概率是由车辆自行确定的,或者是用户根据其需求预先设定的。
S805、根据目标车辆的运动轨迹以及预设宽度,确定自车所在车道的车道线。
其中,该目标车辆为在当前时刻与虚拟车道的中心线的关联概率最大的第一目标周车,该目标车辆为自车的周车中最适于自车的跟车对象,自车可以跟随该目标车辆的运动轨迹来行驶。周车与虚拟车道的中心线的关联概率表示该周车的运动轨迹与虚拟车道的中心线的相似性,以及该周车与虚拟车道的中心线的距离的远近。因此,第一目标周车与虚拟车道的中心线在当前时刻的关联概率越大,则该第一目标周车的运动轨迹与虚拟车道的中心线的相似度较大的同时,该第一目标周车与虚拟车道的中心线的距离也较小。
可选的,将该目标车辆的运动轨迹进行平移,可得到自车的运动轨迹。在自车的运动过程中,根据自车的周车的变化,以及周车的换道概率和周车与虚拟车道的中心线的关联概率的变化,目标车辆也可能会发生变化。
示例性的,如图13所示,在自车所在位置点a1之后,经过上述步骤S801-S804,确定车辆2为目标车辆,按照图中所示箭头1方向平移曲线b,即车辆2的运动轨迹上位置点b1和b2之间的曲线,得到自车在一段时间内的运动轨迹为曲线c。若自车行驶到位置点a2后,经过上述步骤S801-S804,可重新确定目标车辆为车辆3,曲线d为车辆3的运动轨迹,按照图中所示箭头2方向平移曲线d,即车辆3的运动轨迹上位置点d2和d3之间的曲线,得到自车的运动轨迹为曲线e。
需要说明的是,在本步骤之后,本申请还需要对自车的运动轨迹是否符合用户需求进行判断,若该运动轨迹为不换道的运动轨迹,而自车需要换道才可以到达目的地,则自车可将其驾驶权限交给用户,以使得用户根据其驾驶经验以及周围环境来驾驶车辆安全通过,并顺利到达目的地。另外,该步骤中所确定的自车的运动轨迹为自车从当前时刻(例如t时刻)到当前时刻之后某一时刻的一段时间(例如从t时刻到t+2时刻)内的运动轨迹。
可选的,以目标车辆的运动轨迹作为自车所在车道的中心线,以预设宽度为车道宽度,确定自车所在的车道,以及该车道的车道线。
在一种可能的实现方式中,将目标车辆的运动轨迹进行平移,将平移后的运动轨迹也就是自车在一段时间段内的运动轨迹,作为自车所在车道的中心线,以预设宽度为车道宽度,确定自车所在的车道,以及该车道的车道线。
示例性的,以图13中目标车辆的运动轨迹平移得到的自车的运动轨迹,即曲线c和曲线e为例,如图14所示,以曲线c和曲线e为自车所在车道的中心线,以及以预设宽度f为车道宽度,则可以确定自车所在车道,以及自车所在车道的车道线g和h。
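步骤S805中选取目标车辆并由其运动轨迹平移得到自车所在车道车道线的过程,可以用如下简化片段进行示意(假设车道近似沿纵向延伸、车道线由中心线横向偏移得到,函数名与数据结构均为示例性假设):

```python
def ego_lane_lines(assoc_probs, trajectories, ego_pos, preset_width):
    """选取关联概率最大的第一目标周车为目标车辆,
    将其运动轨迹平移到自车位置作为自车所在车道的中心线,
    再按预设宽度向两侧偏移得到车道线(简化:仅做横向偏移)。"""
    # 当前时刻关联概率最大的周车即目标车辆
    target = max(assoc_probs, key=assoc_probs.get)
    traj = trajectories[target]
    # 平移量:使目标车辆轨迹的首个位置点与自车当前位置重合
    ox = ego_pos[0] - traj[0][0]
    oy = ego_pos[1] - traj[0][1]
    center = [(x + ox, y + oy) for (x, y) in traj]
    half = preset_width / 2.0
    left = [(x - half, y) for (x, y) in center]
    right = [(x + half, y) for (x, y) in center]
    return target, center, left, right
```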
在上述过程中,首先,本申请可以根据周车的运动轨迹,确定该周车与虚拟车道的中心线的距离的变化趋势,进而根据该周车与虚拟车道的中心线的距离的变化趋势和该周车在上一时刻的换道概率,确定该周车在当前时刻的换道概率,从而识别出该周车的换道意图。其次,本申请可以根据周车在当前时刻的换道概率、该周车与虚拟车道的中心线在上一时刻的关联概率、该周车在当前时刻相对于虚拟车道的中心线的位置以及预设转移系数,来确定该周车与虚拟车道的中心线在当前时刻的关联概率。最后,根据在当前时刻的关联概率最大的周车的运动轨迹以及预设宽度确定自车所在车道的车道线,从而提高自车所在车道的车道线检测的准确性,保证车辆安全。
在一种可能的实现方式中,若车辆在进入路口前的区域中包含多个车道,则这多个车道中的初始位置点处的车辆的预设换道概率可能相同,也可能不同。示例性的,以车辆进入预设区域即路口之前的车道有3个为例,如图15所示,在车道1中,确定车辆处于进入预设区域即路口之前的初始位置点a时的换道概率为预设换道概率a1,车辆处于进入路口之前的初始位置点a时,与虚拟车道的中心线的关联概率为预设关联概率a2。在车道2中,确定车辆处于进入预设区域即路口之前的初始位置点b时的换道概率为预设换道概率b1,车辆处于进入路口之前的初始位置点b时,与虚拟车道的中心线的关联概率为预设关联概率b2。在车道3中,确定车辆处于进入预设区域即路口之前的初始位置点c时的换道概率为预设换道概率c1,车辆处于进入路口之前的初始位置点c时,与虚拟车道的中心线的关联概率为预设关联概率c2。a1、b1和c1的取值可能相同,也可能不同,a2、b2和c2的取值可能相同,也可能不同。根据自车所在位置经上述方法实施例中的步骤S801-S805,确定自车所在车道的车道线。
在一种可能的实现方式中,对车辆进入路口前的历史运动轨迹,以及根据上述步骤S801-S805确定的车辆刚刚进入路口时的运动轨迹进行平滑处理,使得自车可以安全稳定地进入路口区域。
在一种可能的实现方式中,对上述步骤S801-S805中根据不同目标车辆确定的自车的运动轨迹进行平滑处理,使得自车安全顺利地通过路口。
在一种可能的实现方式中,根据自车在路口中的运动轨迹,确定自车在离开路口后所在的车道为目标车道。然后,根据自车在第一位置点处(即自车离开路口前所在的位置点)的速度和车头朝向,确定自车在目标车道上的目标点。其中,若自车在目标车道上的目标点位于目标车道的中心线上,则确定自车在目标车道上的目标点与第一位置点之间的平滑连线,为自车离开路口前的运动轨迹;反之,即自车在目标车道上的目标点不位于该目标车道的中心线上,则确定该目标点在目标车道的中心线上的垂点与第一位置点之间的平滑连线,为自车离开路口前的运动轨迹。最后,以自车离开路口前的运动轨迹为自车离开路口前所在车道的中心线,以预设宽度为自车离开路口前所在车道的车道宽度,确定自车离开路口前所在车道,以及自车离开路口前所在车道的平滑的车道线。
在一种可能的实现方式中,根据自车在第一位置点处的车头朝向和速度,在自车在目标车道上的目标点或目标车道的中心线上相对于自车的起始点,和第一位置点之间再确定一个或多个位置点,将第一位置点、自车在目标车道上的目标点或目标车道的中心线上相对于自车的起始点,以及该一个或多个位置点之间的平滑的连线,确定为自车离开路口时的运动轨迹。
示例性的,如图16的(a)所示,a表示自车离开路口前的所在位置对应的第一位置点,b1表示根据自车的车头朝向和速度确定的自车在目标车道上的目标点。b1位于目标车道的中心线上,则根据目标车道的中心线,确定位于a和b1之间的位置点d,d位于目标车道的中心线的延长线上。根据自车的车头朝向和自车速度,确定a和b1之间的位置点c。利用贝塞尔曲线连接a、b1、c、d,得到自车在离开路口时的运动轨迹,即曲线e(即图示虚线)。其中,b1与a之间的纵向距离为f,d和b1之间的纵向距离为f/4,c和a之间的纵向距离为f/4。
示例性的,如图16的(b)所示,a表示自车离开路口前的所在位置对应的第一位置点,b2表示根据自车的车头朝向和速度确定的自车在目标车道上的目标点。b2不位于目标车道的中心线上,则过b2作目标车道的中心线的垂线,确定该垂线与目标车道的中心线的交点为b3,即b2在目标车道的中心线上的垂点为b3。根据目标车道的中心线,确定位于a和b3之间的位置点d1,d1位于目标车道的中心线的延长线上。根据自车的车头朝向和自车速度,确定a和b3之间的位置点c1。利用贝塞尔曲线连接a、b3、c1、d1,得到自车在离开路口时的运动轨迹,即曲线e(曲线e为虚线)。其中,b3与a之间的纵向距离为f,d1和b3之间的纵向距离为f/4,c1和a之间的纵向距离为f/4。
在一种可能的实现方式中,位置点与目标点等多点之间的连线通过贝塞尔曲线来进行拟合,从而得到平滑的曲线。
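上述贝塞尔曲线拟合可以用如下片段进行示意(以三次贝塞尔曲线连接4个控制点,对应上文中第一位置点与目标点等多点的连线;采样点数为示例性假设):

```python
def cubic_bezier(p0, p1, p2, p3, n=20):
    """用三次贝塞尔曲线对4个控制点进行拟合,返回曲线上的 n+1 个采样点。

    曲线起点为 p0、终点为 p3,p1、p2 为中间控制点,
    从而得到经过起点与终点的平滑曲线。
    """
    pts = []
    for i in range(n + 1):
        t = i / n
        s = 1 - t
        # 三次贝塞尔曲线的伯恩斯坦多项式展开
        x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
        y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
        pts.append((x, y))
    return pts
```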
需要说明的是,通过上述过程,本申请可以确定自车在进出路口以及在路口区域中的平滑的运动轨迹,从而确定自车在进出路口以及在路口区域中行驶时所在车道的平滑的车道线,使得自车可以安全顺利地通过路口,减少自车在通过路口时的横向抖动,提升路口行车时自车乘员的乘车体验。
在一种可能的实现方式中,为了进一步提高对自车所在车道的车道线进行检测的效率,在获取到自车的周车后,先对该周车中的车辆进行剔除,使得该周车中只包括前车、左前车或右前车中的至少一项,从而减少其他车道上的车辆对确定自车车道的影响,提高对自车所在车道的车道线进行检测的效率和准确性。或者在上述步骤S804确定第一目标周车后,进一步对第一目标周车中的车辆进行筛选,使得第一目标周车中只包括自车的前车、右前车或左前车中的至少一项,从而减少其他车道上的车辆对确定自车车道的影响,提高对自车所在车道的车道线进行检测的效率和准确性。
在一种可能的实现方式中,为了进一步提高对自车所在车道的车道线进行检测的效率,减少其他车道上的车辆对确定自车车道的影响,提高对自车所在车道的车道线进行检测的效率和准确性,在获取到自车的周车后,先对该周车中的车辆进行剔除,使得该周车中只包括前车。或者在上述步骤S804确定第一目标周车后,进一步对第一目标周车中的车辆进行筛选,使得第一目标周车中只包括自车的前车。
本申请实施例在应用于L2以上的自动驾驶模式时,还可以记录自车在通过路口时的运动轨迹及车道线,使得自车可以在之后无周车的情况下,通过跟随车辆或控制车辆行驶的设备中所记录的历史运动轨迹或车道线,来安全地通过该路口。
本申请实施例可以根据上述方法示例对车道线检测装置进行功能模块的划分,在采用对应各个功能划分各个功能模块的情况下,图17示出上述实施例中所涉及的车道线检测装置的一种可能的结构示意图,该装置包括确定单元1701和分析单元1702。当然,车道线检测装置还可以包括其他模块,或者车道线检测装置可以包括更少的模块。
确定单元1701,用于根据周车的运动轨迹,确定周车与虚拟车道的中心线的距离的变化趋势。其中,周车为与自车的距离小于预设距离的车辆,虚拟车道平行于自车的车头与车尾的连线,且虚拟车道的宽度为预设宽度,自车位于所述虚拟车道的中心线上。
在一种可能的实现方式中,周车的运动轨迹为多个位置点的连线,多个位置点中相邻位置点之间的纵向距离的取值等于预设距离间隔的取值,这多个位置点中包括每个周车当前所在位置对应的位置点。
分析单元1702,用于对周车与虚拟车道的中心线的距离的变化趋势,以及周车在上一时刻的换道概率进行分析,得到周车在当前时刻的换道概率,换道概率为车辆更换其所在车道的概率。
可选的,分析单元1702,用于对周车与虚拟车道的中心线的距离的变化趋势进行分析,得到周车在当前时刻的第一观测概率,第一观测概率即对车辆的换道概率的预测值。然后分析单元1702,还用于对周车在当前时刻的第一观测概率以及周车在上一时刻的换道概率进行加权求和,确定周车在当前时刻的换道概率。
确定单元1701,还用于根据周车在当前时刻的换道概率、周车与虚拟车道的中心线在上一时刻的关联概率、周车在当前时刻相对于虚拟车道的中心线的位置以及预设转移系数,确定周车与虚拟车道的中心线在当前时刻的关联概率。
其中,预设转移系数为周车在上一时刻与虚拟车道的中心线的关联概率,转换为周车在当前时刻与虚拟车道的中心线的关联概率时的转换系数,关联概率用于表示周车与虚拟车道的中心线的距离的大小,以及周车的运动轨迹与虚拟车道的中心线的相似度。
在一种可能的实现方式中,周车进入预设区域前的换道概率为预设换道概率,其中,预设区域为路口。
可选的,确定单元1701,用于根据预设换道概率阈值以及周车在当前时刻的换道概率,确定第一目标周车。然后确定单元1701,还用于根据第一目标周车在当前时刻相对于虚拟车道的中心线的位置,确定第一目标周车与虚拟车道的中心线的横向距离,以及第一目标周车的车头朝向与虚拟车道的中心线的夹角。再然后确定单元1701,还用于对该夹角以及第一目标周车与虚拟车道的中心线的横向距离进行加权求和,确定第一目标周车在当前时刻的第二观测概率,第二观测概率即对车辆与虚拟车道的中心线的关联概率的预测值。最后确定单元1701,还用于对第一目标周车在当前时刻的第二观测概率、预设转移系数以及第一目标周车与虚拟车道的中心线在上一时刻的关联概率进行归一化处理,确定第一目标周车与虚拟车道的中心线在当前时刻的关联概率。
在一种可能的实现方式中,若周车的数量为多个,则确定单元1701,还用于若该多个周车在当前时刻的换道概率均大于等于预设换道概率阈值,则确定该多个周车均为第一目标周车。确定单元1701,还用于若该多个周车中存在在当前时刻的换道概率大于等于预设换道概率阈值的周车,且存在在当前时刻的换道概率小于预设换道概率阈值的周车,则删除在当前时刻的换道概率大于等于预设换道概率阈值的周车,并将在当前时刻的换道概率小于预设换道概率阈值的周车确定为第一目标周车。
确定单元1701,还用于根据目标车辆的运动轨迹以及预设宽度,确定自车所在车道的车道线。其中,目标车辆为周车中与虚拟车道的中心线在当前时刻的关联概率最大的车辆。
可选的,确定单元1701,还用于确定自车离开预设区域后所在的车道为目标车道,其中,预设区域为路口。然后确定单元1701,还用于根据自车在第一位置点处的速度和车头朝向,确定自车在目标车道上的目标点,其中,第一位置点为自车离开预设区域前所在位置对应的位置点。确定单元1701,还用于若自车在目标车道上的目标点位于目标车道的中心线上,则确定自车在目标车道上的目标点与第一位置点的连线,为自车离开预设区域前所在车道的中心线。确定单元1701,还用于若自车在目标车道的目标点不位于目标车道的中心线上,则确定该目标点在目标车道的中心线上的垂点与第一位置点的连线为自车离开预设区域前所在车道的中心线。
参照图18所示,本申请还提供了一种车道线检测装置,该装置包括存储器1801,处理器1802,通信接口1803和总线1804。处理器1802用于对装置的动作进行管理控制,和/或用于执行本文所描述的技术的其他过程。通信接口1803用于支持装置与其他网络实体的通信。存储器1801用于存储装置的程序代码和数据。
其中,上述处理器1802(或者描述为控制器)可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,单元模块和电路。该处理器或控制器可以是中央处理器,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。所述处理器1802也可以是实现计算功能的组合,例如包含一个或多个微处理器的组合,DSP和微处理器的组合等。
通信接口1803可以是收发电路。
存储器1801可以包括易失性存储器,例如随机存取存储器;该存储器也可以包括非易失性存储器,例如只读存储器,快闪存储器,硬盘或固态硬盘;该存储器还可以包括上述种类的存储器的组合。
总线1804可以是扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线1804可以分为地址总线、数据总线、控制总线等。为便于表示,图18中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
上述描述的服务器或装置的具体工作过程,可以参考上述方法实施例中的对应过程,在此不再赘述。
本申请实施例提供一种存储一个或多个程序的计算机可读存储介质,所述一个或多个程序包括指令,所述指令当被计算机执行时使计算机执行上述实施例的步骤S801-S805所述的车道线检测方法。
本申请实施例还提供一种包含指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行上述实施例步骤S801-S805中执行的车道线检测方法。
本申请实施例提供一种车道线检测装置,包括处理器和存储器;其中,存储器用于存储计算机程序指令,处理器用于运行计算机程序指令以使该车道线检测装置执行上述实施例步骤S801-S805中执行的车道线检测方法。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是物理上分开的,或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。在应用过程中,可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是个人计算机,服务器,网络设备,单片机或者芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。

Claims (19)

  1. 一种车道线检测方法,其特征在于,所述方法包括:
    根据周车的运动轨迹,确定所述周车与虚拟车道的中心线的距离的变化趋势,所述周车为与自车的距离小于预设距离的车辆;
    对所述周车与虚拟车道的中心线的距离的变化趋势,以及所述周车在上一时刻的换道概率进行分析,得到所述周车在当前时刻的换道概率;所述换道概率为车辆更换其所在车道的概率;
    根据所述周车在当前时刻的换道概率、所述周车与所述虚拟车道的中心线在上一时刻的关联概率、所述周车在当前时刻相对于所述虚拟车道的中心线的位置以及预设转移系数,确定所述周车与所述虚拟车道的中心线在当前时刻的关联概率;
    其中,所述预设转移系数为所述周车在上一时刻与所述虚拟车道的中心线的关联概率,转换为所述周车在当前时刻与所述虚拟车道的中心线的关联概率时的转换系数,所述关联概率用于表示所述周车与所述虚拟车道的中心线的距离的大小,以及所述周车的运动轨迹与所述虚拟车道的中心线的相似度;
    根据目标车辆的运动轨迹以及预设宽度,确定所述自车所在车道的车道线,所述目标车辆为所述周车中与所述虚拟车道的中心线在当前时刻的关联概率最大的车辆。
  2. 根据权利要求1所述的车道线检测方法,其特征在于,所述虚拟车道平行于所述自车的车头与车尾的连线,且所述虚拟车道的宽度为所述预设宽度,所述自车位于所述虚拟车道的中心线上。
  3. 根据权利要求1或2所述的车道线检测方法,其特征在于,所述周车的运动轨迹为多个位置点的连线,所述多个位置点中相邻位置点之间的纵向距离的取值等于预设距离间隔的取值,所述多个位置点中包括每个所述周车当前所在位置对应的位置点。
  4. 根据权利要求1-3任一项所述的车道线检测方法,其特征在于,所述对所述周车与虚拟车道的中心线的距离的变化趋势,以及所述周车在上一时刻的换道概率进行分析,得到所述周车在当前时刻的换道概率,包括:
    对所述周车与虚拟车道的中心线的距离的变化趋势进行分析,得到所述周车在当前时刻的第一观测概率;所述第一观测概率用于表示对车辆的换道概率的预测值;
    对所述周车在当前时刻的第一观测概率以及所述周车在上一时刻的换道概率进行加权求和,确定所述周车在当前时刻的换道概率。
  5. 根据权利要求1-4任一项所述的车道线检测方法,其特征在于,所述周车进入预设区域前的换道概率为预设换道概率,所述预设区域为路口。
  6. 根据权利要求1所述的车道线检测方法,其特征在于,所述根据所述周车在当前时刻的换道概率、所述周车与所述虚拟车道的中心线在上一时刻的关联概率、所述周车在当前时刻相对于所述虚拟车道的中心线的位置以及预设转移系数,确定所述周车与所述虚拟车道的中心线在当前时刻的关联概率,包括:
    根据预设换道概率阈值以及所述周车在当前时刻的换道概率,确定第一目标周车;
    根据所述第一目标周车在当前时刻相对于所述虚拟车道的中心线的位置,确定所述第一目标周车与所述虚拟车道的中心线的横向距离,以及所述第一目标周车的车头朝向与所述虚拟车道的中心线的夹角;
    对所述夹角以及所述第一目标周车与所述虚拟车道的中心线的横向距离进行加权求和,确定所述第一目标周车在当前时刻的第二观测概率,所述第二观测概率表示对车辆与所述虚拟车道的中心线的关联概率的预测值;
    对所述第一目标周车在当前时刻的第二观测概率、所述预设转移系数以及所述第一目标周车与所述虚拟车道的中心线在上一时刻的关联概率进行归一化处理,确定所述第一目标周车与所述虚拟车道的中心线在当前时刻的关联概率。
  7. 根据权利要求6所述的车道线检测方法,其特征在于,若所述周车的数量为多个,则所述根据预设换道概率阈值以及所述周车在当前时刻的换道概率,确定第一目标周车,包括:
    若多个所述周车在当前时刻的换道概率均大于等于所述预设换道概率阈值,则确定多个所述周车均为所述第一目标周车;
    若多个所述周车中存在在当前时刻的换道概率大于等于所述预设换道概率阈值的周车,且存在在当前时刻的换道概率小于所述预设换道概率阈值的周车,则删除在当前时刻的换道概率大于等于所述预设换道概率阈值的周车,并将在当前时刻的换道概率小于所述预设换道概率阈值的周车确定为第一目标周车。
  8. 根据权利要求1-7任一项所述的车道线检测方法,其特征在于,所述方法还包括:
    确定所述自车离开预设区域后所在的车道为目标车道,所述预设区域为路口;
    根据所述自车在第一位置点处的速度和车头朝向,确定所述自车在所述目标车道上的目标点,所述第一位置点为所述自车离开所述预设区域前所在位置对应的位置点;
    若所述自车在所述目标车道上的目标点位于所述目标车道的中心线上,则确定所述自车在所述目标车道上的目标点与所述第一位置点的连线,为所述自车离开所述预设区域前所在车道的中心线;
    若所述自车在所述目标车道上的目标点不位于所述目标车道的中心线上,则确定所述目标点在所述目标车道的中心线上的垂点与所述第一位置点的连线,为所述自车离开所述预设区域前所在车道的中心线。
  9. 一种车道线检测装置,其特征在于,所述装置包括:
    确定单元,用于根据周车的运动轨迹,确定所述周车与虚拟车道的中心线的距离的变化趋势,所述周车为与自车的距离小于预设距离的车辆;
    分析单元,用于对所述周车与虚拟车道的中心线的距离的变化趋势,以及所述周车在上一时刻的换道概率进行分析,得到所述周车在当前时刻的换道概率;所述换道概率为车辆更换其所在车道的概率;
    所述确定单元,还用于根据所述周车在当前时刻的换道概率、所述周车与所述虚拟车道的中心线在上一时刻的关联概率、所述周车在当前时刻相对于所述虚拟车道的中心线的位置以及预设转移系数,确定所述周车与所述虚拟车道的中心线在当前时刻的关联概率;
    其中,所述预设转移系数为所述周车在上一时刻与所述虚拟车道的中心线的关联概率,转换为所述周车在当前时刻与所述虚拟车道的中心线的关联概率时的转换系数,所述关联概率用于表示所述周车与所述虚拟车道的中心线的距离的大小,以及所述周车的运动轨迹与所述虚拟车道的中心线的相似度;
    所述确定单元,还用于根据目标车辆的运动轨迹以及预设宽度,确定所述自车所在车道的车道线,所述目标车辆为所述周车中与所述虚拟车道的中心线在当前时刻的关联概率最大的车辆。
  10. 根据权利要求9所述的车道线检测装置,其特征在于,所述虚拟车道平行于所述自车的车头与车尾的连线,且所述虚拟车道的宽度为所述预设宽度,所述自车位于所述虚拟车道的中心线上。
  11. 根据权利要求9或10所述的车道线检测装置,其特征在于,所述周车的运动轨迹为多个位置点的连线,所述多个位置点中相邻位置点之间的纵向距离的取值等于预设距离间隔的取值,所述多个位置点中包括每个所述周车当前所在位置对应的位置点。
  12. 根据权利要求9-11任一项所述的车道线检测装置,其特征在于,
    所述分析单元,用于对所述周车与虚拟车道的中心线的距离的变化趋势进行分析,得到所述周车在当前时刻的第一观测概率;所述第一观测概率用于表示对车辆的换道概率的预测值;
    所述分析单元,还用于对所述周车在当前时刻的第一观测概率以及所述周车在上一时刻的换道概率进行加权求和,确定所述周车在当前时刻的换道概率。
  13. 根据权利要求9-12任一项所述的车道线检测装置,其特征在于,所述周车进入预设区域前的换道概率为预设换道概率,所述预设区域为路口。
  14. 根据权利要求9所述的车道线检测装置,其特征在于,
    所述确定单元,用于根据预设换道概率阈值以及所述周车在当前时刻的换道概率,确定第一目标周车;
    所述确定单元,还用于根据所述第一目标周车在当前时刻相对于所述虚拟车道的中心线的位置,确定所述第一目标周车与所述虚拟车道的中心线的横向距离,以及所述第一目标周车的车头朝向与所述虚拟车道的中心线的夹角;
    所述确定单元,还用于对所述夹角以及所述第一目标周车与所述虚拟车道的中心线的横向距离进行加权求和,确定所述第一目标周车在当前时刻的第二观测概率,所述第二观测概率表示对车辆与所述虚拟车道的中心线的关联概率的预测值;
    所述确定单元,还用于对所述第一目标周车在当前时刻的第二观测概率、所述预设转移系数以及所述第一目标周车与所述虚拟车道的中心线在上一时刻的关联概率进行归一化处理,确定所述第一目标周车与所述虚拟车道的中心线在当前时刻的关联概率。
  15. 根据权利要求14所述的车道线检测装置,其特征在于,若所述周车的数量为多个,则所述确定单元,用于根据预设换道概率阈值以及所述周车在当前时刻的换道概率,确定第一目标周车,包括:
    若多个所述周车在当前时刻的换道概率均大于等于所述预设换道概率阈值,则确定多个所述周车均为所述第一目标周车;
    若多个所述周车中存在在当前时刻的换道概率大于等于所述预设换道概率阈值的周车,且存在在当前时刻的换道概率小于所述预设换道概率阈值的周车,则删除在当前时刻的换道概率大于等于所述预设换道概率阈值的周车,并将在当前时刻的换道概率小于所述预设换道概率阈值的周车确定为第一目标周车。
  16. 根据权利要求9-15任一项所述的车道线检测装置,其特征在于,
    所述确定单元,还用于确定所述自车离开预设区域后所在的车道为目标车道,所述预设区域为路口;
    所述确定单元,还用于根据所述自车在第一位置点处的速度和车头朝向,确定所述自车在所述目标车道上的目标点,所述第一位置点为所述自车离开所述预设区域前所在位置对应的位置点;
    所述确定单元,还用于若所述自车在所述目标车道上的目标点位于所述目标车道的中心线上,则确定所述自车在所述目标车道上的目标点与所述第一位置点的连线,为所述自车离开所述预设区域前所在车道的中心线;
    所述确定单元,还用于若所述自车在所述目标车道上的目标点不位于所述目标车道的中心线上,则确定所述目标点在所述目标车道的中心线上的垂点与所述第一位置点的连线,为所述自车离开所述预设区域前所在车道的中心线。
  17. 一种车道线检测装置,其特征在于,所述装置包括:处理器和存储器;其中,存储器用于存储计算机程序指令,所述处理器运行所述计算机程序指令以使所述车道线检测装置执行权利要求1-8任一项所述的车道线检测方法。
  18. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令被处理器运行时,使得车道线检测装置执行如权利要求1-8任一项所述的车道线检测方法。
  19. 一种计算机程序产品,其特征在于,当所述计算机程序产品在处理器上运行时,使得所述车道线检测装置执行如权利要求1-8任一项所述的车道线检测方法。
PCT/CN2020/086196 2020-04-22 2020-04-22 车道线检测方法及装置 WO2021212379A1 (zh)
