WO2023123172A1 - Driving assistance method and related device - Google Patents

Driving assistance method and related device

Info

Publication number
WO2023123172A1
Authority
WO
WIPO (PCT)
Prior art keywords
driving
data
target
strategy
distance
Prior art date
Application number
PCT/CN2021/142923
Other languages
English (en)
French (fr)
Inventor
黄琪
陈超越
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2021/142923 (WO2023123172A1)
Priority to CN202180017218.6A (CN116686028A)
Publication of WO2023123172A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09 Driving style or behaviour
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/16 Control of vehicles or other craft
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/042 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles providing simulation in a real vehicle

Definitions

  • the present application relates to the technical field of automatic driving, in particular to a driving assistance method and related equipment.
  • coaches provide relatively simple auxiliary teaching programs to guide novice drivers to drive vehicles.
  • however, these programs do not consider the actual traffic rules and actual road scenes, and the teaching strategy is relatively simple: most of them are simple reminders, such as warnings about speeding or following too closely, which are not applicable when driving on actual roads.
  • Embodiments of the present application provide a driving assistance method and related equipment. Combined with the environmental data of the actual driving process, targeted teaching strategies are given to guide the driver to operate the driving equipment correctly and avoid driving danger. Moreover, the target driving strategy is adjusted in real time based on the environmental data, which avoids relying on a single, fixed teaching strategy, adapts to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • the present application provides a driving assistance method.
  • the driving assistance method can be applied in driving assistance devices, such as driving equipment and automatic driving devices, which is not limited in this application.
  • the abnormal driving data can be determined according to the target driving strategy and actual driving data.
  • the target driving strategy is obtained based on the actual driving data and the corresponding environment data when driving the driving equipment.
  • the teaching strategy is sent to the terminal device, the teaching strategy corresponds to the driving scene, and the driving scene corresponds to the abnormal driving data.
  • the teaching strategy can be used to guide the driving device. In the above manner, after the target driving strategy is obtained from the environment data, the difference between the target driving strategy and the actual driving data is compared to obtain the abnormal driving data.
  • the driver can correctly operate the driving device according to the instructions of the teaching strategy after viewing the teaching strategy through the terminal device. This not only gives targeted teaching strategies based on the environmental data of the actual driving process, guiding the driver to operate the driving equipment correctly and avoiding driving hazards, but also adjusts the target driving strategy in real time based on the environmental data, avoiding a single, fixed teaching strategy. It can adapt to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • the first video may also be sent to the terminal device.
  • the first video includes the actual driving data, the target driving strategy, and abnormal driving behavior corresponding to the abnormal driving data.
  • the driver can view the abnormal driving behavior displayed in the first video through the terminal device, that is, look back at the abnormal driving behavior that occurred during driving, so that the driver can further operate the driving equipment correctly according to the instructions of the teaching strategy and correct the abnormal driving behavior.
  • the target driving strategy includes a target driving trajectory and target control instructions corresponding to each trajectory point in the target driving trajectory.
  • the actual driving data includes the actual driving trajectory and the actual control instructions corresponding to each trajectory point in the actual driving trajectory.
  • the determination of the abnormal driving data based on the target driving strategy and the actual driving data may be carried out in the following way, that is, the similarity between the first data and the second data within the first time period is firstly calculated.
  • the first data is the data of the target driving trajectory and the target control instruction that changes with the change of the driving time.
  • the second data is the data of the actual driving trajectory and the actual control instruction that changes with the change of the driving time.
  • the first duration is any duration in at least one group of durations in the driving time. Then, when the similarity is smaller than a preset similarity threshold, it is determined that the second data within the first time period is abnormal driving data.
  • the target driving strategy further includes target driving decision data.
  • the driving scenario may also be determined based on the target driving decision data and the environment data.
  • the environmental data, the target driving strategy and the actual driving data may also be used as input of a behavior analysis model to determine the abnormal driving behavior.
  • the behavior analysis model is a model obtained by training an initial model with determining the abnormal driving behavior as the training objective, using the environmental data, target driving strategy and actual driving data corresponding to occurrences of the abnormal driving behavior as the training data.
  • the driving assistance method may further include: first, acquiring a first distance and a second distance, and acquiring a first safety distance and a second safety distance.
  • the first distance is the distance between the traveling device and an obstacle in the first direction.
  • the second distance is the distance between the traveling equipment and the obstacle in a second direction, and the first direction is perpendicular to the second direction.
  • the first safety distance is a safety distance between the traveling equipment and the obstacle in the first direction.
  • the second safety distance is a safety distance between the traveling equipment and the obstacle in the second direction.
  • then, a first verification result is determined based on the first distance and the first safety distance, and a second verification result is determined based on the second distance and the second safety distance.
  • finally, based on the first verification result and/or the second verification result, the traveling equipment is instructed to execute a safety operation instruction.
  • the obtaining the first safety distance and the second safety distance includes: obtaining the first speed and the second speed.
  • the first speed is the traveling speed of the traveling equipment in the first direction
  • the second speed is the traveling speed of the traveling equipment in the second direction.
  • the first safety distance is determined based on the first speed
  • the second safety distance is determined based on the second speed.
  • the embodiment of the present application provides a driving assistance device.
  • the driving assistance device includes a processing unit and a sending unit.
  • an acquisition unit may also be included.
  • a processing unit is used for determining abnormal driving data according to the target driving strategy and actual driving data.
  • the target driving strategy is obtained based on the actual driving data and the corresponding environment data when driving the driving device.
  • a sending unit configured to send a teaching strategy to the terminal device, where the teaching strategy corresponds to a driving scene, and the driving scene corresponds to the abnormal driving data.
  • the teaching strategy is used to provide driving guidance to the driving device.
  • the sending unit is further configured to send the first video to the terminal device.
  • the first video includes the actual driving data, the target driving strategy, and abnormal driving behavior corresponding to the abnormal driving data.
  • the target driving strategy includes a target driving trajectory and target control instructions corresponding to each trajectory point in the target driving trajectory.
  • the actual driving data includes an actual driving trajectory and actual control instructions corresponding to each trajectory point in the actual driving trajectory.
  • the processing unit is configured to: calculate the similarity between the first data and the second data within the first duration, and determine that the second data within the first duration is abnormal driving data when the similarity is less than a preset similarity threshold.
  • the first data is the data of the target driving trajectory and the target control instruction that changes with the change of the driving time.
  • the second data is the data of the actual driving trajectory and the actual control instruction that changes with the change of the driving time.
  • the first duration is any duration in at least one group of durations in the driving time.
  • the target driving strategy further includes target driving decision data.
  • the processing unit is further configured to determine the driving scenario based on the target driving decision data and the environment data.
  • the processing unit is further configured to: use the environment data, the target driving strategy and the actual driving data as inputs of a behavior analysis model to determine the abnormal driving behavior.
  • the behavior analysis model is a model obtained by training an initial model with determining the abnormal driving behavior as the training objective, using the environmental data, target driving strategy and actual driving data corresponding to occurrences of the abnormal driving behavior as the training data.
  • the acquiring unit is configured to: acquire the first distance and the second distance, and acquire the first safety distance and the second safety distance.
  • the first distance is the distance between the driving device and the obstacle in the first direction.
  • the second distance is a distance between the driving device and the obstacle in a second direction, and the first direction is perpendicular to the second direction.
  • the first safety distance is a safety distance between the driving device and the obstacle in the first direction.
  • the second safety distance is a safety distance between the driving device and the obstacle in the second direction.
  • the processing unit is configured to: determine a first verification result according to the first distance and the first safety distance, determine a second verification result based on the second distance and the second safety distance, and instruct the driving device to execute a safe operation instruction based on the first verification result and/or the second verification result.
  • the acquiring unit is configured to: acquire the first speed and the second speed.
  • the first speed is the driving speed of the driving device in the first direction
  • the second speed is the driving speed of the driving device in the second direction.
  • the first safety distance is determined based on the first speed
  • the second safety distance is determined based on the second speed.
  • the embodiment of the present application provides an automatic driving device.
  • the automatic driving device may include: a memory and a processor.
  • the memory is used to store computer readable instructions.
  • the processor is coupled with the memory.
  • the processor is configured to execute computer-readable instructions in the memory so as to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • the embodiment of the present application provides a driving device.
  • the driving device may include: a memory and a processor.
  • the memory is used to store computer readable instructions.
  • the processor is coupled with the memory.
  • the processor is configured to execute computer-readable instructions in the memory so as to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • the embodiment of the present application provides an automatic driving device.
  • the automatic driving device may be a vehicle-mounted device or a chip or a system on a chip in the vehicle-mounted device.
  • the automatic driving device can realize the functions performed by the automatic driving device in the above aspects or in any possible design thereof, and the functions can be realized by hardware.
  • the automatic driving device may include: a processor and a communication interface, where the processor is used to run computer programs or instructions, so as to realize the driving assistance method described in the first aspect or any possible implementation manner of the first aspect.
  • the sixth aspect of the present application provides a computer-readable storage medium in which instructions are stored.
  • when the instructions are run on a computer device, the computer device executes the method described in the first aspect or any possible implementation manner of the first aspect.
  • a seventh aspect of the present application provides a computer program product, which, when run on a computer, enables the computer to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • the difference between the target driving strategy and the actual driving data is compared to obtain the abnormal driving data, the driving scene corresponding to the abnormal driving data is determined, and the teaching strategy corresponding to the driving scene is further determined.
  • the driver can correctly operate the driving device according to the instructions of the teaching strategy after viewing the teaching strategy through the terminal device. This not only gives targeted teaching strategies based on the environmental data of the actual driving process, guiding the driver to operate the driving equipment correctly and avoiding driving hazards, but also adjusts the target driving strategy in real time based on the environmental data, avoiding a single, fixed teaching strategy. It can adapt to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • Fig. 1 shows a schematic diagram of a driving assistance system provided by an embodiment of the present application
  • Fig. 2 shows a schematic structural diagram of the driving equipment provided by the embodiment of the present application
  • Fig. 3 shows a schematic structural diagram of an automatic driving device provided by an embodiment of the present application
  • Fig. 4 shows a first schematic flow chart of the driving assistance method provided by the embodiment of the present application
  • Fig. 5 shows a schematic diagram of a driving scene provided by an embodiment of the present application
  • FIG. 6 shows a schematic diagram of a distance between data provided by an embodiment of the present application.
  • Fig. 7 shows a second schematic flow chart of the driving assistance method provided by the embodiment of the present application.
  • Fig. 8 shows a network schematic diagram of behavior analysis provided by the embodiment of the present application.
  • Fig. 9 shows a schematic structural diagram of a driving assistance device provided by an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of a hardware structure of a communication device provided by an embodiment of the present application.
  • Embodiments of the present application provide a driving assistance method and related equipment. Combined with the environmental data of the actual driving process, targeted teaching strategies are given to guide the driver to operate the driving equipment correctly and avoid driving danger. Moreover, the target driving strategy is adjusted in real time based on the environmental data, which avoids relying on a single, fixed teaching strategy, adapts to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • "at least one of the following" or similar expressions refers to any combination of these items, including any combination of single items or plural items.
  • at least one item of a, b or c can represent: a, b, c, a and b, a and c, b and c, or a and b and c, where each of a, b and c can be single or multiple.
  • "at least one item" can also be interpreted as "one item or multiple items".
  • the novice driver is usually provided with a simple auxiliary teaching program through a trainer to guide the novice driver to drive the vehicle.
  • however, the actual traffic rules and actual road scenes are not considered, and the teaching strategy is single, which is not applicable when driving on the actual road, cannot guarantee the safety of drivers and other personnel, and may result in dangerous behaviors.
  • Fig. 1 shows a schematic diagram of a driving assistance system provided by an embodiment of the present application. This method can be applied in the assisted driving system shown in FIG. 1 .
  • the assisted driving system includes driving equipment and terminal equipment.
  • the driving device and the terminal device are connected through a network, such as a wired network or a wireless network such as Bluetooth or wireless fidelity (Wireless Fidelity, WiFi).
  • the driving device can determine the abnormal driving data by comparing the difference between the target driving strategy and the actual driving data. Then, the driving device determines the corresponding driving scene when the abnormal driving data appears, and searches the corresponding teaching strategy from the database based on the driving scene.
  • the driving device sends the teaching strategy to the terminal device, so that the terminal device can guide the driver to drive the driving device correctly according to the teaching strategy, correct bad driving habits, adapt to complex and changeable driving scenarios, and provide usability and practicability for driving on actual roads.
  • the driving device may be an intelligent connected vehicle, which is a kind of Internet-of-Vehicles terminal.
  • the driving equipment can implement the driving assistance method provided by the embodiment of the present application through its internal functional units or devices.
  • the driving equipment may include an automatic driving device for performing the driving assistance method provided by the embodiment of the present application.
  • the automatic driving device can communicate with other components of the driving equipment through a controller area network (controller area network, CAN) bus.
  • the specific structure of the traveling equipment will be described in detail in the subsequent embodiment shown in FIG. 2 .
  • Terminal devices may include, but are not limited to, mobile phones, foldable electronic devices, tablet computers, laptop computers, handheld devices, notebook computers, netbooks, personal digital assistants (personal digital assistant, PDA), artificial intelligence (artificial intelligence, AI) devices , wearable devices, vehicle-mounted devices, or other processing devices connected to wireless modems, as well as various forms of user equipment (user equipment, UE), mobile station (mobile station, MS) and so on.
  • the embodiment of the present application does not specifically limit the specific type of the terminal device.
  • the driving assistance method provided by the embodiment of the present application may also be applied to other system architectures in practical applications, which is not specifically limited in the embodiment of the present application.
  • Fig. 2 shows a schematic structural diagram of the traveling equipment provided by the embodiment of the present application.
  • the driving equipment includes components such as an automatic driving device, a vehicle body gateway, and a vehicle body antenna.
  • the automatic driving device can communicate with the vehicle body antenna through a radio frequency (radio frequency, RF) cable.
  • the automatic driving device may be called an on board unit (OBU), a vehicle terminal, and the like.
  • the automatic driving device may be a telematics box (T-Box).
  • the automatic driving device is mainly used to execute the driving assistance method provided by the embodiment of the present application.
  • the automatic driving device may be an Internet-of-Vehicles chip or the like. The specific structure of the automatic driving device will be described in detail in the embodiment shown in FIG. 3.
  • the body gateway is mainly used for receiving and sending vehicle information, and the body gateway can be connected with the automatic driving device through the CAN bus.
  • the vehicle body gateway can obtain, from the automatic driving device, the target driving strategy and actual driving data obtained after the automatic driving device executes the driving assistance method provided by the embodiment of the present application, and send the acquired target driving strategy, actual driving data and other information to other components of the driving equipment.
  • the body antenna can have a built-in communication antenna, which is responsible for receiving and sending signals.
  • the communication antenna can send the driving information of the driving equipment to the terminal equipment, the automatic driving device in other driving equipment, etc.; it can also receive the instructions from the terminal equipment, or receive the driving information sent by other automatic driving devices.
  • the structure shown in Fig. 2 does not constitute a specific limitation on the traveling equipment.
  • the travel equipment may include more or fewer components than those shown in FIG. 2 .
  • the travel equipment may include a combination of some of the components shown in FIG. 2 .
  • the running equipment may include disassembled components of the components shown in FIG. 2 or the like.
  • the driving device may also include a domain controller (domain controller, DC), a multi-domain controller (multi-domain controller, MDC), etc., which are not limited in this embodiment of the present application.
  • the components shown in FIG. 2 can be realized in hardware, software, or a combination of software and hardware.
  • Fig. 3 shows a schematic structural diagram of an automatic driving device provided by an embodiment of the present application.
  • the automatic driving device may include an automatic driving module, a data collection module, a data analysis module and an auxiliary teaching module.
  • the automatic driving module can be used to receive the data collected by various sensors on the driving equipment during the actual driving process, such as environmental data, and to provide a target driving strategy based on the environmental data during the actual driving process.
  • the target driving strategy includes a target driving trajectory, target control instructions related to each trajectory point in the target driving trajectory, target driving decision data and so on.
  • the data acquisition module can be used to receive the target driving strategy sent by the automatic driving device in real time, and collect the actual driving data of the driving device during the actual driving process in real time.
  • the data analysis module can be used to judge the difference between the target driving strategy and the actual driving data, and to analyze, in combination with the environmental data and other information, the corresponding driving scene, abnormal driving behavior and other information.
  • the data analysis module may also include a difference analysis submodule, a scene analysis submodule, and a behavior analysis submodule.
  • the difference analysis sub-module is used to judge the difference between the target driving strategy and the actual driving data, and determine the abnormal driving data.
  • the scene analysis sub-module can be used to analyze the corresponding driving scene when the abnormal driving data is generated.
  • the behavior analysis sub-module is used to analyze the abnormal driving behavior of the corresponding driver when the abnormal driving data is generated.
  • the auxiliary teaching module can be used to generate playback videos based on actual driving data, and add information such as abnormal driving behaviors and target driving strategies to the playback videos. It can also be used to find corresponding teaching strategies from the database according to behavioral scenarios.
  • the auxiliary teaching module may include a video generation sub-module and a tutorial generation sub-module.
  • the video generation sub-module can be used to generate playback videos based on actual driving data, and add abnormal driving behaviors, target driving strategies, etc. to the playback videos.
  • the tutorial generation sub-module can be used to find the corresponding teaching strategies from the database according to the behavior scenarios.
  • the automatic driving device may further include a safety verification module.
  • the safety verification module can judge whether the driving device is currently in a safe state based on the environmental data, and can provide corresponding safe operation instructions in a safe scene or in an unsafe scene.
  • the structure shown in FIG. 3 does not constitute a specific limitation on the automatic driving device.
  • the automatic driving device may include more or fewer components than those shown in FIG. 3 .
  • the automatic driving device may include a combination of certain components shown in FIG. 3 .
  • the automatic driving device may include disassembled components of the components shown in FIG. 3 , and the like.
  • the components shown in FIG. 3 can be realized in hardware, software, or a combination of software and hardware.
  • the device shown in FIG. 3 may also be a chip or a system on a chip in an automatic driving device.
  • the system on chip may consist of a chip, or may include a chip and other discrete devices, which is not specifically limited in this embodiment of the present application.
  • the driving assistance method provided in the embodiment of the present application can be applied in the automatic driving device shown in FIG. 2 or FIG. 3. It can also be applied to a chip or a system on a chip in the automatic driving device. Alternatively, it may also be applied to the driving equipment shown in FIG. 1 or FIG. 2, which is not specifically limited in this embodiment of the present application. In the following, the driving assistance method provided by the embodiment of the present application is described by taking the driving device as the execution body as an example.
  • Fig. 4 shows a schematic flowchart of the first type of driving assistance method provided by the embodiment of the present application.
  • the driving assistance method may include the following steps:
  • the driving device determines abnormal driving data based on a target driving strategy and actual driving data, wherein the target driving strategy is obtained based on the actual driving data and the corresponding environmental data when driving the driving device.
  • sensors such as cameras, millimeter-wave radars, lidars, inertial measurement units (inertial measurement unit, IMU), and global positioning systems (global positioning system, GPS) can be installed in the driving equipment.
  • sensors can be used to obtain the actual driving data of the driving equipment during the actual driving process, for example: the camera collects environmental data, the millimeter wave radar can detect the distance between the driving equipment and obstacles, etc.
  • the driving device can acquire corresponding actual driving data from various sensors, and determine a target driving strategy based on the corresponding environmental data when the actual driving data is generated.
  • the target driving strategy may include data that can reflect that the driving device meets the requirements of automatic driving, so as to determine a reasonable target driving trajectory. In other words, by driving the driving device according to the target driving strategy, bad driving behaviors and driving dangers can be avoided.
  • the above-mentioned target driving strategy includes a target driving trajectory and target control instructions corresponding to each trajectory point in the target driving trajectory.
  • the target driving strategy may also include target driving decision data.
  • the target driving trajectory is a trajectory formed by connecting a series of trajectory points obtained based on the target driving decision data and environmental data during the driving process of the driving device, that is, a reasonable driving route.
  • Each trajectory point includes corresponding position information, orientation angle, speed, acceleration and other information, which can be used as a reference for controlling the driving of the driving equipment.
  • the target control instruction can be understood as the target control quantity of the related actuators used to control the running equipment.
  • the target driving decision data can be understood as the driving behavior that the driver wants the driving device to perform, such as: following the current lane, keeping the lane, changing lanes, avoiding the vehicle behind, giving way to the vehicle in front, overtaking the vehicle in front, etc. , this application does not make a limiting description.
  • the mentioned environmental data can reflect the environmental situation of the driving device during the current driving process.
  • the environment data includes, but is not limited to, obstacle information on the driving road of the driving device, traffic environment information, and the like.
  • Obstacle information includes, but is not limited to, obstacles such as vehicles, pedestrians, or roads.
  • the traffic environment information may include but not limited to road information, lighting conditions, and weather conditions.
  • the road information may be expressways, national highways, provincial highways, urban roads, country roads, ordinary curves, straight roads, sharp curves, one-way roads, multi-lane roads, ramps, or urban intersections.
  • the road information may also include traffic signs, traffic lights, lane lines, etc., which are not limited in this application.
  • the above actual driving data may also include the actual driving trajectory and the actual control instructions corresponding to each trajectory point in the actual driving trajectory.
  • Each track point in the actual driving track may include location information.
  • the actual control command can represent the actual control amount of the related actuators when the driving equipment is controlled to travel.
  • for example, the actual steering wheel rotation angle, the actual chassis acceleration, the actual gear position, the actual turn signal information, etc., which are not limited in this application.
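  • As a non-limiting illustration of the data described above, the target driving strategy and the actual driving data could each be represented as a trajectory plus per-point control instructions. The Python structures below are a sketch only; every field name is an assumption for illustration, not a definition from this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Field names are illustrative assumptions, not definitions from the disclosure.

@dataclass
class TrajectoryPoint:
    x: float                 # position information of the trajectory point
    y: float
    heading: float           # orientation angle
    speed: float
    acceleration: float

@dataclass
class ControlInstruction:
    steering_angle: float               # steering wheel rotation angle
    chassis_acceleration: float         # chassis acceleration
    gear: int                           # gear position
    turn_signal: Optional[str] = None   # e.g. "left", "right" or None

@dataclass
class DrivingRecord:
    """Either the target driving strategy or the actual driving data: a driving
    trajectory plus the control instruction corresponding to each trajectory point."""
    trajectory: List[TrajectoryPoint] = field(default_factory=list)
    controls: List[ControlInstruction] = field(default_factory=list)
```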
  • FIG. 5 shows a schematic diagram of a driving scene of a driving device provided in an embodiment of the present application. As shown in FIG. 5, the current driving scene includes a driving device 1, a driving device 2 and a driving device 3. During the current driving process of the driving device 1, the target driving trajectory indicates that the driving device 1 should go straight in the current lane 1, while the actual driving trajectory shows that the driving device 1 changes lanes to the right, from the current lane 1 into lane 2.
  • the driving device can analyze and process the target driving strategy and the actual driving data to determine abnormal driving data.
  • the described abnormal driving data can also be understood as data with a large difference between the target driving trajectory and target control instructions in the target driving strategy and the actual driving trajectory and actual control instructions in the actual driving data.
  • the determination of the abnormal driving data by the driving device can be achieved by calculating the similarity between the first data and the second data within the first time period; when the similarity is less than the preset similarity threshold, determine the first The second data within the duration is abnormal driving data.
  • the driving time can be divided into at least one group of durations, and the first duration is any duration in the at least one group of durations.
  • the first data can be understood as the data in the target driving trajectory and the target control instruction that changes with the change of the driving time.
  • the second data can be understood as the data in the actual driving trajectory and the actual control instructions that changes with the change of the driving time. It should be understood that the described changes can be continuous changes or sudden changes.
  • the traveling device may divide the first data within the first duration according to attribute types to obtain first sub-data of different attribute types.
  • the first data can be divided into three types of sub-data: position type, gear type, and command type.
  • the driving device may also divide the second data within the first duration into second sub-data of different attribute types according to attribute types.
  • the second data is also divided into three types of sub-data: position type, gear position type, and instruction type.
  • the instruction types may include but not limited to brake instructions, accelerator instructions, turn signal instructions, etc., which are not limited in this application.
  • the driving device can, for example, calculate the distance between the corresponding first sub-data and second sub-data through algorithms such as the Euclidean distance, so as to obtain the similarity between the first sub-data and the second sub-data of the corresponding attribute type. Then, the driving device performs weighted average processing on the similarities between the first sub-data and the second sub-data of all attribute types within the first time period, thereby obtaining the similarity between the first data and the second data within the first time period.
  • it should be understood that the larger the distance between the data, the lower the similarity between the data; conversely, the smaller the distance between the data, the higher the similarity between the data.
  • the traveling device compares the similarity between the first data and the second data with the preset similarity threshold, and when the similarity is smaller than the preset similarity threshold, determines that the second data within the first duration is abnormal driving data.
  • the first duration can also be further divided, and the similarity between the data in each group of sub-durations can be determined based on the above operation of calculating the similarity; when the similarity between the data in any sub-duration is less than the preset similarity threshold, it is directly determined that the second data within the first duration is abnormal driving data.
  • FIG. 6 is a schematic diagram of a distance between data provided by an embodiment of the present application.
  • a coordinate system is constructed with the driving time as the abscissa and the distance between data as the ordinate. It can be seen from FIG. 6 that within the first time period (i.e., t1 to t2), the distance between the actual driving trajectory and the target driving trajectory is relatively large. Suppose that within t1 to t2, the calculated similarity corresponding to the position type is 0.6, the similarity corresponding to the gear type is 0.5, and the similarity corresponding to the instruction type is 0.3, and that the weights of the position type, gear type, and instruction type are 0.4, 0.2, and 0.4, respectively.
  • then the calculated similarity between the first data and the second data is 0.4 × 0.6 + 0.2 × 0.5 + 0.4 × 0.3 = 0.46. If the preset similarity threshold is 0.5, then since 0.46 < 0.5, it can be determined that the second data within the period from t1 to t2 is abnormal driving data.
  • the specific values of the duration, weight, and similarity shown in FIG. 6 are only a schematic description. In practical applications, other values may also be used, which are not limited in this embodiment of the application. In addition, besides the position type, the gear type and the instruction type, the attribute type may also be other types in practical application, which is not limited in this embodiment of the present application.
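  • The weighted-similarity check described above can be illustrated with the following Python sketch. The per-type similarity mapping 1 / (1 + distance) is an assumption (the text only requires that a larger distance yield a lower similarity); the attribute-type names, weights and threshold simply reproduce the schematic example of FIG. 6.

```python
import numpy as np

def attribute_similarity(target: np.ndarray, actual: np.ndarray) -> float:
    # The text only says a distance (e.g. Euclidean) is computed and that a larger
    # distance means lower similarity; the mapping 1 / (1 + distance) is an
    # illustrative assumption, not a formula from the disclosure.
    return 1.0 / (1.0 + float(np.linalg.norm(target - actual)))

def weighted_similarity(per_type_similarity: dict, weights: dict) -> float:
    # Weighted combination over attribute types (e.g. position, gear, instruction).
    return sum(weights[t] * per_type_similarity[t] for t in weights)

def is_abnormal(per_type_similarity: dict, weights: dict, threshold: float) -> bool:
    # The second data within the first duration is abnormal driving data when the
    # similarity falls below the preset similarity threshold.
    return weighted_similarity(per_type_similarity, weights) < threshold

# Worked example from FIG. 6 (period t1..t2): 0.4*0.6 + 0.2*0.5 + 0.4*0.3 = 0.46 < 0.5
print(is_abnormal({"position": 0.6, "gear": 0.5, "instruction": 0.3},
                  {"position": 0.4, "gear": 0.2, "instruction": 0.4},
                  threshold=0.5))  # True
```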
  • the driving device sends a teaching strategy to the terminal device, the teaching strategy corresponds to the driving scene, and the driving scene corresponds to the abnormal driving data.
  • the target driving decision data can reflect the driving behavior that the driver wants the driving device to perform
  • the environment data can reflect the environmental conditions of the driving device during the current driving process. Therefore, the driving device can analyze the current driving behavior from the target driving decision data, such as: go straight, follow the car, overtake, change lanes or give way, etc.
  • the environmental data includes traffic environment information and obstacle information
  • the traffic environment information includes information such as road information and weather conditions. Then, it can be determined from the road information which driving areas and types of roads the driving device is driving on when the abnormal driving data is generated, such as: urban one-way streets, high-speed multi-lane streets, urban intersections, sharp curves, etc.
  • the weather where the driving equipment is located when the abnormal driving data is generated is determined from the weather conditions, such as sunny days, rainy days, rainstorms, foggy days, cloudy days, etc.
  • the driving device can further determine the driving scene corresponding to the abnormal driving data based on the target driving decision data and the environment data.
  • the described driving scene can be understood as the scene where the driving device is in during the driving process when the abnormal driving data is generated.
  • Driving scenarios may include, but are not limited to: going straight on a multi-lane road at an urban intersection in sunny weather, following a car on a one-way road in an urban area in heavy rain, overtaking on a high-speed multi-lane road, etc., which are not exhaustively described in this application.
  • in this way, the driving device can obtain the driving scene corresponding to the abnormal driving data based on the target driving decision data and the environmental data.
  • the teaching materials in the teaching library will be classified according to the scene, and each teaching material has a corresponding scene label.
  • after the driving device determines the driving scene corresponding to the abnormal driving data, it can search for matching teaching materials from the teaching library according to the driving scene, and then integrate the found teaching materials to obtain a teaching strategy that matches the driving scene. That is to say, for each driving scene, a matching teaching strategy can be obtained from the teaching materials in the teaching library. Then, the driving device can send the teaching strategy to the terminal device.
  • for example, if the driving scene corresponding to the abnormal driving data is "going straight on a multi-lane road at an urban intersection in sunny weather", the teaching materials corresponding to the four scene labels "sunny day", "urban intersection", "multi-lane road" and "going straight" can be integrated and processed to obtain the final teaching strategy.
  • teaching strategy may include, but is not limited to, information such as correct driving operation and driving advice given in the form of video, text and/or voice, which is not limited in this embodiment of the present application.
  • Driving suggestions and driving operations also include but are not limited to driving precautions at urban intersections, how to merge correctly, common lane signs, etc., which are not limited in this application.
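  • The scene-label lookup described above might look like the following sketch. The label names, the library contents and the simple concatenation used as "integration" are illustrative assumptions, not the actual teaching library or integration processing of this disclosure.

```python
from typing import Dict, List

def build_scene_labels(decision: str, environment: Dict[str, str]) -> List[str]:
    # Compose scene labels from the target driving decision data (e.g. "going straight")
    # and the environment data (weather, driving area, road type).
    return [environment["weather"], environment["area"], environment["road_type"], decision]

def lookup_teaching_strategy(scene_labels: List[str],
                             teaching_library: Dict[str, List[str]]) -> List[str]:
    # Collect the teaching materials whose scene label matches, then integrate them
    # (here simply concatenated) into the teaching strategy for this driving scene.
    materials: List[str] = []
    for label in scene_labels:
        materials.extend(teaching_library.get(label, []))
    return materials

# Hypothetical teaching library keyed by scene label.
library = {
    "sunny day": ["visibility and glare tips"],
    "urban intersection": ["precautions at urban intersections"],
    "multi-lane road": ["how to merge correctly", "common lane signs"],
    "going straight": ["keeping the lane while going straight"],
}
labels = build_scene_labels("going straight",
                            {"weather": "sunny day",
                             "area": "urban intersection",
                             "road_type": "multi-lane road"})
print(lookup_teaching_strategy(labels, library))
```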
  • the driving device can also judge whether the driving device is in a safe driving state according to the environmental data. In the case of safe driving or dangerous driving, different operation instructions can also be given.
  • the driving assistance method may further include: the driving device acquires the first distance and the second distance, and acquires the first safety distance and the second safety distance. Then, the traveling device determines a first verification result based on the first distance and the first safety distance, and determines a second verification result according to the second distance and the second safety distance. Finally, based on the first verification result and/or the second verification result, the traveling device instructs the traveling device to execute the safety operation instruction.
  • the driving device may collect the first distance and the second distance through devices such as an ultrasonic sensor mounted on itself during driving.
  • the described first distance is the distance between the traveling device and the obstacle in the first direction.
  • the second distance is the distance between the traveling device and the obstacle in the second direction.
  • the first direction is perpendicular to the second direction.
  • the first direction may be the direction in which the traveling device travels forward, and the second direction is understood to be a direction perpendicular to the first direction.
  • the second direction may also be the direction in which the traveling device travels forward, which is not limited in this embodiment of the present application.
  • the traveling device can also detect the first speed and the second speed through an ultrasonic sensor or the like.
  • the first speed is the traveling speed of the traveling device in the first direction.
  • the second speed is the traveling speed of the traveling device in the second direction.
  • the traveling device further calculates the first safety distance based on the first speed, and calculates the second safety distance based on the second speed.
  • the described first safety distance is the safety distance between the driving equipment and the obstacle in the first direction, that is, the safety distance that needs to be kept with the obstacle in the first direction.
  • the second safety distance is a safety distance between the driving device and the obstacle in the second direction, that is, a safety distance that needs to be kept with the obstacle in the second direction.
  • the traveling device obtains the first verification result and the second verification result respectively by comparing the first distance with the first safety distance and comparing the second distance with the second safety distance. Then, based on the first verification result and/or the second verification result, the traveling device is instructed to execute different safety operation instructions. Specifically, this can be understood as follows:
  • if the first verification result shows that the first distance is greater than the first safety distance, and the second verification result shows that the second distance is greater than the second safety distance, the traveling equipment is instructed to drive through safely. For example, if the first safety distance is 1.5 meters (m) and the second safety distance is 1 m, while the first distance and the second distance are 5 m and 1.5 m respectively, there is clearly sufficient clearance between the driving equipment and the obstacle, and at this time the driving equipment can safely drive through.
  • if the first verification result shows that the first distance is less than the first safety distance but greater than 1/N of the first safety distance (N is a positive number other than 1), and/or the second verification result shows that the second distance is less than the second safety distance but greater than 1/N of the second safety distance, a safety operation instruction is output at this time to remind the driving equipment to avoid collision with the obstacle.
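  • A minimal sketch of the two-direction safety verification described above is given below. The braking-distance formula used to derive a safety distance from the speed, the value of N, and the behaviour when a distance falls below 1/N of the safety distance are all assumptions for illustration; the disclosure only specifies the comparison logic.

```python
def safety_distance(speed: float, reaction_time: float = 1.0, max_decel: float = 5.0) -> float:
    # The text only says the safety distance in each direction is determined from the
    # travelling speed in that direction; the braking-distance formula used here
    # (reaction distance plus v^2 / (2a)) is an illustrative assumption.
    return speed * reaction_time + speed ** 2 / (2.0 * max_decel)

def safety_check(d1: float, d2: float, v1: float, v2: float, n: float = 2.0) -> str:
    """d1, d2: measured distances to the obstacle in two perpendicular directions;
    v1, v2: travelling speeds in those directions; n: the factor N from the text."""
    s1, s2 = safety_distance(v1), safety_distance(v2)
    if d1 > s1 and d2 > s2:
        return "drive through safely"
    if (s1 / n < d1 <= s1) or (s2 / n < d2 <= s2):
        return "safety operation instruction: avoid collision with the obstacle"
    # Behaviour when a distance drops below 1/N of the safety distance is not
    # described in this excerpt; an emergency-stop request is used as a placeholder.
    return "request emergency stop"

print(safety_check(d1=5.0, d2=1.5, v1=2.0, v2=0.5))  # example from the text: drives through safely
```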
  • the terminal device provides driving guidance to the driving device according to the teaching strategy.
  • the driving device sends the teaching strategy to the terminal device.
  • the terminal device can obtain the teaching strategy, and provide driving guidance to the driving device according to the teaching strategy.
  • the terminal device can guide the driver to correctly operate the driving device according to the instructions in the teaching strategy, so that the driving device drives along the correct driving trajectory.
  • the difference between the target driving strategy and the actual driving data is compared to obtain the abnormal driving data, the driving scene corresponding to the abnormal driving data is determined, and the teaching strategy corresponding to the driving scene is further determined.
  • the driver can correctly operate the driving device according to the instructions of the teaching strategy after viewing the teaching strategy through the terminal device. This not only gives targeted teaching strategies based on the environmental data of the actual driving process, guiding the driver to operate the driving equipment correctly and avoiding driving hazards, but also adjusts the target driving strategy in real time based on the environmental data, avoiding a single, fixed teaching strategy. It can adapt to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • Fig. 7 shows a second schematic flowchart of the driving assistance method provided by the embodiment of the present application.
  • the driving assistance method may include the following steps:
  • the driving device determines abnormal driving data based on a target driving strategy and actual driving data, wherein the target driving strategy is obtained based on the actual driving data and the corresponding environmental data when driving the driving device.
  • the driving device sends a teaching strategy to the terminal device, the teaching strategy corresponds to the driving scene, and the driving scene corresponds to the abnormal driving data.
  • steps 701-702 in this embodiment are similar to the contents of steps 401-402 in FIG. 4 described above, which can be understood by referring to the contents described in steps 401-402, and will not be repeated here.
  • the driving device sends a first video to the terminal device, where the first video includes actual driving data, a target driving strategy, and abnormal driving behavior corresponding to the abnormal driving data.
  • the abnormal driving behavior corresponding to the generation of the abnormal driving data can be analyzed through the behavior analysis model.
  • Abnormal driving behaviors include but are not limited to changing lanes on a solid line, speeding, high-risk merging, failing to maintain a safe distance, etc., which are not limited in this embodiment of the application.
  • FIG. 8 shows a schematic diagram of a behavior analysis network provided by an embodiment of the present application.
  • when abnormal driving behaviors occur, the corresponding environmental data (such as location, road type, lane type, speed limit, traffic lights, etc.), the actual driving data (such as the actual driving trajectory and actual control instructions) and the target driving strategy (such as the target driving trajectory and target control instructions) are collected.
  • the environmental data, the actual driving data and the target driving strategy are preprocessed by removing outliers, normalization or one-hot encoding. Then, the environmental data, actual driving data and target driving strategy corresponding to these abnormal driving behaviors are taken as a data pair.
  • a batch of data pairs can be randomly selected for training until the loss value of network fitting meets the requirements, so as to obtain the trained behavior analysis model.
  • the driving equipment can input the environmental data collected during the current driving process, the target driving strategy and the actual driving data into the trained behavior analysis model, and then the corresponding abnormal driving behavior during the current driving process can be obtained.
  • the abnormal driving behaviors that can be determined are: a solid line lane change and a high risk merge.
  • the described behavior analysis model may be a long short-term memory network (long short-term memory, LSTM), etc., which is not limited in this embodiment of the present application.
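  • A minimal sketch of such a behavior analysis model is given below, assuming an LSTM classifier trained on per-time-step feature vectors built from the preprocessed environment data, target driving strategy and actual driving data. The feature layout, class list, hyperparameters and stopping criterion are illustrative assumptions, not the trained model of this disclosure.

```python
import torch
import torch.nn as nn

# Hypothetical set of abnormal-behaviour classes taken from the examples in the text.
ABNORMAL_BEHAVIOURS = ["solid-line lane change", "speeding",
                       "high-risk merge", "unsafe following distance"]

class BehaviourAnalysisModel(nn.Module):
    def __init__(self, feature_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, len(ABNORMAL_BEHAVIOURS))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, feature_dim) - one feature vector per time step,
        # concatenating environment data, target strategy and actual driving data.
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])  # logits over abnormal driving behaviours

# Training-loop sketch: randomly selected batches of data pairs are used until the
# loss value meets the requirement, as described in the text (placeholder data here).
model = BehaviourAnalysisModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
features = torch.randn(8, 50, 32)                                   # placeholder batch
labels = torch.randint(0, len(ABNORMAL_BEHAVIOURS), (8,))           # placeholder labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    if loss.item() < 0.05:   # "loss value of network fitting meets the requirements"
        break
```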
  • the driving device can automatically add the target driving strategy and the abnormal driving behavior on the basis of the video recording of the actual driving data to generate the first video. That is to say, from the first video, one can not only view the actual driving conditions of the driving equipment, but also view the correct target driving trajectory and the control curve corresponding to the target control instructions, and can also learn, through text and/or voice, about the abnormal driving behavior of the driving equipment during actual driving.
  • after the driving device generates the first video, it can send the first video to the terminal device.
  • the terminal device can display the first video to the driver through a display interface, etc., so that the driver can clearly understand the abnormal driving behavior and correct the bad driving behavior in time.
  • step 703 may also be performed first, and then step 702 may be performed. Alternatively, step 702 and step 703 may also be executed synchronously.
  • the terminal device displays the first video.
  • the terminal device provides driving guidance to the driving device according to the teaching strategy.
  • step 705 in this embodiment is similar to that of step 403 in FIG. 4 , which can be understood with reference to the content described in step 403 , and details are not repeated here.
  • the difference between the target driving strategy and the actual driving data is compared to obtain the abnormal driving data; the driving scene and abnormal driving behavior corresponding to the abnormal driving data are determined, and then the first video including the abnormal driving behavior is generated and the teaching strategy corresponding to the driving scene is determined.
  • the driver can view the abnormal driving behavior displayed in the first video through the terminal device, and operate the driving device correctly according to the instruction of the teaching strategy.
  • it not only enables the driver to look back at the abnormal driving behavior that occurred during the driving process, but also provides targeted teaching strategies in combination with the environmental data of the actual driving process to guide the driver to operate the driving equipment correctly, correct the abnormal driving behavior, and avoid driving hazards.
  • the real-time adjustment of the target driving strategy based on the environmental data avoids relying on a single, fixed teaching strategy, can adapt to complex and changeable driving scenarios, and provides usability and practicability for driving on actual roads.
  • the above-mentioned driving assistance device includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as exceeding the scope of the present application.
  • this application can divide the functional units of the driving assistance device according to the above method embodiments; for example, each functional unit can be divided corresponding to each function, or two or more functions can be integrated into one functional unit.
  • the above-mentioned integrated functional units can be implemented in the form of hardware or in the form of software functional units.
  • FIG. 9 shows a schematic structural diagram of a driving assistance device provided by an embodiment of the present application.
  • the described driving assistance device may be a driving device, an automatic driving device, etc., which are not limited in this application.
  • the driving assistance device may include: a processing unit 901 and a sending unit 902 .
  • the driving assistance device may further include an acquisition unit 903 .
  • the processing unit 901 is configured to determine abnormal driving data according to the target driving strategy and actual driving data.
  • the target driving strategy is obtained based on the actual driving data and the corresponding environment data when driving the driving device.
  • for a specific implementation, please refer to the detailed description of step 401 in the embodiment shown in FIG. 4 and step 701 in the embodiment shown in FIG. 7 , and details are not repeated here.
  • the sending unit 902 is configured to send a teaching strategy to the terminal device, where the teaching strategy corresponds to a driving scene, and the driving scene corresponds to the abnormal driving data.
  • the teaching strategy is used to provide driving guidance to the driving device.
  • for a specific implementation, please refer to the detailed description of steps 402 to 403 in the embodiment shown in FIG. 4 and step 702 and step 705 in the embodiment shown in FIG. 7 , and details are not repeated here.
  • the sending unit 902 is further configured to send the first video to the terminal device.
  • the first video includes the actual driving data, the target driving strategy, and abnormal driving behavior corresponding to the abnormal driving data.
  • for a specific implementation, please refer to the detailed description of steps 703 to 704 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • the target driving strategy includes a target driving trajectory and target control instructions corresponding to each trajectory point in the target driving trajectory.
  • the actual driving data includes an actual driving trajectory and actual control instructions corresponding to each trajectory point in the actual driving trajectory.
  • the processing unit 901 is configured to: calculate the similarity between the first data and the second data within the first duration, and, when the similarity is smaller than a preset similarity threshold, determine that the second data within the first duration is abnormal driving data.
  • the first data is the data of the target driving trajectory and the target control instruction that changes with the change of the driving time.
  • the second data is the data of the actual driving trajectory and the actual control instruction that changes with the change of the driving time.
  • the first duration is any duration in at least one group of durations in the driving time. For a specific implementation manner, please refer to the detailed description of step 401 in the embodiment shown in FIG. 4 and step 701 in the embodiment shown in FIG. 7 , and details are not repeated here.
  • the target driving strategy further includes target driving decision data.
  • the processing unit 901 is further configured to determine the driving scene based on the target driving decision data and the environment data.
  • for a specific implementation, please refer to the detailed description of step 402 in the embodiment shown in FIG. 4 and step 702 in the embodiment shown in FIG. 7 , and details are not repeated here.
  • the processing unit 901 is further configured to: use the environmental data, the target driving strategy and the actual driving data as inputs of a behavior analysis model to determine the abnormal driving behavior .
  • the behavior analysis model is a model obtained by training an initial model with determining the abnormal driving behavior as the training target, using the environmental data, target driving strategy and actual driving data corresponding to the occurrence of the abnormal driving behavior as the training data.
  • the acquiring unit 903 is configured to: acquire the first distance and the second distance, and acquire the first safety distance and the second safety distance.
  • the first distance is the distance between the driving device and the obstacle in the first direction.
  • the second distance is a distance between the driving device and the obstacle in a second direction, and the first direction is perpendicular to the second direction.
  • the first safety distance is a safety distance between the driving device and the obstacle in the first direction.
  • the second safety distance is a safety distance between the driving device and the obstacle in the second direction.
  • the processing unit 901 is configured to: determine a first verification result according to the first distance and the first safety distance, and determine a second verification result based on the second distance and the second safety distance;
  • and instruct the driving device, based on the first verification result and/or the second verification result, to execute a safe operation instruction.
  • the acquiring unit 903 is configured to: acquire the first speed and the second speed.
  • the first speed is the driving speed of the driving device in the first direction.
  • the second speed is the driving speed of the driving device in the second direction.
  • the first safety distance is determined based on the first speed.
  • the second safety distance is determined based on the second speed.
  • the above describes the driving assistance device in the embodiments of the present application from the perspective of a modular functional entity. Described from the perspective of physical devices, the above driving assistance device may be implemented by one physical device, may be jointly implemented by multiple physical devices, or may be a logical functional unit within one physical device, which is not specifically limited in this embodiment of the present application.
  • FIG. 10 is a schematic diagram of a hardware structure of a communication device provided by an embodiment of the present application.
  • the communication device includes at least one processor 1001 , a communication line 1007 , a memory 1003 and at least one communication interface 1004 .
  • the processor 1001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of the present application.
  • the communication line 1007 may comprise a path through which information is communicated between the aforementioned components.
  • the communication interface 1004 uses any device such as a transceiver for communicating with other devices or a communication network, such as Ethernet and the like.
  • the memory 1003 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently and be connected to the processor through the communication line 1007 .
  • the memory 1003 can also be integrated with the processor 1001.
  • the memory 1003 is used to store computer-executed instructions for implementing the solutions of the present application, and the execution is controlled by the processor 1001 .
  • the processor 1001 is configured to execute computer-executed instructions stored in the memory 1003, so as to implement the driving assistance method provided by the above-mentioned embodiments of the present application.
  • the computer-executed instructions in the embodiments of the present application may also be referred to as application program codes, which is not specifically limited in the embodiments of the present application.
  • the processor 1001 may include one or more CPUs, for example, CPU0 and CPU1 in FIG. 10 .
  • a communications device may include multiple processors, for example, processor 1001 and processor 1002 in FIG. 10 .
  • each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the communication device may further include an output device 1005 and an input device 1006 .
  • the output device 1005 is in communication with the processor 1001 and can display information in a variety of ways.
  • the input device 1006 communicates with the processor 1001 and can receive user input in various ways.
  • the input device 1006 may be a mouse, a touch screen device, or a sensing device, among others.
  • the communication device mentioned above may be a general-purpose device or a dedicated device.
  • the communication device may be a portable computer, a mobile terminal, etc., or a device having a structure similar to that shown in FIG. 10 .
  • the embodiment of the present application does not limit the type of the communication device.
  • the processor 1001 in FIG. 10 can call the computer-executed instructions stored in the memory 1003 to make the driving assistance device execute the method in the method embodiment corresponding to FIG. 4 and FIG. 7 .
  • the function/implementation process of the processing unit 901 in FIG. 9 may be implemented by the processor 1001 in FIG. 10 invoking computer execution instructions stored in the memory 1003 .
  • the functions/implementation process of the acquiring unit 903 and the sending unit 902 in FIG. 9 may be implemented through the communication interface 1004 in FIG. 10 .
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the disclosed system, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
  • a computer program product includes one or more computer instructions. When computer-executed instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • a computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
  • computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fibre or digital subscriber line (DSL)) or in a wireless manner (such as infrared, radio or microwave).
  • the computer-readable storage medium may be any available medium that can be stored by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • usable media may be magnetic media (e.g., a floppy disk, a hard disk or a magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., an SSD), among others.

Abstract

A driving assistance method and a related device. In the method, a targeted teaching strategy is given in combination with environment data of the actual driving process so as to guide the driver to operate the driving device correctly and avoid driving hazards; in addition, the target driving strategy is adjusted in real time on the basis of the environment data, which avoids subsequently obtaining a single, fixed teaching strategy, adapts to complex and changeable driving scenes, and is usable and practical for driving on actual roads. The method may comprise: determining abnormal driving data according to a target driving strategy and actual driving data, the target driving strategy being obtained on the basis of environment data corresponding to driving the driving device with the actual driving data; and sending a teaching strategy to a terminal device, the teaching strategy corresponding to a driving scene, and the driving scene corresponding to the abnormal driving data. The teaching strategy can be used to provide driving guidance for the driving device.

Description

一种驾驶辅助的方法以及相关设备 技术领域
本申请涉及自动驾驶技术领域,具体涉及一种驾驶辅助的方法以及相关设备。
背景技术
在道路上驾驶车辆等行驶设备时,若因缺乏相关的驾驶经历,对某些行驶场景没有进行正确地处理,从而导致较多的不良驾驶行为与驾驶习惯出现。例如:高风险变道、变道不打开转向灯、驾驶速度较慢等等。如果不能及时地纠正这些不良的驾驶行为与驾驶习惯,很容易造成驾驶员等人员的安全存在较大的风险。
目前,通常都是在驾校中,通过教练员提供较为简单的辅助教学计划,指导新手驾驶员驾驶车辆。而且,在当前的辅助教学计划中,并不考虑实际的交通规则和实际道路场景,教学策略较为单一,多为简单的超速、距离过近等提醒,在实际道路上行驶时不具备适用性。
发明内容
本申请实施例提供了一种驾驶辅助的方法以及相关设备。不仅结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,避免行车危险。而且还基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
第一方面,本申请提供了一种驾驶辅助的方法。该驾驶辅助的方法可以应用在驾驶辅助装置中,例如行驶设备、自动驾驶装置中,本申请不做限定。在该驾驶辅助的方法中,首先可以根据目标行驶策略和实际行驶数据确定异常行驶数据。目标行驶策略是基于实际行驶数据驾驶行驶设备时对应的环境数据得到的。然后,向终端设备发送教学策略,教学策略与行驶场景对应,行驶场景与异常行驶数据对应。教学策略可以用于对行驶设备进行行驶指导。通过上述方式,通过环境数据得到目标行驶策略后,将该目标行驶策略和实际行驶数据进行差异比较,得到异常行驶数据。并确定出与该异常行驶数据对应的行驶场景,以及进一步确定出与行驶场景对应的教学策略。这样,驾驶员能够通过终端设备查看到教学策略后,按照教学策略的指示正确地操作该行驶设备。不仅结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,避免行车危险;而且还基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
在一种可能的实施方式中,还可以向所述终端设备发送第一视频。所述第一视频包括所述实际行驶数据、所述目标行驶策略、以及与所述异常行驶数据对应的异常行驶行为。通过上述方式,驾驶员通过终端设备能够查看到该第一视频中显示出的异常驾驶行为,即回看行驶过程中出现的异常行驶行为,使得驾驶员进一步按照教学策略的指示正确地操作该行驶设备,纠正异常行驶行为。
在另一种可能的实施方式中,所述目标行驶策略包括目标行驶轨迹和所述目标行驶轨迹中各个轨迹点对应的目标控制指令。所述实际行驶数据包括实际行驶轨迹和所述实际行 驶轨迹中各个轨迹点对应的实际控制指令。所述基于目标行驶策略和实际行驶数据确定异常行驶数据,可以采用如下方式,即:首先计算第一时长内的第一数据和第二数据之间的相似度。其中,所述第一数据为所述目标行驶轨迹和所述目标控制指令中随着行驶时间的变化而发生变化的数据。所述第二数据为所述实际行驶轨迹和所述实际控制指令中随着所述行驶时间的变化而发生改变的数据。所述第一时长为所述行驶时间中至少一组时长中的任一时长。然后,在所述相似度小于预设相似阈值时,确定所述第一时长内的第二数据为异常行驶数据。
在另一种可能的实施方式中,所述目标行驶策略还包括目标行驶决策数据。在向终端设备发送教学策略之前,还可以基于所述目标行驶决策数据和所述环境数据确定所述行驶场景。
在另一种可能的实施方式中,在向终端设备发送第一视频之前,还可以将所述环境数据、所述目标行驶策略和所述实际行驶数据作为行为分析模型的输入,以确定所述异常行驶行为。其中,所述行为分析模型是以确定出所述异常行驶行为为训练目标,以发生所述异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。
在另一种可能的实施方式中,该驾驶辅助的方法还可以包括:首先,获取第一距离和第二距离,以及获取第一安全距离和第二安全距离。其中,所述第一距离为所述行驶设备在第一方向上与障碍物之间的距离。所述第二距离为所述行驶设备在第二方向上与所述障碍物之间的距离,所述第一方向垂直于所述第二方向。所述第一安全距离为所述行驶设备在所述第一方向上与所述障碍物之间的安全距离。所述第二安全距离为所述行驶设备在所述第二方向上与所述障碍物之间的安全距离。然后,基于所述第一距离和所述第一安全距离确定第一校验结果,以及基于所述第二距离和所述第二安全距离确定第二校验结果。最后,基于所述第一校验结果和/或所述第二校验结果,指示所述行驶设备执行安全操作指令。通过上述方式,可以在不同的场景中给予不同的安全操作指令,避免行车危险。
在另一种可能的实施方式中,所述获取第一安全距离和第二安全距离,包括:获取第一速度和第二速度。所述第一速度为所述行驶设备在所述第一方向上的行驶速度,所述第二速度为所述行驶设备在所述第二方向上的行驶速度。基于所述第一速度确定所述第一安全距离,以及基于所述第二速度确定所述第二安全距离。
第二方面,本申请实施例提供了一种驾驶辅助装置。该驾驶辅助装置包括处理单元、发送单元。可选地,还可以包括获取单元。处理单元,用于根据目标行驶策略和实际行驶数据确定异常行驶数据。所述目标行驶策略是基于所述实际行驶数据驾驶所述驾驶设备时对应的环境数据得到。发送单元,用于向终端设备发送教学策略,所述教学策略与行驶场景对应,所述行驶场景与所述异常行驶数据对应。所述教学策略用于对所述驾驶设备进行行驶指导。
在一种可能的实施方式中,所述发送单元还用于向所述终端设备发送第一视频。所述第一视频包括所述实际行驶数据、所述目标行驶策略、以及与所述异常行驶数据对应的异常行驶行为。
在另一种可能的实施方式中,所述目标行驶策略包括目标行驶轨迹和所述目标行驶轨迹中各个轨迹点对应的目标控制指令。所述实际行驶数据包括实际行驶轨迹和所述实际行驶轨迹中各个轨迹点对应的实际控制指令。所述处理单元用于:计算第一时长内的第一数据和第二数据之间的相似度,并在相似度小于预设相似阈值时,确定所述第一时长内的第二数据为异常行驶数据。其中,所述第一数据为所述目标行驶轨迹和所述目标控制指令中随着行驶时间的变化而发生变化的数据。所述第二数据为所述实际行驶轨迹和所述实际控制指令中随着所述行驶时间的变化而发生改变的数据。所述第一时长为所述行驶时间中至少一组时长中的任一时长。
在另一种可能的实施方式中,所述目标行驶策略还包括目标行驶决策数据。所述处理单元还用于基于所述目标行驶决策数据和所述环境数据确定所述行驶场景。
在另一种可能的实施方式中,所述处理单元还用于:将所述环境数据、所述目标行驶策略和所述实际行驶数据作为行为分析模型的输入,以确定所述异常行驶行为。其中,所述行为分析模型是以确定出所述异常行驶行为为训练目标,以发生所述异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。
在另一种可能的实施方式中,所述获取单元用于:获取第一距离和第二距离,以及获取第一安全距离和第二安全距离。其中,所述第一距离为所述驾驶设备在第一方向上与障碍物之间的距离。所述第二距离为所述驾驶设备在第二方向上与所述障碍物之间的距离,所述第一方向垂直于所述第二方向。所述第一安全距离为所述驾驶设备在所述第一方向上与所述障碍物之间的安全距离。所述第二安全距离为所述驾驶设备在所述第二方向上与所述障碍物之间的安全距离。所述处理单元用于:根据所述第一距离和所述第一安全距离确定第一校验结果,以及基于所述第二距离和所述第二安全距离确定第二校验结果;基于所述第一校验结果和/或所述第二校验结果,指示所述驾驶设备执行安全操作指令。
在另一种可能的实施方式中,所述获取单元用于:获取第一速度和第二速度。所述第一速度为所述驾驶设备在所述第一方向上的行驶速度,所述第二速度为所述驾驶设备在所述第二方向上的行驶速度。基于所述第一速度确定所述第一安全距离,以及基于所述第二速度确定所述第二安全距离。
第三方面,本申请实施例提供了一种自动驾驶装置。该自动驾驶装置可以包括:存储器和处理器。其中,存储器用于存储计算机可读指令。处理器与存储器耦合。处理器用于执行存储器中的计算机可读指令从而执行如第一方面或第一方面任意一种可能的实施方式中所描述的方法。
第四方面,本申请实施例提供了一种行驶设备。该行驶设备可以包括:存储器和处理器。其中,存储器用于存储计算机可读指令。处理器与存储器耦合。处理器用于执行存储器中的计算机可读指令从而执行如第一方面或第一方面任意一种可能的实施方式中所描述的方法。
第五方面,本申请实施例提供了一种自动驾驶装置。该自动驾驶装置可以为车载装置或者车载装置中的芯片或者片上系统。该自动驾驶装置可以实现上述各方面或者各可能的 设计自动驾驶装置所执行的功能,所述功能可以通过硬件实现。例如,在一种可能的设计中,该自动驾驶装置可以包括:处理器和通信接口,处理器用于运行计算机程序或指令,以实现如第一方面或者第一方面的任一种可能的实现方式中所描述的驾驶辅助的方法。
本申请第六方面提供一种计算机可读存储介质,当指令在计算机装置上运行时,使得计算机装置执行如第一方面或第一方面任意一种可能的实施方式中所描述的方法。
本申请第七方面提供一种计算机程序产品,当在计算机上运行时,使得计算机可以执行如第一方面或第一方面任意一种可能的实施方式中所描述的方法。
本申请实施例提供的技术方案中,通过环境数据得到目标行驶策略后,将该目标行驶策略和实际行驶数据进行差异比较,得到异常行驶数据。并确定出与该异常行驶数据对应的行驶场景,以及进一步确定出与行驶场景对应的教学策略。这样,驾驶员能够通过终端设备查看到教学策略后,按照教学策略的指示正确地操作该行驶设备。不仅结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,避免行车危险;而且还基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
附图说明
图1示出了本申请实施例提供的一种辅助驾驶系统的示意图;
图2示出了本申请实施例提供的行驶设备的一种结构示意图;
图3示出了本申请实施例提供的一种自动驾驶装置的结构示意图;
图4示出了本申请实施例提供的驾驶辅助的方法的第一种流程示意图;
图5示出了本申请实施例提供的一种行驶设备行驶的场景示意图;
图6示出了本申请实施例提供的一种数据之间的距离的示意图;
图7示出了本申请实施例提供的驾驶辅助的方法的第二种流程示意图;
图8示出了本申请实施例提供的一种行为分析的网络示意图;
图9示出了本申请实施例提供了驾驶辅助装置的一种结构示意图;
图10示出了本申请实施例提供的通信设备的硬件结构示意图。
具体实施方式
本申请实施例提供了一种驾驶辅助的方法以及相关设备。不仅结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,避免行车危险。而且还基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例如能够以除了 在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含。在本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,a和b,a和c,b和c或a和b和c,其中a、b和c可以是单个,也可以是多个。值得注意的是,“至少一项(个)”还可以解释成“一项(个)或多项(个)”。
目前,通常是通过教练员为新手驾驶员提供简单的辅助教学计划,指导新手驾驶员驾驶车辆。而在当前的辅助教学计划中,并不考虑实际的交通规则和实际道路场景,教学策略单一,在实际道路上行驶时不具备适用性,并且也无法保障驾驶员等人员的安全,造成行为危险。
为了解决上述所提及的技术问题,本申请实施例提供了一种驾驶辅助的方法。图1示出了本申请实施例提供的一种辅助驾驶系统的示意图。该方法可以应用在图1示出的辅助驾驶系统中。如图1所示,该辅助驾驶系统包括行驶设备和终端设备。行驶设备和终端设备通过网络连接,例如:有线网络或者蓝牙、无线保真(wireless fidelity,WiFi)等无线网络等。其中,行驶设备可以通过比较目标行驶策略和实际行驶数据之间的差异性,以此确定出异常行驶数据。然后,行驶设备确定出该异常行驶数据出现时所对应的行驶场景,并基于该行驶场景从数据库中查找出相应的教学策略。这样,行驶设备将教学策略发送至终端设备,使得终端设备能够根据教学策略指导驾驶员正确地驾驶该行驶设备,纠正不良的驾驶习惯,并且能够适应复杂多变的行驶场景,具备实际道路上的行驶的可用性和实用性。
应理解,行驶设备可以为智能网联驾驶(intelligent network driving)车辆,是一种车联网终端。行驶设备具体可以通过其内部的功能单元或装置执行本申请实施例提供的驾驶辅助的方法。例如:行驶设备中可以包括用于执行本申请实施例提供的驾驶辅助的方法的自动驾驶装置。自动驾驶装置可以通过控制器局域网络(controller area network,CAN)总线与行驶设备的其他部件通信连接。作为一个具体的示例,该行驶设备的具体结构将在后续的图2所示的实施例中详细地描述。
应理解,上述所提及的行驶设备还可以是智能汽车、轿车、卡车、公共汽车、施工车辆、柴油车等搭载了自动驾驶装置的车辆,本申请实施例中不做特别的限定。终端设备可以包括但不限于手机、可折叠电子设备、平板电脑、膝上型计算机、手持设备、笔记本电脑、上网本、个人数字助理(personal digital assistant,PDA)、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备,或者连接无线调制解调器的其他处理设备,以及各种形式的用户设备(user equipment,UE),移动台(mobile station,MS)等等。本申请实施例对该终端设备的具体类型不作特殊限制。
应理解,本申请实施例提供的驾驶辅助的方法除了应用于上述图1示出的辅助驾驶系统以外,在实际应用中,还可能应用在其他的系统架构中,本申请实施例不做具体限定。
图2示出了本申请实施例提供的行驶设备的一种结构示意图。如图2所示,行驶设备包括自动驾驶装置、车身网关、车身天线等等部件。其中,自动驾驶装置可以通过射频(radio frequency,RF)电缆与车身天线通信连接。
其中,自动驾驶装置可以称为车载单元(on board unit,OBU)、车载终端等。例如,自动驾驶装置可以为车载盒子(telematics BOX,T-Box)。自动驾驶装置主要用于执行本申请实施例提供的驾驶辅助的方法。另外,自动驾驶装置可以为车联网芯片等。该自动驾驶装置的具体结构将在图3所示的实施例中详细描述。
车身网关主要用于车辆信息的接收和发送,车身网关可以通过CAN总线与自动驾驶装置连接。示例性的,车身网关可以从自动驾驶装置获取自动驾驶装置执行本申请实施例提供的驾驶辅助的方法后得到的目标行驶策略以及实际行驶数据等,将获取到的目标行驶策略以及实际行驶数据等信息发送给行驶设备的其他部件。
车身天线可以内置通信天线,通信天线负责信号的接收和发送。例如,通信天线可以将行驶设备的驾驶信息等发送给终端设备、其他行驶设备中的自动驾驶装置等;也可以接收来终端设备的指令,或者接收其他自动驾驶装置发送的驾驶信息等。
可以理解的是,图2示意的结构并不构成对行驶设备的具体限定。在一些实施例中,行驶设备可以包括比图2所示部件更多或更少的部件。或者,行驶设备可以包括图2示出的某些部件的组合部件。或者,行驶设备可以包括图2所示部件的拆分部件等。如:行驶设备还可以包括域控制器(domain controller,DC)、多域控制器(multi-domain controller,MDC)等,本申请实施例不做限定。图2示出的部件可以以硬件、软件或软件和硬件的组合实现。
图3示出了本申请实施例提供的一种自动驾驶装置的结构示意图。如图3所示,自动驾驶装置可以包括自动驾驶模块、数据采集模块、数据分析模块以及辅助教学模块。
其中,自动驾驶模块可以用于接收该行驶设备上各类传感器采集到的与行驶设备实际行驶过程中的驾驶数据,例如:环境数据等,并基于该实际行驶过程中的环境数据提供目标决策策略。所描述的目标决策策略包括目标行驶轨迹、与目标行驶轨迹中各个轨迹点相关的目标控制指令、目标驾驶决策数据等。
数据采集模块可以用于实时地接收该自动行驶设备发送的目标行驶策略,以及实时地采集该行驶设备在实际行驶过程中的实际行驶数据。
数据分析模块可以用于判断该目标行驶策略和实际行驶数据之间的差异,并结合环境数据等信息分析出差异较大的数据(后续也称为异常行驶数据)产生时所对应的行驶场景、异常驾驶行为等信息。在一些可能的示例中,数据分析模块也可以包括差异分析子模块、场景分析子模块、行为分析子模块。其中,差异分析子模块用于判断该目标行驶策略和实际行驶数据之间的差异,确定出异常行驶数据。场景分析子模块可以用于分析出异常行驶数据产生时所对应的行驶场景。行为分析子模块用于分析出异常行驶数据产生时所对应的驾驶员的异常驾驶行为等。
辅助教学模块可以用于根据实际行驶数据生成回放视频,并在回放视频中添加异常驾驶行为、目标行驶策略等信息。也可以用于根据行为场景从数据库中查找到相应的教学策略等。示例性地,辅助教学模块可以包括视频生成子模块、教程生成子模块。其中,视频 生成子模块可以用于根据实际行驶数据生成回放视频,并在回放视频中添加异常驾驶行为、目标行驶策略等。教程生成子模块可以用于根据行为场景从数据库中查找到相应的教学策略。
在一些可能的示例中,如图3所示,自动驾驶装置还可以包括安全校验模块。该安全校验模块可以基于环境数据判断该行驶设备当前是否处于安全状态,并在安全的场景或者在不安全的场景中,都可以提供相应的安全操作指令。
可以理解的是,图3示意的结构并不构成对自动驾驶装置的具体限定。在另一些实施例中,自动驾驶装置可以包括比图3所示部件更多或更少的部件。或者,自动驾驶装置可以包括图3示出的某些部件的组合部件。或者,自动驾驶装置可以包括图3所示部件的拆分部件等。图3示的部件可以以硬件、软件或软件和硬件的组合实现。本申请实施例中,图3所示的装置也可以为自动驾驶装置中的芯片或片上系统。片上系统可以由芯片构成,也可以包括芯片和其他分立器件,本申请实施例中不做具体限定。
需要说明的是,本申请实施例中提供的驾驶辅助的方法可以应用在图2或图3示出的自动驾驶装置中。也可以应用在该自动驾驶装置中的芯片或片上系统中。或者也可以应用在图1或图2所示出的行驶设备等等,本申请实施例不做具体限定。后续仅以行驶设备执行本申请实施例提供的驾驶辅助的方法为例进行说明。
图4示出了本申请实施例提供的驾驶辅助的方法的第一种流程示意图。如图4所示,该驾驶辅助的方法可以包括如下步骤:
401、行驶设备基于目标行驶策略和实际行驶数据确定异常行驶数据,其中,目标行驶策略是基于实际行驶数据驾驶行驶设备时对应的环境数据得到。
在该示例中,行驶设备中可以搭载摄像头、毫米波雷达、激光雷达、惯性测量单元(inertial measurement,IMU)、全球定位系统(global positioning system,GPS)等传感器。这些传感器能够用来获取行驶设备在实际行驶过程中的实际行驶数据,例如:摄像头采集环境数据、毫米波雷达可以探测行驶设备与障碍物之间的距离等。这样,行驶设备可以从各类传感器中获取到相应的实际行驶数据,并基于产生该实际行驶数据时所对应的环境数据确定出目标行驶策略。需说明,该目标行驶策略可以包括能够反映出该行驶设备满足自动驾驶需求的数据,以确定出合理的目标行驶轨迹。或者说,按照目标行驶策略驾驶该行驶设备,可以规避掉不良的驾驶行为和行车危险。
上述的目标行驶策略包括目标行驶轨迹、以及目标行驶轨迹中各个轨迹点对应的目标控制指令。示例性地,目标行驶策略还可以包括目标驾驶决策数据。需说明,目标行驶轨迹是由行驶设备在行驶过程中,基于目标驾驶决策数据和环境数据所得到的一系列轨迹点连接形成的轨迹,即合理的行驶路线。在每一个轨迹点中,都包括了相应的位置信息、朝向角、速度、加速度等信息,能够作为控制行驶设备行驶的参考依据。目标控制指令可以理解成用于控制行驶设备行驶时有关执行器的目标控制量。比如:方向盘的目标转动角度、底盘的目标加速度、目标档位、目标转向灯等,本申请不做限定说明。目标驾驶决策数据可以理解成驾驶员想要该行驶设备执行的驾驶行为,例如:执行当前车道跟车、车道保持、变道切入、避让后方车辆、让行前方车辆、超越前方车辆等等驾驶行为,本申请不做限定 说明。
所提及的环境数据可以反映出行驶设备在当前行驶过程中的环境情况。该环境数据包括但不限于该行驶设备的行驶道路上的障碍物信息、交通环境信息等。障碍物信息又包括但不限于车辆、行人或者道路等障碍物。交通环境信息可以包括但不限于道路信息、灯光条件以及天气情况等。例如,道路信息可以为高速公路、国道、省道、城市道路、乡村道路、普通弯道、直道、急转弯道、单行道、多行道、匝道或者城区路口等。道路信息还可以包括交通标志牌、红绿灯、车道线等,本申请不做限定。
类似地,上述的实际行驶数据也可以包括实际行驶轨迹、以及该实际行驶轨迹中各个轨迹点对应的实际控制指令。实际行驶轨迹中的每个轨迹点可以包括位置信息。实际控制指令可以表示出控制行驶设备行驶时有关执行器的实际控制量。譬如说,方向盘的实际转动角度、底盘的实际加速度、实际档位、实际的转向灯信息等等,本申请不做限定。举例来说,图5示出了本申请实施例提供的一种行驶设备行驶的场景示意图。如图5所示,在当前行驶场景中,包括行驶设备1、行驶设备2和行驶设备3。其中,在行驶设备1的当前行驶过程中,目标行驶轨迹指示出行驶设备1应当在当前车道1上直行,而实际行驶轨迹则显示出该行驶设备1从当前车道1向右变道至车道2中。
需说明,上述图5示出的行驶场景仅仅是一个示例性的描述,在实际应用中,还可能是其他的场景,本申请不做限定。
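To make the composition of the target driving strategy and the actual driving data described above more concrete, the following Python sketch defines hypothetical container types. All class and field names are assumptions introduced purely for illustration; they are not data structures prescribed by this application.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ControlCommand:
    # Controlled quantities of the actuators at one trajectory point (target or actual)
    steering_angle: float              # steering-wheel angle
    acceleration: float                # chassis acceleration
    gear: int                          # gear position
    turn_signal: Optional[str] = None  # "left", "right" or None

@dataclass
class TrajectoryPoint:
    x: float                           # position
    y: float
    heading: float                     # heading angle
    speed: float
    command: ControlCommand            # control command associated with this point

@dataclass
class DrivingStrategy:
    # Target driving strategy: target trajectory with per-point target commands and,
    # optionally, driving decision data such as "follow" or "change lane".
    trajectory: List[TrajectoryPoint] = field(default_factory=list)
    decision: Optional[str] = None

@dataclass
class DrivingRecord:
    # Actual driving data: the actually driven trajectory with per-point actual commands.
    trajectory: List[TrajectoryPoint] = field(default_factory=list)
```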
这样,行驶设备在获取到目标行驶策略和实际行驶数据后,可以对目标行驶策略和实际行驶数据进行分析处理,确定出异常行驶数据。需说明,所描述的异常行驶数据也可以理解成目标行驶策略中的目标行驶轨迹和目标控制指令,与实际行驶数据中的实际行驶轨迹和实际控制指令之间存在较大差异的数据。
在一些示例中,对于行驶轨迹中的位置信息、速度等数据,控制指令中的刹车指令、油门指令、档位、转向灯指令等数据都会随着该行驶设备的行驶时间的变化而发生改变。因此,行驶设备确定异常行驶数据,可以通过以下方式来实现,即:计算第一时长内的第一数据和第二数据之间的相似度;在相似度小于预设相似阈值时,确定第一时长内的第二数据为异常行驶数据。
需说明,行驶时间可以被划分为至少一组时长,第一时长为这至少一组时长中的任一时长。第一数据可以理解成目标行驶轨迹和目标控制指令中随着行驶时间的变化而发生改变的数据。第二数据可以理解成实际行驶轨迹和实际控制指令中随着行驶实际的变化而发生改变的数据。应理解,所描述的改变,可以理解成连续变化或者突变。
行驶设备可以根据属性类型划分第一时长内的第一数据,以得到不同属性类型的第一子数据。举例来说,可以将第一数据划分为位置类型、档位类型、指令类型这三种类型的子数据。同样地,行驶设备也可以根据属性类型将第一时长内的第二数据划分成不同属性类型的第二子数据。比如,将第二数据也划分为位置类型、档位类型、指令类型这三种类型的子数据。需说明,指令类型又可以包括但不限于刹车指令、油门指令、转向灯指令等,本申请不做限定说明。
针对同一属性类型,行驶设备例如可以通过欧式距离等算法,计算相应的第一子数据 和第二子数据之间的距离,以得到对应属性类型中的第一子数据和第二子数据之间的相似度。然后,行驶设备再将这第一时长内,所有的属性类型对应的第一子数据和第二子数据之间的相似度进行加权平均处理,进而得到这第一时长内的第一数据和第二数据之间的相似度。需理解,数据之间的距离越大,说明数据之间的相似度越低;反之,数据之间的距离越小,说明数据之间的相似度越高。
这样,行驶设备通过将第一数据和第二数据之间的相似度与预设相似阈值进行比较,并在该相似度小于预设相似阈值时,确定该第一时长内的第二数据为异常行驶数据。需说明,在实际应用中,也可以进一步将第一时长划分,基于上述计算相似度的操作确定出每一组子时长内的数据间的相似度。并在任意一个子时长内的数据间的相似度小于预设相似阈值时,直接确定该第一时长内的第二数据为异常行驶数据。
举例来说,图6为本申请实施例提供的一种数据之间的距离的示意图。以图5示出的行驶轨迹为例,以行驶时间为横坐标、数据之间的距离为纵坐标构建坐标系。从图6可以看出,在第一时长(即t1到t2)内,实际行驶轨迹与目标行驶轨迹之间的距离较大。若在t1到t2内,计算得到与位置类型对应的相似度为0.6、与档位类型对应的相似度为0.5、与指令类型对应的相似度为0.3。而且位置类型、档位类型、指令类型的权重分别为0.4、0.2、0.4。那么,通过加权平均处理,计算得到的第一数据和第二数据之间的相似度为0.46。若预设相似阈值为0.5,很明显可知0.46<0.5,可以确定出t1到t2这段时长内的第二数据为异常行驶数据。
应理解,上述图6中示出的时长、权重、相似度的具体取值仅仅是一个示意性的描述。在实际应用中,还可以是其他的取值,本申请实施例中不做限定。此外,属性类型除了包括位置类型、档位类型以及指令类型以外,在实际应用中还可能是其他的类型,本申请实施例不做限定。
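As an illustration of the similarity check described above, the following Python sketch reproduces the weighted-average calculation from the numeric example. The mapping from a per-attribute Euclidean distance to a similarity score, as well as the function and variable names, are assumptions made only for illustration.

```python
import math

def attribute_similarity(target_values, actual_values):
    # Euclidean distance between the target and actual sequences of one attribute
    # type, mapped onto a similarity in (0, 1]; the exponential mapping is an
    # illustrative choice, not a formula given in this application.
    distance = math.sqrt(sum((t - a) ** 2 for t, a in zip(target_values, actual_values)))
    return math.exp(-distance)

def window_similarity(per_attribute_similarity, weights):
    # Weighted average of the per-attribute similarities within one duration.
    return sum(per_attribute_similarity[k] * weights[k] for k in weights)

# Values taken from the example for the duration t1..t2.
sims = {"position": 0.6, "gear": 0.5, "command": 0.3}
weights = {"position": 0.4, "gear": 0.2, "command": 0.4}
similarity = window_similarity(sims, weights)   # 0.4*0.6 + 0.2*0.5 + 0.4*0.3 = 0.46
is_abnormal = similarity < 0.5                  # True: the data within t1..t2 is abnormal driving data
```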
402、行驶设备向终端设备发送教学策略,教学策略与行驶场景对应,行驶场景与异常行驶数据对应。
该示例中,从前述步骤401的内容可知,目标驾驶决策数据可以反映出驾驶员想要该行驶设备执行的驾驶行为,而且环境数据可以反映出行驶设备在当前行驶过程中的环境情况。因此,行驶设备可以从目标驾驶决策数据中分析出当前所执行的驾驶行为,如:直行、跟车、超车、变道或者让行等。同样地,由于环境数据包括交通环境信息和障碍物信息,而且交通环境信息中包括道路信息以及天气情况等信息。那么,可以从道路信息中确定出在产生该异常行驶数据时,行驶设备在哪些行驶区域、哪些类型的道路上行驶,譬如:城区单行道、高速多行道、城区路口、急转弯道等。从天气情况中确定出产生该异常行驶数据时,行驶设备所处的天气,譬如:晴天、雨天、暴雨、雾天、阴天等。
这样,行驶设备在确定出异常行驶数据后,可以进一步基于目标行驶决策数据和环境数据确定出与异常行驶数据对应的行驶场景。所描述的行驶场景,可以理解成异常行驶数据产生时,该行驶设备在行驶过程中所处的场景。行驶场景可以包括但不限于:晴朗天气下城区路口的多行道直线行驶,暴雨天气下城区单行道跟车、高速多行道超车等等,本申请不做限定说明。举例来说,结合前述图5示出的场景,行驶设备基于目标行驶决策数据 和环境数据,可以得到异常行驶数据时所对应的行驶场景为:晴朗天气下城区路口的多行道直行。
教学库中的教学材料会按照场景进行分类,每个教学材料都有与之对应的场景标签。这样,行驶设备在确定出与异常行驶数据对应的行驶场景后,可以根据该行驶场景从教学库中查找匹配的教学材料,进而将查找到的教学材料整合得到与行驶场景匹配的教学策略。也就是说,针对每一个行驶场景,都能够从教学库的教学材料中整合得到与之匹配的教学策略。然后,行驶设备可以将该教学策略发送至终端设备。举例来说,针对异常行驶数据所对应的行驶场景为“晴朗天气下城区路口的多行道直行”,可以将“晴天”、“城区路口”、“多行道”、“直行”这四个场景对应的教学材料进行整合处理,得到最终的教学策略。
需说明,教学策略中可以包括但不限于通过以视频、文字和/或语音等方式给出的正确的驾驶操作、驾驶建议等信息,本申请实施例不做限定。驾驶建议和驾驶操作也包括但不限于城区路口行驶注意事项、如何正确并线、常见车道标志等,本申请不做限定。
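A simplified Python sketch of the tag-based lookup of teaching materials described above follows. Representing the teaching library as a dictionary keyed by scene tags, and the example materials themselves, are assumptions made only for illustration.

```python
# Hypothetical teaching library: each material carries the scene tag it covers.
TEACHING_LIBRARY = {
    "sunny":          ["Adjust following distance and speed for good visibility."],
    "urban crossing": ["Observe the traffic lights and watch for pedestrians before entering the crossing."],
    "multi-lane":     ["Check mirrors and blind spots before any lane change."],
    "go straight":    ["Keep to the lane centre and hold a steady speed."],
}

def build_teaching_strategy(scene_tags):
    # Collect and merge the teaching materials whose tags match the driving scene.
    materials = []
    for tag in scene_tags:
        materials.extend(TEACHING_LIBRARY.get(tag, []))
    return materials

# Driving scene from the example: multi-lane straight driving at an urban
# crossing in sunny weather.
strategy = build_teaching_strategy(["sunny", "urban crossing", "multi-lane", "go straight"])
```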
在另一些实施方式中,行驶设备还可以根据环境数据判断本行驶设备是否处于安全行驶的状态。在安全行驶或危险行驶的情况下,还可以给予不同的操作指示。示例性地,该驾驶辅助的方法还可以包括:行驶设备获取第一距离和第二距离,以及获取第一安全距离和第二安全距离。然后,行驶设备基于第一距离和第一安全距离确定第一校验结果,根据第二距离和第二安全距离确定第二校验结果。最后,行驶设备基于第一校验结果和/或第二校验结果,指示行驶设备执行安全操作指令。
在该示例中,行驶设备在行驶过程中,可以通过自身搭载的超声波传感器等器件采集到第一距离和第二距离。所描述的第一距离为行驶设备在第一方向上与障碍物之间的距离。第二距离为行驶设备在第二方向上与该障碍物之间的距离。需说明,第一方向垂直于第二方向。举例来说,第一方向可以是该行驶设备前进行驶的方向,第二方向则理解成垂直于第一方向的方向。或者,第二方向也可以是行驶设备前进行驶的方向,本申请实施例中不限定。
同样地,行驶设备还可以通过超声波传感器等探测到第一速度和第二速度。第一速度为所述行驶设备在所述第一方向上的行驶速度。所述第二速度为所述行驶设备在所述第二方向上的行驶速度。然后,行驶设备进一步地基于第一速度计算出第一安全距离,基于第二速度计算出第二安全距离。所描述的第一安全距离是行驶设备在第一方向上与障碍物之间的安全距离,即在第一方向上需要与障碍物保持的安全距离。第二安全距离是行驶设备在第二方向上与障碍物之间的安全距离,即在第二方向上需要与障碍物保持的安全距离。
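This application does not prescribe a formula for deriving the safety distances from the two speeds; the Python sketch below uses a common reaction-time-plus-braking-distance model purely as an assumed example, and the parameter values are made up for illustration.

```python
def safety_distance(speed: float,
                    reaction_time: float = 1.0,
                    max_deceleration: float = 4.0,
                    margin: float = 0.5) -> float:
    # Illustrative safety distance for a given speed: distance covered during the
    # reaction time plus the braking distance, plus a fixed margin (all in metres).
    return speed * reaction_time + speed ** 2 / (2.0 * max_deceleration) + margin

first_speed, second_speed = 10.0, 1.0                    # e.g. measured by on-board sensors
first_safety_distance = safety_distance(first_speed)     # along the first direction
second_safety_distance = safety_distance(second_speed)   # along the second, perpendicular direction
```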
这样,行驶设备通过比较第一距离和第一安全距离,以及比较第二距离和第二安全距离,分别得到第一校验结果和第二校验结果。然后,行驶设备基于第一校验结果和/或第二校验结果,指示行驶设备执行不同的安全操作指令。具体参照下述的方式进行理解,即:
①若第一校验结果表示为第一距离大于第一安全距离,以及第二校验结果表示为第二距离大于第二安全距离的情况下,指示行驶设备安全地行驶通过。举例来说,若第一安全距离为1.5米(m)、第二安全距离为1m,第一距离和第二距离分别为5m、1.5m。明显可知,行驶设备距离障碍物存在一定的间距,此时行驶设备可以安全地行驶通过。
②若第一校验结果表示为第一距离小于第一安全距离、且大于第一安全距离的1/N(N为不为1的正数),和/或第二校验结果表示为第二距离小于第二安全距离、且大于第二安全距离的1/N的情况下,此时输出安全操作指示,提醒行驶设备注意避免碰撞到障碍物。
③若第一校验结果表示为第一距离小于第一安全距离的1/N,以及第二校验结果表示为第二距离小于第二安全距离的1/N,此时输出紧急控制指令和紧急方向盘指令。
需说明,除了上述所描述的三种情况下执行不同的安全操作指令以外,在实际应用中,还可以在其他的情况下执行相应的操作指令,本申请实施例中不限限定。
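The three cases above can be summarised by the following Python sketch; the divisor N, the example values and the names of the returned instructions are illustrative placeholders rather than values fixed by this application.

```python
def safety_instruction(d1, d1_safe, d2, d2_safe, n=2):
    # Map the comparison of the two measured distances against the two safety
    # distances onto the three cases described above (n corresponds to N, a
    # positive number other than 1).
    if d1 < d1_safe / n and d2 < d2_safe / n:
        return "emergency control and emergency steering"    # case 3
    if d1_safe / n < d1 < d1_safe or d2_safe / n < d2 < d2_safe:
        return "warn to avoid collision with the obstacle"   # case 2
    if d1 > d1_safe and d2 > d2_safe:
        return "pass safely"                                  # case 1
    return "warn to avoid collision with the obstacle"       # conservative fallback

# Example from case 1: measured distances of 5 m and 1.5 m against safety
# distances of 1.5 m and 1 m.
print(safety_instruction(5.0, 1.5, 1.5, 1.0))   # pass safely
```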
403、终端设备根据教学策略对行驶设备进行行驶指导。
该示例中,行驶设备在得到与行驶场景对应的教学策略后,将该教学策略发送至终端设备。这样,终端设备可以获取到该教学策略,并根据该教学策略对行驶设备进行行驶指导。换句话说,终端设备可以按照该教学策略中的指示,正确地操作该行驶设备,使得行驶设备按照正确的行驶轨迹进行行驶。
在本申请实施例中,通过环境数据得到目标行驶策略后,将该目标行驶策略和实际行驶数据进行差异比较,得到异常行驶数据。并确定出与该异常行驶数据对应的行驶场景,以及进一步确定出与行驶场景对应的教学策略。这样,驾驶员能够通过终端设备查看到教学策略后,按照教学策略的指示正确地操作该行驶设备。不仅结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,避免行车危险;而且还基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
图7示出了本申请实施例提供的驾驶辅助的方法的第二种流程示意图。如图7所示,该驾驶辅助的方法可以包括如下步骤:
701、行驶设备基于目标行驶策略和实际行驶数据确定异常行驶数据,其中,目标行驶策略是基于实际行驶数据驾驶行驶设备时对应的环境数据得到。
702、行驶设备向终端设备发送教学策略,教学策略与行驶场景对应,行驶场景与异常行驶数据对应。
应理解,本实施例中的步骤701-702与前述图4中步骤401-402的内容类似,具体可以参照步骤401-402所描述的内容进行理解,此处不做赘述。
703、行驶设备向终端设备发送第一视频,第一视频包括实际行驶数据、目标行驶策略以及与异常行驶数据对应的异常行驶行为。
在该示例中,行驶设备在确定出异常行驶数据后,还可以通过行为分析模型分析出该异常行驶数据产生时所对应的异常行驶行为。异常行驶行为包括不限于实线变道、超速行驶、高风险并线、未保持安全距离等等,本申请实施例中不做限定。
另外,所描述的行为分析模型是以确定异常行驶行为为训练目标,以发生异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。图8示出了本申请实施例提供的一种行为分析的网络示意图。如图8所示,首先,按照碰撞风险、交规风险和/或运动风险等角度,通过仿真或者路采等方式采集不同情况下,发生异常行驶行为时所对应的环境数据(如:位置、道路类型、车道类型、 限速、红路灯等)、实际行驶数据(如:实际行驶轨迹和实际控制指令)以及目标行驶策略(如:目标行驶轨迹和目标控制指令)。并且,对该环境数据、实际行驶数据以及目标行驶策略进行去除异常值、归一化或者独热编码等处理。然后,将这些异常行驶行为发生时所对应的环境数据、实际行驶数据以及目标行驶策略作为数据对。在训练的时候可以随机抽取一个batch的数据对进行训练,直到网络拟合的损失值满足要求,以此得到训练后的行为分析模型。
行驶设备可以将当前行驶过程中所采集到的环境数据、目标行驶策略和实际行驶数据输入到训练后的行为分析模型中,便可以得到当前行驶过程中相应的异常行驶行为。举例来说,以前述图5示出的场景为例,可以确定出的异常行驶行为是:实线变道和高风险并线。
需说明,所描述的行为分析模型可以是长短期记忆网络(long short-term memory,LSTM)等,本申请实施例不做限定。
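A compact PyTorch-style sketch of such a behaviour-analysis network and one training step follows. The feature dimension, the number of behaviour classes, the preprocessing and the hyper-parameters are assumptions made for illustration and are not values given in this application.

```python
import torch
import torch.nn as nn

class BehaviorAnalysisModel(nn.Module):
    # LSTM that consumes per-time-step features built from the environment data,
    # the target driving strategy and the actual driving data, and outputs logits
    # over abnormal-behaviour classes (e.g. solid-line lane change, speeding).
    def __init__(self, feature_dim=32, hidden_dim=64, num_behaviors=5):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_behaviors)

    def forward(self, x):                  # x: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

model = BehaviorAnalysisModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a randomly sampled batch of data pairs; in practice the
# features would be the cleaned, normalised and one-hot encoded sequences
# described above, and training continues until the loss meets the requirement.
features = torch.randn(16, 100, 32)
labels = torch.randint(0, 5, (16,))
loss = criterion(model(features), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```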
这样,行驶设备可以在记录有实际行驶数据的录像的基础上,自动添加目标行驶策略和异常行驶行为,以生成第一视频。也就是说,从第一视频中,既可以查看到行驶设备实际行驶的情况,也能够查看到正确的目标行驶轨迹和目标控制指令对应的控制曲线,还能够通过文字和/或语音等方式获知本行驶设备在实际行驶中所存在的异常行驶行为。
行驶设备在生成第一视频后,可以将该第一视频发送至终端设备。终端设备可以通过显示界面等方式将该第一视频显示给驾驶员查看,使得驾驶员能够清楚地了解异常行驶行为,及时地纠正不良的驾驶行为。
需说明,本申请实施例不限定步骤702和步骤703的先后执行顺序。在实际应用中,也可以先执行步骤703,后执行步骤702。或者,步骤702和步骤703也可以同步执行。
704、终端设备显示第一视频。
705、终端设备根据教学策略对行驶设备进行行驶指导。
应理解,本实施例中的步骤705与前述图4中步骤403的内容类似,具体可以参照步骤403所描述的内容进行理解,此处不做赘述。
在本申请实施例中,通过环境数据得到目标行驶策略后,将该目标行驶策略和实际行驶数据进行差异比较,得到异常行驶数据。并确定出与该异常行驶数据对应的行驶场景和异常行驶行为,进而生成包括有异常行驶行为的第一视频以及确定出与行驶场景对应的教学策略。这样,驾驶员通过终端设备能够查看到该第一视频中显示出的异常驾驶行为,并按照教学策略的指示正确地操作该行驶设备。一方面,不仅能够使驾驶员回看行驶过程中出现的异常行驶行为,而且还结合实际行驶过程的环境数据给出针对性的教学策略,指导驾驶员正确地操作行驶设备,纠正异常行驶行为,避免行车危险。另一方面,基于环境数据实时地调整目标行驶策略,避免了后续得到单一的教学策略,能够适应于复杂多变的行驶场景中,具备实际道路上行驶的可用性和实用性。
上述主要从方法的角度对本申请实施例提供的方案进行了介绍。可以理解的是,上述的驾驶辅助装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的功能,本申请能够 以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
从功能单元的角度,本申请可以根据上述方法实施例对驾驶辅助装置进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个功能单元中。上述集成的功能单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
比如,以采用集成的方式划分各个功能单元的情况下,图9示出了本申请实施例提供了驾驶辅助装置的一种结构示意图。所描述的驾驶辅助装置可以是行驶设备、自动驾驶装置等,本申请不做限定说明。该驾驶辅助装置可以包括:处理单元901、发送单元902。示例性地,该驾驶辅助装置还可以包括获取单元903。
其中,处理单元901,用于根据目标行驶策略和实际行驶数据确定异常行驶数据。所述目标行驶策略是基于所述实际行驶数据驾驶所述驾驶设备时对应的环境数据得到。具体实现方式请参考图4所示实施例中的步骤401、图7所示实施例中的步骤701的详细说明,此处不做赘述。
发送单元902,用于向终端设备发送教学策略,所述教学策略与行驶场景对应,所述行驶场景与所述异常行驶数据对应。所述教学策略用于对所述驾驶设备进行行驶指导。具体实现方式请参考图4所示实施例中的步骤402至步骤403、图7所示实施例中的步骤702、步骤705的详细说明,此处不做赘述。
在一种可能的实施方式中,所述发送单元902还用于向所述终端设备发送第一视频。所述第一视频包括所述实际行驶数据、所述目标行驶策略、以及与所述异常行驶数据对应的异常行驶行为。具体实现方式请参考图7所示实施例中的步骤703至步骤704的详细说明,此处不做赘述。
在另一种可能的实施方式中,所述目标行驶策略包括目标行驶轨迹和所述目标行驶轨迹中各个轨迹点对应的目标控制指令。所述实际行驶数据包括实际行驶轨迹和所述实际行驶轨迹中各个轨迹点对应的实际控制指令。所述处理单元901用于:计算第一时长内的第一数据和第二数据之间的相似度,并在相似度小于预设相似阈值时,确定所述第一时长内的第二数据为异常行驶数据。其中,所述第一数据为所述目标行驶轨迹和所述目标控制指令中随着行驶时间的变化而发生变化的数据。所述第二数据为所述实际行驶轨迹和所述实际控制指令中随着所述行驶时间的变化而发生改变的数据。所述第一时长为所述行驶时间中至少一组时长中的任一时长。具体实现方式请参考图4所示实施例中的步骤401、图7所示实施例中的步骤701的详细说明,此处不做赘述。
在另一种可能的实施方式中,所述目标行驶策略还包括目标行驶决策数据。所述处理单元901还用于基于所述目标行驶决策数据和所述环境数据确定所述行驶场景。具体实现方式请参考图4所示实施例中的步骤402、图7所示实施例中的步骤702的详细说明,此处不做赘述。
在另一种可能的实施方式中,所述处理单元901还用于:将所述环境数据、所述目标行驶策略和所述实际行驶数据作为行为分析模型的输入,以确定所述异常行驶行为。其中,所述行为分析模型是以确定出所述异常行驶行为为训练目标,以发生所述异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。具体实现方式请参考图7所示实施例中的步骤703的详细说明,此处不做赘述。
在另一种可能的实施方式中,所述获取单元903用于:获取第一距离和第二距离,以及获取第一安全距离和第二安全距离。其中,所述第一距离为所述驾驶设备在第一方向上与障碍物之间的距离。所述第二距离为所述驾驶设备在第二方向上与所述障碍物之间的距离,所述第一方向垂直于所述第二方向。所述第一安全距离为所述驾驶设备在所述第一方向上与所述障碍物之间的安全距离。所述第二安全距离为所述驾驶设备在所述第二方向上与所述障碍物之间的安全距离。所述处理单元901用于:根据所述第一距离和所述第一安全距离确定第一校验结果,以及基于所述第二距离和所述第二安全距离确定第二校验结果;基于所述第一校验结果和/或所述第二校验结果,指示所述驾驶设备执行安全操作指令。具体实现方式请参考图4所示实施例中的步骤402的详细说明,此处不做赘述。
在另一种可能的实施方式中,所述获取单元903用于:获取第一速度和第二速度。所述第一速度为所述驾驶设备在所述第一方向上的行驶速度,所述第二速度为所述驾驶设备在所述第二方向上的行驶速度。基于所述第一速度确定所述第一安全距离,以及基于所述第二速度确定所述第二安全距离。具体实现方式请参考图4所示实施例中的步骤402的详细说明,此处不做赘述。
需要说明的是,上述装置各模块/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其带来的技术效果与本申请方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
上面从模块化功能实体的角度对本申请实施例中的驾驶辅助装置。从实体设备角度来描述,上述应用驾驶辅助装置可以由一个实体设备实现,也可以由多个实体设备共同实现,还可以是一个实体设备内的一个逻辑功能单元,本申请实施例对此不做具体限定。
例如,上述驾驶辅助装置可以由图10中的通信设备来实现。图10为本申请实施例提供的通信设备的硬件结构示意图。该通信设备包括至少一个处理器1001,通信线路1007,存储器1003以及至少一个通信接口1004。
处理器1001可以是一个通用中央处理器(central processing unit,CPU),微处理器,特定应用集成电路(application-specific integrated circuit,服务器IC),或一个或多个用于控制本申请方案程序执行的集成电路。
通信线路1007可包括一通路,在上述组件之间传送信息。
通信接口1004,使用任何收发器一类的装置,用于与其他装置或通信网络通信,如以太网等。
存储器1003可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储装置,随机存取存储器(random access memory,RAM)或者可存储 信息和指令的其他类型的动态存储装置,也可以是电可擦可编程只读存储器(electrically erable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过通信线路1007与处理器相连接。存储器1003也可以和处理器1001集成在一起。
其中,存储器1003用于存储执行本申请方案的计算机执行指令,并由处理器1001来控制执行。处理器1001用于执行存储器1003中存储的计算机执行指令,从而实现本申请上述实施例提供的驾驶辅助的方法。
可选的,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
在具体实现中,作为一种实施例,处理器1001可以包括一个或多个CPU,例如图10中的CPU0和CPU1。
在具体实现中,作为一种实施例,通信设备可以包括多个处理器,例如图10中的处理器1001和处理器1002。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个装置、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
在具体实现中,作为一种实施例,通信设备还可以包括输出装置1005和输入装置1006。输出装置1005和处理器1001通信,可以以多种方式来显示信息。输入装置1006和处理器1001通信,可以以多种方式接收用户的输入。例如,输入装置1006可以是鼠标、触摸屏装置或传感装置等。
上述的通信设备可以是一个通用装置或者是一个专用装置。在具体实现中,通信设备可以是便携式电脑、移动终端等或有图10中类似结构的装置。本申请实施例不限定通信设备的类型。
需说明,图10中的处理器1001可以通过调用存储器1003中存储的计算机执行指令,使得驾驶辅助装置执行如图4、图7对应的方法实施例中的方法。
具体的,图9中的处理单元901的功能/实现过程可以通过图10中的处理器1001调用存储器1003中存储的计算机执行指令来实现。图9中的获取单元903、发送单元902的功能/实现过程可以通过图10中的通信接口1004来实现。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件 可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
该集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例该方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
上述实施例,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现,当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机执行指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如SSD))等。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (19)

  1. 一种驾驶辅助的方法,其特征在于,所述方法包括:
    基于目标行驶策略和实际行驶数据确定异常行驶数据,所述目标行驶策略是基于所述实际行驶数据驾驶行驶设备时对应的环境数据得到;
    向终端设备发送教学策略,所述教学策略与行驶场景对应,所述行驶场景与所述异常行驶数据对应,所述教学策略用于对所述行驶设备进行行驶指导。
  2. 根据权利要求1中所述的方法,其特征在于,所述方法还包括:
    向所述终端设备发送第一视频,所述第一视频包括所述实际行驶数据、所述目标行驶策略、以及与所述异常行驶数据对应的异常行驶行为。
  3. 根据权利要求1或2所述的方法,其特征在于,所述目标行驶策略包括目标行驶轨迹和所述目标行驶轨迹中各个轨迹点对应的目标控制指令,所述实际行驶数据包括实际行驶轨迹和所述实际行驶轨迹中各个轨迹点对应的实际控制指令;所述基于目标行驶策略和实际行驶数据确定异常行驶数据,包括:
    计算第一时长内的第一数据和第二数据之间的相似度,其中,所述第一数据为所述目标行驶轨迹和所述目标控制指令中随着行驶时间的变化而发生变化的数据,所述第二数据为所述实际行驶轨迹和所述实际控制指令中随着所述行驶时间的变化而发生改变的数据,所述第一时长为所述行驶时间中至少一组时长中的任一时长;
    在所述相似度小于预设相似阈值时,确定所述第一时长内的第二数据为异常行驶数据。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述目标行驶策略还包括目标行驶决策数据,所述方法还包括:
    基于所述目标行驶决策数据和所述环境数据确定所述行驶场景。
  5. 根据权利要求2-3中任一项所述的方法,其特征在于,所述方法还包括:
    将所述环境数据、所述目标行驶策略和所述实际行驶数据作为行为分析模型的输入,以确定所述异常行驶行为,其中,所述行为分析模型是以确定出所述异常行驶行为为训练目标,以发生所述异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    获取第一距离和第二距离,其中,所述第一距离为所述行驶设备在第一方向上与障碍物之间的距离,所述第二距离为所述行驶设备在第二方向上与所述障碍物之间的距离,所述第一方向垂直于所述第二方向;
    获取第一安全距离和第二安全距离,其中,所述第一安全距离为所述行驶设备在所述第一方向上与所述障碍物之间的安全距离,所述第二安全距离为所述行驶设备在所述第二方向上与所述障碍物之间的安全距离;
    基于所述第一距离和所述第一安全距离确定第一校验结果,以及基于所述第二距离和所述第二安全距离确定第二校验结果;
    基于所述第一校验结果和/或所述第二校验结果,指示所述行驶设备执行安全操作指令。
  7. 根据权利要求6所述的方法,其特征在于,所述获取第一安全距离和第二安全距离, 包括:
    获取第一速度和第二速度,所述第一速度为所述行驶设备在所述第一方向上的行驶速度,所述第二速度为所述行驶设备在所述第二方向上的行驶速度;
    基于所述第一速度确定所述第一安全距离,以及基于所述第二速度确定所述第二安全距离。
  8. 一种驾驶辅助装置,其特征在于,包括:
    处理单元,用于根据目标行驶策略和实际行驶数据确定异常行驶数据,所述目标行驶策略是基于所述实际行驶数据驾驶所述驾驶设备时对应的环境数据得到;
    发送单元,用于向终端设备发送教学策略,所述教学策略与行驶场景对应,所述行驶场景与所述异常行驶数据对应,所述教学策略用于对所述驾驶设备进行行驶指导。
  9. 根据权利要求8中所述的驾驶辅助装置,其特征在于,所述发送单元还用于:
    向所述终端设备发送第一视频,所述第一视频包括所述实际行驶数据、所述目标行驶策略、以及与所述异常行驶数据对应的异常行驶行为。
  10. 根据权利要求8或9所述的驾驶辅助装置,其特征在于,所述目标行驶策略包括目标行驶轨迹和所述目标行驶轨迹中各个轨迹点对应的目标控制指令,所述实际行驶数据包括实际行驶轨迹和所述实际行驶轨迹中各个轨迹点对应的实际控制指令;所述处理单元用于:
    计算第一时长内的第一数据和第二数据之间的相似度,其中,所述第一数据为所述目标行驶轨迹和所述目标控制指令中随着行驶时间的变化而发生变化的数据,所述第二数据为所述实际行驶轨迹和所述实际控制指令中随着所述行驶时间的变化而发生改变的数据,所述第一时长为所述行驶时间中至少一组时长中的任一时长;
    在所述相似度小于预设相似阈值时,确定所述第一时长内的第二数据为异常行驶数据。
  11. 根据权利要求8-10中任一项所述的驾驶辅助装置,其特征在于,所述目标行驶策略还包括目标行驶决策数据,所述处理单元还用于:
    基于所述目标行驶决策数据和所述环境数据确定所述行驶场景。
  12. 根据权利要求9-11中任一项所述的驾驶辅助装置,其特征在于,所述处理单元,还用于:
    将所述环境数据、所述目标行驶策略和所述实际行驶数据作为行为分析模型的输入,以确定所述异常行驶行为,其中,所述行为分析模型是以确定出所述异常行驶行为为训练目标,以发生所述异常行驶行为时所对应的环境数据、目标行驶策略和实际行驶数据为作为训练数据对初始模型进行训练后获取的模型。
  13. 根据权利要求8-12中任一项所述的驾驶辅助装置,其特征在于,所述驾驶辅助装置还包括获取单元;所述获取单元用于:
    获取第一距离和第二距离,其中,所述第一距离为所述驾驶设备在第一方向上与障碍物之间的距离,所述第二距离为所述驾驶设备在第二方向上与所述障碍物之间的距离,所述第一方向垂直于所述第二方向;
    获取第一安全距离和第二安全距离,其中,所述第一安全距离为所述驾驶设备在所述 第一方向上与所述障碍物之间的安全距离,所述第二安全距离为所述驾驶设备在所述第二方向上与所述障碍物之间的安全距离;
    所述处理单元用于:
    根据所述第一距离和所述第一安全距离确定第一校验结果,以及基于所述第二距离和所述第二安全距离确定第二校验结果;
    基于所述第一校验结果和/或所述第二校验结果,指示所述驾驶设备执行安全操作指令。
  14. 根据权利要求13所述的驾驶辅助装置,其特征在于,所述获取单元用于:
    获取第一速度和第二速度,所述第一速度为所述驾驶设备在所述第一方向上的行驶速度,所述第二速度为所述驾驶设备在所述第二方向上的行驶速度;
    基于所述第一速度确定所述第一安全距离,以及基于所述第二速度确定所述第二安全距离。
  15. 一种自动驾驶装置,其特征在于,所述自动驾驶装置包括处理器和存储器,所述处理器与所述存储器耦合;
    所述存储器,用于存储计算机可读指令;
    所述处理器,用于执行所述存储器中的计算机可读指令从而执行如权利要求1至7任一项所描述的方法。
  16. 一种行驶设备,其特征在于,所述行驶设备包括处理器和存储器,所述处理器与所述存储器耦合;
    所述存储器,用于存储计算机可读指令;
    所述处理器,用于执行所述存储器中的计算机可读指令从而执行如权利要求1至7任一项所描述的方法。
  17. 一种芯片,其特征在于,包括:处理器和通信接口,所述处理器通过所述通信接口与存储器耦合;
    所述存储器,用于存储计算机可读指令;
    所述处理器,用于执行所述存储器中的计算机可读指令从而执行如权利要求1至7任一项所描述的方法。
  18. 一种计算机可读存储介质,其特征在于,当指令在计算机上运行时,使得所述计算机执行如权利要求1至7中任一项所述的控制方法。
  19. 一种包含指令的计算机程序产品,其特征在于,当所述指令在计算机上运行时,使得计算机执行如权利要求1至7中任一项所述的控制方法。
PCT/CN2021/142923 2021-12-30 2021-12-30 一种驾驶辅助的方法以及相关设备 WO2023123172A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/142923 WO2023123172A1 (zh) 2021-12-30 2021-12-30 一种驾驶辅助的方法以及相关设备
CN202180017218.6A CN116686028A (zh) 2021-12-30 2021-12-30 一种驾驶辅助的方法以及相关设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/142923 WO2023123172A1 (zh) 2021-12-30 2021-12-30 一种驾驶辅助的方法以及相关设备

Publications (1)

Publication Number Publication Date
WO2023123172A1 (zh)

Family

ID=86997086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/142923 WO2023123172A1 (zh) 2021-12-30 2021-12-30 一种驾驶辅助的方法以及相关设备

Country Status (2)

Country Link
CN (1) CN116686028A (zh)
WO (1) WO2023123172A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103235933A (zh) * 2013-04-15 2013-08-07 东南大学 一种基于隐马尔科夫模型的车辆异常行为检测方法
CN205131085U (zh) * 2015-11-16 2016-04-06 惠州市物联微电子有限公司 一种基于车联网的练车系统
US10832593B1 (en) * 2018-01-25 2020-11-10 BlueOwl, LLC System and method of facilitating driving behavior modification through driving challenges
CN112868022A (zh) * 2018-10-16 2021-05-28 法弗人工智能有限公司 自动驾驶车辆的驾驶场景
WO2020169052A1 (en) * 2019-02-21 2020-08-27 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for driving condition identification
CN109949611A (zh) * 2019-03-28 2019-06-28 百度在线网络技术(北京)有限公司 无人车的变道方法、装置及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116700293A (zh) * 2023-07-19 2023-09-05 上海联适导航技术股份有限公司 农机车辆的自动驾驶系统的调试方法、装置和农机车辆
CN116700293B (zh) * 2023-07-19 2024-03-29 上海联适导航技术股份有限公司 农机车辆的自动驾驶系统的调试方法、装置和农机车辆
CN117022312A (zh) * 2023-10-09 2023-11-10 广州市德赛西威智慧交通技术有限公司 基于行车轨迹的驾驶错误智能提醒方法及装置
CN117022312B (zh) * 2023-10-09 2023-12-29 广州市德赛西威智慧交通技术有限公司 基于行车轨迹的驾驶错误智能提醒方法及装置

Also Published As

Publication number Publication date
CN116686028A (zh) 2023-09-01

Similar Documents

Publication Publication Date Title
US11631200B2 (en) Prediction on top-down scenes based on action data
JP6602352B2 (ja) 自律走行車用の計画フィードバックに基づく決定改善システム
US20200265710A1 (en) Travelling track prediction method and device for vehicle
US11320826B2 (en) Operation of a vehicle using motion planning with machine learning
Ma et al. Artificial intelligence applications in the development of autonomous vehicles: A survey
RU2762786C1 (ru) Планирование траектории
CN111123933B (zh) 车辆轨迹规划的方法、装置、智能驾驶域控制器和智能车
US11545033B2 (en) Evaluation framework for predicted trajectories in autonomous driving vehicle traffic prediction
US11854212B2 (en) Traffic light detection system for vehicle
WO2021134172A1 (zh) 一种轨迹预测方法及相关设备
Elallid et al. A comprehensive survey on the application of deep and reinforcement learning approaches in autonomous driving
CN113439247B (zh) 自主载具的智能体优先级划分
US20190308620A1 (en) Feature-based prediction
CN109697875B (zh) 规划行驶轨迹的方法及装置
WO2022007655A1 (zh) 一种自动换道方法、装置、设备及存储介质
CN109426256A (zh) 自动驾驶车辆的基于驾驶员意图的车道辅助系统
US10635117B2 (en) Traffic navigation for a lead vehicle and associated following vehicles
WO2023123172A1 (zh) 一种驾驶辅助的方法以及相关设备
WO2020040943A2 (en) Using divergence to conduct log-based simulations
Zhao et al. A cooperative vehicle-infrastructure based urban driving environment perception method using a DS theory-based credibility map
KR20210038852A (ko) 조기 경보 방법, 장치, 전자 기기, 컴퓨터 판독 가능 저장 매체 및 컴퓨터 프로그램
CN112512887B (zh) 一种行驶决策选择方法以及装置
US11321211B1 (en) Metric back-propagation for subsystem performance evaluation
CN114061581A (zh) 通过相互重要性对自动驾驶车辆附近的智能体排名
US20230399008A1 (en) Multistatic radar point cloud formation using a sensor waveform encoding schema

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180017218.6

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21969538

Country of ref document: EP

Kind code of ref document: A1