WO2024087712A1 - Target behavior prediction method, intelligent device and vehicle - Google Patents

Target behavior prediction method, intelligent device and vehicle

Info

Publication number
WO2024087712A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
target
target object
priority
driving
Prior art date
Application number
PCT/CN2023/104261
Other languages
French (fr)
Chinese (zh)
Inventor
胡少伟 (Hu Shaowei)
李云龙 (Li Yunlong)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024087712A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles

Definitions

  • the present application relates to the field of artificial intelligence, and in particular to a target behavior prediction method, intelligent device and vehicle.
  • Artificial intelligence (AI) uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning and decision-making.
  • Autonomous driving is a mainstream application in the field of artificial intelligence.
  • Autonomous driving technology relies on the collaborative efforts of computer vision, radar, monitoring devices, and global positioning systems to enable smart devices (such as autonomous vehicles, robots, or other autonomous driving devices) to achieve automatic driving without the need for active human operation.
  • In autonomous driving scenarios, smart devices usually need to perceive information about the surrounding environment in order to predict the behavior of target objects (such as pedestrians and vehicles) in that environment, that is, to predict their future motion trajectories. This allows the smart device to respond and perform corresponding operations in time, for example planning its future driving path to avoid collisions, or screening possible interactive objects to improve interaction efficiency.
  • However, smart devices cannot predict the behavior of all target objects in time, resulting in a large prediction delay that affects response speed.
  • the present application provides a target behavior prediction method, an intelligent device, and a vehicle, which can use prediction models of different computational complexity to predict the future motion trajectories of target objects with different collision risk levels in the surrounding environment of the intelligent device. This achieves behavior prediction for objects in the surrounding environment while reducing the consumption of computing resources, lowering prediction latency, and improving response efficiency.
  • the present application provides a target behavior prediction method that can be applied to smart devices, and the target behavior prediction method includes: performing a collision analysis based on the motion state of the target object to determine the risk coefficient of the target object, and the risk coefficient is used to indicate the possibility of a collision between the target object and the smart device; determining the priority of the target object based on the risk coefficient; determining a target prediction model that matches the priority based on the correspondence between the preset priority level and the prediction model; and predicting the motion trajectory of the target object based on the target prediction model.
  • the intelligent device can perform collision analysis on different objects in the surrounding environment of the intelligent device according to the motion states of different objects in the surrounding environment to determine the collision risk level of each object. Then the intelligent device can divide the objects in the surrounding environment into different priorities according to the collision risk level, and for objects of different priorities, the intelligent device can use different prediction models to predict the future motion trajectory. For example, for high-priority objects, a refined prediction model with high computational complexity can be used to predict the future motion trajectory, while for low-priority objects, a simplified prediction model with low computational complexity can be used to predict the future motion trajectory.
  • the hierarchical prediction of the behavior trajectories of different objects in the surrounding environment by the intelligent device is realized.
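  • As a minimal illustration of this flow (not part of the application; the function names and the callable-based structure below are assumptions), a Python sketch could look as follows, with concrete sketches of the individual steps given further below:

      def predict_all(targets, risk_fn, assign_priorities_fn, model_by_priority):
          """Hierarchical prediction: compute each target's risk coefficient, map the
          risk coefficients to priorities, pick the prediction model that the preset
          correspondence assigns to each priority, and predict each trajectory."""
          # targets: dict target_id -> motion state (e.g. position/velocity)
          risks = {tid: risk_fn(state) for tid, state in targets.items()}   # collision analysis
          priorities = assign_priorities_fn(risks)                          # risk coefficient -> priority
          return {tid: model_by_priority[priorities[tid]](targets[tid])     # model chosen per priority
                  for tid in targets}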
  • the above-mentioned collision analysis based on the motion state of the target object to determine the risk coefficient of the target object may include: obtaining multiple driving positions on the driving route of the smart device; determining the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object, wherein the risk coefficient corresponding to each driving position is used to indicate the possibility of collision between the target object and the smart device at that driving position; and determining the risk coefficient of the target object according to the minimum value of the risk coefficients corresponding to the driving positions.
  • the smart device can select multiple driving positions from the vehicle's driving route as evaluation points for possible future collisions between the vehicle and the target objects in the surrounding environment. The smart device can then comprehensively consider the probability of collision at multiple evaluation points and accurately determine the collision risk level of the target object.
  • the above-mentioned determination of the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object may include: determining the first distance between the target object and the target driving position after a specified time period according to the motion state of the target object, wherein the target driving position is any one of the multiple driving positions; determining the second distance between the smart device and the target driving position after a specified time period; obtaining the sum of the first distance and the second distance as the risk coefficient corresponding to the target driving position.
  • the smart device can accurately analyze the possibility of collision at the evaluation point according to the remaining collision distance of the target object and the self-vehicle from the evaluation point after driving for a period of time. It can be understood that the smaller the remaining collision distance is, the more likely it is that the target object and the self-vehicle will collide near the evaluation point after driving for a period of time.
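  • A possible Python sketch of this remaining-collision-distance computation is given below (the constant-velocity motion model and the concrete time period dt are assumptions for illustration):

      import math

      def _advance(pos, vel, dt):
          # position of a point after dt seconds, assuming constant velocity
          return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

      def _dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def risk_coefficient(target_pos, target_vel, ego_pos, ego_vel, driving_positions, dt=2.0):
          """For every sampled driving position, sum the distances of the target object and
          the ego vehicle to that position after dt seconds (first distance + second distance);
          the target's risk coefficient is the minimum of these sums, and a smaller value
          indicates a higher likelihood of collision."""
          target_future = _advance(target_pos, target_vel, dt)
          ego_future = _advance(ego_pos, ego_vel, dt)
          return min(_dist(target_future, p) + _dist(ego_future, p) for p in driving_positions)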
  • the above-mentioned acquisition of multiple driving positions on the driving route of the smart device may include: acquiring the driving route of the smart device within a preset time period in the future; acquiring a driving position on the driving route at every specified distance interval to obtain multiple driving positions on the driving route.
  • the smart device can obtain multiple evaluation position points where collisions may occur in the future by sampling the target driving route of the vehicle at a fixed distance.
  • the above-mentioned acquisition of multiple driving positions on the driving route of the smart device may include: acquiring the driving route of the smart device within a preset time period in the future; acquiring a driving position on the driving route at every specified time interval to obtain multiple driving positions on the driving route.
  • the smart device can obtain multiple evaluation position points where collisions may occur in the future by sampling the target driving route of the vehicle at fixed time intervals.
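  • The two sampling strategies could be sketched as follows (the waypoint formats and the example intervals are assumptions, not values fixed by the application):

      import math

      def sample_by_distance(route_points, interval_m=5.0):
          """Pick the first waypoint at or beyond every interval_m metres of accumulated
          arc length along the planned route (route_points: ordered (x, y) waypoints)."""
          samples, travelled, next_mark = [route_points[0]], 0.0, interval_m
          for prev, cur in zip(route_points, route_points[1:]):
              travelled += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
              if travelled >= next_mark:
                  samples.append(cur)
                  next_mark = travelled + interval_m
          return samples

      def sample_by_time(timed_route, interval_s=0.5):
          """Pick the first waypoint at or beyond every interval_s seconds of the planned
          route (timed_route: ordered (t, (x, y)) pairs over the preset future period)."""
          samples, next_mark = [], 0.0
          for t, pos in timed_route:
              if t >= next_mark:
                  samples.append(pos)
                  next_mark = t + interval_s
          return samples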
  • the above-mentioned collision analysis based on the motion state of the target object and determining the risk factor of the target object may include: determining the collision time of the target object and the smart device according to the motion state of the target object; and determining the risk factor of the target object according to the collision time.
  • the smart device can also accurately determine the collision risk degree of the target object according to the time when the vehicle may collide with the target object in the surrounding environment in the future.
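  • A sketch of a time-to-collision based risk coefficient is shown below (the closest-approach formulation under constant velocities is one possible interpretation; the application does not fix a particular formula):

      import math

      def time_to_collision(ego_pos, ego_vel, target_pos, target_vel):
          """Time until the target object and the ego vehicle are closest, assuming both
          keep their current velocities; returns math.inf if the gap is not shrinking.
          A smaller value can be used directly as a higher-risk coefficient."""
          rx, ry = target_pos[0] - ego_pos[0], target_pos[1] - ego_pos[1]   # relative position
          vx, vy = target_vel[0] - ego_vel[0], target_vel[1] - ego_vel[1]   # relative velocity
          closing = -(rx * vx + ry * vy)        # positive when the distance is decreasing
          speed_sq = vx * vx + vy * vy
          if closing <= 0.0 or speed_sq == 0.0:
              return math.inf
          return closing / speed_sq             # time of closest approach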
  • the smart device can prioritize the targets according to their importance or urgency based on the degree of collision risk of different objects in the surrounding environment.
  • the targets that are more likely to collide are assigned a higher priority, and the targets that are less likely to collide are assigned a lower priority.
  • determining the priority of each target object according to the sorted multiple targets may include: dividing the sorted multiple targets into different priorities according to a preset number of priorities to obtain the priority of each target object.
  • the smart device does not need to limit the number of priorities to be divided. When more or fewer priorities are required, the smart device can simply adjust the priority-count parameter. Therefore, the priority levels can be extended without changing the collision risk analysis method.
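  • Such sorting and grouping could be sketched as follows (equal-sized groups are an assumption; the application does not fix how the sorted list is divided):

      def assign_priorities(risk_by_target, num_priorities=3):
          """Sort targets from most to least dangerous (smallest risk coefficient first)
          and split the sorted list into num_priorities roughly equal groups;
          priority 1 is the most urgent group."""
          ordered = sorted(risk_by_target, key=risk_by_target.get)       # most dangerous first
          group_size = max(1, -(-len(ordered) // num_priorities))        # ceiling division
          return {tid: min(i // group_size + 1, num_priorities)
                  for i, tid in enumerate(ordered)}

      # assign_priorities({"ped_1": 3.2, "car_7": 15.0, "car_9": 40.5}, num_priorities=2)
      # -> {"ped_1": 1, "car_7": 1, "car_9": 2}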
  • the target behavior prediction method may further include: displaying a first interface, the first interface being used to input a priority number; and in response to a user input operation, obtaining the priority number input by the user as a preset priority number. In this way, the user can customize the priority number as needed, thereby improving the user experience.
  • the target behavior prediction method may further include: determining the color corresponding to each target object according to the preset correspondence between the priority level and the color and the priority of each target object; and displaying multiple targets based on the color corresponding to each target object.
  • the smart device can use different colors to display targets of different priorities on the display screen, so that the user can directly see the priority grading effect of the smart device on the targets in the surrounding environment.
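  • The priority-to-color correspondence used for display could be as simple as the following (the concrete colors are assumptions; the application only requires that different priorities are rendered in different colors):

      PRIORITY_COLORS = {1: "red", 2: "orange", 3: "green"}

      def color_for(priority, default="grey"):
          return PRIORITY_COLORS.get(priority, default)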
  • the correspondence between the above-mentioned preset priority levels and prediction models may include: the first priority corresponds to the first prediction model, the second priority corresponds to the second prediction model, the first priority is higher than the second priority, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model.
  • the smart device can use prediction models with different computational complexities to predict the future motion trajectory of target objects of different priorities in the surrounding environment. While realizing the prediction of the behavior of objects in the surrounding environment, it also reduces the consumption of computing resources and reduces the prediction delay.
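  • One way the priority/model correspondence of different computational complexity could be realised is sketched below; the two concrete models (coarse constant-velocity extrapolation versus a finer-grained constant-acceleration rollout) are placeholders for whatever simplified and refined models an implementation actually uses, e.g. a learned trajectory-prediction network for the refined case:

      def simplified_model(state, horizon_s=3.0, dt=0.5):
          """Low computational complexity: constant-velocity extrapolation on a coarse grid."""
          (x, y), (vx, vy) = state["pos"], state["vel"]
          return [(x + vx * k * dt, y + vy * k * dt)
                  for k in range(1, round(horizon_s / dt) + 1)]

      def refined_model(state, horizon_s=3.0, dt=0.1):
          """Higher computational complexity: finer time grid with acceleration taken into account."""
          (x, y), (vx, vy) = state["pos"], state["vel"]
          ax, ay = state.get("acc", (0.0, 0.0))
          return [(x + vx * k * dt + 0.5 * ax * (k * dt) ** 2,
                   y + vy * k * dt + 0.5 * ay * (k * dt) ** 2)
                  for k in range(1, round(horizon_s / dt) + 1)]

      # first (higher) priority -> first (refined) model, second priority -> second (simplified) model
      MODEL_BY_PRIORITY = {1: refined_model, 2: simplified_model}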
  • the present application provides a smart device, including: an analysis unit, a rating unit, a matching unit, and a prediction unit.
  • the analysis unit is used to perform collision analysis based on the motion state of the target object and determine the risk coefficient of the target object, and the risk coefficient is used to indicate the possibility of collision between the target object and the smart device;
  • the rating unit is used to determine the priority of the target object based on the risk coefficient;
  • the matching unit is used to determine a target prediction model that matches the priority based on the correspondence between the preset priority level and the prediction model; the prediction unit is used to predict the motion trajectory of the target object based on the target prediction model.
  • the above analysis unit can be used to: obtain multiple driving positions on the driving route of the smart device; determine the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object, and the risk coefficient corresponding to each driving position is used to indicate the possibility of collision between the target object and the smart device at each driving position; determine the risk coefficient of the target object according to the minimum value of the risk coefficient corresponding to each driving position.
  • the smart device can select multiple driving positions from the driving route of the vehicle as evaluation points for possible future collisions between the vehicle and the target objects in the surrounding environment, so that the smart device can comprehensively consider the possibility of collision at multiple evaluation points and accurately determine the collision risk level of the target object.
  • the above-mentioned analysis unit can be used to: determine the first distance between the target object and the target driving position after a specified time period according to the motion state of the target object, wherein the target driving position is any one of a plurality of driving positions; determine the second distance between the smart device and the target driving position after a specified time period; obtain the sum of the first distance and the second distance as the risk coefficient corresponding to the target driving position.
  • the smart device can accurately analyze the possibility of collision at the evaluation point according to the remaining collision distance of the target object and the self-vehicle from the evaluation point after driving for a period of time. It can be understood that the smaller the remaining collision distance, the more likely it is that the target object and the self-vehicle will collide near the evaluation point after driving for a period of time.
  • the analysis unit can be used to: obtain the driving route of the smart device within a preset time period in the future; obtain a driving position on the driving route at each specified distance interval to obtain multiple driving positions on the driving route.
  • the smart device can obtain multiple evaluation position points where collisions may occur in the future by sampling the target driving route of the vehicle at a fixed distance.
  • the analysis unit can be used to: obtain the driving route of the smart device within a preset time period in the future; obtain a driving position on the driving route at each specified time interval to obtain multiple driving positions on the driving route.
  • the smart device can obtain multiple evaluation position points where collisions may occur in the future by regularly sampling the target driving route of the vehicle.
  • the above analysis unit can also be used to: determine the collision time of the target object and the smart device according to the motion state of the target object; and determine the risk factor of the target object according to the collision time.
  • the smart device can also accurately determine the collision risk degree of the target object according to the time when the vehicle may collide with the target object in the surrounding environment in the future.
  • the rating unit can be used to: sort the multiple targets according to the risk factor of each target among the multiple targets; and determine the priority of each target according to the sorted multiple targets.
  • the smart device can prioritize the targets according to their importance or urgency based on the degree of collision risk of different objects in the surrounding environment. The targets that are more likely to collide will be assigned a higher priority, and the targets that are less likely to collide will be assigned a lower priority.
  • the rating unit can be used to classify the sorted multiple targets into different priorities according to the preset number of priorities to obtain the priority of each target.
  • the smart device does not need to limit the number of priorities to be divided.
  • the smart device can adjust the parameter size of the number of priorities. Therefore, the expansion of priority levels can be achieved without changing the collision risk analysis method.
  • the smart device may further include: a display unit and an acquisition unit.
  • the display unit is used to display a first interface, and the first interface is used to input a priority number;
  • the acquisition unit is used to respond to the user's input operation and obtain the priority number input by the user as the preset priority number. In this way, the user can customize the priority number according to the needs, thereby improving the user's experience.
  • the smart device may further include: a color picking unit and a display unit.
  • the color picking unit is used to determine the color corresponding to each target object according to the correspondence between the preset priority level and the color and the priority of each target object; the display unit is used to display multiple targets based on the color corresponding to each target object.
  • the smart device can use different colors to display targets of different priorities on the display screen, so that the user can directly see the priority grading effect of the smart device on the targets in the surrounding environment.
  • the correspondence between the preset priority levels and the prediction models may include: the first priority level corresponds to the first prediction model, the second priority level corresponds to the second prediction model, the first priority level is higher than the second priority level, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model.
  • the smart device can use prediction models with different computational complexities to predict the future motion trajectory of target objects with different priorities in the surrounding environment. While realizing behavior prediction for objects in the surrounding environment, it also reduces the consumption of computing resources and reduces the prediction delay.
  • the present application provides an intelligent device, comprising one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the intelligent device executes the target behavior prediction method in any possible implementation of the first aspect.
  • the present application provides a vehicle-mounted device, comprising one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the vehicle-mounted device executes the target behavior prediction method in any possible implementation of the first aspect.
  • the present application provides a vehicle, the vehicle comprising the vehicle-mounted device of the fourth aspect of the present application.
  • the vehicle can be used to implement the target behavior prediction method in any possible implementation of the first aspect.
  • the present application provides a robot, comprising one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the robot executes the target behavior prediction method in any possible implementation of the first aspect.
  • the present application provides a target behavior prediction device, which is included in an intelligent device, a robot, a vehicle, or an on-board device, and has the function of realizing the behavior of the intelligent device in any of the methods in the first aspect and the possible implementation of the first aspect.
  • the function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules or units corresponding to the above functions.
  • the present application provides a chip system, which is applied to a smart device.
  • the chip system includes one or more interface circuits and one or more processors.
  • the interface circuit and the processor are interconnected by a line.
  • the interface circuit is used to receive a signal from a memory of the smart device and send the signal to the processor, where the signal includes a computer instruction stored in the memory.
  • the processor executes the computer instruction
  • the smart device executes the target behavior prediction method in any possible implementation of the first aspect above.
  • the present application provides a computer storage medium comprising computer instructions, which, when executed on a smart device, enables the smart device to execute a target behavior prediction method in any possible implementation of the first aspect.
  • the present application provides a computer program product, which, when executed on a computer, enables the computer to execute the target behavior prediction method in any possible implementation of the first aspect.
  • the beneficial effects that can be achieved by the intelligent device of the second aspect and any possible implementation thereof, the intelligent device of the third aspect, the vehicle-mounted device of the fourth aspect, the vehicle of the fifth aspect, the robot of the sixth aspect, the target behavior prediction device of the seventh aspect, the chip system of the eighth aspect, the computer storage medium of the ninth aspect, and the computer program product of the tenth aspect can be referred to the beneficial effects of the first aspect and any possible implementation thereof, and will not be repeated here.
  • FIG. 1 is a first structural schematic diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 2 is a second structural schematic diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the structure of a computer system provided in an embodiment of the present application.
  • FIG. 4 is a first schematic diagram of an application of a cloud-side commanded autonomous driving vehicle provided in an embodiment of the present application.
  • FIG. 5 is a second schematic diagram of an application of a cloud-side commanded autonomous driving vehicle provided in an embodiment of the present application.
  • FIG. 6 is a method flow chart of a target behavior prediction method provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a collision analysis provided in an embodiment of the present application.
  • FIG. 8 is a first schematic diagram of an interface of a vehicle-mounted display screen provided in an embodiment of the present application.
  • FIG. 9 is a second schematic diagram of an interface of a vehicle-mounted display screen provided in an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the features. It should be understood that in the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate that only A exists, only B exists, or both A and B exist, where A and B may be singular or plural.
  • the scheme of the embodiment of the present application can be applied to smart devices.
  • the smart device can be an autonomous driving vehicle, a robot, or other electronic device with autonomous driving capability, and the embodiment of the present application does not limit this.
  • the scheme of the embodiment of the present application can also be applied to other devices (such as cloud servers, mobile phone terminals, etc.) with the function of controlling the above-mentioned smart devices.
  • Smart devices or other devices can implement the target behavior prediction method provided by the embodiments of the present application through the components (including hardware and software) they contain. Based on the data collected by the sensors, they perform collision risk detection on the target objects in the surrounding environment of the smart device and determine a priority for each target object according to its degree of collision risk. The smart device or other device can then use a refined, high-computational-complexity prediction model to predict the future motion trajectories of high-priority targets, and use a simplified, low-computational-complexity prediction model to predict the future motion trajectories of low-priority targets.
  • the target behavior prediction method provided in the embodiment of the present application does not use a refined, high-complexity prediction model to predict the future motion trajectory of all targets in the surrounding environment of the smart device, but divides the targets in the surrounding environment of the smart device into different priorities according to the degree of collision risk. Therefore, the future motion trajectory of low-priority targets can be predicted using a simplified, low-complexity prediction model, which greatly reduces the consumption of computing resources and reduces the prediction delay.
  • The following takes the case where the smart device is an autonomous driving vehicle (hereinafter referred to as the vehicle) as an example to schematically illustrate the solution provided in the embodiments of the present application.
  • FIG. 1 shows a functional block diagram of a vehicle 100 provided in an embodiment of the present application.
  • the vehicle 100 may include various devices, components, etc. disposed in the vehicle 100 and/or the body of the vehicle 100.
  • the devices and components disposed in the vehicle 100 may include, but are not limited to, an automatic driving system and an automatic driving function application. It is understood that an automatic driving system is usually disposed in a vehicle with a certain automatic driving capability.
  • the vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power source 110, a computer system 112, a hierarchical prediction system 114, and a user interface 116.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each subsystem and element of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components for providing powered motion for the vehicle 100.
  • the travel system 102 may include an engine, an energy source, a transmission, and wheels/tires.
  • the engine may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine.
  • the engine can convert the energy source into mechanical energy.
  • energy sources include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source can also provide energy for other systems of the vehicle 100.
  • the transmission can transmit mechanical power from the engine to the wheels.
  • the transmission can include a gearbox, a differential, and a drive shaft.
  • the transmission can also include other devices, such as a clutch.
  • the drive shaft can include one or more shafts that can be coupled to one or more wheels.
  • the sensor system 104 may include several sensors for sensing information about the surrounding environment of the vehicle 100.
  • the sensor system 104 may include a positioning system (the positioning system may be a global positioning system (GPS) system, or a Beidou system or other positioning system), an inertial measurement unit (IMU), a radar, a laser rangefinder, and a camera.
  • the sensor system 104 may also include sensors that monitor the internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (such as position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous driving of the vehicle 100.
  • the positioning system may be used to estimate the geographic location of the vehicle 100.
  • the IMU may be used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration.
  • the IMU may be a combination of an accelerometer and a gyroscope.
  • the radar can use radio signals to sense objects in the surrounding environment of the vehicle 100, such as pedestrians, cyclists (i.e., bicyclists), motorcycles, other vehicles, and other types of obstacles.
  • in addition to sensing objects, the radar can also be used to sense one or more of the object's speed, position and direction of travel.
  • a laser rangefinder may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • a laser rangefinder may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
  • the camera may be used to capture multiple images of the surroundings of the vehicle 100.
  • the camera may be a still camera or a video camera.
  • the control system 106 is for controlling the operation of the vehicle 100 and its components.
  • the control system 106 may include various components, such as a steering system, a throttle, a brake unit, a computer vision system, a route control system, and an obstacle avoidance system.
  • the steering system can be operated to adjust the forward direction of the vehicle 100.
  • the steering system can be a steering wheel system.
  • the throttle can be used to control the operating speed of the engine and thus control the speed of the vehicle 100.
  • the brake unit can be used to control the deceleration of the vehicle 100.
  • the computer vision system can process and analyze the images captured by the camera to identify various types of objects and/or features in the environment surrounding the vehicle 100.
  • the computer vision system can use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system can be used to map the environment, track objects, estimate the speed of objects, etc.
  • the route control system is used to determine the driving route of the vehicle 100.
  • the route control system can combine data from sensors, GPS, and one or more predetermined maps to determine the driving route for the vehicle 100.
  • the obstacle avoidance system is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the vehicle 100 .
  • the control system 106 may include additional or alternative components other than the above components, or may omit some of the above components.
  • the vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral device 108.
  • the peripheral device 108 may include a wireless communication system, an onboard computer, a microphone, and/or a speaker.
  • the peripheral device 108 provides a means for the user of the vehicle 100 to interact with the user interface 116.
  • the onboard computer may provide information to the user of the vehicle 100.
  • the user interface 116 may also operate the onboard computer to receive the user's input.
  • the onboard computer may be operated via a touch screen.
  • the peripheral device 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • a microphone may receive audio (e.g., voice commands or other audio input) input by a user of the vehicle 100.
  • a speaker may output audio to the user of the vehicle 100.
  • the computer system 112 may include at least one processor that executes instructions stored in a non-transitory computer-readable medium such as a data storage device.
  • the computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor may include one or more processing units, for example, the processor may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc.
  • different processing units may be independent devices or integrated in one or more processors.
  • the memory may contain instructions (e.g., program logic) that can be executed by the processor to perform various functions of the vehicle 100, including those described above.
  • the memory may also contain additional instructions, including instructions for one or more of the travel system 102, the sensor system 104, the control system 106, the peripheral device 108, and the hierarchical prediction system 114 to send data, receive data from it, interact with it, and/or control it.
  • the memory may also store data, such as road maps, route information, vehicle data such as the vehicle's location, direction, speed, and other information, such as the location, orientation, speed, etc. of various objects in the vehicle's surroundings. This information can be used by the vehicle 100, the computer system 112, and the hierarchical prediction system 114 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the memory can obtain information about objects in the surrounding environment obtained by the vehicle 100 based on the sensors in the sensor system 104, such as the location of obstacles such as other vehicles and pedestrians, the distance between the obstacles and the vehicle 100, and other information.
  • the memory can also obtain environmental information from the sensor system 104 or other components of the vehicle 100.
  • the environmental information can be, for example, whether there are green belts, lanes, pedestrians, etc. near the current environment of the vehicle, or whether there are green belts, lanes, pedestrians, etc. near the current environment calculated by the vehicle through a machine learning algorithm.
  • the memory can also store the status information of the vehicle itself, as well as the status information of the target objects (pedestrians, other vehicles, etc.) that interact with the vehicle, wherein the status information of the vehicle includes but is not limited to the location, speed, acceleration, heading angle, etc. of the vehicle.
  • the processor may also execute the target behavior prediction method of the embodiment of the present application to reduce computing resources and reduce prediction delay.
  • the processor may obtain the above information from the memory and predict the target behavior based on the environmental information of the vehicle environment.
  • the priority level of the target object is determined based on the information of the vehicle's own state information, the state information of the target object, etc., so as to determine the prediction model for predicting the motion trajectory of the target object based on the priority level, thereby controlling the vehicle 100 to use a refined prediction model with high computational complexity to predict the future motion trajectory of high-priority targets, and to use a simplified prediction model with low computational complexity to predict the future motion trajectory of low-priority targets.
  • the specific target behavior prediction method can be referred to the following introduction.
  • the computer system 112 may control functions of the vehicle 100 based on input received from various subsystems (e.g., the travel system 102, the sensor system 104, the control system 106, and the hierarchical prediction system 114) and from the user interface 116.
  • the computer system 112 may utilize input from the control system 106 in order to control the steering unit to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system.
  • the computer system 112 may provide control over many aspects of the vehicle 100 and its subsystems.
  • the hierarchical prediction system 114 can determine the degree of collision risk of pedestrians, cyclists, motorcycles, other vehicles and other targets in the surrounding environment of the vehicle 100 based on the vehicle data input from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and the motion state of the targets, and classify them into different priorities, so as to use different prediction models to predict the future motion trajectories of targets of different priorities, thereby realizing the hierarchical prediction function of the vehicle 100.
  • the structure shown in FIG. 1 is for illustration only and does not limit the structure of the vehicle in the embodiment of the present application.
  • the vehicle 100 may include more or fewer components than shown in the figure, or combine certain components, or split certain components, or arrange components differently, or have different configurations with the same functions as shown in FIG. 1 or more functions than shown in FIG. 1.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • An autonomous vehicle traveling on a road can determine an adjustment instruction for the current speed based on objects in its surrounding environment.
  • the objects in the surrounding environment of vehicle 100 can be traffic control devices, other types of static objects such as green belts, or various types of dynamic objects such as pedestrians, cyclists, motorcycles, other vehicles, etc.
  • vehicle 100 can consider each object in the surrounding environment independently and determine the speed adjustment instruction of vehicle 100 based on the respective characteristics of the object, such as its current speed, acceleration, distance from the vehicle, etc.
  • the vehicle 100 as an autonomous vehicle or a computer device associated therewith (such as the computer system 112, a computer vision system, and a memory) can evaluate the risk factor of a collision between the identified object and the vehicle 100 based on the characteristics of the identified object and the driving route of the vehicle 100 in the future, and then divide the identified objects into different priorities based on the risk factor obtained from the evaluation, so that different prediction models can be used to predict the future motion trajectories of objects of different priorities, thereby achieving the purpose of reducing computing resources and reducing prediction delays.
  • the vehicle 100 can adjust its driving strategy based on the predicted future motion trajectory of the object.
  • the autonomous vehicle can determine what stable state the vehicle needs to adjust to (e.g., accelerate, decelerate, turn, or stop, etc.) based on the predicted future motion trajectory of the object.
  • other factors can also be considered to determine the speed adjustment instructions of the vehicle 100, such as the lateral position of the vehicle 100 in the road, the curvature of the road, the proximity of static and dynamic objects, etc.
  • the computer device may also provide instructions to modify the steering angle of vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains a safe lateral and longitudinal distance between the autonomous vehicle and nearby objects (e.g., cars in adjacent lanes).
  • the vehicle 100 may be a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a lawn mower, an entertainment vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, and a cart, etc., and the present application embodiment does not make any special restrictions.
  • the vehicle 100 may also be a smart car or a smart robot with autonomous driving capability in the field of smart home.
  • the autonomous driving vehicle may further include a hardware structure and/or a software module to implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the vehicle 200 may include the following modules:
  • The environmental perception module 201 is used to obtain information about other vehicles and pedestrians in the surrounding environment of the vehicle 200 through vehicle-mounted sensors and/or roadside sensors.
  • vehicle 200 may also be vehicle 100 in FIG. 1.
  • the sensor may be a laser radar, a millimeter wave radar, a visual sensor, etc.
  • the perception module 201 obtains the video stream data originally collected by the sensor, the point cloud data of the radar, etc., and then processes these original video stream data and radar point cloud data to obtain identifiable structured data such as the position, size, speed, and direction of travel of people and other vehicles.
  • the perception module 201 can determine the position, speed, direction of travel, and other information of other vehicles and pedestrians in the surrounding environment of the vehicle 200 based on the data collected by all or a certain type or a certain sensor.
  • the perception module 201 is also used to send the position, speed, direction of travel, and other information of other vehicles and pedestrians in the surrounding environment determined based on the data obtained by the sensor to the classification module 203.
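  • The structured data that the perception module hands to the classification module could, for example, take the following form (the field names are assumptions; the application only states that position, size, speed and direction of travel are extracted from the raw sensor data):

      from dataclasses import dataclass

      @dataclass
      class PerceivedObject:
          object_id: int
          category: str        # e.g. "pedestrian" or "vehicle"
          position: tuple      # (x, y) in the vehicle or map frame, metres
          size: tuple          # (length, width), metres
          speed: float         # m/s
          heading: float       # direction of travel, radians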
  • the route acquisition module 202 is used to obtain the driving route of the vehicle 200 for a period of time in the future according to the planned route of the vehicle 200.
  • the planned route of the vehicle 200 can be obtained through a road navigation (route) module.
  • the route module can guide the vehicle 200 as to which roads to travel on, according to the existing map or road network information, the location information of the starting point and the location information of the destination, so as to successfully complete the journey from the starting point to the destination. That is, the route module can navigate a planned route from the starting point to the destination.
  • the route module can obtain the planned route of the vehicle 200 through a navigation request.
  • the navigation request may include the location information of the starting point of the vehicle 200 and the location information of the destination.
  • the vehicle 200 can obtain the navigation request by the user clicking or touching the screen of the vehicle navigation, or by the user's voice command. Then the route module can use the vehicle GPS in conjunction with the electronic map to plan the route, and then obtain the planned route of the vehicle 200, and obtain the driving route of the vehicle 200 for a period of time in the future according to the planned route of the vehicle 200.
  • the solution provided in the present application can obtain the planned route of vehicle 200 in a variety of ways, and the methods for obtaining vehicle navigation information in the relevant technology can be adopted in the embodiments of the present application.
  • the planned route of vehicle 200 can also be obtained through the equivalent route of the lane where vehicle 200 is located provided by the road structure recognition module.
  • the road structure recognition module is used to obtain road information through on-board sensors and/or roadside sensors, such as road boundary information, lane information where vehicle 200 is located, lane boundary information, etc., to determine the road structure of the lane where vehicle 200 is located based on the road information, so that vehicle 200 can generate the driving route of vehicle 200 based on the road structure.
  • the risk classification module 203 is used to obtain the position, speed, and forward direction of other vehicles and pedestrians in the surrounding environment of the vehicle 200 from the environment perception module 201, and to obtain the driving route of the vehicle 200 for a period of time in the future from the route acquisition module 202. Then, based on the obtained driving route of the vehicle 200 for a period of time in the future and the position, speed, and forward direction of other vehicles and pedestrians, collision analysis is performed on other vehicles and pedestrians to obtain the possibility of collision between other vehicles and pedestrians and the vehicle 200 in the future, that is, the risk coefficient, so as to classify other vehicles and pedestrians into different priorities according to the risk coefficient.
  • the risk classification module 203 is also used to determine the priority levels into which the targets are divided, namely level 1, level 2, level 3, and so on.
  • the risk classification module 203 is also used to send the priority of other vehicles and pedestrians in the surrounding environment that it finally obtains to the trajectory prediction module 204.
  • the trajectory prediction module 204 is used to receive the priorities of other vehicles and pedestrians in the surrounding environment of the vehicle 200 sent by the risk classification module 203, and to use a matching prediction model to predict the future motion trajectory according to the received priorities. For example, for other vehicle 1 and pedestrian 1 with high priority in the surrounding environment, a refined prediction model with high computational complexity is used to predict the future motion trajectory, and for other vehicle 2 and pedestrian 2 with low priority in the surrounding environment, a simplified prediction model with low computational complexity is used to predict the future motion trajectory.
  • the trajectory prediction module 204 can also determine the one-to-one corresponding prediction model type as prediction model 1, prediction model 2, prediction model 3, ..., prediction model n according to the priority level. Therefore, different prediction models can be used for other vehicles and pedestrians of different priorities, so as to reduce computing resources and reduce prediction delay.
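  • Such a one-to-one registry could be sketched as follows (the model factory below, in which a finer prediction time step stands in for a costlier model, is purely illustrative):

      def _model(dt):
          # placeholder factory: a smaller dt stands in for a more refined, costlier model
          def predict(state, horizon_s=3.0):
              (x, y), (vx, vy) = state["pos"], state["vel"]
              return [(x + vx * k * dt, y + vy * k * dt)
                      for k in range(1, round(horizon_s / dt) + 1)]
          return predict

      # one-to-one correspondence: priority level k -> prediction model k
      PREDICTION_MODELS = {1: _model(0.1), 2: _model(0.25), 3: _model(0.5)}

      def predict_for(state, priority_level):
          return PREDICTION_MODELS[priority_level](state)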
  • vehicle 200 may also include a display module (not shown in FIG. 2) for receiving the priorities of other vehicles and pedestrians in the surrounding environment of vehicle 200 sent by the risk classification module 203, and using different colors to display other vehicles and pedestrians of different priorities, so as to more conveniently and intuitively view the grading effect on other vehicles and pedestrians in the surrounding environment of vehicle 200.
  • the vehicle 200 may further include a storage component (not shown in FIG. 2 ) for storing executable codes of the above-mentioned modules. Running these executable codes may implement part or all of the method flow of the embodiments of the present application.
  • the computer system 112 shown in FIG1 may include a processor 301, the processor 301 is coupled to a system bus 302, the processor 301 may be one or more processors, each of which may include one or more processor cores.
  • a display adapter (video adapter) 303 may drive a display 324.
  • the system bus 302 is coupled to the input/output (I/O) bus 305 through the bus bridge 304.
  • the I/O interface 306 is coupled to the I/O bus 305.
  • the I/O interface 306 communicates with various I/O devices, such as an input device 307 (e.g., a keyboard, a mouse, a touch screen), a multimedia tray (media tray) 308 (e.g., a multimedia interface), a transceiver 309 (which can send and/or receive radio communication signals), a camera 310 (which can capture static and dynamic digital video images), and an external universal serial bus (USB) port 311.
  • the interface connected to the I/O interface 306 can be a USB interface.
  • the processor 301 may be any conventional processor, including a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination thereof.
  • the processor 301 may also be a dedicated device such as an application specific integrated circuit (ASIC).
  • the processor 301 may also be a neural network processor or a combination of a neural network processor and the conventional processors.
  • the computer system 112 may be located remotely from the autonomous vehicle and in wireless communication with the autonomous vehicle.
  • some of the processes described herein may be executed on a processor within the autonomous vehicle, and other processes may be executed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 312 can be a hardware network interface, such as a network card.
  • Network 314 can be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN).
  • network 314 can also be a wireless network, such as a wireless fidelity (Wi-Fi) network, a cellular network, etc.
  • the hard disk drive interface 315 is coupled to the system bus 302.
  • the hard disk drive interface 315 is connected to a hard disk drive 316.
  • the system memory 317 is coupled to the system bus 302.
  • the software running in the system memory 317 may include an operating system (OS) 318 and an application program 319 of the computer system 112.
  • the operating system (OS) 318 includes, but is not limited to, a shell 320 and a kernel 321.
  • the shell 320 is an interface between the user and the kernel 321 of the operating system 318.
  • the shell 320 is the outermost layer of the operating system 318.
  • the shell manages the interaction between the user and the operating system 318: waiting for user input, interpreting user input to the operating system 318, and processing various output results of the operating system 318.
  • the kernel 321 is composed of the part of the operating system 318 used to manage memory, files, peripherals and system resources, and directly interacts with the hardware.
  • the kernel 321 of the operating system 318 usually runs processes and provides communication between processes, and provides functions such as CPU time slice management, interrupts, memory management, and IO management.
  • Application 319 includes programs 323 related to autonomous driving, such as programs for managing the interaction between the autonomous driving car and obstacles on the road, programs for controlling the driving route or speed of the autonomous driving car, and programs for controlling the interaction between the autonomous driving car and other cars/autonomous driving cars on the road.
  • Application 319 also exists on the system of deploying server 313. In one embodiment, when the application 319 needs to be executed, the computer system 112 can download the application 319 from the deploying server 313.
  • the application 319 may include an application for controlling the vehicle to predict the future motion trajectory of the target object in the surrounding environment according to the hierarchical prediction system 114.
  • the processor 301 of the computer system 112 calls the application 319 to perform the following steps: perform collision analysis according to the motion state of the target object, determine the risk coefficient of the target object, and the risk coefficient is used to indicate the possibility of collision between the target object and the vehicle; determine the priority of the target object according to the risk coefficient; determine the target prediction model that matches the priority according to the correspondence between the preset priority level and the prediction model; and predict the motion trajectory of the target object according to the target prediction model.
  • Sensor 322 is associated with computer system 112. Sensor 322 is used to detect the environment around computer system 112. For example, sensor 322 can detect surrounding animals, cars, people and other objects. Further, sensor 322 can also detect the environment around these animals, cars, people and other objects, for example, the lane in which a surrounding car is located.
  • sensor 322 can be at least one of a camera, an infrared sensor, a chemical detector, a microphone and other devices.
  • the computer system 112 may also receive information from other computer systems or transfer information to other computer systems.
  • sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer, which processes the data.
  • data from the computer system 112 may be transmitted to a cloud-side computer system 410 via a network for further processing.
  • The network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using the proprietary communication protocols of one or more companies, Ethernet, Wi-Fi and HTTP, as well as various combinations of the foregoing.
  • Such communications may be performed by any device capable of transmitting data to and from other computers, such as modems and wireless interfaces.
  • computer system 112 may include a server having multiple computers, such as a load balancing server cluster.
  • Server 420 exchanges information with different nodes of a network in order to receive, process, and transmit data from computer system 112.
  • Computer system 410 may have a configuration similar to computer system 112, and have processor 430, memory 440, instructions 450, and data 460.
  • The data 460 of the server 420 may include weather-related information.
  • the server 420 may receive, monitor, store, update, and transmit various information related to targets in the surrounding environment.
  • the information may include target categories, target shape information, and target tracking information, such as in the form of reports, radar information, forecasts, etc.
  • the cloud service center can receive information (such as data or other information collected by vehicle sensors) from vehicles 513 and 512 in its operating environment 500 via a network 511 such as a wireless communication network.
  • Vehicles 513 and 512 can be autonomous driving vehicles.
  • the cloud service center 520 runs the stored program related to controlling the automatic driving of the vehicle according to the received data to control the vehicles 513 and 512.
  • the program related to controlling the automatic driving of the vehicle may be: a program for managing the interaction between the automatic driving vehicle and obstacles on the road, or a program for controlling the route or speed of the automatic driving vehicle, or a program for controlling the interaction between the automatic driving vehicle and other automatic driving vehicles on the road.
  • cloud service center 520 may provide portions of a map to vehicle 513, vehicle 512 via network 511. In other examples, operations may be divided between different locations. For example, multiple cloud service centers may receive, verify, combine, and/or send information reports. In some examples, information reports and/or sensor data may also be sent between vehicles. Other configurations are also possible.
  • the cloud service center 520 sends the autonomous vehicle a suggested solution for possible driving situations in the environment (e.g., informing the obstacle ahead and informing how to bypass it). For example, the cloud service center 520 can assist the vehicle in determining how to proceed when facing a specific obstacle in the environment.
  • the cloud service center 520 sends a response to the autonomous vehicle indicating how the vehicle should proceed in a given scenario. For example, based on the collected sensor data, the cloud service center 520 can confirm the presence of a temporary parking sign ahead of the road, or, for example, based on the "lane closed" sign and the sensor data of the construction vehicle, determine that the lane is closed due to construction.
  • the cloud service center 520 sends a suggested operating mode for the vehicle to pass through the obstacle (e.g., instructing the vehicle to change lanes to another road).
  • When the cloud service center 520 observes, from the video stream of its operating environment 500, that the autonomous vehicle can safely and successfully pass through the obstacle, the operating steps used by that vehicle can be added to the driving information map. Accordingly, this information can be sent to other vehicles in the area that may encounter the same obstacle, so as to assist those vehicles not only in identifying the closed lane but also in knowing how to pass through it.
  • the methods in the following embodiments can all be implemented in a vehicle having the above hardware structure or other devices having the function of controlling a vehicle, such as an autonomous driving vehicle, or a processor in a vehicle or other devices having the function of controlling a vehicle, such as the processor 301 and the processor 430 in the computer system 112 mentioned above.
  • vehicles usually need to predict the future motion trajectory of target objects such as pedestrians, cyclists, motorcycles or other vehicles in the surrounding environment, so that the vehicle can respond in time and perform corresponding operations, such as planning the vehicle's current driving path to avoid collisions.
  • the computational complexity of the prediction model used to predict the trajectory is usually relatively high.
  • This application provides a solution that obtains ground-truth importance scores by simulating real vehicle driving data and trains a neural network, so that the trained neural network can be used to predict the importance scores of target objects in the surrounding environment, and the behavior of those targets can then be predicted according to the importance-score ranking.
  • However, the data required by this solution is difficult to obtain, and the neural network itself consumes a large amount of computing resources.
  • the present application also provides a solution that can divide the space around the vehicle into different areas according to certain rules, and combine the road topology to divide the objects in the surrounding environment into three levels: caution, normal, and ignore.
  • However, this division method depends on the road topology, and its rules are complex and difficult to maintain; different levels depend on different rules, so there is no unified comparison index, the targets cannot be divided into more levels, and the scalability of the levels is poor.
  • the embodiments of the present application provide a target behavior prediction method and an intelligent device.
  • the intelligent device can determine the priority of the target object based on the collision risk detection of the target object in the surrounding environment, and then for the high-priority target object, a refined prediction model with high computational complexity can be used to predict the future motion trajectory, while for the low-priority target object, a simplified prediction model with low computational complexity can be used to predict the future motion trajectory.
  • Because the solution provided in the embodiments of the present application prioritizes objects in the surrounding environment by calculating a collision risk coefficient, the risk coefficients of different objects can be compared directly against a unified comparison index.
  • The solution provided in the embodiments of the present application does not rely on maps and does not require data-driven training; it can be used in scenarios with or without maps and has low computational complexity.
  • the target behavior prediction method may include:
  • S610 The vehicle obtains the motion status of the target object in the surrounding environment.
  • the target object may be an obstacle such as a pedestrian, a cyclist, a motorcycle or other vehicles in the surrounding environment of the vehicle.
  • the target object may be a movable object in the surrounding environment of the vehicle, or a stationary object in the surrounding environment of the vehicle, such as a roadblock, a roadside trash can, etc.
  • the vehicle to which the target behavior prediction method is applied may be referred to as the self-vehicle, and other vehicles in the surrounding environment of the self-vehicle may be referred to as other vehicles.
  • the motion state of the target object may include the position of the target object, the speed of the target object, the forward direction of the target object, that is, the speed direction, etc.
  • the motion state of the target object may also include the distance between the target object and the vehicle.
  • the motion state of the target object can be detected by the environment perception module in Figure 2.
  • the environment perception module can include a radar, a laser rangefinder or a camera, etc.
  • the environment perception module can include the sensor 322 in Figure 3.
  • step S610 may also include obtaining a target driving route of the vehicle.
  • the target driving route may refer to the driving route of the vehicle within a preset time period in the future starting from the current moment.
  • the preset time period may be reasonably set in advance according to the application scenario, and the embodiment of the present application does not limit the length of the preset time period.
  • the preset time period may be 10 seconds (s).
  • the vehicle may determine the target driving path of the vehicle based on parameters such as the planned route of the vehicle, the position of the vehicle, and the speed of the vehicle. For example, the vehicle may determine the approximate driving distance of the vehicle on the planned route within a preset time period in the future based on the position of the vehicle and the speed of the vehicle, thereby determining the target driving path of the vehicle from the planned route of the vehicle.
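  • As a rough illustration of this step, the following sketch truncates the planned route to the distance the vehicle is expected to cover within the preset future time period; the helper name, the (x, y) waypoint representation and the 10-second horizon are assumptions made for illustration, not details taken from the application.

```python
import math

def target_driving_route(planned_route, ego_speed_mps, horizon_s=10.0):
    """Keep the leading part of the planned route (a list of (x, y) waypoints)
    that the vehicle is expected to cover within the preset future time period,
    approximating the driving distance as current speed * horizon."""
    max_dist = ego_speed_mps * horizon_s
    route, travelled = [planned_route[0]], 0.0
    for prev, cur in zip(planned_route, planned_route[1:]):
        travelled += math.dist(prev, cur)
        if travelled > max_dist:
            break
        route.append(cur)
    return route
```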
  • the target driving route of the vehicle may be obtained by the route acquisition module 202 in Fig. 2.
  • the route acquisition module 202 may include a route control system in the control system 106 in Fig. 1 .
  • the route acquisition module 202 can determine the target driving path of the vehicle based on parameters such as the planned route of the vehicle, the position of the vehicle, and the speed of the vehicle.
  • the speed of the vehicle can be detected by a speed sensor.
  • the position of the vehicle can be determined by a positioning system, for example, it can be determined by a positioning system in the vehicle 100 of FIG. 1.
  • the planned route of the vehicle can be determined by a road navigation (route) module.
  • the vehicle may also determine the target driving route of the vehicle based on the road structure of the lane where the vehicle is located.
  • the road structure of the lane includes the lane driving direction, lane boundary information, etc.
  • the road structure of the lane where the vehicle is located may be determined by a road structure recognition module.
  • S620 The vehicle performs a collision analysis based on the motion state of the target object to determine the risk factor of the target object.
  • the risk factor is used to indicate the probability of a collision between the target object and the vehicle.
  • the vehicle can perform a collision analysis based on the motion state of the target object to assess the risk of a collision between the target object and the vehicle in the future.
  • step S620 may also include sampling the target driving route of the vehicle to obtain multiple sampling positions on the target driving route.
  • the vehicle may use the sampling positions as the positions where the vehicle and the target object may collide in the future to perform collision analysis.
  • In this way, the vehicle can take into account the future interaction relationship between the ego vehicle and the target object, which avoids the problem of low trajectory-prediction accuracy caused by subsequent misdivision of priorities.
  • sampling the target driving route of the vehicle can be understood as selecting multiple driving position points on the target driving route of the vehicle as multiple sampling positions.
  • the vehicle can select a position point as a sampling position at every specified distance or every specified time on the target driving route of the vehicle, so that the vehicle can obtain multiple sampling positions on the target driving route.
  • the vehicle can also randomly select multiple position points on the target driving route of the vehicle as multiple sampling positions.
  • the number of sampling positions may be a fixed number, that is, when the vehicle performs a fixed number of samplings on the target driving route of the vehicle, a fixed number of sampling positions may be obtained.
  • the vehicle may randomly select 8 driving position points on the target driving route of the vehicle as 8 sampling positions.
  • the number of sampling positions can also be randomly generated, for example, a number randomly selected from 5 to 10 can be used as the number of positions to be sampled.
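  • A minimal sketch of the fixed-distance sampling option described above, assuming the target driving route is a list of (x, y) points; the 5-metre spacing is an arbitrary example.

```python
import math

def sample_route(route, spacing_m=5.0):
    """Select one driving position roughly every `spacing_m` metres along the
    target driving route; the selected points serve as sampling positions."""
    samples, acc = [route[0]], 0.0
    for prev, cur in zip(route, route[1:]):
        acc += math.dist(prev, cur)
        if acc >= spacing_m:
            samples.append(cur)
            acc = 0.0
    return samples
```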
  • the vehicle can determine the remaining collision distance between the target object and the vehicle relative to the sampling position after a specified period of time according to the motion state of the target object, and use the remaining collision distance as the collision risk coefficient of the sampling position.
  • the vehicle can determine the risk coefficient of the target object according to the collision risk coefficient of each sampling position on the target driving route.
  • the specified time period can be reasonably set in advance according to the application scenario, and the embodiment of the present application does not limit the length of the specified time period.
  • the specified time period can be 3 seconds (s).
  • the vehicle may determine a first distance between the target object and a target sampling position after a specified time period according to the motion state of the target object, wherein the target sampling position is any sampling position among multiple sampling positions.
  • the vehicle may also determine a second distance between the vehicle and the target sampling position after a specified time period. The vehicle may use the sum of the first distance and the second distance as the remaining collision distance between the target object and the vehicle relative to the target sampling position after a specified time period, and the remaining collision distance may be used as the collision risk coefficient of the target sampling position.
  • The vehicle can first calculate the velocity component v′_o of the target object 702 along the line connecting the target object 702 to the sampling position p_i, namely v′_o = max(v_o · cos θ, 0), where v_o is the speed of the target object 702, θ is the angle between the velocity direction of the target object 702 and the line connecting the target object 702 to the sampling position p_i, and the max() function outputs the maximum of its arguments.
  • The vehicle can then calculate the remaining distance d_1 between the target object 702 and the sampling position p_i after the specified time period t_p: d_1 = max(d_oi - d_o, 0), where d_o = v′_o · t_p is the distance the target object 702 moves toward the sampling position p_i within t_p, and d_oi is the initial distance between the target object 702 and the sampling position p_i at the beginning of this collision analysis.
  • Similarly, the vehicle can calculate the distance d_e that the vehicle 701 moves toward the sampling position p_i within the specified time period t_p, and then the remaining distance d_2 between the vehicle 701 and the sampling position p_i after the specified time period t_p: d_2 = max(d_ei - d_e, 0), where d_ei is the initial distance between the ego vehicle 701 and the sampling position p_i at the beginning of this collision analysis.
  • The sum d_pi = d_1 + d_2, that is, the remaining collision distance, can be used as the collision risk coefficient corresponding to the sampling position p_i. It can be understood that when the value of the remaining collision distance d_pi is smaller, that is, the collision risk coefficient corresponding to the sampling position p_i is smaller, the vehicle can consider that the target object and the vehicle are more likely to collide near the sampling position p_i after the specified time period t_p.
  • When the vehicle samples the target driving route of the vehicle and obtains n sampling positions on the target driving route, the vehicle can use the above method to calculate the collision risk coefficient corresponding to each of the n sampling positions, namely d_p1, d_p2, ..., d_pn.
  • the vehicle can use the smallest collision risk coefficient among the n sampling positions as the risk coefficient of the target object.
  • The vehicle may also compare the minimum collision risk coefficient among the n sampling positions with a safety factor, and use the ratio of the minimum collision risk coefficient to the safety factor as the risk coefficient of the target object.
  • the safety factor may be determined in advance based on historical data analysis and combined with the actual situation of the vehicle.
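  • The distance-based collision analysis described above might be sketched as follows; the 2-D vector representation, the helper names, the 3-second time period and the default safety factor are assumptions made for illustration only.

```python
import math

def remaining_collision_distance(target_pos, target_vel, ego_pos, ego_speed,
                                 sample_pos, t_p=3.0):
    """Remaining collision distance d_pi = d_1 + d_2 for one sampling position."""
    dx, dy = sample_pos[0] - target_pos[0], sample_pos[1] - target_pos[1]
    d_oi = math.hypot(dx, dy)                       # initial target-to-sample distance
    # Velocity component of the target along the line toward the sampling position.
    v_proj = max((target_vel[0] * dx + target_vel[1] * dy) / d_oi, 0.0) if d_oi else 0.0
    d_1 = max(d_oi - v_proj * t_p, 0.0)             # target's remaining distance
    d_ei = math.dist(ego_pos, sample_pos)           # initial ego-to-sample distance
    d_2 = max(d_ei - ego_speed * t_p, 0.0)          # ego vehicle's remaining distance
    return d_1 + d_2

def risk_coefficient(target_pos, target_vel, ego_pos, ego_speed, samples,
                     safety_factor=50.0):
    """Risk coefficient of one target: the smallest remaining collision distance
    over all sampling positions, here normalised by an (assumed) safety factor.
    A smaller value means a collision is considered more likely."""
    d_min = min(remaining_collision_distance(target_pos, target_vel, ego_pos,
                                             ego_speed, p) for p in samples)
    return d_min / safety_factor
```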
  • the vehicle may also calculate the collision time when the target object and the vehicle may collide based on the motion state of the target object and the object kinematic model, so as to determine the risk factor of the target object based on the collision time.
  • the vehicle may compare the collision time with a preset time, so as to use the ratio of the collision time to the preset time as the risk factor of the target object. It is understood that the preset time may be determined in advance based on historical data analysis and in combination with the actual situation of the vehicle.
  • the vehicle can predict whether the driving trajectories of the vehicle and the target object intersect based on the motion state of the target. If they intersect, the time from the current position of the vehicle and the target object to the intersection of the two trajectories is calculated respectively, and the absolute value of the time difference between the two is calculated to determine the risk coefficient of the target object based on the absolute value.
  • the vehicle can compare the absolute value of the time difference with the preset time, and use the ratio of the absolute value of the time difference to the preset time as the risk coefficient of the target object. It can be understood that the preset time can be determined in advance based on historical data analysis and combined with the actual situation of the vehicle.
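  • A correspondingly simple sketch of the time-based variant described in the last two paragraphs; the 5-second preset time is an arbitrary example.

```python
def time_gap_risk(ego_time_to_cross_s, target_time_to_cross_s, preset_time_s=5.0):
    """Risk coefficient based on how closely in time the ego vehicle and the
    target reach the intersection point of their predicted trajectories;
    a smaller value indicates a higher collision risk."""
    return abs(ego_time_to_cross_s - target_time_to_cross_s) / preset_time_s
```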
  • S630 The vehicle determines the priority of the target object according to the risk factor.
  • After determining the risk coefficient of each target object in the surrounding environment, the vehicle can prioritize each target object in the surrounding environment according to the risk coefficient, so as to distinguish targets of different priorities in the surrounding environment.
  • the vehicle can determine the priority level to be divided according to a preset parameter n, and the priority level includes level 1, level 2, level 3, ... level n.
  • parameter n can be understood as the number of priorities (i.e., the number of levels) to be divided for the target objects in the surrounding environment of the vehicle.
  • the vehicle can adjust the size of parameter n. For example, increasing the value of the preset parameter n can achieve the expansion of the priority level. In this way, the expansion of the priority level can be achieved without changing the calculation method of the risk coefficient.
  • the preset parameter n may be pre-stored in the vehicle or acquired in real time.
  • the vehicle may display a first interface through an onboard display screen, and the first interface is used to allow a user to quickly input the number of priority levels.
  • the first interface may provide multiple options for the number of priorities, such as 3 priorities, 5 priorities, etc.
  • the user may select an option for confirmation.
  • the vehicle may respond to the user's confirmation operation and obtain the number of priorities confirmed by the user as a preset parameter n.
  • the first interface may also provide an input box for the number of priorities.
  • the user may enter a specific value in the input box, and then the vehicle may respond to the user's input operation and obtain the number of priorities entered by the user as a preset parameter n.
  • the vehicle may classify targets in the surrounding environment into different priorities according to the risk factor of each target in the surrounding environment and a preset parameter n.
  • the objects in the surrounding environment can be sorted in order from small to large according to the risk coefficient of each object in the surrounding environment.
  • The higher an object is ranked, the smaller its risk coefficient, that is, the more likely the object and the vehicle are to collide after the specified time period.
  • the sorted objects can be divided into priority levels according to the preset parameter n, from high level (level n) to low level (level 1).
  • The higher the level of an object, the higher its priority, so that objects that are more likely to collide are classified into higher priorities.
  • a higher level may also mean a lower priority, that is, level 1 is the highest priority and level n is the lowest priority.
  • the sorted targets can be divided into priority levels according to the preset parameter n, in the order from low level (level 1) to high level (level n). Among them, the target closer to the front is divided into a lower level, that is, a higher priority, so that the target that is more likely to collide is divided into a higher priority.
  • the objects in the surrounding environment can also be sorted in descending order according to the risk coefficient of each object in the surrounding environment.
  • The later an object is ranked, the smaller its risk coefficient, that is, the more likely it is to collide with the vehicle after the specified time period.
  • the sorted objects can then be divided into priority levels according to the preset parameter n, in order from level 1 (assigned to the objects at the front of the ranking) to level n (assigned to the objects at the back).
  • The later an object is ranked, the higher its level, that is, the higher its priority, so that objects that are more likely to collide are assigned higher priorities.
  • the vehicle can also determine the risk coefficient range corresponding to different levels according to the preset parameter n, and the vehicle can match the risk coefficient of each target object in the surrounding environment with the risk coefficient range corresponding to different levels.
  • When the risk coefficient of a target object falls within the risk coefficient range corresponding to a certain level, the target object can be classified into that level. In this way, the priority division of the targets in the surrounding environment is achieved.
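  • A sketch of the sorting-based priority division, assuming risk coefficients have already been computed for all targets; the convention that level 1 is the highest priority follows one of the options described above, and the equal-size bucketing is just one possible way of splitting the sorted list into n levels.

```python
def assign_priorities(risk_by_target, n_levels=3):
    """Sort targets by risk coefficient in ascending order (a smaller coefficient
    means a collision is more likely) and split them into `n_levels` priority
    levels, level 1 being the highest priority. Returns {target_id: level}."""
    ordered = sorted(risk_by_target, key=risk_by_target.get)   # most at-risk first
    bucket = max(1, -(-len(ordered) // n_levels))              # ceiling division
    return {tid: min(i // bucket + 1, n_levels)
            for i, tid in enumerate(ordered)}

# Example: assign_priorities({"pedestrian_A": 0.1, "vehicle_B": 0.8, "cyclist_C": 0.4}, 3)
```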
  • the vehicle may also use different colors to display objects of different priorities on the vehicle display screen, thereby intuitively displaying the priority grading effect of objects in the vehicle's surrounding environment.
  • the vehicle can determine a target color that matches the priority of the target object according to a preset correspondence between colors and priority levels, and display the target object in the target color.
  • the correspondence between colors and priority levels can be reasonably set in advance according to actual application conditions.
  • the correspondence between colors and priority levels can be pre-stored in the vehicle or obtained from other devices such as a cloud server.
  • the vehicle may also randomly generate corresponding colors according to the number of priority levels to be divided, wherein the randomly generated colors correspond to the priorities one by one to distinguish objects of different priorities.
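  • For the display step, a minimal sketch of mapping priority levels to display colours; the particular colours are arbitrary placeholders rather than values prescribed by the application.

```python
PRIORITY_COLOURS = {1: "red", 2: "yellow", 3: "blue"}   # example mapping only

def display_colour(priority_level, fallback="gray"):
    """Return the colour used to render a target of the given priority level."""
    return PRIORITY_COLOURS.get(priority_level, fallback)
```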
  • Figure 8 shows an intersection scene where vehicle 800 turns left.
  • the boxes in the scene represent different types of targets, such as motor vehicles, motorcycles, pedestrians, etc.
  • the dotted arrow on each target represents the speed direction of the target, and the solid arrow represents the orientation of the target.
  • the vehicle can also display information about the target, such as displaying the target's identity, speed, and other information below each target.
  • the target's identity can be the type of target, such as non-motor vehicles, motorcycles, pedestrians, cars, etc.; it can also be a unique identifier (identity document, ID) of the target.
  • target 801 on the zebra crossing in front of the left side of vehicle 800 in Figure 8 is pedestrian A crossing the road; target 802 behind the right side of vehicle 800 is cyclist A; target 803 at the corner of the intersection in front of vehicle 800 is pedestrian B; target 804 close to vehicle 800 after exiting the intersection is vehicle A (i.e., another vehicle); target 805 far from the current position of vehicle 800 after exiting the intersection is vehicle B; target 806 behind vehicle 800 and moving away from vehicle 800 in speed direction is vehicle C; target 807 in front of the right side of vehicle 800 and moving away from vehicle 800 is cyclist B.
  • the targets around vehicle 800 can be divided into three priority levels. Among them, target 801 (i.e. pedestrian A crossing the road) and target 802 (i.e. cyclist A) have the highest priority, target 803 (i.e. pedestrian B) and target 804 (i.e. vehicle A) have the second highest priority, and target 805 (i.e. vehicle B), target 806 (i.e. vehicle C), and target 807 (i.e. cyclist B) have the lowest priority.
  • the vehicle can also use three colors to display the above three priority targets on the vehicle display screen. For example, assuming that the highest priority corresponds to red, the second priority corresponds to yellow, and the lowest priority corresponds to blue, the vehicle can display target 801 (i.e. pedestrian A crossing the road) and target 802 (i.e. cyclist A) in red, target 803 (i.e. pedestrian B) and target 804 (i.e. vehicle A) in yellow, and target 805 (i.e. vehicle B), target 806 (i.e. vehicle C), and target 807 (i.e. cyclist B) in blue on the vehicle display screen.
  • Figure 9 shows a scenario where a vehicle 900 is traveling straight.
  • the boxes in the scenario represent different types of targets, such as motor vehicles, motorcycles, pedestrians, etc.
  • the dotted arrow on each target represents the speed direction of the target, and the solid arrow represents the orientation of the target.
  • target 901 in front of the right side of vehicle 900 and with the same speed and direction as vehicle 900 in Figure 9 is cyclist C; target 902 behind vehicle 900 and traveling in the same direction as vehicle 900 is vehicle D; target 903 in front of the right side of vehicle 900 and outside the motor vehicle lane is cyclist D; target 904 in front of the left side of vehicle 900 and outside the motor vehicle lane is cyclist E; target 905 in the left rear side of vehicle 900 and outside the motor vehicle lane is pedestrian C; target 906 in the left rear side of vehicle 900 and traveling in the opposite direction of vehicle 900 is cyclist F.
  • the targets around vehicle 900 can be divided into three priority levels.
  • the priority of target 901 (i.e., cyclist C) and target 902 (i.e., vehicle D) is level 1
  • the priority of target 903 (i.e., cyclist D) and target 904 (i.e., cyclist E) is level 2
  • the priority of target 905 (i.e., pedestrian C) and target 906 (i.e., cyclist F) is level 3.
  • the vehicle can also use three colors to display the targets of the above three priorities on the on-board display screen.
  • S640 The vehicle determines a target prediction model that matches the priority of the target object based on the correspondence between the preset priority levels and the prediction models.
  • different prediction models can be used to predict the future movement trajectories of targets of different priorities.
  • the vehicle may store a preset correspondence between priority levels and prediction models.
  • the vehicle may determine a target prediction model that matches the priority level of the target object based on the preset correspondence.
  • the correspondence between the preset priority levels and the prediction models may be a one-to-one correspondence, that is, each priority level corresponds to a prediction model.
  • Different prediction models have different computational complexities.
  • the correspondence between the preset priority levels and the prediction models may include: a first priority level corresponds to a first prediction model, and a second priority level corresponds to a second prediction model. The first priority level is higher than the second priority level, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model.
  • the computational complexity of the prediction model can be determined based on the processing time of the prediction model, the number of operation instructions and the number of memory interaction instructions, the performance consumed, etc.
  • the embodiments of the present application are not limited to this.
  • the prediction model provided in the present application can be a rule-based prediction model such as a constant velocity (CV) model and a Markov model. Such prediction models are usually simpler and have lower computational complexity.
  • the prediction model provided in the present application can also be a prediction model based on a neural network. Such prediction models usually have higher computational complexity.
  • the correspondence between the preset priority levels and the prediction models can also be a many-to-one relationship, that is, multiple priority levels can correspond to one prediction model. For example, assuming that the number of priority levels n is 6, level 6 is the lowest priority and level 1 is the highest priority, prediction model 1 can correspond to priority levels 1, 2 and 3, prediction model 2 can correspond to priority levels 4 and 5, and prediction model 3 corresponds only to priority level 6.
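  • A sketch of such a many-to-one correspondence, with placeholder model names; the grouping mirrors the n = 6 example above but is otherwise arbitrary.

```python
# Hypothetical many-to-one mapping from priority levels to prediction models.
LEVEL_TO_MODEL = {
    1: "prediction_model_1", 2: "prediction_model_1", 3: "prediction_model_1",
    4: "prediction_model_2", 5: "prediction_model_2",
    6: "prediction_model_3",   # a single level served by its own model in this example
}

def select_model(priority_level):
    """Look up the prediction model preset for the given priority level."""
    return LEVEL_TO_MODEL[priority_level]
```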
  • For high-priority target objects, a more refined prediction model with higher computational complexity can be used to accurately predict the future motion trajectory.
  • For low-priority target objects, a simpler prediction model with lower computational complexity can be used to quickly predict the future motion trajectory, as in the constant velocity sketch below.
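  • As an example of the kind of low-complexity, rule-based predictor mentioned above, a constant velocity (CV) model simply rolls the target's current state forward; the step size and horizon below are illustrative values only.

```python
def constant_velocity_trajectory(pos, vel, horizon_s=3.0, dt=0.1):
    """Predict a trajectory by assuming the target keeps its current velocity:
    a typical low-complexity choice for low-priority targets."""
    steps = int(horizon_s / dt)
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, steps + 1)]
```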
  • S650 The vehicle predicts the movement trajectory of the target object according to the target prediction model.
  • After the vehicle determines the target prediction model that matches the priority of the target object, it can use the target prediction model to predict the future motion trajectory of the target object. In this way, it is possible to predict the future motion trajectories of target objects of different priorities in the surrounding environment of the smart device using prediction models of different computational complexities. While realizing the prediction of the behavior of objects in the surrounding environment, this also reduces the consumption of computing resources and reduces the prediction delay.
  • the vehicle can perform corresponding operations.
  • the vehicle can plan its own driving path at the current moment or in a specified time period in the future to avoid collision.
  • the vehicle can sound its horn to alert the target.
  • the target behavior prediction method of the embodiment of the present application solves the computing power bottleneck problem by predicting the motion trajectories of a large number of targets in a hierarchical manner, that is, for high-priority targets, a refined prediction model with high computational complexity can be used for trajectory prediction, and for low-priority targets, a prediction model with low computational complexity can be used for trajectory prediction.
  • This achieves the purpose of reducing computing resources and reducing prediction latency, and also indirectly improves the prediction accuracy of a large number of targets.
  • target behavior prediction method of the embodiment of the present application can also be applied to robots or other electronic devices with autonomous driving capabilities, and the embodiment of the present application is not limited to this.
  • The sweeping robot can perform collision analysis on multiple targets in the surrounding environment to obtain the risk coefficient of each target, and determine the priority of each target according to the risk coefficient.
  • The sweeping robot can then plan its own driving path at the current moment or in a specified time period in the future to avoid collision, or it can screen out the targets with which it may need to interact based on the predicted future motion trajectory of each target, so as to improve interaction efficiency.
  • The target behavior prediction method of the embodiments of the present application can be applied to various scenarios in which it is necessary to perceive surrounding targets and judge the risk of collision, for example, scenarios in which it is necessary to judge the strength of the kinematic interaction between surrounding objects and the device itself.
  • It can be understood that, in order to implement the above functions, the vehicle includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered to be beyond the scope of this application.
  • In the embodiments of the present application, the smart device can be divided into functional modules according to the above method examples.
  • each functional module can be divided according to each function, or two or more functions can be integrated into one processing module.
  • the above integrated module can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division. There may be other division methods in actual implementation.
  • An embodiment of the present application also provides a vehicle-mounted device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the computer program is executed by the processor, the vehicle-mounted device implements each function or step performed by the vehicle in each of the above-mentioned method embodiments.
  • the present application also provides a target behavior prediction device, which can be applied to the above-mentioned vehicle-mounted device.
  • the device is used to execute each function or step executed by the vehicle in the above-mentioned method embodiment.
  • An embodiment of the present application also provides a vehicle, which includes the above-mentioned vehicle-mounted device or target behavior prediction device.
  • The embodiment of the present application further provides an intelligent device, which includes the above-mentioned target behavior prediction device.
  • the intelligent device may be a robot or other electronic device with autonomous driving capability.
  • the embodiment of the present application also provides a chip system, which includes at least one processor and at least one interface circuit.
  • the processor and the interface circuit can be interconnected through a line.
  • the interface circuit can read the instructions stored in the memory and send the instructions to the processor.
  • When the instructions are executed by the processor, the vehicle-mounted device can perform the various functions or steps performed by the vehicle in the above method embodiments.
  • the chip system can also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • An embodiment of the present application also provides a computer storage medium, which includes computer instructions.
  • When the computer instructions are run on the vehicle-mounted device, the vehicle-mounted device executes each function or step executed by the vehicle in the above-mentioned method embodiments.
  • the embodiment of the present application also provides a computer program product.
  • When the computer program product is run on a computer, the computer is enabled to execute each function or step executed by the vehicle in the above method embodiments.
  • the vehicle-mounted equipment, vehicle, robot, computer storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above, and will not be repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solution of the embodiment of the present application is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a device (which can be a single-chip microcomputer, chip, etc.) or a processor (processor) to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

A target behavior prediction method, which is applied to an intelligent device. The method comprises: acquiring the motion state of a target object in a surrounding environment (S610); performing collision analysis according to the motion state of the target object, so as to determine a risk coefficient of the target object (S620), wherein the risk coefficient is used for indicating the possibility of a collision occurring between the target object and an intelligent device; determining the priority of the target object according to the risk coefficient (S630); according to preset correspondences between priority levels and prediction models, determining a target prediction model, which matches the priority (S640); and predicting the motion trajectory of the target object according to the target prediction model (S650). The method can divide objects in the surrounding environment of an intelligent device into different priorities according to degrees of collision risk, and use prediction models having different computational complexities to predict future motion trajectories for objects having different priorities, and therefore behavior prediction of the objects in the surrounding environment is implemented, and the consumption of computational resources is also reduced and the prediction delay is shortened. Further disclosed are an intelligent device, an on-board device, a vehicle, a robot, a chip system, a computer storage medium and a computer program product.

Description

A target behavior prediction method, intelligent device and vehicle
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on October 27, 2022, with application number 202211329915.8 and application name "A Target Behavior Prediction Method, Intelligent Device and Vehicle", all contents of which are incorporated by reference in this application.
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a target behavior prediction method, intelligent device and vehicle.
Background Art
Artificial intelligence (AI) is the theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results. In other words, artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning and decision-making.
Autonomous driving is a mainstream application in the field of artificial intelligence. Autonomous driving technology relies on the collaborative efforts of computer vision, radar, monitoring devices, and global positioning systems to enable smart devices (such as autonomous vehicles, robots, or other autonomous driving devices) to achieve automatic driving without the need for active human operation.
At present, smart devices usually need to perceive the surrounding environmental information to predict the behavior of target objects (such as pedestrians, vehicles, etc.) in the surrounding environment, that is, to predict the future motion trajectory, so that smart devices can respond and perform corresponding operations in time, such as planning the future driving path of smart devices to avoid collisions, screening possible interactive objects of smart devices to improve interaction efficiency, etc. However, when there are a large number of target objects in the surrounding environment, due to the very limited processing power of smart devices, smart devices cannot predict the behavior of all target objects in time, resulting in a large prediction delay, which affects the response speed.
Summary of the Invention
The present application provides a target behavior prediction method, an intelligent device, and a vehicle, which can predict the future motion trajectory of target objects with different collision risk levels in the surrounding environment of the intelligent device using prediction models of different computational complexity. While achieving the behavior prediction of objects in the surrounding environment, it also reduces the consumption of computing resources, reduces prediction latency, and improves response efficiency.
In order to achieve the above objectives, this application adopts the following technical solutions:
In a first aspect, the present application provides a target behavior prediction method that can be applied to smart devices, and the target behavior prediction method includes: performing a collision analysis based on the motion state of the target object to determine the risk coefficient of the target object, and the risk coefficient is used to indicate the possibility of a collision between the target object and the smart device; determining the priority of the target object based on the risk coefficient; determining a target prediction model that matches the priority based on the correspondence between the preset priority level and the prediction model; and predicting the motion trajectory of the target object based on the target prediction model.
According to the solution provided in the first aspect above, the intelligent device can perform collision analysis on different objects in the surrounding environment of the intelligent device according to the motion states of different objects in the surrounding environment to determine the collision risk level of each object. Then the intelligent device can divide the objects in the surrounding environment into different priorities according to the collision risk level, and for objects of different priorities, the intelligent device can use different prediction models to predict the future motion trajectory. For example, for high-priority objects, a refined prediction model with high computational complexity can be used to predict the future motion trajectory, while for low-priority objects, a simplified prediction model with low computational complexity can be used to predict the future motion trajectory. The hierarchical prediction of the behavior trajectories of different objects in the surrounding environment by the intelligent device is realized. In this way, even if there are a large number of target objects in the surrounding environment, since the behavior trajectories of some objects are predicted by simplified prediction models with low computational complexity, the consumption of computing resources of the intelligent device can be greatly reduced, which reduces the prediction delay and improves the response efficiency of the intelligent device.
In a possible implementation, the above-mentioned collision analysis based on the motion state of the target object and determining the risk coefficient of the target object may include: obtaining multiple driving positions on the driving route of the smart device; determining the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object, wherein the risk coefficient corresponding to each driving position is used to indicate the possibility of collision between the target object and the smart device at each driving position; determining the risk coefficient of the target object according to the minimum value of the risk coefficient corresponding to each driving position. In this way, the smart device can select multiple driving positions from the vehicle's driving route as evaluation points for possible future collisions between the vehicle and the target objects in the surrounding environment. The smart device can then comprehensively consider the probability of collision at multiple evaluation points and accurately determine the collision risk level of the target object.
In a possible implementation, the above-mentioned determination of the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object may include: determining the first distance between the target object and the target driving position after a specified time period according to the motion state of the target object, wherein the target driving position is any one of the multiple driving positions; determining the second distance between the smart device and the target driving position after a specified time period; obtaining the sum of the first distance and the second distance as the risk coefficient corresponding to the target driving position. In this way, for each evaluation point on the driving route of the self-vehicle, the smart device can accurately analyze the possibility of collision at the evaluation point according to the remaining collision distance of the target object and the self-vehicle from the evaluation point after driving for a period of time. It can be understood that the smaller the remaining collision distance is, the more likely it is that the target object and the self-vehicle will collide near the evaluation point after driving for a period of time.
In a possible implementation, the above-mentioned acquisition of multiple driving positions on the driving route of the smart device may include: acquiring the driving route of the smart device within a preset time period in the future; acquiring a driving position on the driving route at every specified interval to obtain multiple driving positions on the driving route. In this way, the smart device can obtain multiple evaluation position points where collisions may occur in the future by sampling the target driving route of the vehicle at a fixed distance.
In a possible implementation, the above-mentioned acquisition of multiple driving positions on the driving route of the smart device may include: acquiring the driving route of the smart device within a preset time period in the future; acquiring a driving position on the driving route at a specified interval to obtain multiple driving positions on the driving route. In this way, the smart device can obtain multiple evaluation position points that may cause collisions in the future by regularly sampling the target driving route of the vehicle.
In a possible implementation, the above-mentioned collision analysis based on the motion state of the target object and determining the risk factor of the target object may include: determining the collision time of the target object and the smart device according to the motion state of the target object; and determining the risk factor of the target object according to the collision time. In this way, the smart device can also accurately determine the collision risk degree of the target object according to the time when the vehicle may collide with the target object in the surrounding environment in the future.
In a possible implementation, there are multiple targets in the surrounding environment of the smart device. The above-mentioned determination of the priority of the target according to the risk coefficient may include: sorting the multiple targets according to the risk coefficient of each target among the multiple targets; and determining the priority of each target according to the sorted multiple targets. In this way, the smart device can prioritize the targets according to their importance or urgency based on the degree of collision risk of different objects in the surrounding environment. The targets that are more likely to collide are assigned a higher priority, and the targets that are less likely to collide are assigned a lower priority.
In a possible implementation, determining the priority of each target object according to the sorted multiple targets may include: dividing the sorted multiple targets into different priorities according to a preset number of priorities to obtain the priority of each target object. In this way, the smart device may not limit the number of priorities to be divided. When more or fewer priorities need to be divided, the smart device can adjust the parameter size of the number of priorities. Therefore, the expansion of priority levels can be achieved without changing the collision risk analysis method.
In a possible implementation, the target behavior prediction method may further include: displaying a first interface, the first interface being used to input a priority number; and in response to a user input operation, obtaining the priority number input by the user as a preset priority number. In this way, the user can customize the priority number as needed, thereby improving the user experience.
In a possible implementation, the target behavior prediction method may further include: determining the color corresponding to each target object according to the preset correspondence between the priority level and the color and the priority of each target object; and displaying multiple targets based on the color corresponding to each target object. In this way, the smart device can use different colors to display targets of different priorities on the display screen, so that the user can directly see the priority grading effect of the smart device on the targets in the surrounding environment.
In a possible implementation, the correspondence between the above-mentioned preset priority levels and prediction models may include: the first priority corresponds to the first prediction model, the second priority corresponds to the second prediction model, the first priority is higher than the second priority, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model. In this way, the smart device can use prediction models with different computational complexities to predict the future motion trajectory of target objects of different priorities in the surrounding environment. While realizing the prediction of the behavior of objects in the surrounding environment, it also reduces the consumption of computing resources and reduces the prediction delay.
第二方面，本申请提供一种智能设备，包括：分析单元、评级单元、匹配单元和预测单元。其中，分析单元，用于根据目标物的运动状态进行碰撞分析，确定目标物的风险系数，风险系数用于指示目标物与智能设备发生碰撞的可能性大小；评级单元，用于根据风险系数，确定目标物的优先级；匹配单元，用于根据预设的优先级级别与预测模型的对应关系，确定与优先级匹配的目标预测模型；预测单元，用于根据目标预测模型，预测目标物的运动轨迹。In a second aspect, the present application provides a smart device, including: an analysis unit, a rating unit, a matching unit, and a prediction unit. The analysis unit is used to perform collision analysis based on the motion state of the target object and determine the risk coefficient of the target object, where the risk coefficient is used to indicate the possibility of a collision between the target object and the smart device; the rating unit is used to determine the priority of the target object based on the risk coefficient; the matching unit is used to determine, according to the preset correspondence between priority levels and prediction models, a target prediction model matching the priority; and the prediction unit is used to predict the motion trajectory of the target object according to the target prediction model.
在一种可能的实现中,上述分析单元可以用于:获取智能设备的行驶路线上的多个行驶位置;根据目标物的运动状态,确定多个行驶位置中每个行驶位置对应的风险系数,每个行驶位置对应的风险系数用于指示目标物与智能设备在每个行驶位置处发生碰撞的可能性大小;根据每个行驶位置对应的风险系数中的最小值,确定目标物的风险系数。如此,智能设备可以从自车的行驶路线上,选取多个行驶位置作为自车与周围环境中的目标物未来可能碰撞的评估点,从而智能设备可以综合多个评估点发生碰撞的可能性大小,准确确定目标物的碰撞风险程度。In a possible implementation, the above analysis unit can be used to: obtain multiple driving positions on the driving route of the smart device; determine the risk coefficient corresponding to each of the multiple driving positions according to the motion state of the target object, and the risk coefficient corresponding to each driving position is used to indicate the possibility of collision between the target object and the smart device at each driving position; determine the risk coefficient of the target object according to the minimum value of the risk coefficient corresponding to each driving position. In this way, the smart device can select multiple driving positions from the driving route of the vehicle as evaluation points for possible future collisions between the vehicle and the target objects in the surrounding environment, so that the smart device can comprehensively consider the possibility of collision at multiple evaluation points and accurately determine the collision risk level of the target object.
在一种可能的实现中，上述分析单元可以用于：根据目标物的运动状态，确定指定时间段后目标物与目标行驶位置之间的第一距离，其中，目标行驶位置为多个行驶位置中的任一行驶位置；确定指定时间段后智能设备与目标行驶位置之间的第二距离；获取第一距离与第二距离之和，作为目标行驶位置对应的风险系数。如此，针对自车行驶路线上的每个评估点，智能设备可以根据目标物和自车在行驶一段时间后，各自离评估点的剩余碰撞距离，准确分析评估点处发生碰撞的可能性大小。可以理解，剩余碰撞距离越小，目标物和自车在行驶一段时间后，在评估点附近越容易发生碰撞。In a possible implementation, the above-mentioned analysis unit can be used to: determine, according to the motion state of the target object, a first distance between the target object and a target driving position after a specified time period, where the target driving position is any one of the plurality of driving positions; determine a second distance between the smart device and the target driving position after the specified time period; and obtain the sum of the first distance and the second distance as the risk coefficient corresponding to the target driving position. In this way, for each evaluation point on the driving route of the self-vehicle, the smart device can accurately analyze the possibility of a collision at the evaluation point according to the remaining collision distances of the target object and the self-vehicle from the evaluation point after driving for a period of time. It can be understood that the smaller the remaining collision distance, the more likely the target object and the self-vehicle are to collide near the evaluation point after driving for a period of time.
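The per-position risk coefficient and the minimum over evaluation points described in the two implementations above can be sketched as follows. Positions and velocities are 2-D tuples, and extrapolating both road users at constant velocity over the specified time period is an assumption of this sketch, not a requirement of the application.

# Hypothetical sketch of the distance-based collision analysis described above.
import math

def advance(position, velocity, dt):
    return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def target_risk_coefficient(ego_pos, ego_vel, target_pos, target_vel,
                            route_positions, dt=3.0):
    ego_future = advance(ego_pos, ego_vel, dt)          # ego after the specified period
    target_future = advance(target_pos, target_vel, dt) # target after the specified period
    risks = []
    for p in route_positions:                # evaluation points on the ego route
        first = distance(target_future, p)   # target's remaining distance to p
        second = distance(ego_future, p)     # ego's remaining distance to p
        risks.append(first + second)         # per-position risk coefficient
    return min(risks)                        # smaller value = higher collision risk

route = [(10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
print(target_risk_coefficient((0.0, 0.0), (5.0, 0.0), (25.0, 8.0), (0.0, -2.0), route))

The smallest summed remaining distance over the evaluation points is returned as the target's risk coefficient, matching the rule that a smaller value indicates a more likely collision.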
在一种可能的实现中,上述分析单元可以用于:获取智能设备在未来预设时间段内的行驶路线;每间隔指定距离,获取行驶路线上的一个行驶位置,得到行驶路线上的多个行驶位置。如此,智能设备可以通过对自车的目标行驶路线进行定距采样,得到多个未来可能会发生碰撞的评估位置点。In a possible implementation, the analysis unit can be used to: obtain the driving route of the smart device within a preset time period in the future; obtain a driving position on the driving route at each specified distance interval to obtain multiple driving positions on the driving route. In this way, the smart device can obtain multiple evaluation position points where collisions may occur in the future by sampling the target driving route of the vehicle at a fixed distance.
在一种可能的实现中,上述分析单元可以用于:获取智能设备在未来预设时间段内的行驶路线;每间隔指定时间,获取行驶路线上的一个行驶位置,得到行驶路线上的多个行驶位置。如此,智能设备可以通过对自车的目标行驶路线进行定时采样,得到多个未来可能会发生碰撞的评估位置点。In a possible implementation, the analysis unit can be used to: obtain the driving route of the smart device within a preset time period in the future; obtain a driving position on the driving route at each specified time interval to obtain multiple driving positions on the driving route. In this way, the smart device can obtain multiple evaluation position points where collisions may occur in the future by regularly sampling the target driving route of the vehicle.
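The fixed-distance and fixed-time sampling of the driving route described in the two implementations above might look as follows, assuming the planned route is available as a dense list of (x, y) waypoints and, for time-based sampling, that the ego vehicle travels it at a roughly constant speed (both assumptions of this sketch only).

# Hypothetical sketch of the two route-sampling strategies described above.
import math

def sample_by_distance(waypoints, spacing):
    sampled, travelled = [waypoints[0]], 0.0
    for prev, cur in zip(waypoints, waypoints[1:]):
        travelled += math.dist(prev, cur)
        if travelled >= spacing:
            sampled.append(cur)          # one evaluation position per spacing interval
            travelled = 0.0
    return sampled

def sample_by_time(waypoints, ego_speed, interval):
    # A fixed time interval at constant speed is a fixed arc-length interval.
    return sample_by_distance(waypoints, ego_speed * interval)

route = [(x * 1.0, 0.0) for x in range(0, 101)]      # 100 m straight route, 1 m apart
print(len(sample_by_distance(route, 10.0)))          # evaluation positions every 10 m
print(len(sample_by_time(route, 15.0, 1.0)))         # one position per second at 15 m/s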
在一种可能的实现中,上述分析单元也可以用于:根据目标物的运动状态,确定目标物与智能设备发生碰撞的碰撞时间;根据碰撞时间,确定目标物的风险系数。如此,智能设备也可以根据自车与周围环境中的目标物未来可能碰撞的时间,准确确定目标物的碰撞风险程度。In a possible implementation, the above analysis unit can also be used to: determine the collision time of the target object and the smart device according to the motion state of the target object; and determine the risk factor of the target object according to the collision time. In this way, the smart device can also accurately determine the collision risk degree of the target object according to the time when the vehicle may collide with the target object in the surrounding environment in the future.
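For the time-to-collision variant above, one possible (hypothetical) realisation is to use the predicted collision time itself as the risk coefficient, computed here from the gap and the closing speed along the line joining the two road users; the one-dimensional closing-speed model is an assumption of this sketch.

# Hypothetical time-to-collision style risk coefficient.
def time_to_collision(gap_m, ego_speed_mps, target_speed_mps):
    closing_speed = ego_speed_mps - target_speed_mps   # positive when closing in
    if closing_speed <= 0.0:
        return float("inf")                            # no collision at current speeds
    return gap_m / closing_speed                       # seconds until contact

# 40 m gap, ego at 20 m/s, lead target at 12 m/s -> 5 s to collision.
print(time_to_collision(40.0, 20.0, 12.0))

A shorter time to collision then maps to a higher priority, in the same way as a smaller distance-based coefficient.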
在一种可能的实现中,智能设备的周边环境中存在多个目标物,上述评级单元可以用于:根据多个目标物中每个目标物的风险系数,对多个目标物进行排序;根据排序后的多个目标物,确定每个目标物的优先级。如此,智能设备可以根据周围环境内不同物体的碰撞风险程度大小,对目标物进行重要程度或紧急程度的优先级划分。以将越容易发生碰撞的目标物划分至越高的优先级,将越不容易发生碰撞的目标物划分至越低的优先级。In a possible implementation, there are multiple targets in the surrounding environment of the smart device, and the rating unit can be used to: sort the multiple targets according to the risk factor of each target among the multiple targets; and determine the priority of each target according to the sorted multiple targets. In this way, the smart device can prioritize the targets according to their importance or urgency based on the degree of collision risk of different objects in the surrounding environment. The targets that are more likely to collide will be assigned a higher priority, and the targets that are less likely to collide will be assigned a lower priority.
在一种可能的实现中,上述评级单元可以用于:按照预设的优先级个数,将排序后的多个目标物划分至不同优先级,得到每个目标物的优先级。如此,智能设备可以不限定划分的优先级个数,当需要划分更多或更少的优先级时,智能设备调整优先级个数的参数大小即可。从而无需改变碰撞风险分析的方法,也能实现优先级级别的扩展。In a possible implementation, the rating unit can be used to classify the sorted multiple targets into different priorities according to the preset number of priorities to obtain the priority of each target. In this way, the smart device does not need to limit the number of priorities to be divided. When more or fewer priorities need to be divided, the smart device can adjust the parameter size of the number of priorities. Therefore, the expansion of priority levels can be achieved without changing the collision risk analysis method.
在一种可能的实现中,智能设备还可以包括:显示单元和获取单元。其中,显示单元,用于显示第一界面,第一界面用于输入优先级个数;获取单元,用于响应于用户的输入操作,获取用户输入的优先级个数,作为预设的优先级个数。如此,用户可以根据需要自定义划分的优先级个数,提升了用户的使用体验。In a possible implementation, the smart device may further include: a display unit and an acquisition unit. The display unit is used to display a first interface, and the first interface is used to input a priority number; the acquisition unit is used to respond to the user's input operation and obtain the priority number input by the user as the preset priority number. In this way, the user can customize the priority number according to the needs, thereby improving the user's experience.
在一种可能的实现中,智能设备还可以包括:取色单元和显示单元。其中,取色单元,用于根据预设的优先级级别与颜色的对应关系,以及每个目标物的优先级,确定每个目标物对应的颜色;显示单元,用于基于每个目标物对应的颜色显示多个目标物。如此,智能设备可以在显示屏上使用不同的颜色显示不同优先级的目标物,从而用户可以直接看出智能设备对周边环境内的目标物的优先级分级效果。In a possible implementation, the smart device may further include: a color picking unit and a display unit. The color picking unit is used to determine the color corresponding to each target object according to the correspondence between the preset priority level and the color and the priority of each target object; the display unit is used to display multiple targets based on the color corresponding to each target object. In this way, the smart device can use different colors to display targets of different priorities on the display screen, so that the user can directly see the priority grading effect of the smart device on the targets in the surrounding environment.
在一种可能的实现中，上述预设的优先级级别与预测模型的对应关系，可以包括：第一优先级对应第一预测模型，第二优先级对应第二预测模型，第一优先级高于第二优先级，第一预测模型的计算复杂度高于第二预测模型的计算复杂度。如此，智能设备可以对周边环境中不同优先级的目标物体，采用不同计算复杂度的预测模型进行未来运动轨迹的预测。实现了对周边环境内物体的行为预测的同时，也减少了计算资源的消耗，降低了预测时延。In a possible implementation, the correspondence between the preset priority levels and the prediction models may include: the first priority level corresponds to the first prediction model, the second priority level corresponds to the second prediction model, the first priority level is higher than the second priority level, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model. In this way, the smart device can use prediction models with different computational complexities to predict the future motion trajectories of target objects with different priorities in the surrounding environment. While achieving behavior prediction for the objects in the surrounding environment, this also reduces the consumption of computing resources and reduces the prediction delay.
第三方面,本申请提供了一种智能设备,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得智能设备执行上述第一方面任一项可能的实现中的目标行为预测方法。In a third aspect, the present application provides an intelligent device, comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the intelligent device executes the target behavior prediction method in any possible implementation of the first aspect.
第四方面,本申请提供了一种车载设备,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得车载设备执行上述第一方面任一项可能的实现中的目标行为预测方法。In a fourth aspect, the present application provides a vehicle-mounted device, comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the vehicle-mounted device executes the target behavior prediction method in any possible implementation of the first aspect.
第五方面,本申请提供了一种车辆,该车辆包含如本申请前述第四方面的车载设备。该车辆可以用于实现如上述第一方面任一项可能的实现方式中的目标行为预测方法。In a fifth aspect, the present application provides a vehicle, the vehicle comprising the vehicle-mounted device of the fourth aspect of the present application. The vehicle can be used to implement the target behavior prediction method in any possible implementation of the first aspect.
第六方面,本申请提供了一种机器人,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得机器人执行上述第一方面任一项可能的实现中的目标行为预测方法。In a sixth aspect, the present application provides a robot, comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the robot executes the target behavior prediction method in any possible implementation of the first aspect.
第七方面,本申请提供了一种目标行为预测装置,该装置包含在智能设备、机器人、车辆或车载设备中,该装置具有实现上述第一方面及第一方面的可能实现方式中任一方法中智能设备行为的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块或单元。In a seventh aspect, the present application provides a target behavior prediction device, which is included in an intelligent device, a robot, a vehicle, or an on-board device, and has the function of realizing the behavior of the intelligent device in any of the methods in the first aspect and the possible implementation of the first aspect. The function can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions.
第八方面,本申请提供了一种芯片系统,该芯片系统应用于智能设备。该芯片系统包括一个或多个接口电路和一个或多个处理器。该接口电路和处理器通过线路互联。该接口电路用于从智能设备的存储器接收信号,并向处理器发送该信号,该信号包括存储器中存储的计算机指令。当处理器执行计算机指令时,智能设备执行上述第一方面任一项可能的实现中的目标行为预测方法。In an eighth aspect, the present application provides a chip system, which is applied to a smart device. The chip system includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by a line. The interface circuit is used to receive a signal from a memory of the smart device and send the signal to the processor, where the signal includes a computer instruction stored in the memory. When the processor executes the computer instruction, the smart device executes the target behavior prediction method in any possible implementation of the first aspect above.
第九方面,本申请提供了一种计算机存储介质,包括计算机指令,当计算机指令在智能设备上运行时,使得智能设备执行上述第一方面任一项可能的实现中的目标行为预测方法。In a ninth aspect, the present application provides a computer storage medium comprising computer instructions, which, when executed on a smart device, enables the smart device to execute a target behavior prediction method in any possible implementation of the first aspect.
第十方面,本申请提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述第一方面任一项可能的实现中的目标行为预测方法。In a tenth aspect, the present application provides a computer program product, which, when executed on a computer, enables the computer to execute the target behavior prediction method in any possible implementation of the first aspect.
可以理解地,上述提供的第二方面及其任一种可能的实现的智能设备,第三方面的智能设备,第四方面的车载设备,第五方面的车辆,第六方面的机器人,第七方面的目标行为预测装置,第八方面的芯片系统,第九方面的计算机存储介质,第十方面的计算机程序产品所能达到的有益效果,可参考第一方面及其任一种可能的实现中的有益效果,此处不再赘述。It can be understood that the beneficial effects that can be achieved by the intelligent device of the second aspect and any possible implementation thereof, the intelligent device of the third aspect, the vehicle-mounted device of the fourth aspect, the vehicle of the fifth aspect, the robot of the sixth aspect, the target behavior prediction device of the seventh aspect, the chip system of the eighth aspect, the computer storage medium of the ninth aspect, and the computer program product of the tenth aspect can be referred to the beneficial effects of the first aspect and any possible implementation thereof, and will not be repeated here.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1为本申请实施例提供的一种车辆的结构示意图一;FIG1 is a structural schematic diagram 1 of a vehicle provided in an embodiment of the present application;
图2为本申请实施例提供的一种车辆的结构示意图二;FIG2 is a second structural schematic diagram of a vehicle provided in an embodiment of the present application;
图3为本申请实施例提供的一种计算机系统的结构示意图;FIG3 is a schematic diagram of the structure of a computer system provided in an embodiment of the present application;
图4为本申请实施例提供的一种云侧指令自动驾驶车辆的应用示意图一;FIG4 is a schematic diagram 1 of an application of a cloud-side command for an autonomous driving vehicle provided in an embodiment of the present application;
图5为本申请实施例提供的一种云侧指令自动驾驶车辆的应用示意图二;FIG5 is a second schematic diagram of an application of a cloud-side commanded autonomous driving vehicle provided in an embodiment of the present application;
图6为本申请实施例提供的一种目标行为预测方法的方法流程图;FIG6 is a method flow chart of a target behavior prediction method provided in an embodiment of the present application;
图7为本申请实施例提供的一种碰撞分析的示意图;FIG7 is a schematic diagram of a collision analysis provided in an embodiment of the present application;
图8为本申请实施例提供的一种车载显示屏的界面示意图一;FIG8 is a first schematic diagram of an interface of a vehicle-mounted display screen provided in an embodiment of the present application;
图9为本申请实施例提供的一种车载显示屏的界面示意图二。FIG. 9 is a second schematic diagram of an interface of a vehicle-mounted display screen provided in an embodiment of the present application.
具体实施方式DETAILED DESCRIPTION OF EMBODIMENTS
下面将结合本申请实施例中的附图，对本申请实施例中的技术方案进行描述。以下，术语“第一”、“第二”仅用于描述目的，而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此，限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。应当理解，在本申请中，“至少一个”是指一个或者多个，“多个”是指两个或两个以上。“和/或”用于描述关联对象的关联关系，表示可以存在三种关系，例如，“A和/或B”可以表示：只存在A，只存在B以及同时存在A和B三种情况，其中A，B可以是单数或者复数。The technical solutions in the embodiments of the present application will be described below in conjunction with the drawings in the embodiments of the present application. In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the features. It should be understood that in the present application, "at least one" means one or more, and "multiple" means two or more. "And/or" is used to describe the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: only A exists, only B exists, and both A and B exist, where A and B may be singular or plural.
本申请实施例的方案可以应用于智能设备。其中,智能设备可以是自动驾驶车辆、机器人、或其他具有自主行驶能力的电子设备,本申请实施例对此并不做限定。在一些实施例中,本申请实施例的方案也可以应用于具有控制上述智能设备的功能的其他设备(比如云端服务器、手机终端等)中。智能设备或者其他设备可以通过其包含的组件(包括硬件和软件),实施本申请实施例提供的目标行为预测方法,根据传感器采集到的数据,对智能设备的周边环境中的目标物进行碰撞风险检测,从而确定目标物的碰撞风险程度优先级,使得智能设备或者其他设备可以对高优先级的目标物,采用精细化、计算复杂度高的预测模型进行未来运动轨迹的预测,对低优先级的目标物,采用简单化、计算复杂度低的预测模型进行未来运动轨迹的预测。The scheme of the embodiment of the present application can be applied to smart devices. Among them, the smart device can be an autonomous driving vehicle, a robot, or other electronic device with autonomous driving capability, and the embodiment of the present application does not limit this. In some embodiments, the scheme of the embodiment of the present application can also be applied to other devices (such as cloud servers, mobile phone terminals, etc.) with the function of controlling the above-mentioned smart devices. Smart devices or other devices can implement the target behavior prediction method provided by the embodiment of the present application through the components (including hardware and software) contained therein, and perform collision risk detection on the target objects in the surrounding environment of the smart device according to the data collected by the sensor, so as to determine the priority of the collision risk degree of the target object, so that the smart device or other device can use a refined and high-computational complexity prediction model to predict the future motion trajectory of high-priority targets, and use a simplified and low-computational complexity prediction model to predict the future motion trajectory of low-priority targets.
可以理解,本申请实施例提供的目标行为预测方法,并非对智能设备的周边环境中的所有目标物,都采用精细化、计算复杂度高的预测模型进行未来运动轨迹的预测,而是根据碰撞风险程度,将智能设备的周边环境中的目标物划分为不同的优先级,从而可以将处于低优先级的目标物,采用简单化、计算复杂度低的预测模型进行未来运动轨迹的预测,大大减少了计算资源的消耗,降低了预测时延。It can be understood that the target behavior prediction method provided in the embodiment of the present application does not use a refined, high-complexity prediction model to predict the future motion trajectory of all targets in the surrounding environment of the smart device, but divides the targets in the surrounding environment of the smart device into different priorities according to the degree of collision risk. Therefore, the future motion trajectory of low-priority targets can be predicted using a simplified, low-complexity prediction model, which greatly reduces the consumption of computing resources and reduces the prediction delay.
以下将以智能设备为自动驾驶车辆(以下简称车辆)为例,对本申请实施例提供的方案进行示意性说明。The following will take the smart device as an autonomous driving vehicle (hereinafter referred to as the vehicle) as an example to schematically illustrate the solution provided in the embodiment of the present application.
请参阅图1,图1示出了本申请实施例提供的一种车辆100的功能框图。其中,车辆100可以包括设置于车辆100中和/或车辆100的车身的各种设备、部件等。在一个实施例中,设置于车辆100中的设备、部件可以包括但不限于自动驾驶系统、自动驾驶功能应用。可以理解,具备一定自动驾驶能力的车辆中通常设置有自动驾驶系统。Please refer to FIG. 1, which shows a functional block diagram of a vehicle 100 provided in an embodiment of the present application. The vehicle 100 may include various devices, components, etc. disposed in the vehicle 100 and/or the body of the vehicle 100. In one embodiment, the devices and components disposed in the vehicle 100 may include, but are not limited to, an automatic driving system and an automatic driving function application. It is understood that an automatic driving system is usually disposed in a vehicle with a certain automatic driving capability.
车辆100可以包括各种子系统,例如行进系统102、传感器系统104、控制系统106、一个或多个外围设备108以及电源110、计算机系统112、分级预测系统114和用户接口116。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。The vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, and a power source 110, a computer system 112, a hierarchical prediction system 114, and a user interface 116. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each subsystem and element of the vehicle 100 may be interconnected by wire or wirelessly.
其中,行进系统102可包括用于为车辆100提供动力运动的组件。在一个实施例中,行进系统102可包括引擎、能量源、传动装置和车轮/轮胎。其中,引擎可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。The travel system 102 may include components for providing powered motion for the vehicle 100. In one embodiment, the travel system 102 may include an engine, an energy source, a transmission, and wheels/tires. The engine may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine.
引擎可以将能量源转换成机械能量。能量源的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源也可以为车辆100的其他系统提供能量。The engine can convert the energy source into mechanical energy. Examples of energy sources include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source can also provide energy for other systems of the vehicle 100.
传动装置可以将来自引擎的机械动力传送到车轮。传动装置可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置还可以包括其他器件,比如离合器。其中,驱动轴可以包括可耦合到一个或多个车轮的一个或多个轴。The transmission can transmit mechanical power from the engine to the wheels. The transmission can include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission can also include other devices, such as a clutch. Among them, the drive shaft can include one or more shafts that can be coupled to one or more wheels.
传感器系统104(又称“采集设备”)可包括用于感知关于车辆100周边环境信息的若干个传感器。例如,传感器系统104可包括定位系统(定位系统可以是全球定位系统(global positioning system,GPS)系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)、雷达、激光测距仪以及相机。传感器系统104还可包括被监视车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(如位置、形状、方向、速度等)。这种检测和识别是车辆100自动驾驶的安全操作的关键功能。The sensor system 104 (also called "collection device") may include several sensors for sensing information about the surrounding environment of the vehicle 100. For example, the sensor system 104 may include a positioning system (the positioning system may be a global positioning system (GPS) system, or a Beidou system or other positioning system), an inertial measurement unit (IMU), a radar, a laser rangefinder, and a camera. The sensor system 104 may also include sensors of the internal systems of the monitored vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (such as position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous driving of the vehicle 100.
其中,定位系统可用于估计车辆100的地理位置。IMU可用于基于惯性加速度来感测车辆100的位置和朝向变化。在一个实施例中,IMU可以是加速度计和陀螺仪的组合。Among other things, the positioning system may be used to estimate the geographic location of the vehicle 100. The IMU may be used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU may be a combination of an accelerometer and a gyroscope.
雷达可利用无线电信号来感测车辆100的周边环境内的物体,如行人、骑行人(即骑自行车的人)、摩托车、其他车辆等各种类型的障碍物。在一些实施例中,除了感测物体以外,雷达还可用于感测物体的速度、位置、前进方向中的一种或多种状态。 The radar can use radio signals to sense objects in the surrounding environment of the vehicle 100, such as pedestrians, cyclists (i.e., bicyclists), motorcycles, other vehicles, and other types of obstacles. In some embodiments, in addition to sensing objects, the radar can also be used to sense one or more states of the object's speed, position, and direction of travel.
激光测距仪可利用激光来感测车辆100所位于的环境中的物体。在一些实施例中,激光测距仪可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。A laser rangefinder may utilize laser light to sense objects in the environment in which the vehicle 100 is located. In some embodiments, a laser rangefinder may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
相机可用于捕捉车辆100的周边环境的多个图像。相机可以是静态相机或视频相机。The camera may be used to capture multiple images of the surroundings of the vehicle 100. The camera may be a still camera or a video camera.
控制系统106为控制车辆100及其组件的操作。控制系统106可包括各种元件,如转向系统、油门、制动单元、计算机视觉系统、路线控制系统以及障碍规避系统等。其中,转向系统可操作来调整车辆100的前进方向。可选地,转向系统可以为方向盘系统。油门可用于控制引擎的操作速度并进而控制车辆100的速度。制动单元可用于控制车辆100减速。The control system 106 is for controlling the operation of the vehicle 100 and its components. The control system 106 may include various components, such as a steering system, a throttle, a brake unit, a computer vision system, a route control system, and an obstacle avoidance system. Among them, the steering system can be operated to adjust the forward direction of the vehicle 100. Alternatively, the steering system can be a steering wheel system. The throttle can be used to control the operating speed of the engine and thus control the speed of the vehicle 100. The brake unit can be used to control the deceleration of the vehicle 100.
计算机视觉系统可以处理和分析相机捕捉到的图像,以便识别车辆100周边环境中的各种类型的物体和/或特征。计算机视觉系统可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统可以用于为环境绘制地图、跟踪物体、估计物体的速度等。The computer vision system can process and analyze the images captured by the camera to identify various types of objects and/or features in the environment surrounding the vehicle 100. The computer vision system can use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system can be used to map the environment, track objects, estimate the speed of objects, etc.
路线控制系统用于确定车辆100的行驶路线。在一些实施例中,路线控制系统可结合来自传感器、GPS和一个或多个预定地图的数据,为车辆100确定行驶路线。The route control system is used to determine the driving route of the vehicle 100. In some embodiments, the route control system can combine data from sensors, GPS, and one or more predetermined maps to determine the driving route for the vehicle 100.
障碍规避系统用于识别、评估和避免或者以其他方式越过车辆100的环境中的障碍物。The obstacle avoidance system is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the vehicle 100 .
当然,在一个实例中,控制系统106可以增加或替换地包括除了上述组件以外的组件。或者也可以减少一部分上述组件。Of course, in one example, the control system 106 may include additional or alternative components other than the above components, or may reduce some of the above components.
车辆100通过外围设备108与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统、车载电脑、麦克风和/或扬声器。在一些实施例中,外围设备108提供车辆100的用户与用户接口116交互手段。例如,车载电脑可向车辆100的用户提供信息。用户接口116还可操作车载电脑来接收用户的输入。车载电脑可以通过触摸屏进行操作。在其他情况中,外围设备108可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风可接收车辆100的用户输入的音频(例如,语音命令或其他音频输入)。类似地,扬声器可向车辆100的用户输出音频。The vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral device 108. The peripheral device 108 may include a wireless communication system, an onboard computer, a microphone, and/or a speaker. In some embodiments, the peripheral device 108 provides a means for the user of the vehicle 100 to interact with the user interface 116. For example, the onboard computer may provide information to the user of the vehicle 100. The user interface 116 may also operate the onboard computer to receive the user's input. The onboard computer may be operated via a touch screen. In other cases, the peripheral device 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle. For example, a microphone may receive audio (e.g., voice commands or other audio input) input by a user of the vehicle 100. Similarly, a speaker may output audio to the user of the vehicle 100.
车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器,处理器执行存储在例如数据存储器这样的非暂态计算机可读介质中的指令。计算机系统112还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。Some or all functions of the vehicle 100 are controlled by a computer system 112. The computer system 112 may include at least one processor that executes instructions stored in a non-transitory computer-readable medium such as a data storage device. The computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
处理器可以包括一个或多个处理单元,例如:处理器可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor may include one or more processing units, for example, the processor may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. Among them, different processing units may be independent devices or integrated in one or more processors.
在一些实施例中,存储器可包含指令(例如,程序逻辑),指令可被处理器执行来执行车辆100的各种功能,包括以上描述的那些功能。存储器也可包含额外的指令,包括行进系统102、传感器系统104、控制系统106、外围设备108和分级预测系统114中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。除了指令以外,存储器还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度等车辆数据,以及其他信息,如车辆周围环境中的各种物体的位置、方位、速度等。这些信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100、计算机系统112和分级预测系统114使用。In some embodiments, the memory may contain instructions (e.g., program logic) that can be executed by the processor to perform various functions of the vehicle 100, including those described above. The memory may also contain additional instructions, including instructions for one or more of the travel system 102, the sensor system 104, the control system 106, the peripheral device 108, and the hierarchical prediction system 114 to send data, receive data from it, interact with it, and/or control it. In addition to instructions, the memory may also store data, such as road maps, route information, vehicle data such as the vehicle's location, direction, speed, and other information, such as the location, orientation, speed, etc. of various objects in the vehicle's surroundings. This information can be used by the vehicle 100, the computer system 112, and the hierarchical prediction system 114 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
示例性地,存储器可以获取车辆100基于传感器系统104中的传感器获取到的周围环境中的物体信息,例如其他车辆、行人等障碍物的位置,障碍物与车辆100的距离等信息。存储器还可以从传感器系统104或车辆100的其他组件获取环境信息,环境信息例如可以为车辆当前所处环境附近是否有绿化带、车道、行人等,或者车辆通过机器学习算法计算当前所处环境附近是否存在绿化带、车道、行人等。除上述内容外,存储器还可以存储该车辆自身的状态信息,以及与该车辆有交互的目标物(行人、其他车辆等)的状态信息,其中,车辆的状态信息包括但不限于车辆的位置、速度、加速度、航向角等。Exemplarily, the memory can obtain information about objects in the surrounding environment obtained by the vehicle 100 based on the sensors in the sensor system 104, such as the location of obstacles such as other vehicles and pedestrians, the distance between the obstacles and the vehicle 100, and other information. The memory can also obtain environmental information from the sensor system 104 or other components of the vehicle 100. The environmental information can be, for example, whether there are green belts, lanes, pedestrians, etc. near the current environment of the vehicle, or whether there are green belts, lanes, pedestrians, etc. near the current environment calculated by the vehicle through a machine learning algorithm. In addition to the above content, the memory can also store the status information of the vehicle itself, as well as the status information of the target objects (pedestrians, other vehicles, etc.) that interact with the vehicle, wherein the status information of the vehicle includes but is not limited to the location, speed, acceleration, heading angle, etc. of the vehicle.
在一些实施例中，上述处理器还可以执行本申请实施例的目标行为预测方法，以减少计算资源、降低预测时延。示例性地，处理器可从存储器获取上述信息，并基于车辆所处环境的环境信息、车辆自身的状态信息、目标物的状态信息等确定目标物的优先级级别，以基于该优先级级别确定用于预测该目标物的运动轨迹的预测模型，从而控制车辆100对高优先级的目标物，采用精细化、计算复杂度高的预测模型进行未来运动轨迹的预测，对低优先级的目标物，采用简单化、计算复杂度低的预测模型进行未来运动轨迹的预测。其中具体的目标行为预测方法可以参照下文介绍。In some embodiments, the above processor may also execute the target behavior prediction method of the embodiments of the present application, so as to reduce computing resources and prediction delay. For example, the processor may obtain the above information from the memory, and determine the priority level of a target object based on the environmental information of the environment in which the vehicle is located, the state information of the vehicle itself, the state information of the target object, and so on, so as to determine, based on that priority level, the prediction model used to predict the motion trajectory of the target object. In this way, the vehicle 100 is controlled to use a refined prediction model with high computational complexity to predict the future motion trajectory of high-priority targets, and to use a simplified prediction model with low computational complexity to predict the future motion trajectory of low-priority targets. The specific target behavior prediction method is described below.
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104、控制系统106和分级预测系统114)以及从用户接口116接收的输入来控制车辆100的功能。例如,计算机系统112可利用来自控制系统106的输入,以便控制转向单元来避免由传感器系统104和障碍规避系统检测到的障碍物。在一些实施例中,计算机系统112可对车辆100及其子系统的许多方面提供控制。The computer system 112 may control functions of the vehicle 100 based on input received from various subsystems (e.g., the travel system 102, the sensor system 104, the control system 106, and the hierarchical prediction system 114) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering unit to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system. In some embodiments, the computer system 112 may provide control over many aspects of the vehicle 100 and its subsystems.
分级预测系统114可基于从各种子系统(例如,行进系统102、传感器系统104、控制系统106)输入的车辆数据以及目标物的运动状态,判断车辆100周边环境中的行人、骑行人、摩托车、其他车辆等目标物的碰撞风险程度将其划分为不同优先级,以采用不同的预测模型预测不同优先级的目标物的未来运动轨迹,实现车辆100的分级预测功能。The hierarchical prediction system 114 can determine the degree of collision risk of pedestrians, cyclists, motorcycles, other vehicles and other targets in the surrounding environment of the vehicle 100 based on the vehicle data input from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and the motion state of the targets, and classify them into different priorities, so as to use different prediction models to predict the future motion trajectories of targets of different priorities, thereby realizing the hierarchical prediction function of the vehicle 100.
可以理解的是,图1所示的结构仅为示意,其并不对本申请实施例中车辆的结构造成限定。在本申请另一些实施例中,车辆100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置,或者具有与图1所示等同功能或比图1所示功能更多的不同的配置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It is understood that the structure shown in FIG. 1 is for illustration only and does not limit the structure of the vehicle in the embodiment of the present application. In other embodiments of the present application, the vehicle 100 may include more or fewer components than shown in the figure, or combine certain components, or split certain components, or arrange components differently, or have different configurations with the same functions as shown in FIG. 1 or more functions than shown in FIG. 1. The components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
在道路行进的自动驾驶汽车,如上面的车辆100,可以根据其周围环境内的物体确定对当前速度的调整指令。其中,车辆100周围环境内的物体可以是交通控制设备、或者绿化带等其它类型的静态物体,也可以是行人、骑行人、摩托车、其他车辆等各种类型的动态物体。在一些示例中,车辆100可以独立地考虑周围环境内的每个物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,来确定车辆100的速度调整指令。An autonomous vehicle traveling on a road, such as vehicle 100 above, can determine an adjustment instruction for the current speed based on objects in its surrounding environment. The objects in the surrounding environment of vehicle 100 can be traffic control devices, other types of static objects such as green belts, or various types of dynamic objects such as pedestrians, cyclists, motorcycles, other vehicles, etc. In some examples, vehicle 100 can consider each object in the surrounding environment independently and determine the speed adjustment instruction of vehicle 100 based on the respective characteristics of the object, such as its current speed, acceleration, distance from the vehicle, etc.
可选地,作为自动驾驶汽车的车辆100或者与其相关联的计算机设备(如计算机系统112、计算机视觉系统、存储器)可以基于所识别的物体的特性和车辆100未来一段时间的行驶路线,评估所识别的物体与车辆100之间发生碰撞的风险系数,然后基于该评估得到风险系数,将所识别的物体划分为不同的优先级,从而可以使用不同的预测模型预测不同优先级的物体的未来运动轨迹,达到减少计算资源、降低预测时延的目的。Optionally, the vehicle 100 as an autonomous vehicle or a computer device associated therewith (such as the computer system 112, a computer vision system, and a memory) can evaluate the risk factor of a collision between the identified object and the vehicle 100 based on the characteristics of the identified object and the driving route of the vehicle 100 in the future, and then divide the identified objects into different priorities based on the risk factor obtained from the evaluation, so that different prediction models can be used to predict the future motion trajectories of objects of different priorities, thereby achieving the purpose of reducing computing resources and reducing prediction delays.
可选地,车辆100能够基于预测的物体的未来运动轨迹,来调整它的驾驶策略。换句话说,自动驾驶汽车能够基于预测的物体的未来运动轨迹,确定车辆需要调整到什么稳定状态(例如,加速、减速、转向或者停止等)。在这个过程中,也可以考虑其它因素来确定车辆100的速度调整指令,诸如,车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。Optionally, the vehicle 100 can adjust its driving strategy based on the predicted future motion trajectory of the object. In other words, the autonomous vehicle can determine what stable state the vehicle needs to adjust to (e.g., accelerate, decelerate, turn, or stop, etc.) based on the predicted future motion trajectory of the object. In this process, other factors can also be considered to determine the speed adjustment instructions of the vehicle 100, such as the lateral position of the vehicle 100 in the road, the curvature of the road, the proximity of static and dynamic objects, etc.
除了提供调整自动驾驶汽车的速度的指令之外,计算机设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持自动驾驶汽车与附近的物体(例如相邻车道中的轿车)的安全横向和纵向距离。In addition to providing instructions to adjust the speed of the autonomous vehicle, the computer device may also provide instructions to modify the steering angle of vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains a safe lateral and longitudinal distance between the autonomous vehicle and nearby objects (e.g., cars in adjacent lanes).
可以理解,上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。例如,车辆100也可以是智能家居领域中具备自主行驶能力的智能车、智能机器人等。It is understood that the vehicle 100 may be a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a lawn mower, an entertainment vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, and a cart, etc., and the present application embodiment does not make any special restrictions. For example, the vehicle 100 may also be a smart car or a smart robot with autonomous driving capability in the field of smart home.
在本申请的另一些实施例中,自动驾驶车辆还可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。In other embodiments of the present application, the autonomous driving vehicle may further include a hardware structure and/or a software module to implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
参见图2,示例性地,车辆200中可以包括以下模块:Referring to FIG. 2 , illustratively, the vehicle 200 may include the following modules:
环境感知模块201：用于通过车载传感器和/或路侧传感器获取车辆200周边环境中的其他车辆、行人的信息等。可选地，车辆200也可以是图1中的车辆100。其中，路侧传感器和车载传感器可以是激光雷达、毫米波雷达、视觉传感器等。感知模块201获取传感器原始采集的视频流数据、雷达的点云数据等，然后对这些原始的视频流数据、雷达的点云数据进行处理，得到可识别的结构化的人和其他车辆的位置、大小、速度、前进方向等数据。其中，由于传感器的种类多样，感知模块201可以根据所有或者某一类或者某一个传感器采集到的数据，来确定车辆200周边环境中的其他车辆、行人的位置、速度、前进方向等信息。感知模块201，还用于将其根据传感器获取到的数据，确定的周边环境中的其他车辆、行人的位置、速度、前进方向等信息发送给分级模块203。Environmental perception module 201: used to obtain information about other vehicles, pedestrians, and the like in the surrounding environment of the vehicle 200 through vehicle-mounted sensors and/or roadside sensors. Optionally, the vehicle 200 may also be the vehicle 100 in FIG. 1. The roadside sensors and vehicle-mounted sensors may be lidars, millimeter-wave radars, vision sensors, etc. The perception module 201 obtains the raw video stream data collected by the sensors, the radar point cloud data, and so on, and then processes these raw video stream data and radar point cloud data to obtain identifiable, structured data such as the position, size, speed, and direction of travel of people and other vehicles. Since the sensors are of various kinds, the perception module 201 can determine information such as the position, speed, and direction of travel of other vehicles and pedestrians in the surrounding environment of the vehicle 200 based on the data collected by all of the sensors, by a certain type of sensor, or by a certain individual sensor. The perception module 201 is also used to send the information it determines from the sensor data, such as the position, speed, and direction of travel of other vehicles and pedestrians in the surrounding environment, to the classification module 203.
路线获取模块202,用于根据车辆200的规划路线,获取车辆200未来一段时间的行驶路线。可选地,可以通过道路导航(route)模块获取车辆200的规划路线。当车辆200需要前往某个地方时,route模块可以根据现有的地图或路网信息、起始地的位置信息以及目的地的位置信息,指导车辆200按照什么样的道路行驶,以便顺利完成从起始地到目的地的行驶。也即,route模块能够导航出一条从起始地到目的地的规划路线。可选地,route模块可以通过导航请求获取车辆200的规划路线。导航请求可以包括车辆200的起始地的位置信息以及目的地的位置信息。比如车辆200可以通过用户点击或者触控车载导航的屏幕获取导航请求,也可以通过用户的语音指令获取导航请求。然后route模块可以利用车载GPS配合电子地图来进行路线规划,进而可以获得车辆200的规划路线,并根据车辆200的规划路线,获取车辆200未来一段时间的行驶路线。The route acquisition module 202 is used to obtain the driving route of the vehicle 200 for a period of time in the future according to the planned route of the vehicle 200. Optionally, the planned route of the vehicle 200 can be obtained through a road navigation (route) module. When the vehicle 200 needs to go to a certain place, the route module can guide the vehicle 200 to travel on what kind of road according to the existing map or road network information, the location information of the starting point and the location information of the destination, so as to successfully complete the travel from the starting point to the destination. That is, the route module can navigate a planned route from the starting point to the destination. Optionally, the route module can obtain the planned route of the vehicle 200 through a navigation request. The navigation request may include the location information of the starting point of the vehicle 200 and the location information of the destination. For example, the vehicle 200 can obtain the navigation request by the user clicking or touching the screen of the vehicle navigation, or by the user's voice command. Then the route module can use the vehicle GPS in conjunction with the electronic map to plan the route, and then obtain the planned route of the vehicle 200, and obtain the driving route of the vehicle 200 for a period of time in the future according to the planned route of the vehicle 200.
需要说明的是,本申请提供的方案可以通过多种方式获取车辆200的规划路线,相关技术中关于获取车辆的导航信息的方式,本申请实施例均可以采用。例如,在没有地图时,也可以通过道路结构认知模块提供的车辆200所在车道等效路由,获取车辆200的规划路线。其中,道路结构认知模块用于通过车载传感器和/或路侧传感器获取道路信息,如道路边界信息、车辆200所在的车道信息、车道边界信息等,以根据道路信息确定车辆200所在车道的道路结构,从而车辆200可以根据道路结构,生成车辆200的行驶路线。It should be noted that the solution provided in the present application can obtain the planned route of vehicle 200 in a variety of ways, and the methods for obtaining vehicle navigation information in the relevant technology can be adopted in the embodiments of the present application. For example, when there is no map, the planned route of vehicle 200 can also be obtained through the equivalent route of the lane where vehicle 200 is located provided by the road structure recognition module. Among them, the road structure recognition module is used to obtain road information through on-board sensors and/or roadside sensors, such as road boundary information, lane information where vehicle 200 is located, lane boundary information, etc., to determine the road structure of the lane where vehicle 200 is located based on the road information, so that vehicle 200 can generate the driving route of vehicle 200 based on the road structure.
风险分级模块203,用于从环境感知模块201获取车辆200周边环境中的其他车辆、行人的位置、速度、前进方向等信息,并用于从路线获取模块202获取车辆200未来一段时间的行驶路线,然后根据获取到的车辆200未来一段时间的行驶路线以及其他车辆、行人的位置、速度、前进方向等信息,对其他车辆、行人进行碰撞分析,得到其他车辆、行人与车辆200未来发生碰撞的可能性大小即风险系数,以根据风险系数将其他车辆、行人划分为不同的优先级。该风险分级模块203,还用于根据预设的参数n(n为正整数),确定要划分的优先级级别为等级1、等级2、等级3、……等级n。从而可以通过调整预设的参数n的数值大小,来改变要划分的优先级级别的多少。例如,增大预设的参数n的数值,即可实现优先级级别的扩展。该风险分级模块203还用于将其最终得到的周边环境中的其他车辆、行人的优先级,发送给轨迹预测模块204。The risk classification module 203 is used to obtain the position, speed, and forward direction of other vehicles and pedestrians in the surrounding environment of the vehicle 200 from the environment perception module 201, and to obtain the driving route of the vehicle 200 for a period of time in the future from the route acquisition module 202. Then, based on the obtained driving route of the vehicle 200 for a period of time in the future and the position, speed, and forward direction of other vehicles and pedestrians, collision analysis is performed on other vehicles and pedestrians to obtain the possibility of collision between other vehicles and pedestrians and the vehicle 200 in the future, that is, the risk coefficient, so as to classify other vehicles and pedestrians into different priorities according to the risk coefficient. The risk classification module 203 is also used to determine the priority level to be divided into level 1, level 2, level 3, ... level n according to the preset parameter n (n is a positive integer). Therefore, the number of priority levels to be divided can be changed by adjusting the numerical value of the preset parameter n. For example, the priority level can be expanded by increasing the numerical value of the preset parameter n. The risk classification module 203 is also used to send the priority of other vehicles and pedestrians in the surrounding environment that it finally obtains to the trajectory prediction module 204.
轨迹预测模块204，用于接收风险分级模块203发送的车辆200周边环境中的其他车辆、行人的优先级，并根据接收到的其他车辆、行人的优先级，采用匹配的预测模型进行未来运动轨迹的预测。例如，对周边环境中的高优先级的其他车辆1、行人1，采用精细化、计算复杂度高的预测模型进行未来运动轨迹的预测，对周边环境中的低优先级的其他车辆2、行人2，采用简单化、计算复杂度低的预测模型进行未来运动轨迹的预测。可选地，当要划分的优先级级别为等级1、等级2、等级3、……等级n时，轨迹预测模块204还可以根据优先级级别，确定一一对应的预测模型类型为预测模型1、预测模型2、预测模型3、……预测模型n。从而可以使用不同的预测模型预测不同优先级的其他车辆、行人，达到减少计算资源、降低预测时延的目的。The trajectory prediction module 204 is used to receive the priorities of other vehicles and pedestrians in the surrounding environment of the vehicle 200 sent by the risk classification module 203, and to use matching prediction models to predict their future motion trajectories according to the received priorities. For example, for a high-priority other vehicle 1 and pedestrian 1 in the surrounding environment, a refined prediction model with high computational complexity is used to predict the future motion trajectory, and for a low-priority other vehicle 2 and pedestrian 2 in the surrounding environment, a simplified prediction model with low computational complexity is used to predict the future motion trajectory. Optionally, when the priority levels to be divided are level 1, level 2, level 3, ... level n, the trajectory prediction module 204 can also determine the one-to-one corresponding prediction model types as prediction model 1, prediction model 2, prediction model 3, ... prediction model n according to the priority levels. Therefore, different prediction models can be used to predict other vehicles and pedestrians of different priorities, so as to reduce computing resources and reduce prediction delay.
在一些实施例中,车辆200也可以包括显示模块(图2中未示出),用于接收风险分级模块203发送的车辆200周边环境中的其他车辆、行人的优先级,并使用不同的颜色显示不同优先级的其他车辆、行人,以更加方便直观的查看到车辆200周边环境中的其他车辆、行人的分级效果。In some embodiments, vehicle 200 may also include a display module (not shown in Figure 2) for receiving the priorities of other vehicles and pedestrians in the surrounding environment of vehicle 200 sent by the risk grading module 203, and using different colors to display other vehicles and pedestrians of different priorities, so as to more conveniently and intuitively view the grading effects of other vehicles and pedestrians in the surrounding environment of vehicle 200.
在一些实施例中,车辆200还可以包括存储组件(图2未示出),用于存储上述各个模块的可执行代码,运行这些可执行代码可实现本申请实施例的部分或全部方法流程。In some embodiments, the vehicle 200 may further include a storage component (not shown in FIG. 2 ) for storing executable codes of the above-mentioned modules. Running these executable codes may implement part or all of the method flow of the embodiments of the present application.
在一种可能的实现方式中，如图3所示，图1所示的计算机系统112可以包括处理器301，处理器301和系统总线302耦合，处理器301可以是一个或者多个处理器，其中每个处理器都可以包括一个或多个处理器核。显示适配器(video adapter)303可以驱动显示器324，显示器324和系统总线302耦合。系统总线302通过总线桥304和输入输出(input/output,I/O)总线(bus)305耦合，I/O接口306和I/O总线305耦合，I/O接口306和多种I/O设备进行通信，比如输入设备307(如:键盘,鼠标,触摸屏等)，多媒体盘(media tray)308(例如多媒体接口)。收发器309(可以发送和/或接收无线电通信信号)，摄像头310(可以捕捉静态和动态数字视频图像)和外部通用串行总线(universal serial bus,USB)端口311。其中，可选地，和I/O接口306相连接的接口可以是USB接口。In a possible implementation, as shown in FIG. 3, the computer system 112 shown in FIG. 1 may include a processor 301. The processor 301 is coupled to a system bus 302, and the processor 301 may be one or more processors, each of which may include one or more processor cores. A display adapter (video adapter) 303 may drive a display 324, and the display 324 is coupled to the system bus 302. The system bus 302 is coupled to an input/output (I/O) bus 305 through a bus bridge 304, an I/O interface 306 is coupled to the I/O bus 305, and the I/O interface 306 communicates with various I/O devices, such as an input device 307 (e.g., a keyboard, a mouse, a touch screen, etc.), a multimedia disk (media tray) 308 (e.g., a multimedia interface), a transceiver 309 (which can send and/or receive radio communication signals), a camera 310 (which can capture static and dynamic digital video images), and an external universal serial bus (USB) port 311. Optionally, the interface connected to the I/O interface 306 may be a USB interface.
其中,处理器301可以是任何传统处理器,包括精简指令集计算(reduced instruction set computer,RISC)处理器、复杂指令集计算(complex instruction set computer,CISC)处理器或上述的组合。可选地,处理器301还可以是诸如专用集成电路(application specific integrated circuit,ASIC)的专用装置。可选地,处理器301还可以是神经网络处理器或者是神经网络处理器和上述传统处理器的组合。The processor 301 may be any conventional processor, including a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination thereof. Alternatively, the processor 301 may also be a dedicated device such as an application specific integrated circuit (ASIC). Alternatively, the processor 301 may also be a neural network processor or a combination of a neural network processor and the conventional processors.
可选地,在本申请所述的各种实施例中,计算机系统112可位于远离自动驾驶车辆的地方,且与自动驾驶车辆无线通信。在其它方面,本申请所述的一些过程可设置在自动驾驶车辆内的处理器上执行,其它一些过程由远程处理器执行,包括采取执行单个操纵所需的动作。Alternatively, in various embodiments described herein, the computer system 112 may be located remotely from the autonomous vehicle and in wireless communication with the autonomous vehicle. In other aspects, some of the processes described herein may be executed on a processor within the autonomous vehicle, and other processes may be executed by a remote processor, including taking actions required to perform a single maneuver.
计算机系统112可以通过网络接口312和软件部署服务器(deploying server)313通信。可选的,网络接口312可以是硬件网络接口,比如网卡。网络(network)314可以是外部网络,比如因特网,也可以是内部网络,比如以太网或者虚拟私人网络(virtual private network,VPN),可选地,network314还可以为无线网络,比如无线保真(wireless fidelity,Wi-Fi)网络、蜂窝网络等。Computer system 112 can communicate with software deployment server 313 through network interface 312. Optionally, network interface 312 can be a hardware network interface, such as a network card. Network 314 can be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN). Optionally, network 314 can also be a wireless network, such as a wireless fidelity (Wi-Fi) network, a cellular network, etc.
硬盘驱动器接口315和系统总线302耦合。硬盘驱动器接口315和硬盘驱动器316相连接。系统内存317和系统总线302耦合。运行在系统内存317的数据可以包括计算机系统112的操作系统(OS)318和应用程序319。The hard disk drive interface 315 is coupled to the system bus 302. The hard disk drive interface 315 is connected to a hard disk drive 316. The system memory 317 is coupled to the system bus 302. The data running in the system memory 317 may include an operating system (OS) 318 and an application program 319 of the computer system 112.
操作系统(OS)318包括但不限于壳(shell)320和内核(kernel)321。shell 320是介于使用者和操作系统318的kernel 321间的一个接口。shell 320是操作系统318最外面的一层。shell管理使用者与操作系统318之间的交互:等待使用者的输入,向操作系统318解释使用者的输入,并且处理各种各样的操作系统318的输出结果。The operating system (OS) 318 includes, but is not limited to, a shell 320 and a kernel 321. The shell 320 is an interface between the user and the kernel 321 of the operating system 318. The shell 320 is the outermost layer of the operating system 318. The shell manages the interaction between the user and the operating system 318: waiting for user input, interpreting user input to the operating system 318, and processing various output results of the operating system 318.
内核321由操作系统318中用于管理存储器、文件、外设和系统资源的部分组成,直接与硬件交互。操作系统318的内核321通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理、IO管理等功能。The kernel 321 is composed of the part of the operating system 318 used to manage memory, files, peripherals and system resources, and directly interacts with the hardware. The kernel 321 of the operating system 318 usually runs processes and provides communication between processes, and provides functions such as CPU time slice management, interrupts, memory management, and IO management.
应用程序319包括自动驾驶相关的程序323,比如,管理自动驾驶汽车和路上障碍物交互的程序,控制自动驾驶汽车的行驶路线或者速度的程序,控制自动驾驶汽车和路上其他汽车/自动驾驶汽车交互的程序等。应用程序319也存在于deploying server 313的系统上。在一个实施例中,在需要执行应用程序319时,计算机系统112可以从deploying server 313下载应用程序319。Application 319 includes programs 323 related to autonomous driving, such as programs for managing the interaction between the autonomous driving car and obstacles on the road, programs for controlling the driving route or speed of the autonomous driving car, and programs for controlling the interaction between the autonomous driving car and other cars/autonomous driving cars on the road. Application 319 also exists on the system of deploying server 313. In one embodiment, when the application 319 needs to be executed, the computer system 112 can download the application 319 from the deploying server 313.
在本申请实施例中,应用程序319可以包括控制车辆根据分级预测系统114预测周边环境中目标物的未来运动轨迹的应用程序。计算机系统112的处理器301调用该应用程序319以执行如下步骤:根据目标物的运动状态进行碰撞分析,确定目标物的风险系数,风险系数用于指示目标物与车辆发生碰撞的可能性大小;根据风险系数,确定目标物的优先级;根据预设的优先级级别与预测模型的对应关系,确定与优先级匹配的目标预测模型;根据目标预测模型,预测目标物的运动轨迹。In the embodiment of the present application, the application 319 may include an application for controlling the vehicle to predict the future motion trajectory of the target object in the surrounding environment according to the hierarchical prediction system 114. The processor 301 of the computer system 112 calls the application 319 to perform the following steps: perform collision analysis according to the motion state of the target object, determine the risk coefficient of the target object, and the risk coefficient is used to indicate the possibility of collision between the target object and the vehicle; determine the priority of the target object according to the risk coefficient; determine the target prediction model that matches the priority according to the correspondence between the preset priority level and the prediction model; and predict the motion trajectory of the target object according to the target prediction model.
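The four steps performed when the processor 301 calls the application program 319 can be tied together in a single hypothetical pipeline; the helper functions below are trivial stand-ins so that the example runs on its own, and they correspond to the risk-analysis, rating and model-matching sketches given earlier in this description.

# Hypothetical end-to-end sketch of the steps listed above; all names and
# thresholds are placeholders, not values defined by this application.
def analyse_risk(target_gap_m):
    return target_gap_m                      # stand-in: remaining distance as risk

def rate_priority(risk, threshold=20.0):
    return 1 if risk < threshold else 2      # 1 = high priority, 2 = low priority

def match_model(priority):
    return "fine-grained model" if priority == 1 else "lightweight model"

def predict_targets(target_gaps):
    results = {}
    for target_id, gap in target_gaps.items():
        risk = analyse_risk(gap)             # step 1: collision analysis -> risk coefficient
        priority = rate_priority(risk)       # step 2: priority rating
        model = match_model(priority)        # step 3: match a prediction model
        results[target_id] = f"trajectory of {target_id} from {model}"  # step 4: predict
    return results

print(predict_targets({"ped_1": 8.0, "car_2": 55.0}))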
The sensor 322 is associated with the computer system 112 and is used to detect the environment around the computer system 112. For example, the sensor 322 can detect surrounding animals, vehicles, people and other objects, and can further detect the environment around those objects, for example the environment around a vehicle, such as the lane in which the vehicle is located. Optionally, if the computer system 112 is located on an autonomous vehicle, the sensor 322 may be at least one of a camera, an infrared sensor, a chemical detector, a microphone, or similar devices.

In other embodiments of the present application, the computer system 112 may also receive information from, or transfer information to, other computer systems. Alternatively, sensor data collected by the sensor system 120 of the vehicle 100 may be transferred to another computer, which processes the data. As shown in FIG. 4, data from the computer system 112 may be transmitted via a network to a cloud-side computer system 410 for further processing. The network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using the proprietary communication protocols of one or more companies, Ethernet, WiFi and HTTP, as well as various combinations of the foregoing. Such communication may be performed by any device capable of transmitting data to and from other computers, such as modems and wireless interfaces.
In one example, the computer system 112 may include a server having multiple computers, such as a load-balancing server cluster. To receive, process and transmit data from the computer system 112, the server 420 exchanges information with different nodes of the network. The computer system 410 may have a configuration similar to that of the computer system 112, with a processor 430, a memory 440, instructions 450 and data 460.

In one example, the data 460 of the server 420 may include weather-related information. For example, the server 420 may receive, monitor, store, update and transmit various kinds of information related to target objects in the surrounding environment. This information may include, for example, target categories, target shape information and target tracking information, in the form of reports, radar information, forecasts, and the like.
Referring to FIG. 5, an example of interaction between an autonomous vehicle and a cloud service center (cloud server) is shown. The cloud service center may receive information (such as data collected by vehicle sensors or other information) from vehicles 513 and 512 within its operating environment 500 via a network 511 such as a wireless communication network. The vehicles 513 and 512 may be autonomous vehicles.

Based on the received data, the cloud service center 520 runs its stored programs for controlling automated driving to control the vehicles 513 and 512. Such a program may be a program that manages the interaction between the autonomous vehicle and obstacles on the road, a program that controls the route or speed of the autonomous vehicle, or a program that controls the interaction between the autonomous vehicle and other autonomous vehicles on the road.

For example, the cloud service center 520 may provide portions of a map to the vehicles 513 and 512 via the network 511. In other examples, operations may be divided among different locations; for example, multiple cloud service centers may receive, verify, combine and/or send information reports. In some examples, information reports and/or sensor data may also be exchanged between vehicles. Other configurations are also possible.

In some examples, the cloud service center 520 sends the autonomous vehicle suggested solutions for possible driving situations in the environment (for example, notifying it of an obstacle ahead and how to drive around it). For example, the cloud service center 520 may assist the vehicle in determining how to proceed when facing a specific obstacle in the environment. The cloud service center 520 sends the autonomous vehicle a response indicating how the vehicle should proceed in a given scenario. For example, based on the collected sensor data, the cloud service center 520 may confirm the presence of a temporary stop sign on the road ahead, or, based on a "lane closed" sign and sensor data from construction vehicles, determine that a lane is closed due to construction. Accordingly, the cloud service center 520 sends a suggested operating mode for the vehicle to pass the obstacle (for example, instructing the vehicle to change lanes onto another road). When the cloud service center 520 observes the video stream within its operating environment 500 and has confirmed that the autonomous vehicle can safely and successfully pass the obstacle, the operating steps used by that autonomous vehicle can be added to the driving information map. Accordingly, this information can be sent to other vehicles in the area that may encounter the same obstacle, to help them not only recognize the closed lane but also know how to pass it.

The methods in the following embodiments may all be implemented in a vehicle having the above hardware structure, or in another device having the function of controlling a vehicle, such as an autonomous vehicle; they may also be implemented by a processor in the vehicle or in such a device, for example the processor 301 of the computer system 112 and the processor 430 mentioned above.
At present, a vehicle usually needs to predict the future motion trajectories of target objects in the surrounding environment, such as pedestrians, cyclists, motorcycles or other vehicles, so that the vehicle can respond in time and perform corresponding operations, for example planning its current driving path to avoid a collision. However, to guarantee the accuracy of trajectory prediction, the prediction models used are usually computationally complex. When a large number of surrounding target objects require trajectory prediction, a vehicle with limited computing resources not only consumes a large amount of computing power, degrading its overall performance, but is also unable to predict the behavior of all target objects in time, resulting in a large prediction delay and a slower response.

To solve the above problem, one solution considered in this application is to obtain ground-truth importance scores by simulating real vehicle driving data and to train a neural network, so that the trained network predicts the importance scores of target objects in the surrounding environment; behavior prediction is then performed on the targets ranked by importance score. However, the data for this approach is difficult to obtain, and the neural network itself consumes substantial computing resources.

Another solution considered in this application divides the space around the vehicle into different regions according to certain rules and, combined with the road topology, classifies the targets in the surrounding environment into three levels: caution, normal and ignore. However, this division depends on the road topology, and the rules are complex and hard to maintain. Moreover, different levels rely on different rules, so there is no unified comparison metric, no further levels can be added, and the scheme scales poorly.

On this basis, embodiments of the present application provide a target behavior prediction method and an intelligent device. The intelligent device can perform collision risk detection on the target objects in the surrounding environment to determine their priorities. For high-priority targets, a refined prediction model with high computational complexity can then be used to predict the future motion trajectory, while for low-priority targets a simplified prediction model with low computational complexity can be used. In this way, refined, computationally expensive prediction models are not applied to every target in the surrounding environment; instead, the targets are divided into different priorities according to their collision risk, and low-priority targets are predicted with simplified, low-complexity models, which greatly reduces the consumption of computing resources while also reducing the prediction delay.

It can be understood that, because the solution provided by the embodiments of the present application prioritizes the targets in the surrounding environment by computing collision risk coefficients, the risk coefficients of different targets can be compared directly, giving a unified comparison metric. Moreover, the solution does not rely on a map and does not require data-driven training; it can be used both with and without a map, and its computational complexity is low.
In the following, taking the intelligent device being an autonomous vehicle (hereinafter referred to as the vehicle) as an example, a target behavior prediction method provided by an embodiment of the present application is described with reference to the accompanying drawings. As shown in FIG. 6, the target behavior prediction method may include:

S610. The vehicle obtains the motion states of target objects in the surrounding environment.

The target object may be an obstacle in the vehicle's surrounding environment, such as a pedestrian, a cyclist, a motorcycle or another vehicle. Optionally, the target object may be a movable object in the surrounding environment, or a stationary object such as a roadblock or a roadside trash can. In the embodiments of the present application, for ease of distinction, the vehicle to which the target behavior prediction method is applied is referred to as the ego vehicle, and other vehicles in its surrounding environment are referred to as other vehicles.

In the embodiments of the present application, the motion state of a target object may include the position of the target object, its speed, and its heading, that is, the direction of its velocity. Optionally, the motion state may also include the distance between the target object and the ego vehicle.
Optionally, the motion state of the target object may be detected by the environment perception module in FIG. 2. As one implementation, the environment perception module may include a radar, a laser rangefinder, a camera, or the like. As another implementation, the environment perception module may include the sensor 322 in FIG. 3.

In some embodiments, step S610 may also include obtaining a target driving route of the ego vehicle.

The target driving route may refer to the driving route of the ego vehicle within a preset future time period starting from the current moment. The preset time period may be set reasonably in advance according to the application scenario, and the embodiments of the present application do not limit its length. For example, the preset time period may be 10 seconds (s).

Optionally, the vehicle may determine the target driving route of the ego vehicle according to parameters such as the planned route, position and speed of the ego vehicle. For example, based on the ego vehicle's position and speed, the vehicle can determine the approximate distance the ego vehicle will travel along the planned route within the preset future time period, and thereby extract the target driving route from the planned route.
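As an illustration of this step, the sketch below extracts such a segment from a planned route, assuming the route is given as a polyline of (x, y) points in metres; the nearest-point start and the 10 s horizon are illustrative assumptions rather than requirements of the method, and the function name is hypothetical.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def target_route_segment(planned_route: List[Point], ego_position: Point,
                         ego_speed: float, horizon_s: float = 10.0) -> List[Point]:
    # Approximate length the ego vehicle will cover within the preset horizon.
    travel_len = ego_speed * horizon_s
    # Start from the route point closest to the ego vehicle's current position.
    start = min(range(len(planned_route)),
                key=lambda i: math.dist(planned_route[i], ego_position))
    segment, accumulated = [planned_route[start]], 0.0
    for p, q in zip(planned_route[start:], planned_route[start + 1:]):
        accumulated += math.dist(p, q)
        segment.append(q)
        if accumulated >= travel_len:
            break
    return segment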
Optionally, the target driving route of the ego vehicle may be obtained by the route acquisition module 202 in FIG. 2. As one implementation, the route acquisition module 202 may include the route control system in the control system 106 of FIG. 1.

Optionally, the route acquisition module 202 may determine the target driving route of the ego vehicle based on parameters such as the planned route, position and speed of the ego vehicle. For example, the speed of the ego vehicle may be detected by a speed sensor; the position of the ego vehicle may be determined by a positioning system, for example the positioning system of the vehicle 100 in FIG. 1; and the planned route of the ego vehicle may be determined by a road navigation (route) module.

In some embodiments, when no planned route exists for the ego vehicle, the vehicle may also determine the target driving route according to the road structure of the lane in which the ego vehicle is located. The road structure of a lane includes the lane's driving direction, its boundary information, and so on. Optionally, the road structure of the lane may be determined by a road structure recognition module.

S620. The vehicle performs collision analysis according to the motion state of the target object to determine the risk coefficient of the target object.

The risk coefficient indicates how likely the target object is to collide with the ego vehicle. In the embodiments of the present application, the vehicle can perform collision analysis according to the motion state of the target object to evaluate the risk of a future collision between the target object and the ego vehicle.
In the embodiments of the present application, step S620 may also include sampling the target driving route of the ego vehicle to obtain multiple sampling positions on the target driving route. Optionally, the vehicle may treat each sampling position as a position where the ego vehicle and the target object may collide in the future, and perform collision analysis accordingly. By using the ego vehicle's future driving route to evaluate the collision risk, the vehicle takes into account the future interaction between the ego vehicle and the target object, avoiding the low trajectory-prediction accuracy that would result from a later misclassification of priorities.

Sampling the target driving route of the ego vehicle can be understood as selecting multiple driving position points on the target driving route as the sampling positions. Optionally, the vehicle may select one position point as a sampling position every specified distance or every specified time interval along the target driving route, thereby obtaining multiple sampling positions. Optionally, the vehicle may also randomly select multiple position points on the target driving route as the sampling positions.

Optionally, the number of sampling positions may be fixed, that is, when the vehicle samples the target driving route a fixed number of times, a fixed number of sampling positions are obtained. For example, when the number of sampling positions is fixed at 8, the vehicle may randomly select 8 driving position points on the target driving route as the 8 sampling positions.

Optionally, the number of sampling positions may also be generated randomly, for example by drawing a number from 5 to 10 as the number of sampling positions.
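The two sampling variants described above (a point every specified distance, or a fixed or randomly drawn number of points) could look like the following sketch; the spacing and count values, and the even spreading of the fixed-count variant, are illustrative assumptions.

import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def sample_route(route_points: List[Point], spacing_m: Optional[float] = None,
                 num_samples: int = 8) -> List[Point]:
    if spacing_m is not None:
        # One sampling position roughly every spacing_m metres along the polyline.
        samples, accumulated = [route_points[0]], 0.0
        for p, q in zip(route_points, route_points[1:]):
            accumulated += math.dist(p, q)
            if accumulated >= spacing_m:
                samples.append(q)
                accumulated = 0.0
        return samples
    # Fixed number of positions (the count itself could also be drawn at random,
    # e.g. between 5 and 10), spread evenly over the route.
    step = max(1, len(route_points) // num_samples)
    return route_points[::step][:num_samples]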
As one implementation, for each sampling position on the target driving route, the vehicle may determine, according to the motion state of the target object, the remaining collision distance of the target object and the ego vehicle relative to that sampling position after a specified time period, and use this remaining collision distance as the collision risk coefficient of that sampling position. The vehicle may then determine the risk coefficient of the target object from the collision risk coefficients of all sampling positions on the target driving route.

The specified time period may be set reasonably in advance according to the application scenario, and the embodiments of the present application do not limit its length. For example, the specified time period may be 3 seconds (s).

Optionally, the vehicle may determine, according to the motion state of the target object, a first distance between the target object and a target sampling position after the specified time period, where the target sampling position is any one of the multiple sampling positions. At the same time, the vehicle may determine a second distance between the ego vehicle and that target sampling position after the specified time period. The vehicle may take the sum of the first distance and the second distance as the remaining collision distance of the target object and the ego vehicle relative to the target sampling position after the specified time period, and this remaining collision distance serves as the collision risk coefficient of that target sampling position.
For example, referring to FIG. 7, assume that the driving speed of the ego vehicle 701 is v_e, the driving speed of the target object 702 is v_o, and the black dots on the target driving route 703 of the ego vehicle are the sampling positions. Taking the i-th sampling position p_i on the target driving route 703 as an example, the vehicle can first calculate the distance d_o that the target object 702 moves towards the sampling position p_i within the specified time period t_p:

v′_o = max(v_o · cosθ, 0);

d_o = v′_o · t_p.

Here, θ is the angle between the velocity direction of the target object 702 and the line from the target object 702 to the sampling position p_i. The max() function returns the larger of its arguments, and v′_o is the velocity component of the target object 702 along the line from the target object 702 to the sampling position p_i.

Next, the vehicle can calculate the remaining distance d_1 between the target object 702 and the sampling position p_i after the specified time period t_p:

d_1 = max(d_oi − d_o, 0),

where d_oi is the initial distance between the target object 702 and the sampling position p_i at the start of this collision analysis.

The vehicle can then calculate the distance d_e that the ego vehicle 701 moves towards the sampling position p_i within the specified time period t_p:

d_e = v_e · t_p.

Similarly, the vehicle can calculate the remaining distance d_2 between the ego vehicle 701 and the sampling position p_i after the specified time period t_p:

d_2 = max(d_ei − d_e, 0),

where d_ei is the initial distance between the ego vehicle 701 and the sampling position p_i at the start of this collision analysis.

Based on the remaining distance d_1 between the target object 702 and the sampling position p_i and the remaining distance d_2 between the ego vehicle 701 and the sampling position p_i after the specified time period t_p, the vehicle can determine the remaining collision distance d_pi of the target object and the ego vehicle relative to the sampling position p_i after the specified time period t_p:

d_pi = d_1 + d_2.

This remaining collision distance d_pi can serve as the collision risk coefficient corresponding to the sampling position p_i. It can be understood that the smaller the value of d_pi, that is, the smaller the collision risk coefficient corresponding to the sampling position p_i, the more likely the vehicle considers the target object and the ego vehicle to collide near the sampling position p_i after the specified time period t_p.

In the embodiments of the present application, when the vehicle samples its target driving route and obtains n sampling positions on the route, it can calculate, in the manner described above, the collision risk coefficient corresponding to each of the n sampling positions, namely d_p0, d_p1, d_p2, …, d_pn.

Optionally, the vehicle may take the smallest collision risk coefficient among the n sampling positions as the risk coefficient of the target object, that is, the risk coefficient of the target object is min(d_p0, d_p1, d_p2, …, d_pn).
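The remaining-collision-distance computation above can be summarised in the following minimal Python sketch, which assumes 2-D positions in metres and a velocity vector for the target object; the projection v_o · cosθ is computed as the dot product of the target's velocity with the unit vector towards the sampling position. The function name and signature are illustrative.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def risk_coefficient(ego_pos: Point, ego_speed: float,
                     obj_pos: Point, obj_vel: Tuple[float, float],
                     samples: List[Point], t_p: float = 3.0) -> float:
    risks = []
    for p in samples:
        d_oi = math.dist(obj_pos, p)   # initial target-to-sample distance
        d_ei = math.dist(ego_pos, p)   # initial ego-to-sample distance
        # v'_o = max(v_o * cos(theta), 0): velocity component towards p.
        if d_oi > 0.0:
            ux, uy = (p[0] - obj_pos[0]) / d_oi, (p[1] - obj_pos[1]) / d_oi
            v_o_prime = max(obj_vel[0] * ux + obj_vel[1] * uy, 0.0)
        else:
            v_o_prime = 0.0
        d_1 = max(d_oi - v_o_prime * t_p, 0.0)   # target's remaining distance
        d_2 = max(d_ei - ego_speed * t_p, 0.0)   # ego's remaining distance
        risks.append(d_1 + d_2)                  # d_pi for this sampling position
    return min(risks) if risks else float("inf")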
Since a smaller collision risk coefficient means the target object and the ego vehicle are more likely to collide after the specified time period t_p, it follows that the smaller the risk coefficient of the target object, the more likely the target object and the ego vehicle are to collide after the specified time period t_p.

Optionally, the vehicle may also compare the smallest collision risk coefficient among the n sampling positions with a safety factor and take the ratio of the smallest collision risk coefficient to the safety factor as the risk coefficient of the target object. The safety factor may be determined in advance based on analysis of historical data and the actual situation of the ego vehicle.

In some embodiments, the vehicle may also calculate, based on the motion state of the target object and an object kinematics model, the time at which the target object and the ego vehicle may collide, and determine the risk coefficient of the target object from this collision time. As one implementation, the vehicle may compare the collision time with a preset time and take the ratio of the collision time to the preset time as the risk coefficient of the target object. The preset time may be determined in advance based on analysis of historical data and the actual situation of the ego vehicle.

In some embodiments, the vehicle may predict, based on the motion state of the target object, whether the trajectories of the ego vehicle and the target object intersect. If they intersect, the times taken by the ego vehicle and the target object to travel from their current positions to the intersection of the two trajectories are calculated separately, and the absolute value of the difference between the two times is computed; the risk coefficient of the target object is then determined from this absolute value. As one implementation, the vehicle may compare the absolute value of the time difference with a preset time and take the ratio of that absolute value to the preset time as the risk coefficient of the target object. The preset time may be determined in advance based on analysis of historical data and the actual situation of the ego vehicle.
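For illustration, the trajectory-intersection variant just described could be sketched as follows, under the simplifying assumption of straight-line, constant-velocity motion for both the ego vehicle and the target; the 5 s preset time and the function name are arbitrary examples.

from typing import Tuple

def time_gap_risk(ego_pos: Tuple[float, float], ego_vel: Tuple[float, float],
                  obj_pos: Tuple[float, float], obj_vel: Tuple[float, float],
                  preset_time_s: float = 5.0) -> float:
    (x1, y1), (vx1, vy1) = ego_pos, ego_vel
    (x2, y2), (vx2, vy2) = obj_pos, obj_vel
    # Solve p1 + t1*v1 = p2 + t2*v2 for the intersection of the two straight paths.
    det = vx1 * (-vy2) - (-vx2) * vy1
    if abs(det) < 1e-9:
        return float("inf")            # parallel paths: no intersection
    dx, dy = x2 - x1, y2 - y1
    t1 = (dx * (-vy2) - (-vx2) * dy) / det   # ego time to the intersection point
    t2 = (vx1 * dy - vy1 * dx) / det         # target time to the intersection point
    if t1 < 0 or t2 < 0:
        return float("inf")            # the intersection lies behind one of them
    return abs(t1 - t2) / preset_time_s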
S630. The vehicle determines the priority of the target object according to the risk coefficient.

In the embodiments of the present application, after determining the risk coefficient of each target object in the surrounding environment, the vehicle can divide the targets into priorities according to their risk coefficients, so as to distinguish targets of different priorities in the surrounding environment.

Optionally, the vehicle may determine the priority levels to be used according to a preset parameter n, the priority levels being level 1, level 2, level 3, …, level n. The parameter n can be understood as the number of priorities (that is, the number of levels) into which the targets in the ego vehicle's surrounding environment are to be divided. When the number of levels needs to change, the vehicle only needs to adjust the value of n; for example, increasing the preset value of n extends the set of priority levels. In this way, the priority levels can be extended without changing the way the risk coefficient is calculated.

Optionally, the preset parameter n may be stored in the vehicle in advance or obtained in real time. As one implementation, the vehicle may display a first interface on the on-board display screen, and the first interface is used to provide the user with a quick way to input the number of priorities.

Optionally, the first interface may provide several options for the number of priorities, such as 3 priorities or 5 priorities, from which the user can select and confirm one. In response to the user's confirmation operation, the vehicle obtains the confirmed number of priorities as the preset parameter n.

Optionally, the first interface may also provide an input box for the number of priorities. The user can enter a specific value in the input box, and in response to the input operation the vehicle obtains the entered number as the preset parameter n.
In the embodiments of the present application, the vehicle may divide the targets in the surrounding environment into different priorities according to the risk coefficient of each target and the preset parameter n.

As one approach, the targets in the surrounding environment may be sorted in ascending order of their risk coefficients. A target that ranks earlier has a smaller risk coefficient, that is, it is more likely to collide with the ego vehicle after the specified time period. The sorted targets can then be assigned priority levels according to the preset parameter n, in the order from the high level (level n) to the low level (level 1): the earlier a target ranks, the higher the level it is assigned, that is, the higher its priority, so that targets more likely to collide are assigned higher priorities.

Optionally, a higher level number may instead indicate a lower priority, that is, level 1 is the highest priority and level n the lowest. In this case, the sorted targets can be assigned priority levels according to the preset parameter n, in the order from the low level number (level 1) to the high level number (level n): the earlier a target ranks, the lower the level number it is assigned, that is, the higher its priority, so that targets more likely to collide are again assigned higher priorities.

As another approach, the targets in the surrounding environment may be sorted in descending order of their risk coefficients. A target that ranks later has a smaller risk coefficient, that is, it is more likely to collide with the ego vehicle after the specified time period. The sorted targets can then be assigned priority levels according to the preset parameter n, in the order from the high level (level 1) to the low level (level n): the later a target ranks, the higher the priority level it is assigned, that is, the higher its priority, so that targets more likely to collide are assigned higher priorities.

Optionally, the vehicle may also determine, according to the preset parameter n, the risk coefficient range corresponding to each level, and match the risk coefficient of each target in the surrounding environment against these ranges. When the risk coefficient of a target falls within the range corresponding to a level, the target is assigned to that level, thereby dividing the targets in the surrounding environment into priorities.
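A minimal sketch of the ranking-based division is given below, using the convention that level 1 is the highest priority (smallest risk coefficients); the equal-sized buckets are an assumption made only for illustration, since the embodiment also allows division by preset risk-coefficient ranges.

from typing import Dict

def assign_priorities(risk_by_target: Dict[str, float], n_levels: int = 3) -> Dict[str, int]:
    # Sort targets by ascending risk coefficient: smaller means more dangerous.
    ordered = sorted(risk_by_target, key=risk_by_target.get)
    bucket = max(1, -(-len(ordered) // n_levels))   # ceiling division
    # Earlier-ranked (more dangerous) targets get lower level numbers, i.e. higher priority.
    return {t: min(i // bucket + 1, n_levels) for i, t in enumerate(ordered)}

# e.g. assign_priorities({"pedestrian_A": 2.0, "cyclist_A": 3.5, "vehicle_B": 40.0})
#      -> {"pedestrian_A": 1, "cyclist_A": 2, "vehicle_B": 3}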
In some embodiments, the vehicle may also display targets of different priorities in different colors on the on-board display screen, so that the priority grading of the targets in the vehicle's surrounding environment is shown intuitively.

Optionally, the vehicle may determine, according to a preset correspondence between colors and priority levels, a target color matching the priority of the target object and display the target object in that color. The correspondence between colors and priority levels may be set reasonably in advance according to the actual application, and may be stored in the vehicle in advance or obtained from another device such as a cloud server.

Optionally, the vehicle may also randomly generate corresponding colors according to the number of priority levels to be divided, where the randomly generated colors correspond one to one with the priorities so as to distinguish targets of different priorities.
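For example, the color lookup could be as simple as the following sketch; the particular colors and the deterministic fallback for unconfigured levels are assumptions for illustration only.

import random

# Example preset correspondence between priority levels and display colors.
PRIORITY_COLORS = {1: "red", 2: "yellow", 3: "blue"}

def display_color(priority: int) -> str:
    if priority in PRIORITY_COLORS:
        return PRIORITY_COLORS[priority]
    # Fallback: generate one stable pseudo-random color per unconfigured level.
    rng = random.Random(priority)
    return "#{:06x}".format(rng.randint(0, 0xFFFFFF))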
As an exemplary scenario, referring to FIG. 8, FIG. 8 shows an intersection scenario in which the vehicle 800 turns left. The boxes in the scenario represent different types of targets, such as motor vehicles, motorcycles and pedestrians. The dashed arrow on each target represents the direction of its velocity, and the solid arrow represents its heading. Optionally, the vehicle may also display information about each target, for example displaying the target's identity, speed and other information below it. The identity of a target may be its type, such as non-motorized vehicle, motorcycle, pedestrian or car, or a unique identifier (identity document, ID) of the target.

As shown in FIG. 8, a large number of targets appear in the scenario and all of them require trajectory prediction. In FIG. 8, the target 801 on the zebra crossing to the front left of the vehicle 800 is pedestrian A crossing the road; the target 802 to the rear right of the vehicle 800 is cyclist A; the target 803 at the corner of the intersection ahead of the vehicle 800 is pedestrian B; the target 804, which is close to the vehicle 800 after it exits the intersection, is vehicle A (that is, another vehicle); the target 805, which is farther from the current position of the vehicle 800 after it exits the intersection, is vehicle B; the target 806, which is behind the vehicle 800 with its velocity directed away from the vehicle 800, is vehicle C; and the target 807, to the front right of the vehicle 800 and heading away from it, is cyclist B.

Optionally, after collision analysis of the targets around the vehicle 800 using the target behavior prediction method of the embodiments of the present application, the targets can be divided into three priority levels: target 801 (pedestrian A crossing the road) and target 802 (cyclist A) have the highest priority; target 803 (pedestrian B) and target 804 (vehicle A) have the next priority; and target 805 (vehicle B), target 806 (vehicle C) and target 807 (cyclist B) have the lowest priority.

Optionally, the vehicle may display the targets of these three priorities in three colors on the on-board display screen. For example, if the highest priority corresponds to red, the middle priority to yellow and the lowest priority to blue, the vehicle may display target 801 (pedestrian A crossing the road) and target 802 (cyclist A) in red, target 803 (pedestrian B) and target 804 (vehicle A) in yellow, and target 805 (vehicle B), target 806 (vehicle C) and target 807 (cyclist B) in blue.
As another exemplary scenario, referring to FIG. 9, FIG. 9 shows a scenario in which the vehicle 900 travels straight ahead. The boxes in the scenario represent different types of targets, such as motor vehicles, motorcycles and pedestrians. The dashed arrow on each target represents the direction of its velocity, and the solid arrow represents its heading.

As shown in FIG. 9, a large number of targets also appear in this scenario and all of them require trajectory prediction. In FIG. 9, the target 901, to the front right of the vehicle 900 and moving in the same direction as the vehicle 900, is cyclist C; the target 902, behind the vehicle 900 and travelling in the same direction, is vehicle D; the target 903, to the front right of the vehicle 900 and outside the motor vehicle lane, is cyclist D; the target 904, to the front left of the vehicle 900 and outside the motor vehicle lane, is cyclist E; the target 905, to the rear left of the vehicle 900 and outside the motor vehicle lane, is pedestrian C; and the target 906, to the rear left of the vehicle 900 and travelling in the opposite direction, is cyclist F.

Optionally, after collision analysis of the targets around the vehicle 900 using the target behavior prediction method of the embodiments of the present application, the targets can be divided into three priority levels: target 901 (cyclist C) and target 902 (vehicle D) are level 1, target 903 (cyclist D) and target 904 (cyclist E) are level 2, and target 905 (pedestrian C) and target 906 (cyclist F) are level 3. Similarly, the vehicle may display these three priority levels of targets in three colors on the on-board display screen.
S640. The vehicle determines, according to the preset correspondence between priority levels and prediction models, the target prediction model matching the priority of the target object.

In the embodiments of the present application, after dividing the targets in the surrounding environment into priority levels, the vehicle may use different prediction models to predict the future motion trajectories of targets of different priorities.

Optionally, the vehicle may store a preset correspondence between priority levels and prediction models. Once the vehicle has determined the priority level of a target in the surrounding environment, it can determine, according to this preset correspondence, the target prediction model matching that priority.

Optionally, the preset correspondence between priority levels and prediction models may be a one-to-one correspondence, that is, each priority level corresponds to one prediction model, and different prediction models have different computational complexities. As one implementation, the preset correspondence may include: a first priority corresponding to a first prediction model, and a second priority corresponding to a second prediction model, where the first priority is higher than the second priority and the computational complexity of the first prediction model is higher than that of the second prediction model.

Optionally, the computational complexity of a prediction model may be determined from its processing time, the number of arithmetic instructions and memory-access instructions, the performance it consumes, and so on; the embodiments of the present application do not limit this. As one implementation, the prediction model provided in this application may be a rule-based model such as a constant velocity (CV) model or a Markov model; such models are usually simple and have low computational complexity. As another implementation, the prediction model may be a neural-network-based model, which usually has high computational complexity.
Optionally, the preset correspondence between priority levels and prediction models may also be a many-to-one relationship, that is, multiple priority levels may correspond to the same prediction model. For example, assume the number of priority levels n is 6, with level 6 the lowest priority and level 1 the highest. Prediction model 1 may correspond to priority levels 1, 2 and 3, prediction model 2 to levels 4 and 5, and prediction model 3 only to the lowest-priority level 6.
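A sketch of such a many-to-one mapping is shown below; the constant velocity (CV) model follows the rule-based example mentioned above, while LearnedModel is only a stand-in for a higher-complexity (for example neural-network-based) model and does not represent an actual trained network. All names are illustrative.

from typing import Dict, List, Tuple

Point = Tuple[float, float]

class ConstantVelocityModel:
    """Low-complexity rule-based model: extrapolate the position at constant velocity."""
    def predict(self, pos: Point, vel: Point,
                horizon_s: float = 3.0, dt: float = 0.5) -> List[Point]:
        steps = int(horizon_s / dt)
        return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
                for k in range(1, steps + 1)]

class LearnedModel(ConstantVelocityModel):
    """Placeholder for a higher-complexity model; a real system would run a trained network."""
    pass

# Many-to-one mapping for n = 6 levels (level 1 = highest priority): the complex
# model covers levels 1-3, cheaper models cover the lower-priority levels.
MODEL_BY_PRIORITY: Dict[int, ConstantVelocityModel] = {
    1: LearnedModel(), 2: LearnedModel(), 3: LearnedModel(),
    4: ConstantVelocityModel(), 5: ConstantVelocityModel(), 6: ConstantVelocityModel(),
}

def predict_for(priority: int, pos: Point, vel: Point) -> List[Point]:
    return MODEL_BY_PRIORITY[priority].predict(pos, vel)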
In the embodiments of the present application, the higher the priority, the higher the computational complexity of the corresponding prediction model, and the lower the priority, the lower the complexity. Thus, when targets more likely to collide are assigned higher priorities, a more precise prediction model with higher computational complexity can be used to predict their future motion trajectories quickly and accurately; when targets less likely to collide are assigned lower priorities, a simpler model with lower computational complexity can be used to predict their future motion trajectories quickly and with sufficient accuracy. By using prediction models of different complexities appropriately, the consumption of computing resources and the prediction delay are reduced, while accurate prediction of the motion trajectories of a large number of targets is still achieved.

S650. The vehicle predicts the motion trajectory of the target object according to the target prediction model.

In the embodiments of the present application, after determining the target prediction model matching the priority of the target object, the vehicle can use that model to predict the target's future motion trajectory. In this way, target objects of different priorities in the surrounding environment of the intelligent device are predicted with prediction models of different computational complexities, so that the behavior of objects in the surrounding environment is predicted while the consumption of computing resources and the prediction delay are reduced.

Optionally, after predicting the motion trajectory of the target object, the vehicle may perform corresponding operations. For example, the vehicle may plan its driving path for the current moment or a specified future time period to avoid a collision; as another example, the vehicle may sound the horn to alert the target object.
In summary, the target behavior prediction method of the embodiments of the present application solves the computing-power bottleneck by predicting the motion trajectories of a large number of targets in a hierarchical manner: high-priority targets are predicted with refined, computationally expensive models, and low-priority targets with computationally cheap models. This reduces the computing resources required and the prediction delay, and also indirectly improves the prediction accuracy over a large number of targets.

It can be understood that the target behavior prediction method of the embodiments of the present application may also be applied to robots or other electronic devices with autonomous driving capability, and the embodiments of the present application do not limit this.

For example, when the target behavior prediction method of the embodiments of the present application is applied to a cleaning robot in the smart-home field, the cleaning robot can perform collision analysis on multiple persons in its surrounding environment to obtain a risk coefficient for each person and determine each person's priority from that risk coefficient. After predicting each person's future motion trajectory with the prediction model corresponding to that priority, the cleaning robot can plan its driving path for the current moment or a specified future time period to avoid collisions, or it can use the predicted trajectories to identify the persons it may need to interact with, thereby improving interaction efficiency.

It can be understood that the target behavior prediction method of the embodiments of the present application can be applied to various scenarios, for example scenarios in which surrounding targets need to be perceived and the collision risk judged, or scenarios in which the strength of the kinematic interaction between surrounding objects and the device itself needs to be judged.
It can be understood that, in order to implement the above functions, the vehicle includes hardware and/or software modules corresponding to each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementations should not be considered to be beyond the scope of this application.

In this embodiment, the vehicle (or in-vehicle device) may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a division by logical function; other division manners are possible in actual implementation.
An embodiment of the present application further provides an in-vehicle device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the in-vehicle device implements the functions or steps performed by the vehicle in the above method embodiments.

An embodiment of the present application further provides a target behavior prediction apparatus, which can be applied to the above in-vehicle device. The apparatus is used to perform the functions or steps performed by the vehicle in the above method embodiments.

An embodiment of the present application further provides a vehicle, which includes the above in-vehicle device or target behavior prediction apparatus.

An embodiment of the present application further provides an intelligent device, which includes the above target behavior prediction apparatus. The intelligent device may be a robot or another electronic device with autonomous driving capability.

An embodiment of the present application further provides a chip system, which includes at least one processor and at least one interface circuit. The processor and the interface circuit may be interconnected by lines. The interface circuit may read instructions stored in a memory and send them to the processor. When the instructions are executed by the processor, the in-vehicle device is enabled to perform the functions or steps performed by the vehicle in the above method embodiments. The chip system may of course also include other discrete components, which is not specifically limited in the embodiments of the present application.

An embodiment of the present application further provides a computer storage medium, which includes computer instructions. When the computer instructions run on the above in-vehicle device, the in-vehicle device is enabled to perform the functions or steps performed by the vehicle in the above method embodiments.

An embodiment of the present application further provides a computer program product. When the computer program product runs on a computer, the computer is enabled to perform the functions or steps performed by the vehicle in the above method embodiments.

The in-vehicle device, vehicle, robot, computer storage medium, computer program product and chip provided in this embodiment are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
From the description of the above implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules or units is merely a division by logical function, and there may be other division manners in actual implementation; for instance, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed over multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a microcontroller, a chip, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

  1. A target behavior prediction method, characterized in that the method is applied to a smart device and comprises:
    performing collision analysis according to a motion state of a target object to determine a risk coefficient of the target object, wherein the risk coefficient indicates the likelihood of a collision between the target object and the smart device;
    determining a priority of the target object according to the risk coefficient;
    determining, according to a preset correspondence between priority levels and prediction models, a target prediction model matching the priority; and
    predicting a motion trajectory of the target object according to the target prediction model.
  2. The method according to claim 1, characterized in that performing collision analysis according to the motion state of the target object to determine the risk coefficient of the target object comprises:
    obtaining a plurality of driving positions on a driving route of the smart device;
    determining, according to the motion state of the target object, a risk coefficient corresponding to each of the plurality of driving positions, wherein the risk coefficient corresponding to each driving position indicates the likelihood of a collision between the target object and the smart device at that driving position; and
    determining the risk coefficient of the target object according to the minimum value among the risk coefficients corresponding to the driving positions.
  3. The method according to claim 2, characterized in that determining the risk coefficient corresponding to each of the plurality of driving positions according to the motion state of the target object comprises:
    determining, according to the motion state of the target object, a first distance between the target object and a target driving position after a specified time period, wherein the target driving position is any one of the plurality of driving positions;
    determining a second distance between the smart device and the target driving position after the specified time period; and
    taking the sum of the first distance and the second distance as the risk coefficient corresponding to the target driving position (see the risk-coefficient sketch following the claims).
  4. The method according to claim 2, characterized in that obtaining the plurality of driving positions on the driving route of the smart device comprises:
    obtaining a driving route of the smart device within a preset future time period; and
    obtaining one driving position on the driving route at every specified distance interval, so as to obtain the plurality of driving positions on the driving route.
  5. The method according to claim 2, characterized in that obtaining the plurality of driving positions on the driving route of the smart device comprises:
    obtaining a driving route of the smart device within a preset future time period; and
    obtaining one driving position on the driving route at every specified time interval, so as to obtain the plurality of driving positions on the driving route.
  6. The method according to claim 1, characterized in that performing collision analysis according to the motion state of the target object to determine the risk coefficient of the target object comprises:
    determining, according to the motion state of the target object, a collision time at which the target object would collide with the smart device; and
    determining the risk coefficient of the target object according to the collision time.
  7. The method according to any one of claims 1-6, characterized in that a plurality of target objects exist in the surrounding environment of the smart device, and determining the priority of the target object according to the risk coefficient comprises:
    sorting the plurality of target objects according to the risk coefficient of each of the plurality of target objects; and
    determining the priority of each target object according to the sorted plurality of target objects.
  8. The method according to claim 7, characterized in that determining the priority of each target object according to the sorted plurality of target objects comprises:
    dividing the sorted plurality of target objects into different priority levels according to a preset number of priorities, so as to obtain the priority of each target object (see the priority-assignment sketch following the claims).
  9. The method according to claim 8, characterized in that the method further comprises:
    displaying a first interface, wherein the first interface is used to input a number of priorities; and
    in response to an input operation of a user, obtaining the number of priorities input by the user as the preset number of priorities.
  10. The method according to claim 7, characterized in that the method further comprises:
    determining a color corresponding to each target object according to a preset correspondence between priority levels and colors and according to the priority of each target object; and
    displaying the plurality of target objects based on the color corresponding to each target object.
  11. The method according to any one of claims 1-10, characterized in that the preset correspondence between priority levels and prediction models comprises: a first priority corresponds to a first prediction model and a second priority corresponds to a second prediction model, wherein the first priority is higher than the second priority, and the computational complexity of the first prediction model is higher than the computational complexity of the second prediction model (see the model-selection sketch following the claims).
  12. A smart device, characterized in that the smart device comprises a memory and one or more processors, the memory being coupled to the processors; the memory is configured to store computer program code, the computer program code comprising computer instructions; and when the processors execute the computer instructions, the smart device performs the method according to any one of claims 1-11.
  13. A vehicle-mounted device, characterized in that the vehicle-mounted device comprises a memory and one or more processors, the memory being coupled to the processors; the memory is configured to store computer program code, the computer program code comprising computer instructions; and when the processors execute the computer instructions, the vehicle-mounted device performs the method according to any one of claims 1-11.
  14. A vehicle, characterized in that the vehicle comprises a vehicle-mounted device, and the vehicle-mounted device performs the method according to any one of claims 1-11.
  15. A robot, characterized in that the robot comprises a memory and one or more processors, the memory being coupled to the processors; the memory is configured to store computer program code, the computer program code comprising computer instructions; and when the processors execute the computer instructions, the robot performs the method according to any one of claims 1-11.
  16. A chip system, characterized in that the chip system is applied to a smart device and comprises one or more interface circuits and one or more processors interconnected through a line; the interface circuit is configured to receive a signal from a memory of the smart device and send the signal to the processor, the signal comprising computer instructions stored in the memory; and when the processor executes the computer instructions, the smart device performs the method according to any one of claims 1-11.
  17. A computer storage medium, characterized by comprising computer instructions which, when run on a smart device, cause the smart device to perform the method according to any one of claims 1-11.
  18. A computer program product, characterized in that, when the computer program product is run on a computer, the computer is caused to perform the method according to any one of claims 1-11.
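The following is a minimal, non-normative sketch of the risk-coefficient computation described in claims 2-5: driving positions are sampled along the smart device's planned route, and for each sampled position the coefficient is the sum of the target-to-position and device-to-position distances after a specified time period, with the target's overall coefficient taken as the minimum over all positions. The constant-velocity extrapolation, the data layout, and all names (State, sample_route_positions, and so on) are assumptions made for illustration only; the claims do not prescribe them, and claim 6's alternative of deriving the coefficient from a predicted collision time is not shown.

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float
    y: float
    vx: float
    vy: float

def predict_position(state: State, dt: float):
    # Constant-velocity extrapolation over the specified time period
    # (an assumption; the claims only require using the motion state).
    return (state.x + state.vx * dt, state.y + state.vy * dt)

def sample_route_positions(route, spacing):
    """One driving position every `spacing` metres along the planned route (claim 4)."""
    sampled = [route[0]]
    travelled = 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        travelled += math.hypot(x1 - x0, y1 - y0)
        if travelled >= spacing:
            sampled.append((x1, y1))
            travelled = 0.0
    return sampled

def risk_coefficient(target: State, ego: State, route, dt: float, spacing: float) -> float:
    """Claims 2-3: per-position coefficient = first distance + second distance after dt;
    the target's coefficient is the minimum over all sampled driving positions."""
    tx, ty = predict_position(target, dt)   # predicted target position
    ex, ey = predict_position(ego, dt)      # predicted smart-device position
    per_position = []
    for px, py in sample_route_positions(route, spacing):
        first = math.hypot(tx - px, ty - py)    # first distance (claim 3)
        second = math.hypot(ex - px, ey - py)   # second distance (claim 3)
        per_position.append(first + second)
    return min(per_position)

# Example: a target cutting toward a straight 50 m route sampled every 10 m.
route = [(float(i), 0.0) for i in range(0, 51, 5)]
ego = State(0.0, 0.0, 10.0, 0.0)
target = State(30.0, 5.0, 0.0, -2.0)
print(risk_coefficient(target, ego, route, dt=2.0, spacing=10.0))
```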
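Claims 7-10 then sort the targets by their risk coefficients, split the sorted list into a preset number of priority levels (which the user may supply through the first interface of claim 9), and display each target in the color mapped to its level. The sketch below works under stated assumptions: since the coefficient of claims 2-3 is a distance sum, a smaller value is treated here as a closer encounter and therefore a higher priority; the bucketing rule, the level numbering (1 = highest), and the color table are illustrative and not taken from the application.

```python
import math

def assign_priorities(risk_by_target: dict, num_levels: int) -> dict:
    """Split targets, sorted by risk coefficient, into `num_levels` priority levels.
    Assumes a smaller coefficient means higher risk, so it maps to level 1 (highest)."""
    ordered = sorted(risk_by_target, key=risk_by_target.get)      # ascending coefficient
    per_level = max(1, math.ceil(len(ordered) / num_levels))      # targets per level
    return {tid: min(idx // per_level + 1, num_levels)
            for idx, tid in enumerate(ordered)}

# Claim 10: preset correspondence between priority level and display color (hypothetical).
PRIORITY_COLORS = {1: "red", 2: "orange", 3: "green"}

def display_color(priority: int) -> str:
    return PRIORITY_COLORS.get(priority, "grey")

# Example: three levels over five tracked targets.
risks = {"car_12": 4.2, "ped_3": 1.1, "cyclist_7": 2.5, "car_40": 9.8, "truck_2": 6.0}
priorities = assign_priorities(risks, num_levels=3)
print(priorities)                           # ped_3 and cyclist_7 land in level 1
print(display_color(priorities["ped_3"]))   # 'red'
```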
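Finally, claims 1 and 11 select the target prediction model from a preset priority-to-model correspondence and use it to predict the trajectory, with the higher priority mapped to the model of higher computational complexity. The two models below are placeholders chosen only to make that complexity difference concrete (a fine-step constant-acceleration rollout versus a coarse constant-velocity one); the application does not name specific models.

```python
def light_model(x, y, vx, vy, horizon=3.0, step=0.5):
    """Lower-complexity placeholder: coarse constant-velocity rollout."""
    n = int(horizon / step)
    return [(x + vx * k * step, y + vy * k * step) for k in range(1, n + 1)]

def heavy_model(x, y, vx, vy, ax=0.0, ay=0.0, horizon=3.0, step=0.1):
    """Higher-complexity placeholder: finer time step with a constant-acceleration term."""
    n = int(horizon / step)
    return [(x + vx * t + 0.5 * ax * t * t, y + vy * t + 0.5 * ay * t * t)
            for t in (k * step for k in range(1, n + 1))]

# Preset correspondence between priority level and prediction model (claim 11):
# the first (higher) priority maps to the more expensive model.
MODEL_BY_PRIORITY = {1: heavy_model, 2: light_model}

def predict_trajectory(priority, x, y, vx, vy):
    """Claim 1: pick the target prediction model matching the priority, then predict."""
    model = MODEL_BY_PRIORITY.get(priority, light_model)
    return model(x, y, vx, vy)

# Example: a priority-1 target gets the fine-grained rollout.
trajectory = predict_trajectory(1, x=30.0, y=5.0, vx=0.0, vy=-2.0)
print(len(trajectory), trajectory[:3])
```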
PCT/CN2023/104261 2022-10-27 2023-06-29 Target behavior prediction method, intelligent device and vehicle WO2024087712A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211329915.8A CN117944671A (en) 2022-10-27 2022-10-27 Target behavior prediction method, intelligent device and vehicle
CN202211329915.8 2022-10-27

Publications (1)

Publication Number Publication Date
WO2024087712A1 true WO2024087712A1 (en) 2024-05-02

Family

ID=90798899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104261 WO2024087712A1 (en) 2022-10-27 2023-06-29 Target behavior prediction method, intelligent device and vehicle

Country Status (2)

Country Link
CN (1) CN117944671A (en)
WO (1) WO2024087712A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110246156A1 (en) * 2008-12-23 2011-10-06 Continental Safety Engineering International Gmbh Method for Determining the Probability of a Collision of a Vehicle With a Living Being
CN110794823A (en) * 2018-07-17 2020-02-14 百度(美国)有限责任公司 Method and system for predicting object movement of autonomous vehicle
CN113012469A (en) * 2021-03-16 2021-06-22 浙江亚太机电股份有限公司 Intelligent traffic early warning system based on target recognition
CN113439247A (en) * 2018-11-20 2021-09-24 伟摩有限责任公司 Agent prioritization for autonomous vehicles
CN114179713A (en) * 2020-09-14 2022-03-15 华为技术有限公司 Vehicle reminding method, system and related equipment
CN114248800A (en) * 2020-09-24 2022-03-29 英特尔公司 Systems, devices, and methods for predictive risk-aware driving
EP4001042A1 (en) * 2020-11-23 2022-05-25 Aptiv Technologies Limited System and method for predicting road collisions with a host vehicle
CN115042782A (en) * 2022-06-27 2022-09-13 重庆长安汽车股份有限公司 Vehicle cruise control method, system, equipment and medium

Also Published As

Publication number Publication date
CN117944671A (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN109901574B (en) Automatic driving method and device
CN111123933B (en) Vehicle track planning method and device, intelligent driving area controller and intelligent vehicle
CN110379193B (en) Behavior planning method and behavior planning device for automatic driving vehicle
WO2021102955A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
WO2021000800A1 (en) Reasoning method for road drivable region and device
WO2022016457A1 (en) Method and device for controlling switching of vehicle driving mode
WO2021196879A1 (en) Method and device for recognizing driving behavior of vehicle
WO2021212379A1 (en) Lane line detection method and apparatus
CN113160547B (en) Automatic driving method and related equipment
CN113792566A (en) Laser point cloud processing method and related equipment
CN112672942B (en) Vehicle lane changing method and related equipment
CN112543877B (en) Positioning method and positioning device
WO2022062825A1 (en) Vehicle control method, device, and vehicle
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
CN113954858A (en) Method for planning vehicle driving route and intelligent automobile
CN114693540A (en) Image processing method and device and intelligent automobile
CN114531913A (en) Lane line detection method, related device, and computer-readable storage medium
WO2022017307A1 (en) Autonomous driving scenario generation method, apparatus and system
WO2022178858A1 (en) Vehicle driving intention prediction method and apparatus, terminal and storage medium
CN114261404A (en) Automatic driving method and related device
EP4159564A1 (en) Method and device for planning vehicle longitudinal motion parameters
WO2024087712A1 (en) Target behavior prediction method, intelligent device and vehicle
EP4059799A1 (en) Road structure detection method and device
WO2022001432A1 (en) Method for inferring lane, and method and apparatus for training lane inference model
WO2022061725A1 (en) Traffic element observation method and apparatus