WO2024093768A1 - Vehicle alarm method and related device - Google Patents

Vehicle alarm method and related device

Info

Publication number
WO2024093768A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driver
line of sight
attention
Prior art date
Application number
PCT/CN2023/126656
Other languages
French (fr)
Chinese (zh)
Inventor
卢远志
吕自波
王燕
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024093768A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to ambient conditions
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to drivers or passengers
    • B60W50/08: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit; interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/40: Scenes; scene-specific elements in video content
    • G06V20/56: Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V40/10: Recognition of biometric, human-related or animal-related patterns in image or video data; human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Definitions

  • the present application relates to the field of autonomous driving, and in particular to a vehicle warning method and related equipment.
  • the intelligent driving system in the vehicle needs to understand the driver's status and output warning information to the driver in time.
  • the warning information is used to remind the driver to take over the vehicle.
  • the triggering scenarios of the warning information mainly include: periodic triggering and event triggering; for example, the aforementioned event triggering may include: fatigue driving, making phone calls or other abnormal behaviors of the driver.
  • the present application provides a vehicle alarm method and related equipment.
  • the triggering scenario of the alarm information provided by this solution not only takes into account the driver's line of sight inside the vehicle, but also takes into account the environmental information around the vehicle, which is conducive to more accurate output of alarm information.
  • the present application provides a vehicle warning method, which can be used in the field of autonomous driving in the field of artificial intelligence.
  • the method may include: the vehicle obtains the driver's line of sight and the environmental information around the vehicle; when it is determined that the first condition is met according to the driver's line of sight, the warning information is output.
  • the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle;
  • the vehicle's driving intention may include any one or more of the following combinations: one or more objects that the driver needs to pay attention to during driving, the vehicle's driving direction or other types of driving intentions, etc.
  • the aforementioned vehicle may be a car, truck, motorcycle, bus, ship, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram and golf cart, etc.
  • through this solution, another triggering scenario for warning information is provided: when the driver's line of sight does not match the vehicle's driving intention, a warning message is output to the driver. In addition, in determining whether to output warning information, not only the driver's line of sight inside the vehicle but also the environmental information around the vehicle is considered, which helps output warning information more accurately.
  • the method further includes: the vehicle determines one or more objects that the driver needs to pay attention to during driving based on environmental information around the vehicle; wherein the driving intention of the vehicle includes objects that the driver needs to pay attention to during driving, and the situation where the driver's line of sight does not match the vehicle's driving intention includes: the driver's line of sight is outside all objects that need attention.
  • during vehicle driving, the driving behavior needs to be determined based on surrounding objects; that is, both the intelligent driving system and the driver need to observe surrounding objects in real time. Treating the case where the driver's line of sight falls outside the objects that need attention as a mismatch between the driver's line of sight and the vehicle's driving intention is consistent with the logic of manual driving. In other words, this solution fits the actual application scenario closely, which helps accurately determine whether the driver's line of sight matches the vehicle's driving intention.
  • the situation that the driver's line of sight is outside the object that needs attention may include: at a first moment, the driver's line of sight is outside all objects that need attention.
  • the situation that the driver's line of sight is outside the object that needs attention may include: at the first moment, the driver's line of sight is inside the object that needs attention, and the second movement parameter of the driver's line of sight in the first time period does not match the first movement parameter, and the first time period is after the first moment.
  • the situation that the driver's line of sight is outside the object that needs attention may include: at the first moment and multiple consecutive moments after the first moment, the driver's line of sight is outside the object that needs attention.
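  • As a concrete illustration of the first of these scenarios (the gaze falling outside all objects that need attention at a given moment), the check reduces to a containment test once the gaze and the objects are expressed in the same coordinate system. The following is a minimal sketch assuming 2D axis-aligned bounding boxes and a single gaze point; the function name and box format are illustrative, not taken from the patent.

```python
from typing import List, Tuple

# A box is (x_min, y_min, x_max, y_max) in one shared coordinate system.
Box = Tuple[float, float, float, float]

def gaze_outside_all_objects(gaze_point: Tuple[float, float],
                             attention_boxes: List[Box]) -> bool:
    """Return True if the gaze point lies outside every object that needs attention."""
    gx, gy = gaze_point
    for x_min, y_min, x_max, y_max in attention_boxes:
        if x_min <= gx <= x_max and y_min <= gy <= y_max:
            return False  # gaze falls inside at least one attention object
    return True

# Example: two attention objects, gaze off to the side of both.
print(gaze_outside_all_objects((5.0, 1.0), [(0, 0, 2, 2), (3, -1, 4, 0)]))  # True
```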
  • the method may further include: the vehicle determines the first movement parameter of the object requiring attention within the first time period based on the environmental information around the vehicle.
  • the driver's line of sight being outside the object requiring attention also includes: the driver's line of sight being within the object requiring attention at the first moment, and the second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, and the first time period is after the first moment.
  • the first movement parameter may include the first movement direction of the object requiring attention within the first time period; or,
  • the first movement parameter may also include any one or more of the following information: a first movement distance, a first movement speed, or other movement information of the object of interest within the first time period, etc.
  • the second movement parameter may include a second movement direction of the driver's line of sight within the first time period; or, the second movement parameter may also include any one or more of the following information: a second movement distance, a second movement speed, or other movement information of the driver's line of sight within the first time period, etc.
  • the first movement parameter includes a first movement direction of the object that needs attention within the first time period, and the second movement parameter includes a second movement direction of the driver's line of sight within the first time period. Each object that needs attention can be represented as an area, and the driver's observation range can also be represented as an area; the first movement direction can be the movement direction of any point in the object that needs attention within the first time period, and the second movement direction can be the movement direction of any point within the driver's observation range within the first time period.
  • the situation where the second movement parameter does not match the first movement parameter includes: the difference between the first movement direction and the second movement direction satisfies the second condition; for example, the second condition can include that the angle between the first movement direction and the second movement direction is greater than or equal to the first angle threshold.
  • in this implementation, the first movement parameter and the second movement parameter are both determined as movement directions, which provides a scheme that is easy to implement and has high accuracy; a minimal sketch of the direction comparison is given below.
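  • A minimal sketch of this direction comparison, assuming both movement directions are given as 2D vectors accumulated over the first time period; the 30-degree threshold and the treatment of zero-length vectors are illustrative assumptions.

```python
import math

def directions_mismatch(first_dir, second_dir, angle_threshold_deg: float = 30.0) -> bool:
    """Return True if the angle between the object's movement direction and the
    gaze's movement direction is at or above the threshold (i.e. they do not match)."""
    ax, ay = first_dir
    bx, by = second_dir
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    if na == 0 or nb == 0:
        return True  # no measurable movement on one side; treat as a mismatch
    cos_angle = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
    return math.degrees(math.acos(cos_angle)) >= angle_threshold_deg

# Example: object moves right, gaze drifts upward -> 90 degrees apart, mismatch.
print(directions_mismatch((1.0, 0.0), (0.0, 1.0)))  # True
```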
  • the driving intention of the vehicle includes the driving direction of the vehicle
  • the situation where the driver's line of sight does not match the driving intention of the vehicle includes: the driver's line of sight does not match the driving direction of the vehicle.
  • the method may also include: when it is determined based on the environmental information around the vehicle that there is no object that needs attention at the current moment, it can be determined whether the direction of the driver's line of sight matches the driving direction of the vehicle; wherein, the situation where the direction of the driver's line of sight does not match the driving direction of the vehicle may include that the angle between the driver's line of sight and the driving direction of the vehicle is greater than or equal to the second angle threshold.
  • the method may further include: the vehicle displays objects that require attention to the driver.
  • the vehicle may highlight, to the driver through the head-up display (HUD) system, the objects that the vehicle has determined require attention; optionally, the vehicle may highlight the objects that require attention when displaying the navigation route to the driver through the HUD.
  • the highlighting method includes any one or more of the following: adding prompt text next to the object that requires attention, framing the object that requires attention, or other methods of highlighting the object that requires attention, etc.
  • objects that require attention can also be displayed to the driver, so as to assist the driver in learning the driving ideas of the intelligent driving system, which is conducive to improving the safety of the driver's driving behavior.
  • the vehicle outputs warning information when it is determined that the first condition is met based on the driver's line of sight, including: when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, the vehicle outputs warning information, and the value of the first duration is associated with the safety of the driver's driving behavior.
  • the factors determining the safety of the driver's driving behavior may include any one or more of the following: the cumulative number of forward collision warnings (FCW), the number of sudden accelerations, the number of sudden decelerations, the distance from the vehicle in front when following it, the speed of steering, the average number of times the vehicle's intelligent driving system takes over the user's driving, or other information that can reflect the safety of the driver's driving behavior.
  • the higher the safety of the driver's driving behavior, the larger the value of the first duration can be; the lower the safety of the driver's driving behavior, the smaller the value of the first duration can be.
  • when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches the first duration, a warning message is output to the driver. That is, the value of the first duration affects how often warning messages are output, and the safety of the driver's driving behavior affects the value of the first duration. This helps reduce the frequency of warnings output to drivers with high safety levels, avoiding disturbing the user, and helps increase the frequency of warnings output to drivers with low safety levels, improving the safety of the driving process; a sketch of this mapping is given after this item.
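  • One hedged way to realize the link between driving-behavior safety and the first duration is a monotonic mapping from a safety score to the tolerated mismatch time; the score range and the duration bounds below are illustrative assumptions, not values from the patent.

```python
def first_duration_seconds(safety_score: float,
                           min_duration: float = 1.0,
                           max_duration: float = 4.0) -> float:
    """Map a driver safety score in [0, 1] (higher = safer) to the first duration:
    safer drivers tolerate a longer mismatch before a warning is output."""
    s = max(0.0, min(1.0, safety_score))
    return min_duration + s * (max_duration - min_duration)

# A low-safety driver is warned after ~1.3 s of mismatch, a high-safety driver after ~3.7 s.
print(first_duration_seconds(0.1), first_duration_seconds(0.9))
```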
  • the present application provides a vehicle alarm device that can be used in the field of autonomous driving in the field of artificial intelligence.
  • the vehicle alarm device may include: an acquisition module for acquiring the driver's line of sight and environmental information around the vehicle; an alarm module for outputting an alarm message when a first condition is determined to be met based on the driver's line of sight, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle.
  • the vehicle alarm device can also execute the steps performed by the vehicle in the first aspect and various possible implementation methods of the first aspect.
  • for the specific implementation steps of the second aspect of the present application and its various possible implementation methods, as well as the beneficial effects brought about by each possible implementation method, refer to the description of the various possible implementation methods in the first aspect; details are not repeated here.
  • the present application provides a vehicle, which may include a memory, a processor, and a bus system, wherein the memory is used to store a program, the processor is used to execute the program in the memory, and the bus system is used to connect the memory and the processor so that the memory and the processor communicate with each other.
  • the processor can be used to execute the steps executed by the vehicle in each possible implementation of the first aspect, and the details can be referred to the first aspect, which will not be repeated here.
  • the present application provides a computer-readable storage medium in which a computer program is stored; when the computer program is run on a computer, the computer executes the vehicle alarm method described in the first aspect.
  • the present application provides a circuit system, which includes a processing circuit, and the processing circuit is configured to execute the vehicle alarm method described in the first aspect above.
  • the present application provides a computer program which, when executed on a computer, enables the computer to execute the vehicle alarm method described in the first aspect above.
  • the present application provides a chip system, which includes a processor for supporting a server or a vehicle alarm device to implement the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods.
  • the chip system also includes a memory, which is used to store program instructions and data necessary for the server or communication device.
  • the chip system can be composed of a chip, or it can include a chip and other discrete devices.
  • FIG1 is a schematic diagram of a structure of a vehicle provided in an embodiment of the present application.
  • FIG2 is a flow chart of a vehicle alarm method provided in an embodiment of the present application.
  • FIG3 is another schematic diagram of a flow chart of a vehicle alarm method provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of an object that a driver needs to pay attention to provided in an embodiment of the present application.
  • FIG5 is another schematic diagram of an object that a driver needs to pay attention to provided by an embodiment of the present application.
  • FIG6 is another schematic diagram of an object that a driver needs to pay attention to provided by an embodiment of the present application.
  • FIG7a is a schematic diagram of a driver's line of sight being within an object requiring attention and a driver's line of sight being outside an object requiring attention provided by an embodiment of the present application;
  • FIG7b is a schematic diagram of mapping an object requiring attention in a second coordinate system to a coordinate system corresponding to a virtual camera provided by an embodiment of the present application;
  • FIG7c is another schematic diagram of mapping an object requiring attention in a second coordinate system to a coordinate system corresponding to a virtual camera provided by an embodiment of the present application;
  • FIG8 is a schematic diagram of a first movement parameter and a second movement parameter provided in an embodiment of the present application.
  • FIG9 is a schematic diagram of a structure of a vehicle alarm device provided in an embodiment of the present application.
  • FIG10 is another schematic diagram of the structure of a vehicle alarm device provided in an embodiment of the present application.
  • FIG11 is another schematic diagram of the structure of a vehicle provided in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of the structure of a chip provided in an embodiment of the present application.
  • the embodiments of the present application can be applied to vehicles, and specifically can be applied to scenarios for determining whether it is necessary to issue a warning message to the driver.
  • the aforementioned vehicles can be cars, trucks, motorcycles, buses, ships, airplanes, helicopters, lawn mowers, recreational vehicles, amusement park vehicles, construction equipment, trams, and golf carts, etc., and the embodiments of the present application do not make special limitations.
  • the vehicle when the vehicle adopts the automatic driving mode, the vehicle can periodically obtain the driver's information to determine whether it is necessary to output a warning message to the driver.
  • the vehicle adopts the manual driving mode the vehicle can periodically obtain the driver's information to determine whether it is necessary to output a warning message to the driver.
  • the examples given here are only for the convenience of understanding the application scenarios of the embodiments of the present application, and are not intended to be exhaustive of the application scenarios of the embodiments of the present application.
  • FIG. 1 is a schematic diagram of the structure of a vehicle provided in the embodiment of the present application.
  • the vehicle 100 is configured in a fully or partially automatic driving mode.
  • the vehicle 100 can control itself while being in the automatic driving mode, and can determine the current state of the vehicle and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, and determine the confidence level corresponding to the possibility of other vehicles performing possible behaviors, and control the vehicle 100 based on the determined information.
  • the vehicle 100 can also be set to operate without human interaction.
  • the vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, and a power source 110, a computer system 112, and a user interface 116.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components.
  • each subsystem and component of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide powered movement to the vehicle 100.
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 can be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine consisting of a gasoline engine and an electric motor, and a hybrid engine consisting of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 can also provide energy for other systems of the vehicle 100.
  • the transmission 120 can transmit the mechanical power from the engine 118 to the wheels 121.
  • the transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the vehicle 100.
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a global positioning GPS system, or a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130.
  • the sensor system 104 may also include sensors that monitor internal systems of the vehicle 100 (for example, sensors for in-vehicle air quality, a fuel gauge, an oil temperature gauge, etc.). The sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
  • the positioning system 122 can be used to estimate the geographic location of the vehicle 100.
  • the IMU 124 is used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration.
  • the IMU 124 can be a combination of an accelerometer and a gyroscope.
  • the radar 126 can use radio signals to sense objects in the surrounding environment of the vehicle 100, and can be specifically manifested as a millimeter wave radar or a laser radar. In some embodiments, in addition to sensing objects, the radar 126 can also be used to sense the speed and/or direction of travel of the object.
  • the laser rangefinder 128 can use lasers to sense objects in the environment where the vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, as well as other system components.
  • the camera 130 can be used to capture multiple images of the surrounding environment of the vehicle 100.
  • the camera 130 can be a still camera or a video camera.
  • the control system 106 controls the operation of the vehicle 100 and its components.
  • the control system 106 may include various components, including a steering system 132, a throttle 134, a brake unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 can be operated to adjust the forward direction of the vehicle 100.
  • it can be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus control the speed of the vehicle 100.
  • the brake unit 136 is used to control the deceleration of the vehicle 100.
  • the brake unit 136 can use friction to slow down the wheel 121.
  • the brake unit 136 can convert the kinetic energy of the wheel 121 into electric current.
  • the brake unit 136 can also take other forms to slow down the rotation speed of the wheel 121 to control the speed of the vehicle 100.
  • the computer vision system 140 can be operated to process and analyze the images captured by the camera 130 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • the computer vision system 140 can use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking and other computer vision technologies. In some embodiments, the computer vision system 140 can be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the route and speed of the vehicle 100. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are respectively used to determine the route and speed of the vehicle 100 by combining data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps.
  • the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the vehicle 100, where the obstacles may be actual obstacles or virtual moving bodies that may collide with the vehicle 100.
  • the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral device 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the vehicle 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the vehicle 100.
  • the user interface 116 may also operate the onboard computer 148 to receive user input.
  • the onboard computer 148 may be operated through a touch screen.
  • the peripheral device 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network.
  • the wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may communicate using a wireless local area network (WLAN).
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • the power source 110 can provide power to various components of the vehicle 100.
  • the power source 110 can be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such batteries can be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 110 and the energy source 119 can be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as a memory 114.
  • the computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • the processor, memory, and other components of the computer system 112 may actually include multiple processors or memories that are not located in the same physical housing.
  • the memory 114 may be a hard drive or other storage medium located in a housing different from the computer system 112.
  • references to processor 113 or memory 114 will be understood to include references to a collection of processors or memories that may or may not operate in parallel.
  • some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to the functionality of that component.
  • the processor 113 may be located remotely from the vehicle 100 and in wireless communication with the vehicle 100. In other aspects, some of the processes described herein are performed on a processor 113 disposed within the vehicle 100 while others are performed by the remote processor 113, including taking the necessary steps to perform a single maneuver.
  • the memory 114 may include instructions 115 (e.g., program logic) that can be executed by the processor 113 to perform various functions of the vehicle 100, including those described above.
  • the memory 114 may also include additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
  • the memory 114 may also store data such as road maps, route information, the vehicle's location, direction, speed, and other such vehicle data, as well as other information. This information can be used by the vehicle 100 and the computer system 112 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • a user interface 116 is used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as a wireless communication system 146, an onboard computer 148, a microphone 150, and a speaker 152.
  • the computer system 112 may control functions of the vehicle 100 based on input received from various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 may be operable to provide control over many aspects of the vehicle 100 and its subsystems.
  • one or more of the above-mentioned components may be installed or associated separately from the vehicle 100.
  • the memory 114 may exist partially or completely separately from the vehicle 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • a vehicle traveling on a road can identify objects in its surrounding environment to determine the adjustment of the current speed.
  • the object can be another vehicle, a traffic control device, or other types of objects.
  • each identified object can be considered independently, and the respective characteristics of each object, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the vehicle is to be adjusted.
  • the vehicle 100 or a computing device associated with the vehicle 100 can predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • the behavior of each identified object may depend on the behavior of the others, so all the identified objects can also be considered together to predict the behavior of a single identified object.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object. In other words, the vehicle 100 can determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • the computing device may also provide instructions to modify the steering angle of vehicle 100 so that vehicle 100 follows a given trajectory and/or maintains a safe lateral and longitudinal distance from objects near vehicle 100 (e.g., cars in adjacent lanes on the road).
  • the embodiment of the present application provides a vehicle alarm method, which can be applied to the vehicle 100 shown in Figure 1. Please refer to Figure 2.
  • Figure 2 is a flow chart of the vehicle alarm method provided by the embodiment of the present application.
  • the vehicle alarm method provided by the embodiment of the present application may include: A1. Obtaining the driver's line of sight and the environmental information around the vehicle. A2. Outputting alarm information when the first condition is determined to be satisfied according to the driver's line of sight, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle.
  • for example, when the driver's line of sight is inconsistent with the driving direction of the vehicle, it can be considered that the driver's line of sight does not match the vehicle's driving intention; for another example, when the field of view of the driver's line of sight is outside the object that the vehicle has determined requires attention, it can be considered that the driver's line of sight does not match the vehicle's driving intention. There may also be other scenarios in which the driver's line of sight does not match the vehicle's driving intention, which are not exhaustively listed here.
  • through the embodiment of the present application, another triggering scenario for warning information is provided: when the driver's line of sight does not match the vehicle's driving intention, a warning message is output to the driver. In addition, in determining whether to output warning information, not only the driver's line of sight inside the vehicle but also the environmental information around the vehicle is taken into account, which helps output warning information more accurately; a minimal sketch of this decision flow is given below.
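  • The decision flow of steps A1 and A2 can be summarized with the following sketch; the sensor-reading, intention-estimation, and matching helpers are hypothetical placeholders for whatever perception stack the vehicle actually uses, and the durations are illustrative.

```python
import time

def alarm_loop(get_driver_gaze, get_environment, infer_driving_intention,
               gaze_matches_intention, output_warning,
               first_duration_s: float = 2.0, poll_interval_s: float = 0.1):
    """Periodically compare the driver's gaze with the vehicle's driving intention
    and output a warning once they have mismatched for first_duration_s."""
    mismatch_since = None
    while True:
        gaze = get_driver_gaze()                      # A1: driver's line of sight
        environment = get_environment()               # A1: surroundings of the vehicle
        intention = infer_driving_intention(environment)
        if gaze_matches_intention(gaze, intention):
            mismatch_since = None
        else:
            mismatch_since = mismatch_since or time.monotonic()
            if time.monotonic() - mismatch_since >= first_duration_s:
                output_warning()                      # A2: warn the driver to take over
                mismatch_since = None
        time.sleep(poll_interval_s)
```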
  • the specific implementation of the vehicle alarm method provided by the embodiment of the present application is described in detail below. Specifically, please refer to Figure 3, which is another flow chart of the vehicle alarm method provided by the embodiment of the present application.
  • the vehicle alarm method provided by the embodiment of the present application may include:
  • in order to obtain the driver's line of sight, one or more first sensors can be deployed inside the vehicle, and the field of view (FOV) of the one or more first sensors must at least cover the driver's head.
  • the first sensor can be specifically manifested as a camera, a radar or other sensor that can collect video information of the driver; for example, the camera can be an infrared camera or other types of cameras.
  • the installation position of any first sensor can be any of the following positions: on the steering wheel of the vehicle, on the rearview mirror inside the vehicle, on the A-pillar of the vehicle, or on other positions inside the vehicle, etc.
  • the A-pillar of the vehicle refers to two pillars located on the left front and right front, respectively.
  • the two A-pillars connect the top of the vehicle and the front cabin of the vehicle. It should be noted that the installation position of the first sensor can be flexibly set according to the actual application product. The example here is only for the convenience of understanding this solution and is not used to limit this solution.
  • the vehicle can collect information about the driver at at least one moment through one or more first sensors inside.
  • the information about the driver at at least one moment can be an image of the driver at one moment; or, it can be a video of the driver at multiple moments; or, it can be at least one set of point cloud data corresponding to at least one moment, and each set of point cloud data reflects the driver's behavior at one moment.
  • the information about the driver at at least one moment includes information about the driver's head at at least one moment.
  • based on the collected information about the driver at at least one moment, the driver's line of sight can be tracked to obtain the driver's line of sight.
  • specifically, the vehicle can input the collected information about the driver at at least one moment into a first neural network, and the first neural network tracks the driver's line of sight to obtain first information output by the first neural network, where the first information indicates the driver's line of sight at at least one moment.
  • the driver's line of sight at a certain moment may include the position information of the driver's line of sight in the first coordinate system;
  • the first coordinate system may be a coordinate system established with the first sensor as the origin, or a coordinate system established with the center of the vehicle as the origin, or a coordinate system established with the center of the rearview mirror inside the vehicle as the origin, and so on;
  • the first coordinate system may be a two-dimensional coordinate system, a three-dimensional coordinate system, or other types of coordinate systems, and the specific first coordinate system to be adopted may be determined in combination with the actual application scenario, and is not limited here.
  • the vehicle can also obtain any one or more of the following information about the driver based on at least one video information collected of the driver: the driver's fatigue level, whether the driver has abnormal behavior, the driver's user identity number (identity, ID) or other information, etc., which can be flexibly determined in combination with actual application scenarios, and are not exhaustive here.
  • the driver's abnormal behavior may include: the driver making phone calls, the driver's hands leaving the steering wheel, or other abnormal behaviors of the driver, etc., which are not limited here.
  • one or more second sensors may be deployed on the outside of the vehicle, and the one or more second sensors are used to collect environmental information around the vehicle.
  • the second sensor may be specifically a camera, a radar, or other sensor capable of collecting environmental information around the vehicle, etc.
  • the position of each second sensor may be flexibly set in combination with the actual product form, and is not limited here.
  • the environmental information around the vehicle may be expressed as video information of the environment around the vehicle, or the environmental information around the vehicle may be expressed as at least one set of point cloud data corresponding to at least one moment, each set of point cloud data indicating the environment around the vehicle at a moment, or the environmental information around the vehicle may also be expressed as other types of data, which are not exhaustive here.
  • the vehicle can determine the driving intention of the vehicle based on the environmental information around the vehicle.
  • the driving intention of the vehicle may include the driving direction of the vehicle and/or one or more objects that the driver needs to pay attention to during driving.
  • the one or more objects that the driver needs to pay attention to during driving include objects outside the vehicle, such as other vehicles, pedestrians, traffic lights, traffic signs, or other objects around the vehicle; the objects that the driver needs to pay attention to during driving also include objects on the vehicle, such as the rearview mirror inside the vehicle, the rearview mirror on the left front of the vehicle, the rearview mirror on the right front of the vehicle, or other objects on the vehicle, etc.
  • the specific details can be determined in combination with the actual application scenario and are not limited here.
  • the vehicle's driving intention may include that the vehicle is going straight at a speed of 50 km/h, and the vehicle in front of the right is about to merge into the lane where the vehicle is located. When the vehicle is going straight, attention should be paid to the vehicle in front of the right.
  • the vehicle's driving intention may include that the vehicle is turning left at an intersection at a speed of 30 km/h. When the vehicle is turning left, attention should be paid to the vehicle on the left side of the vehicle that is going straight and coming from the opposite side of the intersection.
  • the vehicle's driving intention may include executing after waiting for the traffic light to turn green at the intersection, and the target that needs to be paid attention to is the traffic light, etc. It should be noted that the examples of "the vehicle's driving intention" here are only for the convenience of understanding this solution and are not used to limit this solution.
  • Step 303 may include: the vehicle determines the driving behavior of the vehicle based on the surrounding environmental information, and determines one or more objects that the vehicle needs to pay attention to during driving. Specifically, after the vehicle determines at least one object around the vehicle based on the environmental information around the vehicle, it can determine one or more objects that the driver needs to pay most attention to at the current moment based on the driving behavior planned by the vehicle.
  • Figures 4 to 6 are three schematic diagrams of objects that the driver needs to pay attention to provided in the embodiments of the present application.
  • for example, the driving behavior planned by the vehicle is to go straight, and the vehicle determines, based on the surrounding environmental information, that there is a vehicle ahead and that the vehicle on its left side is preparing to cut in; the object that the driver needs to pay most attention to at the current moment is therefore the vehicle on the left side of the vehicle.
  • the planned driving behavior of the ego vehicle is to turn left. Based on the surrounding environmental information, the ego vehicle determines that there are no pedestrians and non-motor vehicles around the ego vehicle. Therefore, the objects that the driver needs to pay most attention to at the current moment are the traffic lights and the vehicles on the right.
  • step 303 may also include: when it is determined based on the environmental information around the vehicle that there is no object that needs attention at the current moment, the vehicle may also determine the vehicle's driving direction as the driver's desired direction of sight. For example, when the vehicle is traveling straight on a highway and there are no other objects around the vehicle, it can be considered that there is no object that needs attention, and the vehicle's driving direction can be determined as the driver's desired direction of sight. It should be understood that the examples here are only used to prove the feasibility of this solution and are not used to limit this solution.
  • the execution order of step 301 and steps 302 to 303 is not limited in the embodiment of the present application: step 301 may be executed first and then steps 302 to 303; or steps 302 to 303 may be executed first and then step 301; or step 301 and steps 302 to 303 may be executed simultaneously.
  • the first condition includes that the driver's line of sight does not match the driving intention of the vehicle.
  • the vehicle after acquiring the driver's line of sight, the vehicle can determine whether the driver's line of sight meets the first condition. If the judgment result is yes, the process can proceed to step 305; if the judgment result is no, it can be determined that it is not necessary to output warning information to the driver.
  • the vehicle can periodically trigger the determination of whether it is necessary to output warning information to the driver, that is, trigger the execution of step 301 every second time period. If the judgment result is no, it can be determined that it is not necessary to output warning information to the driver in the current period.
  • the length of the second time period may be a preset fixed value, for example, the length of the second time period may be 2 minutes, 5 minutes, 10 minutes or other time lengths.
  • in another implementation, the length of the second time period is variable, and the factors determining its value may include any one or more of the following: the current speed of the vehicle, the complexity of the driving behavior planned by the vehicle, the safety of the environment around the vehicle, the safety of the driver's driving behavior, or other factors. For example, the slower the current speed of the vehicle, the larger the value of the length of the second time period can be; the faster the current speed of the vehicle, the smaller the value can be. A sketch of one such speed-based mapping follows.
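  • A minimal sketch of a speed-dependent second time period, reusing the example period lengths mentioned above; the speed bands themselves are illustrative assumptions.

```python
def second_time_period_s(speed_kmh: float) -> float:
    """Choose how often the warning check is triggered: slower vehicle -> longer period."""
    if speed_kmh < 30:
        return 600.0   # 10 minutes at low speed
    if speed_kmh < 80:
        return 300.0   # 5 minutes at moderate speed
    return 120.0       # 2 minutes at highway speed

print(second_time_period_s(25), second_time_period_s(60), second_time_period_s(110))
```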
  • the first condition includes that the driver's line of sight does not match the driving intention of the vehicle; the situation that the driver's line of sight does not match the driving intention of the vehicle may include: the driver's line of sight is outside the object that needs attention.
  • the situation that the driver's line of sight is outside the object that needs attention may include: at a first moment, the driver's line of sight is outside the object that needs attention; since at a first moment, the object that needs attention may be one or more, as long as the driver's line of sight is within any one of the one or more objects that need attention, it can be regarded that the driver's line of sight is within the object that needs attention; if the driver's line of sight is outside all objects that need attention, it can be regarded that the driver's line of sight is outside the object that needs attention.
  • step 303 is an optional step. If step 303 is not performed, step 304 may include: the vehicle can obtain second information from the first information; the second information indicates the driver's line of sight at the first moment, and the second information may include the position information of the observation range of the driver's line of sight in the first coordinate system at the first moment.
  • a second neural network is deployed on the vehicle, and the second information and the environmental information around the vehicle at the first moment are input into the second neural network, and the second neural network outputs first prediction information, and the first prediction information indicates whether the driver's line of sight is outside the object that needs attention at the first moment.
  • the second neural network can be a convolutional neural network, a residual neural network, or other types of neural networks, etc.
  • step 304 may include: the vehicle may obtain the second information and the third information, the second information may include the position information of the driver's line of sight at the first moment in the first coordinate system, the third information may include the position information of each object that the driver needs to pay attention to at the first moment in the second coordinate system, and the second coordinate system and the first coordinate system may be different coordinate systems or the same coordinate system.
  • for example, the first coordinate system and the second coordinate system are both coordinate systems established with the center of the vehicle as the origin; for another example, the first coordinate system is a coordinate system established with the first camera as the origin, and the second coordinate system is a coordinate system established with the second camera as the origin. It should be noted that the specific coordinate systems used as the first coordinate system and the second coordinate system can be flexibly determined in combination with the actual application scenario, and are not limited here.
  • the vehicle can determine whether the driver's line of sight at the first moment is outside the object that needs attention based on the second information and the third information. Specifically, if the first coordinate system and the second coordinate system are different coordinate systems, in one implementation, the vehicle can input the second information and the third information into a third neural network, and the third neural network outputs second prediction information, and the second prediction information indicates whether the driver's line of sight at the first moment is outside the object that needs attention.
  • the vehicle may perform a mapping operation based on the second information and/or the third information to map the driver's sight range at the first moment and the object that needs attention at the first moment to the same target coordinate system.
  • the vehicle may determine whether the driver's sight is outside the object that needs attention based on the driver's sight range at the first moment and the object that needs attention at the first moment in the target coordinate system.
  • the above-mentioned mapping operation can be to map the position information of the driver's line of sight at the first moment in the first coordinate system to the position information of the driver's line of sight at the first moment in the second coordinate system, that is, the target coordinate system is the second coordinate system; or, map the position information of the object that the driver needs to pay attention to at the first moment in the second coordinate system to the position information of the object that the driver needs to pay attention to at the first moment in the first coordinate system, that is, the target coordinate system is the first coordinate system; or, map the position information of the driver's line of sight at the first moment in the first coordinate system and the position information of the object that the driver needs to pay attention to at the first moment in the second coordinate system to the same third coordinate system, that is, the target coordinate system is the third coordinate system, and so on.
  • the specific mapping operation to be adopted can be flexibly set according to the actual situation and is not limited here.
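  • The mapping operation itself can be illustrated with a homogeneous rigid transform between the two coordinate systems; the 4x4 matrix below stands in for a hypothetical calibration between the two sensor frames and is not a value given in the patent.

```python
import numpy as np

def map_point(point_xyz, transform_4x4):
    """Map a 3D point from one coordinate system to another using a homogeneous transform."""
    p = np.array([point_xyz[0], point_xyz[1], point_xyz[2], 1.0])
    q = transform_4x4 @ p
    return q[:3]

# Hypothetical transform from the first (gaze) frame to the second (environment) frame:
# a 90-degree rotation about Z plus a translation.
T_first_to_second = np.array([[0.0, -1.0, 0.0, 1.5],
                              [1.0,  0.0, 0.0, 0.0],
                              [0.0,  0.0, 1.0, 1.2],
                              [0.0,  0.0, 0.0, 1.0]])
print(map_point((2.0, 0.0, 0.0), T_first_to_second))  # -> [1.5, 2.0, 1.2]
```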
  • the vehicle can obtain the area (or volume) of the intersection between the observation range of the driver's line of sight at the first moment and the object that needs attention at the first moment, and determine whether this intersection area (or volume) is greater than or equal to an area (or volume) threshold. If so, it is determined that the driver's line of sight at the first moment is within the object that needs attention; otherwise, it is determined that the driver's line of sight at the first moment is outside the object that needs attention.
  • alternatively, the vehicle can obtain a first ratio between (i) the area (or volume) of the intersection of the driver's observation range at the first moment with the object that needs attention at the first moment and (ii) the area (or volume) of the driver's observation range at the first moment. If the first ratio is greater than or equal to a first threshold, it is determined that the driver's line of sight at the first moment is within the object that needs attention; if the first ratio is less than the first threshold, it is determined that the driver's line of sight at the first moment is outside the object that needs attention. A sketch of this ratio test is given after this item.
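  • A minimal sketch of this ratio test for 2D axis-aligned rectangles; the box format and the 0.5 value of the first threshold are illustrative assumptions.

```python
def rect_area(box):
    x_min, y_min, x_max, y_max = box
    return max(0.0, x_max - x_min) * max(0.0, y_max - y_min)

def gaze_within_object(gaze_box, attention_box, first_threshold: float = 0.5) -> bool:
    """Return True if the overlap between the gaze region and the attention object,
    divided by the gaze-region area, reaches the first threshold."""
    ix_min = max(gaze_box[0], attention_box[0])
    iy_min = max(gaze_box[1], attention_box[1])
    ix_max = min(gaze_box[2], attention_box[2])
    iy_max = min(gaze_box[3], attention_box[3])
    intersection = rect_area((ix_min, iy_min, ix_max, iy_max))
    gaze_area = rect_area(gaze_box)
    if gaze_area == 0:
        return False
    return intersection / gaze_area >= first_threshold

# Example: 60% of the gaze region overlaps the attention object -> considered "within".
print(gaze_within_object((0, 0, 1, 1), (0.4, 0, 2, 2)))  # True
```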
  • the vehicle can obtain the center of the observation range of the driver's line of sight at the first moment, and T vertices of the object that needs to be paid attention to at the first moment.
  • the vehicle can calculate the angle formed at the origin between the above center and each vertex, and repeat this operation until T angles corresponding to the T vertices are obtained. If the T angles are all less than the first angle threshold, it is determined that the driver's line of sight at the first moment is within the object that needs attention; if at least one of the T angles is greater than the first angle threshold, it is determined that the driver's line of sight at the first moment is outside the object that needs attention (see the sketch below).
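  • A sketch of the vertex-angle check described above is given below, assuming that the sight-range center, the T vertices and the origin are all expressed as 3D coordinates in the same coordinate system; the function name, the default threshold and the example values are illustrative assumptions.

```python
import numpy as np


def sight_within_object_by_angles(sight_center, object_vertices, origin,
                                  first_angle_threshold_deg=30.0):
    """For each of the T vertices of the object that needs attention, compute
    the angle formed at the origin between the ray to the center of the sight
    range and the ray to that vertex; the sight is treated as within the
    object only when all T angles stay below the first angle threshold."""
    c = np.asarray(sight_center, dtype=float) - np.asarray(origin, dtype=float)
    c /= np.linalg.norm(c)
    for vertex in object_vertices:
        r = np.asarray(vertex, dtype=float) - np.asarray(origin, dtype=float)
        r /= np.linalg.norm(r)
        angle = np.degrees(np.arccos(np.clip(np.dot(c, r), -1.0, 1.0)))
        if angle > first_angle_threshold_deg:
            return False  # at least one angle exceeds the threshold
    return True


# Hypothetical example: four vertices of an object roughly centered on the gaze.
vertices = [(10, 1, 0.5), (10, -1, 0.5), (10, 1, 2.0), (10, -1, 2.0)]
print(sight_within_object_by_angles((10, 0, 1.0), vertices, (0, 0, 0)))
```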
  • that is, after the vehicle maps the driver's line of sight at the first moment and the object that needs attention at the first moment to the same coordinate system, a variety of methods can be used to determine whether the driver's line of sight at the first moment is outside the object that needs attention.
  • the example here is only used to prove the feasibility of this solution and is not used to limit this solution.
  • the vehicle can determine whether the driver's line of sight is outside the object that needs to be paid attention to based on the driver's line of sight at the first moment and the object that needs to be paid attention to at the first moment under the same coordinate system.
  • the specific implementation method of the above steps can refer to the above description and will not be repeated here.
  • FIG. 7a is a schematic diagram of the driver's line of sight being within the object that needs attention and the driver's line of sight being outside the object that needs attention provided by the embodiment of the present application.
  • FIG. 7a includes an upper sub-diagram and a lower sub-diagram.
  • the gray elliptical area in FIG. 7a represents the field of vision of the driver's line of sight, and the area in the dotted box represents the object that needs attention.
  • the upper sub-schematic diagram in FIG. 7a represents that the driver's line of sight is within the object that needs attention, and the lower sub-schematic diagram in FIG. 7a represents that the driver's line of sight is outside the object that needs attention.
  • FIG. 7a is only for the convenience of understanding this solution and is not used to limit this solution.
  • the driver's eyes can be used as the location of the virtual camera, and the position information of each object that needs to be paid attention to in the second coordinate system can be mapped to the coordinate system corresponding to the virtual camera, thereby obtaining the position information of the object that needs to be paid attention to in the coordinate system corresponding to the virtual camera.
  • the second coordinate system can be a three-dimensional coordinate system
  • the coordinate system corresponding to the virtual camera can be a two-dimensional coordinate system.
  • Xw, Yw and Zw represent the coordinates, in the second coordinate system, of any point in the object that needs attention (hereinafter referred to as the "target point" for ease of description), and u and v represent the coordinates of the target point after it is mapped to the coordinate system corresponding to the virtual camera; the mapping can follow the standard pinhole camera model, that is, s·[u, v, 1]^T = K·[R | T]·[Xw, Yw, Zw, 1]^T, where K represents the intrinsic parameter matrix of the camera, R represents the rotation matrix, T represents the translation matrix, and s is a scale factor. A sketch of this projection is given below.
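  • The following sketch illustrates this mapping under the standard pinhole camera model; the intrinsic matrix K, the rotation R and the translation T used here are hypothetical placeholders rather than values from this application.

```python
import numpy as np


def project_to_virtual_camera(points_world, K, R, T):
    """Map 3D points (N x 3) given in the second coordinate system to (u, v)
    coordinates in the coordinate system of the virtual camera placed at the
    driver's eyes, i.e. s*[u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T.
    The camera is assumed to look along its own z axis (depth)."""
    pts = np.asarray(points_world, dtype=float)
    cam = R @ pts.T + T.reshape(3, 1)   # second coordinate system -> camera frame
    uvw = K @ cam                       # camera frame -> homogeneous pixel coords
    return (uvw[:2] / uvw[2]).T         # divide by depth to get (u, v)


# Hypothetical intrinsics and extrinsics (identity rotation, small translation
# from the forward-looking sensor to the driver's eye position).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, -1.2, 0.0])
corners = [[2.0, 0.5, 10.0], [2.0, 1.5, 10.0]]   # two corners of an object ahead
print(project_to_virtual_camera(corners, K, R, T))
```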
  • Figures 7b and 7c are two schematic diagrams of mapping objects that require attention in the second coordinate system to the coordinate system corresponding to the virtual camera provided in an embodiment of the present application.
  • the object that requires attention in the surrounding environment of the vehicle (that is, the vehicle in Figure 7b) is a three-dimensional vehicle in the second coordinate system.
  • in FIG. 7b, the origin of the second coordinate system is, as an example, the forward-looking sensor of the vehicle; the aforementioned vehicle is mapped to the coordinate system corresponding to the virtual camera, where the virtual camera is determined based on the position of the driver's eyes.
  • the position of each object that requires attention in the coordinate system corresponding to the virtual camera is the position at which that object is expected to appear in the driver's field of vision.
  • the vehicle can then determine whether the driver's line of sight is outside the object that requires attention based on the position information of each object that requires attention in the coordinate system corresponding to the virtual camera.
  • FIG. 7c includes an upper sub-diagram and a lower sub-diagram.
  • the upper sub-schematic diagram of FIG. 7c shows the environmental information around the vehicle collected by the forward-looking sensor on the top of the vehicle, that is, the position of the objects around the vehicle in FIG. 7c in the second coordinate system;
  • the lower sub-schematic diagram of FIG. 7c shows the positions to which the objects that need attention around the vehicle are mapped in the coordinate system corresponding to the human eye (that is, the coordinate system corresponding to the virtual camera determined based on the driver's eyes).
  • FIG. 7b and FIG. 7c are examples only for the convenience of understanding the concept of "taking the driver's eyes as the location of the virtual camera, and mapping the position information of each object that needs attention in the second coordinate system to the coordinate system corresponding to the virtual camera", and are not used to limit this solution.
  • the vehicle can determine whether the driver's line of sight at the first moment is outside the object that needs attention based on the position information of each object that needs attention in the coordinate system corresponding to the virtual camera and the position information of the driver's line of sight in the first coordinate system (that is, the second information).
  • the driver's line of sight being outside the object requiring attention may also include: the driver's line of sight at the first moment is within the object requiring attention, and the second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, and the first time period is after the first moment.
  • when the vehicle determines that the driver's line of sight at the first moment is within the object requiring attention, it may also determine the first movement parameter of the object requiring attention within the first time period based on environmental information around the vehicle; obtain the second movement parameter of the driver's line of sight within the first time period; and determine whether the first movement parameter and the second movement parameter match. If the judgment result is yes, it determines that the driver's line of sight is within the object requiring attention; if the judgment result is no, it determines that the driver's line of sight is outside the object requiring attention.
  • the first movement parameter may include a first movement direction of the object to be concerned about within the first time period; correspondingly, the second movement parameter may include a second movement direction of the driver's line of sight within the first time period.
  • the first movement direction may be a movement direction of the center of the object to be concerned about within the first time period
  • the second movement direction may be a movement direction of the center of the driver's line of sight within the first time period
  • the first movement direction may be a movement direction of any point in the object to be concerned about within the first time period
  • the second movement direction may be a movement direction of any point within the driver's line of sight within the first time period.
  • the specific implementation method of the vehicle determining whether the first movement parameter and the second movement parameter match may include: determining whether the angle between the first movement direction and the second movement direction is less than or equal to the first angle threshold, if the determination result is yes, it can be determined that the first movement parameter and the second movement parameter match; if the determination result is no, it can be determined that the first movement parameter and the second movement parameter do not match.
  • the value of the first angle threshold is less than 90 degrees, for example, the value of the first angle threshold can be 30 degrees, 45 degrees, 60 degrees or other values, etc. The examples here are only for the convenience of understanding this solution and are not used to limit this solution.
  • FIG. 8 is a schematic diagram of the first movement parameter and the second movement parameter provided in the embodiment of the present application.
  • the first movement parameter and the second movement parameter are both movement directions.
  • the movement direction of the object to be concerned in the first time period is to the left, and the movement direction of the driver's line of sight in the first time period is to the upper right. Then, the first movement parameter and the second movement parameter do not match.
  • the example in FIG. 8 is only for the convenience of understanding this solution and is not used to limit this solution.
  • the first movement parameter may also include any one or more of the following information: a first movement distance, a first movement speed, or other movement information of the object of interest within a first time period, etc., which are not exhaustively listed here.
  • the second movement parameter may also include any one or more of the following information: a second movement distance, a second movement speed, or other movement information of the driver's line of sight within the first time period, etc., which are not exhaustively listed here.
  • the specific implementation method of the vehicle determining whether the first movement parameter and the second movement parameter match may include: the vehicle determining whether the angle between the first movement direction and the second movement direction is less than or equal to the first angle threshold; it may also include: the vehicle determining whether the difference between the first movement distance and the second movement distance is less than or equal to the distance threshold, and/or determining whether the difference between the first movement speed and the second movement speed is less than or equal to the speed threshold, and/or determining whether the difference between other movement parameters of the object requiring attention and of the driver's line of sight within the first time period is less than the second threshold. If the judgment results of all of the above judgment operations are yes, it can be determined that the first movement parameter and the second movement parameter match; if the judgment result of any judgment operation is no, it can be determined that the first movement parameter and the second movement parameter do not match (a sketch of this matching is given below).
  • the first movement parameter and the second movement parameter are both determined as the movement direction, which provides an implementation solution that is easy to implement and has high accuracy.
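  • A minimal sketch of this multi-criteria matching is given below; it assumes that the movements of the object that needs attention and of the driver's line of sight over the first time period are given as 2D displacement vectors, and all threshold values, parameter names and example inputs are assumptions for illustration.

```python
import numpy as np


def movement_parameters_match(object_displacement, sight_displacement,
                              first_angle_threshold_deg=45.0,
                              distance_threshold=None,
                              speed_threshold=None,
                              period_s=1.0):
    """object_displacement / sight_displacement: 2D displacement vectors of the
    object that needs attention and of the driver's line of sight over the
    first time period. Every enabled criterion must hold for a match."""
    a = np.asarray(object_displacement, dtype=float)
    b = np.asarray(sight_displacement, dtype=float)

    # Direction criterion: the angle between the two movement directions.
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle > first_angle_threshold_deg:
        return False

    # Optional criteria: differences in movement distance and movement speed.
    if distance_threshold is not None and \
            abs(np.linalg.norm(a) - np.linalg.norm(b)) > distance_threshold:
        return False
    if speed_threshold is not None and \
            abs(np.linalg.norm(a) - np.linalg.norm(b)) / period_s > speed_threshold:
        return False
    return True


# Example in the spirit of FIG. 8: the object moves to the left while the
# line of sight moves to the upper right, so the parameters do not match.
print(movement_parameters_match((-1.0, 0.0), (0.7, 0.7)))   # False
```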
  • the driver's line of sight being outside the object that needs attention includes: at a first moment and at multiple consecutive moments after the first moment, the driver's line of sight is outside the object that needs attention.
  • the vehicle can determine whether the driver's line of sight is outside the object that needs attention for each of the first moment and multiple consecutive moments after the first moment. If the driver's line of sight is outside the object that needs attention at any moment in the first moment and multiple consecutive moments after the first moment, it is determined that the driver's line of sight is outside the object that needs attention; if the driver's line of sight is inside the object that needs attention at each moment in the first moment and multiple consecutive moments after the first moment, it is determined that the driver's line of sight is inside the object that needs attention.
  • the specific implementation method of the vehicle determining whether the driver's line of sight is outside the object that needs attention at any moment can be referred to the above description, which will not be repeated here.
  • the vehicle obtains third information from the first information, the third information including position information of the driver's line of sight in the first coordinate system at a first moment and multiple consecutive moments after the first moment; a third neural network can be deployed on the vehicle, and the environmental information around the vehicle at the first moment and multiple consecutive moments after the first moment and the third information are input into the third neural network, and the third neural network outputs third prediction information, the third prediction information indicates whether there is any moment in the first moment and multiple consecutive moments after the first moment when the driver's line of sight is outside the object that needs attention, thereby determining whether the driver's line of sight is outside the object that needs attention.
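  • A sketch of aggregating the per-moment results over the first moment and the consecutive moments after it is given below; because the preceding items can be read either as requiring the line of sight to be outside at every moment or as reacting to any single moment, the aggregation policy is kept configurable, and the function and parameter names are assumptions.

```python
def sight_outside_over_window(per_moment_outside, require_all_moments=True):
    """per_moment_outside: one boolean per moment (the first moment and the
    consecutive moments after it); each entry is True when the single-moment
    check finds the driver's line of sight outside every object that needs
    attention at that moment.

    With require_all_moments=True the window is judged "outside" only when
    the line of sight is outside at every moment; with False it is judged
    "outside" as soon as any single moment is outside."""
    if require_all_moments:
        return all(per_moment_outside)
    return any(per_moment_outside)


# Example: outside at the first two moments but inside at the third.
print(sight_outside_over_window([True, True, False]))                             # False
print(sight_outside_over_window([True, True, False], require_all_moments=False))  # True
```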
  • since the intelligent driving system and the driver both need to observe the surrounding objects in real time while the vehicle is driving, treating the driver's line of sight being outside the object that needs attention as a situation where the driver's line of sight does not match the driving intention of the vehicle is consistent with the logic of manual driving; that is, this solution fits the actual application scenario closely, which is conducive to accurately determining whether the driver's line of sight matches the driving intention of the vehicle.
  • step 303 may include: determining whether the direction of the driver's line of sight matches the driving direction of the vehicle; if the judgment result is no, it can be determined that the driver's line of sight meets the first condition, and if the judgment result is yes, it can be determined that the driver's line of sight does not meet the first condition.
  • the situation that the direction of the driver's line of sight does not match the driving direction of the vehicle may include that the angle between the driver's line of sight and the driving direction of the vehicle is greater than or equal to the second angle threshold.
  • the value of the second angle threshold may be 45 degrees, 50 degrees or other values.
  • for example, if the driving direction of the vehicle is forward and the direction of the driver's line of sight is to the left, then the vehicle can determine that the direction of the driver's line of sight does not match the driving intention of the vehicle, that is, it is determined that the first condition is met; if the driver's line of sight is forward, then the vehicle can determine that the direction of the driver's line of sight matches the driving intention of the vehicle, that is, it is determined that the first condition is not met (see the sketch below).
  • the example here is only for the convenience of understanding this solution and is not used to limit this solution.
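  • A sketch of this direction check is given below, assuming that both the direction of the driver's line of sight and the driving direction of the vehicle are available as yaw angles in degrees in a common reference frame; the 45-degree default merely mirrors one of the example values of the second angle threshold.

```python
def sight_matches_driving_direction(gaze_yaw_deg, heading_yaw_deg,
                                    second_angle_threshold_deg=45.0):
    """Return True when the direction of the driver's line of sight matches the
    driving direction of the vehicle, i.e. the angle between the two directions
    is smaller than the second angle threshold."""
    diff = abs(gaze_yaw_deg - heading_yaw_deg) % 360.0
    diff = min(diff, 360.0 - diff)          # wrap the difference into [0, 180]
    return diff < second_angle_threshold_deg


print(sight_matches_driving_direction(0.0, 0.0))     # looking ahead -> match
print(sight_matches_driving_direction(-90.0, 0.0))   # looking left  -> mismatch
```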
  • the first condition may also include abnormal behavior of the driver.
  • the vehicle may also determine whether the driver has abnormal behavior based on the driver information at at least one moment obtained in step 301; if it is determined that the driver has abnormal behavior, this may also trigger entry into step 305, that is, output of warning information to the driver.
  • abnormal behavior may include: fatigue driving, making phone calls, or other abnormal behaviors of the driver, etc., which are not exhaustive here.
  • in one implementation, when the vehicle determines that the driver's line of sight does not match the vehicle's driving intention, it can directly trigger entry into step 305, that is, trigger the output of warning information to the driver; in another implementation, the vehicle determines to trigger entry into step 305 only when the driver's line of sight has not matched the vehicle's driving intention for a first time.
  • the first time can be pre-set in the vehicle, for example, the length of the first time can be 15 seconds, 20 seconds, 25 seconds or other time, etc., which are not exhaustive here.
  • the vehicle may output warning information in at least two ways, and the vehicle may determine which warning method to use based on the accumulated time that the driver's line of sight does not match the vehicle's driving intention.
  • for example, when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, it can be determined to use the first type of warning method; when the vehicle determines that the cumulative duration of the mismatch reaches a second duration, it can be determined to use the second type of warning method, where the value of the second duration can be greater than that of the first duration.
  • At least two ways in which the vehicle outputs warning information may include: a first type of warning method, a second type of warning method, and a third type of warning method.
  • for example, when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches the first duration, it may be determined to use the first type of warning method; when the cumulative duration of the mismatch reaches the second duration, it may be determined to use the second type of warning method; when the cumulative duration of the mismatch reaches the third duration, it may be determined to use the third type of warning method.
  • the first type of warning method may include visual type warning information
  • the second type of warning method may include visual type warning information and acoustic type warning information
  • the third type of warning method may include visual type warning information, acoustic type warning information, and tactile type warning information.
  • visual warning information may include: outputting warning information to the driver through a warning light on the dashboard of the vehicle, or outputting warning information to the driver through a head-up display (HUD), or outputting visual warning information through other means, etc., which are not exhaustive here.
  • tactile warning information may include: tightening the seat belt, vibrating the steering wheel, or other tactile warning information, etc.; the examples here are only for the convenience of understanding this solution and are not used to limit this solution.
  • the value of the first duration may be associated with the safety of the driver's driving behavior
  • the values of the second duration and/or the third duration may also be associated with the safety of the driver's driving behavior.
  • the higher the safety of the driver's driving behavior, the longer the value of the first duration may be; the lower the safety of the driver's driving behavior, the shorter the value of the first duration may be.
  • similarly, the higher the safety of the driver's driving behavior, the longer the values of the second duration and/or the third duration may be; the lower the safety of the driver's driving behavior, the shorter the values of the second duration and/or the third duration may be.
  • the vehicle can determine the safety of the driver's driving behavior based on any one or more of the following driving behavior information: the cumulative number of forward collision warnings (FCW), the number of sudden accelerations, the number of sudden decelerations, the distance to the vehicle in front when following, the speed of steering, the average number of times the vehicle's intelligent driving system takes over the user's driving, or other information that can reflect the safety of the driver's driving behavior, etc., which are not listed exhaustively here.
  • the vehicle may perform dimensionless processing on each of at least one type of driving behavior information of the driver and then perform weighted summation to obtain a safety index of the driver's driving behavior, which indicates the safety of the driver's driving behavior; optionally, the higher the safety index, the lower the safety of the driver's driving behavior (a sketch is given below).
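  • The dimensionless-processing and weighted-summation step can be sketched as follows; the normalization scales, the weights and the example statistics are hypothetical, and, as stated above, a higher index is taken to indicate lower safety of the driver's driving behavior.

```python
def driving_safety_index(behavior_info, scales, weights):
    """behavior_info: raw driving-behavior statistics (e.g. cumulative FCW
    count, sudden accelerations, sudden decelerations, take-over count).
    scales: per-item reference values used to make each item dimensionless.
    weights: per-item weights for the weighted summation.
    A higher index indicates lower safety of the driver's driving behavior."""
    return sum(weights[key] * (behavior_info[key] / scales[key])
               for key in behavior_info)


# Hypothetical example values.
info = {"fcw_count": 3, "sudden_accel": 5, "sudden_decel": 2, "takeovers": 1}
scales = {"fcw_count": 10, "sudden_accel": 20, "sudden_decel": 20, "takeovers": 5}
weights = {"fcw_count": 0.4, "sudden_accel": 0.2, "sudden_decel": 0.2, "takeovers": 0.2}
print(round(driving_safety_index(info, scales, weights), 3))   # 0.23
```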
  • K1, K2 and K3 in Table 1 represent different safety indexes respectively.
  • taking the first row of Table 1 as an example, when the safety index is K1, the vehicle is triggered to output the first type of warning information when the cumulative time that the driver's line of sight does not match the vehicle's driving intention reaches 20 seconds; the vehicle is triggered to output the second type of warning information when the cumulative time reaches 40 seconds; and the vehicle is triggered to output the third type of warning information when the cumulative time reaches 60 seconds.
  • for the second and third rows in Table 1, please refer to the above explanation of the first row of Table 1, which will not be repeated here. It should be understood that the examples in Table 1 are only for the convenience of understanding this solution and are not used to limit this solution (a sketch of this escalation logic is given below).
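  • The escalation logic illustrated by the first row of Table 1 can be sketched as follows; only the 20/40/60-second thresholds for safety index K1 come from the example above, while the data structures, function names and modality lists are assumptions for illustration.

```python
# Cumulative-duration thresholds (seconds) per safety index; only the K1 row
# reflects the example above, other rows would be configured analogously.
WARNING_THRESHOLDS_S = {"K1": (20, 40, 60)}

# Modalities associated with the three types of warning methods.
WARNING_MODALITIES = {
    1: ("visual",),
    2: ("visual", "acoustic"),
    3: ("visual", "acoustic", "tactile"),
}


def select_warning(safety_index, mismatch_duration_s):
    """Return (warning_type, modalities) for the accumulated time during which
    the driver's line of sight does not match the driving intention, or
    (None, ()) when no warning is triggered yet."""
    first, second, third = WARNING_THRESHOLDS_S[safety_index]
    if mismatch_duration_s >= third:
        level = 3
    elif mismatch_duration_s >= second:
        level = 2
    elif mismatch_duration_s >= first:
        level = 1
    else:
        return None, ()
    return level, WARNING_MODALITIES[level]


print(select_warning("K1", 45))   # -> (2, ('visual', 'acoustic'))
```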
  • in this application, a warning message is output to the driver when the duration for which the driver's line of sight does not match the vehicle's driving intention reaches the first duration; that is, the value of the first duration can affect the frequency of outputting warning messages, and the safety of the driver's driving behavior affects the value of the first duration. This is beneficial to reducing the frequency of outputting warning messages to drivers with high safety levels, thereby avoiding disturbing the user, and to increasing the frequency of outputting warning messages to drivers with low safety levels, thereby improving the safety of the driving process.
  • the vehicle can also display objects that need attention to the driver.
  • the vehicle can highlight the objects that the vehicle has determined to need attention to the driver through the HUD; optionally, the vehicle can highlight the objects that need attention when displaying the navigation route to the driver through the HUD.
  • the highlighting method includes any one or more of the following: adding prompt text next to the object that needs attention, framing the object that needs attention, or other methods of highlighting the object that needs attention, etc., which are not exhaustive here.
  • the vehicle can prompt the user of the objects that need attention in the form of voice playback, etc., and the vehicle can also use other methods to show the user the objects that need attention, which are not exhaustive here.
  • the driver can also be shown objects that need attention, so as to assist the driver in learning the driving ideas of the intelligent driving system, which is conducive to improving the safety of the driver's driving behavior.
  • in this application, another triggering scenario for warning information is provided: when it is determined that the driver's line of sight does not match the vehicle's driving intention, a warning message is output to the driver. In addition, in the determination of "whether to output warning information", not only the driver's line of sight inside the vehicle is taken into account, but also the environmental information around the vehicle, which is conducive to more accurate output of warning information.
  • Figure 9 is a structural schematic diagram of a vehicle alarm device provided in the embodiment of the present application.
  • the vehicle alarm device 900 may include an acquisition module 901 and an alarm module 902, wherein the acquisition module 901 is used to obtain the driver's line of sight and the environmental information around the vehicle; the alarm module 902 is used to output an alarm message when it is determined that the first condition is met according to the driver's line of sight, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle.
  • the vehicle warning device 900 also includes: a processing module 903, which is used to determine the objects that the driver needs to pay attention to during driving according to the environmental information around the vehicle, wherein the driving intention of the vehicle includes the objects that the driver needs to pay attention to during driving, and the situation where the driver's line of sight does not match the driving intention of the vehicle includes: the driver's line of sight is outside the object that needs attention.
  • the processing module 903 is also used to determine the first movement parameter of the object that needs attention within the first time period based on the environmental information around the vehicle, wherein the driver's line of sight is outside the object that needs attention and also includes: the driver's line of sight at the first moment is within the object that needs attention, and the second movement parameter of the driver's line of sight in the first time period does not match the first movement parameter, and the first time period is after the first moment.
  • the first movement parameter includes a first movement direction of the object of interest within a first time period
  • the second movement parameter includes a second movement direction of the driver's line of sight within the first time period
  • the situation where the second movement parameter does not match the first movement parameter includes: the difference between the first movement direction and the second movement direction satisfies the second condition.
  • the vehicle warning device 900 further includes: a display module 904 for displaying objects requiring attention to the driver.
  • the warning module 902 is specifically used to output a warning message when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, and the value of the first duration is correlated with the safety of the driver's driving behavior.
  • FIG. 11 is another structural schematic diagram of the vehicle provided in the embodiment of the present application, wherein the vehicle 100 may be deployed with the vehicle alarm device 900 described in the corresponding embodiments of FIG. 9 and FIG. 10 , for realizing the functions of the vehicle in the corresponding embodiments of FIG. 2 to FIG. 8 .
  • the vehicle 100 may also include a communication function
  • the vehicle 100 may include, in addition to the components shown in FIG. 1 , a receiver 1101 and a transmitter 1102, wherein the processor 113 may include an application processor 1131 and a communication processor 1132.
  • the receiver 1101, the transmitter 1102, the processor 113 and the memory 114 may be connected via a bus or other means.
  • the processor 113 controls the operation of the vehicle.
  • the various components of the vehicle 100 are coupled together through a bus system, wherein the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • various buses are referred to as bus systems in the figure.
  • the receiver 1101 can be used to receive input digital or character information and generate signal input related to the relevant settings and function control of the vehicle.
  • the transmitter 1102 can be used to output digital or character information through the first interface; the transmitter 1102 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; the transmitter 1102 can also include a display device such as a display screen.
  • the application processor 1131 is used to execute the vehicle alarm method executed by the vehicle in the embodiment corresponding to Figure 2. Specifically, the application processor 1131 is used to execute the following steps: obtain the driver's line of sight and the environmental information around the vehicle; when it is determined that the first condition is met based on the driver's line of sight, output an alarm message, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle. It should be noted that for the specific implementation method of the application processor 1131 executing the vehicle alarm method and the beneficial effects brought about, reference can be made to the descriptions in the various method embodiments corresponding to Figures 2 to 8, and they will not be repeated here one by one.
  • a computer-readable storage medium is also provided in an embodiment of the present application, in which a program is stored; when the program is run on a computer, the computer executes the steps executed by the vehicle in the method described in the embodiments shown in the aforementioned Figures 2 to 8.
  • Also provided in an embodiment of the present application is a computer program product, which, when executed on a computer, enables the computer to execute the steps executed by the vehicle in the method described in the embodiments shown in the aforementioned Figures 2 to 8.
  • a circuit system is also provided in an embodiment of the present application, wherein the circuit system includes a processing circuit, and the processing circuit is configured to execute the steps performed by the vehicle in the method described in the embodiments shown in Figures 2 to 8 above.
  • the vehicle alarm device or vehicle provided in the embodiment of the present application may be a chip, and the chip includes: a processing unit and a communication unit, wherein the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin or a circuit, etc.
  • the processing unit may execute the computer execution instructions stored in the storage unit so that the chip in the server executes the vehicle alarm method described in the embodiments shown in Figures 2 to 8 above.
  • the storage unit is a storage unit in the chip, such as a register, a cache, etc.
  • the storage unit may also be a storage unit located outside the chip in the wireless access device end, such as a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM), etc.
  • FIG. 12 is a schematic diagram of a structure of a chip provided in an embodiment of the present application, wherein the chip may be a neural network processor NPU 120, which is mounted on the host CPU (Host CPU) as a coprocessor and is assigned tasks by the Host CPU.
  • the core part of the NPU is the operation circuit 1203; the controller 1205 controls the operation circuit 1203 to extract matrix data from the memory and perform multiplication operations.
  • the operation circuit 1203 includes multiple processing units (Process Engine, PE) inside.
  • the operation circuit 1203 is a two-dimensional systolic array.
  • the operation circuit 1203 can also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition.
  • the operation circuit 1203 is a general-purpose matrix processor.
  • the operation circuit takes the corresponding data of matrix B from the weight memory 1202 and caches it on each PE in the operation circuit.
  • the operation circuit takes the matrix A data from the input memory 1201 and performs matrix operation with matrix B, and the partial result or final result of the matrix is stored in the accumulator 1208.
  • the unified memory 1206 is used to store input data and output data.
  • the weight data is directly transferred to the weight memory 1202 through the direct memory access controller (DMAC) 1205.
  • the input data is also transferred to the unified memory 1206 through the DMAC.
  • BIU stands for Bus Interface Unit, that is, the bus interface unit 1210, which is used for the interaction between the AXI bus and the DMAC and the instruction fetch buffer (IFB) 1209.
  • the bus interface unit 1210 (Bus Interface Unit, BIU for short) is used for the instruction fetch memory 1209 to obtain instructions from the external memory, and is also used for the storage unit access controller 1205 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
  • DMAC is mainly used to transfer input data in the external memory DDR to the unified memory 1206 or to transfer weight data to the weight memory 1202 or to transfer input data to the input memory 1201.
  • the vector calculation unit 1207 includes multiple operation processing units, which further process the output of the operation circuit when necessary, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, etc. It is mainly used for non-convolutional/fully connected layer network calculations in neural networks, such as Batch Normalization, pixel-level summation, upsampling of feature planes, etc.
  • the vector calculation unit 1207 can store the processed output vector to the unified memory 1206.
  • the vector calculation unit 1207 can apply a linear function and/or a nonlinear function to the output of the operation circuit 1203, for example, performing linear interpolation on the feature plane extracted by the convolution layer, or, for example, accumulating vectors of values to generate activation values.
  • the vector calculation unit 1207 generates a normalized value, a pixel-level summed value, or both.
  • the processed output vector can be used as an activation input to the operation circuit 1203, such as for use in a subsequent layer in a neural network.
  • An instruction fetch buffer 1209 connected to the controller 1205 is used to store instructions used by the controller 1205;
  • Unified memory 1206, input memory 1201, weight memory 1202 and instruction fetch memory 1209 are all on-chip memories; the external memory is a memory external to the NPU hardware architecture.
  • the operation of each layer in the neural network mentioned in each method embodiment shown in Figures 2 to 8 can be performed by the operation circuit 1203 or the vector calculation unit 1207.
  • the processor mentioned in any of the above places may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the above-mentioned first aspect method.
  • the device embodiments described above are merely schematic, wherein the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
  • the connection relationship between the modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
  • the technical solution of the present application is essentially or the part that contributes to the prior art can be embodied in the form of a software product, which is stored in a readable storage medium, such as a computer floppy disk, U disk, mobile hard disk, ROM, RAM, disk or optical disk, etc., including a number of instructions to enable a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods described in each embodiment of the present application.
  • all or part of the embodiments may be implemented by software, hardware, firmware or any combination thereof.
  • all or part of the embodiments may be implemented in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the process or function described in the embodiment of the present application is generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website site, a computer, a server, or a data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) mode to another website site, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center that integrates one or more available media.
  • the available medium may be a magnetic medium, (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle alarm method and a related device, applicable to the field of autonomous driving in artificial intelligence. The method comprises: acquiring a line of sight of a driver (301) and environment information around a vehicle (302), and when it is determined, according to the line of sight of the driver, that a first condition is satisfied (304), outputting alarm information (305), wherein the first condition comprises a mismatch between the line of sight of the driver and a driving intention of the vehicle (304), and the driving intention of the vehicle is determined on the basis of the environment information around the vehicle (303). In the scenario of triggering alarm information provided by the method, a line of sight of a driver in a vehicle is considered, and environment information around the vehicle is also considered, thereby facilitating more accurate output of the alarm information.

Description

Vehicle warning method and related equipment
This application claims priority to a Chinese patent application filed with the State Intellectual Property Office on October 31, 2022, with application number 202211366481.9 and invention name "A vehicle alarm method and related equipment", the entire contents of which are incorporated by reference in this application.
Technical Field
The present application relates to the field of autonomous driving, and in particular to a vehicle warning method and related equipment.
Background
During the driving process of the vehicle, the intelligent driving system in the vehicle needs to understand the driver's status and output warning information to the driver in time; the warning information is used to remind the driver to take over the vehicle. The triggering scenarios of the warning information mainly include periodic triggering and event triggering; for example, the aforementioned event triggering may include fatigue driving, making phone calls or other abnormal behaviors of the driver.
However, in the triggering scenarios of the above warning information, only the information of the driver inside the vehicle is considered, while the information outside the vehicle is not considered, resulting in an inaccurate determination process of "whether to output the warning information".
Summary of the Invention
The present application provides a vehicle alarm method and related equipment. The triggering scenario of the alarm information provided by this solution not only takes into account the driver's line of sight inside the vehicle, but also takes into account the environmental information around the vehicle, which is conducive to more accurate output of alarm information.
In order to solve the above technical problems, this application provides the following technical solutions:
In a first aspect, the present application provides a vehicle warning method, which can be used in the field of autonomous driving in the field of artificial intelligence. The method may include: the vehicle obtains the driver's line of sight and the environmental information around the vehicle; when it is determined that a first condition is met according to the driver's line of sight, warning information is output. The first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle; the vehicle's driving intention may include any one or more of the following: one or more objects that the driver needs to pay attention to during driving, the vehicle's driving direction, or other types of driving intentions. The aforementioned vehicle may be a car, truck, motorcycle, bus, ship, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, etc.
In the present application, another triggering scenario for warning information is provided: when it is determined that the driver's line of sight does not match the vehicle's driving intention, that is, when it is determined that the object the driver is paying attention to does not match the driving intention determined by the vehicle's intelligent driving system, a warning message is output to the driver. In addition, in the determination process of "whether to output warning information", not only the driver's line of sight inside the vehicle is considered, but also the environmental information around the vehicle, which is conducive to more accurate output of warning information.
Optionally, the method further includes: the vehicle determines one or more objects that the driver needs to pay attention to during driving based on the environmental information around the vehicle; the driving intention of the vehicle includes the objects that the driver needs to pay attention to during driving, and the situation where the driver's line of sight does not match the driving intention of the vehicle includes: the driver's line of sight is outside all objects that need attention. In the present application, the driving behavior needs to be determined based on the surrounding objects during driving, that is, the intelligent driving system and the driver both need to observe the surrounding objects in real time; treating the driver's line of sight being outside the objects that need attention as a situation where the driver's line of sight does not match the vehicle's driving intention is consistent with the logic of manual driving, that is, this solution has a high degree of fit with the actual application scenario, which is conducive to accurately determining whether the driver's line of sight matches the vehicle's driving intention.
Optionally, the situation that the driver's line of sight is outside the object that needs attention may include: at a first moment, the driver's line of sight is outside all objects that need attention. Alternatively, it may include: at the first moment, the driver's line of sight is inside the object that needs attention, and the second movement parameter of the driver's line of sight in a first time period does not match the first movement parameter, the first time period being after the first moment. Alternatively, it may include: at the first moment and at multiple consecutive moments after the first moment, the driver's line of sight is outside the object that needs attention.
Optionally, the method may further include: the vehicle determines the first movement parameter of the object requiring attention within the first time period based on the environmental information around the vehicle. The driver's line of sight being outside the object requiring attention also includes: the driver's line of sight at the first moment is within the object requiring attention, and the second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, the first time period being after the first moment. The first movement parameter may include a first movement direction of the object requiring attention within the first time period; or, the first movement parameter may also include any one or more of the following information: a first movement distance, a first movement speed, or other movement information of the object requiring attention within the first time period. Correspondingly, the second movement parameter may include a second movement direction of the driver's line of sight within the first time period; or, the second movement parameter may also include any one or more of the following information: a second movement distance, a second movement speed, or other movement information of the driver's line of sight within the first time period.
In the present application, when considering whether the driver's line of sight is outside the object that needs attention, not only is it considered whether the driver's line of sight is outside the object that needs attention at the single first moment, but it is also considered whether the movement parameters of the driver's line of sight in a first time period after the first moment are consistent with the movement parameters of the object that needs attention. This is beneficial to improving the accuracy of the judgment "whether the driver's line of sight is outside the object that needs attention", and is also beneficial to improving the safety of the driving process.
Optionally, the first movement parameter includes a first movement direction of the object that needs attention within the first time period, and the second movement parameter includes a second movement direction of the driver's line of sight within the first time period; each object that needs attention can be represented as an area, and the observation range of the driver's line of sight can also be represented as an area. The first movement direction can then be the movement direction of any point in the object that needs attention within the first time period, and the second movement direction can be the movement direction of any point within the observation range of the driver's line of sight within the first time period. The situation where the second movement parameter does not match the first movement parameter includes: the difference between the first movement direction and the second movement direction satisfies a second condition; for example, the second condition can include that the angle between the first movement direction and the second movement direction is less than the first angle threshold. In the present application, the first movement parameter and the second movement parameter are both determined as movement directions, which provides an implementation scheme that is easy to implement and has high accuracy.
可选地,车辆的行驶意图包括车辆的行驶方向,驾驶员的视线与车辆的行驶意图不匹配的情况包括:驾驶员的视线与车辆的行驶方向不匹配。方法还可以包括:在根据车辆周围的环境信息,确定当前时刻不存在需要关注的某个物体的情况下,可以判断驾驶员的视线的方向是否与车辆的行驶方向匹配;其中,驾驶员的视线的方向与车辆的行驶方向不匹配的情况可以包括驾驶员的视线方向与车辆的行驶方向之间的夹角大于或等于第二角度阈值。Optionally, the driving intention of the vehicle includes the driving direction of the vehicle, and the situation where the driver's line of sight does not match the driving intention of the vehicle includes: the driver's line of sight does not match the driving direction of the vehicle. The method may also include: when it is determined based on the environmental information around the vehicle that there is no object that needs attention at the current moment, it can be determined whether the direction of the driver's line of sight matches the driving direction of the vehicle; wherein, the situation where the direction of the driver's line of sight does not match the driving direction of the vehicle may include that the angle between the driver's line of sight and the driving direction of the vehicle is greater than or equal to the second angle threshold.
可选地,方法还可以包括:车辆向驾驶员展示需要关注的物体。例如,车辆可以通过抬头显示系统HUD向驾驶员突出显示车辆确定的需要关注的物体;可选地,车辆可以在通过HUD向驾驶员显示导航路线时,突出显示需要关注的物体。突出显示的方式包括如下任一种或多种:在需要关注的物体旁边增设提示文本、将需要关注的物体框起来或其他突出显示需要关注的物体的方式等等。本申请中,还可以向驾驶员展示需要关注的物体,从而可以协助驾驶员学习智能驾驶系统的驾驶思路,有利于提高驾驶员的驾驶行为的安全度。Optionally, the method may further include: the vehicle displays objects that require attention to the driver. For example, the vehicle may highlight objects that require attention determined by the vehicle to the driver through the head-up display system HUD; optionally, the vehicle may highlight objects that require attention when displaying the navigation route to the driver through the HUD. The highlighting method includes any one or more of the following: adding prompt text next to the object that requires attention, framing the object that requires attention, or other methods of highlighting the object that requires attention, etc. In the present application, objects that require attention can also be displayed to the driver, so as to assist the driver in learning the driving ideas of the intelligent driving system, which is conducive to improving the safety of the driver's driving behavior.
可选地,车辆在根据驾驶员的视线确定满足第一条件的情况下,输出告警信息,包括:当驾驶员的视线与车辆的行驶意图不匹配的时长达到第一时长时,车辆输出告警信息,第一时长的取值与驾驶员的驾驶行为的安全度具有关联关系。其中,驾驶员的驾驶行为的安全度的确定因素可以包括如下任意一项或多项:前向预警累计次数FCW、急加速次数、急减速次数、跟车时与前车的距离、打方向盘的缓急程度、车辆的智能驾驶系统接管用户驾驶的平均次数或其他能够反映驾驶员的驾驶行为的安全度的信息等。例如,驾驶员的驾驶行为的安全度越高,第一时长的取值可以越长,驾驶员的驾驶行为的安全度越低,第一时长的取值可以越短。Optionally, the vehicle outputs warning information when it is determined that the first condition is met based on the driver's line of sight, including: when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, the vehicle outputs warning information, and the value of the first duration is associated with the safety of the driver's driving behavior. Among them, the factors determining the safety of the driver's driving behavior may include any one or more of the following: the cumulative number of forward warnings FCW, the number of sudden accelerations, the number of sudden decelerations, the distance from the vehicle in front when following the vehicle, the speed of steering, the average number of times the vehicle's intelligent driving system takes over the user's driving, or other information that can reflect the safety of the driver's driving behavior. For example, the higher the safety of the driver's driving behavior, the longer the value of the first duration can be, and the lower the safety of the driver's driving behavior, the shorter the value of the first duration can be.
本申请中,驾驶员的视线与车辆的行驶意图不匹配的时长达到第一时长时,向驾驶员输出告警信息,也即第一时长的取值可以影响输出告警信息的频率,而驾驶员的驾驶行为的安全度会影响第一时长的取值,有利于降低向安全度高的驾驶员输出告警信息的频率,从而避免打扰到用户;也有利于提高向安全度低的驾驶员输出告警信息的频率,以提高驾驶过程的安全度。In the present application, when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, a warning message is output to the driver, that is, the value of the first duration can affect the frequency of outputting the warning message, and the safety of the driver's driving behavior will affect the value of the first duration, which is conducive to reducing the frequency of outputting warning messages to drivers with high safety levels, thereby avoiding disturbing the user; it is also conducive to increasing the frequency of outputting warning messages to drivers with low safety levels, thereby improving the safety of the driving process.
According to a second aspect, this application provides a vehicle warning apparatus, which may be used in the field of autonomous driving within the field of artificial intelligence. The vehicle warning apparatus may include: an obtaining module, configured to obtain the driver's line of sight and environment information around the vehicle; and a warning module, configured to output warning information when it is determined, based on the driver's line of sight, that a first condition is met, where the first condition includes that the driver's line of sight does not match the driving intention of the vehicle, and the driving intention of the vehicle is determined based on the environment information around the vehicle.
In the second aspect of this application, the vehicle warning apparatus may further perform the steps performed by the vehicle in the first aspect and in the possible implementations of the first aspect. For the specific implementation steps of the second aspect and its possible implementations, and the beneficial effects brought by each possible implementation, refer to the descriptions of the possible implementations of the first aspect; details are not repeated here.
According to a third aspect, this application provides a vehicle, which may include a memory, a processor, and a bus system, where the memory is configured to store a program, the processor is configured to execute the program in the memory, and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate with each other. The processor may be configured to perform the steps performed by the vehicle in the possible implementations of the first aspect; for details, refer to the first aspect, which are not repeated here.
According to a fourth aspect, this application provides a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to perform the vehicle warning method described in the first aspect.
According to a fifth aspect, this application provides a circuit system, where the circuit system includes a processing circuit configured to perform the vehicle warning method described in the first aspect.
According to a sixth aspect, this application provides a computer program that, when run on a computer, causes the computer to perform the vehicle warning method described in the first aspect.
According to a seventh aspect, this application provides a chip system, where the chip system includes a processor configured to support a server or a vehicle speed generating apparatus in implementing the functions involved in the foregoing aspects, for example, sending or processing the data and/or information involved in the foregoing method. In a possible design, the chip system further includes a memory configured to store program instructions and data necessary for the server or the communication device. The chip system may consist of a chip, or may include a chip and other discrete components.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of a structure of a vehicle according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a vehicle warning method according to an embodiment of this application;
FIG. 3 is another schematic flowchart of a vehicle warning method according to an embodiment of this application;
FIG. 4 is a schematic diagram of an object that a driver needs to pay attention to according to an embodiment of this application;
FIG. 5 is another schematic diagram of an object that a driver needs to pay attention to according to an embodiment of this application;
FIG. 6 is still another schematic diagram of an object that a driver needs to pay attention to according to an embodiment of this application;
FIG. 7a is a schematic diagram of a case in which the driver's line of sight falls within an object requiring attention and a case in which the driver's line of sight falls outside the object requiring attention according to an embodiment of this application;
FIG. 7b is a schematic diagram of mapping an object requiring attention from a second coordinate system to a coordinate system corresponding to a virtual camera according to an embodiment of this application;
FIG. 7c is another schematic diagram of mapping an object requiring attention from a second coordinate system to a coordinate system corresponding to a virtual camera according to an embodiment of this application;
FIG. 8 is a schematic diagram of a first movement parameter and a second movement parameter according to an embodiment of this application;
FIG. 9 is a schematic diagram of a structure of a vehicle speed generating apparatus according to an embodiment of this application;
FIG. 10 is another schematic diagram of a structure of a vehicle speed generating apparatus according to an embodiment of this application;
FIG. 11 is another schematic diagram of a structure of a vehicle according to an embodiment of this application;
FIG. 12 is a schematic diagram of a structure of a chip according to an embodiment of this application.
DETAILED DESCRIPTION OF EMBODIMENTS
The terms "first", "second", and the like in the specification, claims, and accompanying drawings of this application are used to distinguish between similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable in appropriate circumstances; this is merely a manner of distinguishing between objects with the same attributes when describing the embodiments of this application. In addition, the terms "include" and "have" and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, product, or device that includes a series of units is not necessarily limited to those units, but may include other units that are not expressly listed or that are inherent to such a process, method, product, or device.
The embodiments of this application are described below with reference to the accompanying drawings. A person of ordinary skill in the art may know that, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of this application are also applicable to similar technical problems.
The embodiments of this application may be applied to a vehicle, and specifically to a scenario of determining whether warning information needs to be output to a driver. The vehicle may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, or the like; this is not specifically limited in the embodiments of this application. For example, when the vehicle is in an autonomous driving mode, the vehicle may periodically obtain information about the driver to determine whether warning information needs to be output to the driver. For another example, when the vehicle is in a manual driving mode, the vehicle may periodically obtain information about the driver to determine whether warning information needs to be output to the driver. It should be understood that the examples here are merely intended to facilitate understanding of the application scenarios of the embodiments of this application, and are not an exhaustive list of those application scenarios.
To facilitate understanding of this solution, the structure of the vehicle is first described with reference to FIG. 1. FIG. 1 is a schematic diagram of a structure of a vehicle according to an embodiment of this application. The vehicle 100 is configured in a fully or partially autonomous driving mode. For example, the vehicle 100 may control itself while in the autonomous driving mode, and may, through human operation, determine the current states of the vehicle and its surrounding environment, determine a possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information. When the vehicle 100 is in the autonomous driving mode, the vehicle 100 may also be set to operate without human interaction.
The vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include a plurality of components. In addition, the subsystems and components of the vehicle 100 may be interconnected in a wired or wireless manner.
The travel system 102 may include components that provide powered motion for the vehicle 100. In an embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 119 may also provide energy for other systems of the vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In an embodiment, the transmission 120 may further include other components, such as a clutch. The drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
The sensor system 104 may include several sensors that sense information about the environment around the vehicle 100. For example, the sensor system 104 may include a positioning system 122 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may further include sensors that monitor internal systems of the vehicle 100 (for example, an in-vehicle air quality sensor, a fuel gauge, and an oil temperature gauge). Sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, and the like). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
The positioning system 122 may be configured to estimate the geographic location of the vehicle 100. The IMU 124 is configured to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In an embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may use radio signals to sense objects in the surrounding environment of the vehicle 100, and may be specifically implemented as a millimeter-wave radar or a lidar. In some embodiments, in addition to sensing objects, the radar 126 may be configured to sense the speed and/or heading of the objects. The laser rangefinder 128 may use laser light to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, one or more detectors, and other system components. The camera 130 may be configured to capture a plurality of images of the surrounding environment of the vehicle 100. The camera 130 may be a still camera or a video camera.
The control system 106 controls the operation of the vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a brake unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the vehicle 100; for example, in an embodiment, it may be a steering wheel system. The throttle 134 is configured to control the operating speed of the engine 118 and thus the speed of the vehicle 100. The brake unit 136 is configured to control the vehicle 100 to decelerate. The brake unit 136 may use friction to slow down the wheels 121. In other embodiments, the brake unit 136 may convert the kinetic energy of the wheels 121 into electric current. The brake unit 136 may also take other forms to slow down the rotation speed of the wheels 121 and thereby control the speed of the vehicle 100. The computer vision system 140 is operable to process and analyze images captured by the camera 130 to identify objects and/or features in the surrounding environment of the vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use an object recognition algorithm, a structure from motion (SFM) algorithm, video tracking, and other computer vision technologies. In some embodiments, the computer vision system 140 may be configured to map the environment, track objects, estimate the speed of objects, and the like. The route control system 142 is configured to determine the travel route and travel speed of the vehicle 100. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are respectively configured to determine the travel route and travel speed of the vehicle 100 in combination with data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps. The obstacle avoidance system 144 is configured to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the vehicle 100, where the obstacles may be actual obstacles or virtual moving bodies that may collide with the vehicle 100. In one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an in-vehicle computer 148, a microphone 150, and/or a speaker 152. In some embodiments, the peripheral devices 108 provide means for a user of the vehicle 100 to interact with the user interface 116. For example, the in-vehicle computer 148 may provide information to the user of the vehicle 100. The user interface 116 may also operate the in-vehicle computer 148 to receive input from the user. The in-vehicle computer 148 may be operated through a touchscreen. In other cases, the peripheral devices 108 may provide means for the vehicle 100 to communicate with other devices located inside the vehicle. For example, the microphone 150 may receive audio (for example, voice commands or other audio input) from the user of the vehicle 100. Similarly, the speaker 152 may output audio to the user of the vehicle 100. The wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 146 may communicate using a wireless local area network (WLAN). In some embodiments, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or using other wireless protocols such as various vehicle communication systems. For example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the vehicle 100. In an embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as a power supply to provide power to various components of the vehicle 100. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as a memory 114. The computer system 112 may alternatively be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor 113 may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although FIG. 1 functionally illustrates the processor, the memory, and other components of the computer system 112 in the same block, a person of ordinary skill in the art should understand that the processor or the memory may actually include a plurality of processors or memories that are not housed in the same physical enclosure. For example, the memory 114 may be a hard disk drive or another storage medium located in an enclosure different from that of the computer system 112. Therefore, a reference to the processor 113 or the memory 114 is to be understood as including a reference to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to the component-specific functions.
In the various aspects described here, the processor 113 may be located away from the vehicle 100 and communicate wirelessly with the vehicle 100. In other aspects, some of the processes described here are performed on the processor 113 disposed inside the vehicle 100, while others are performed by a remote processor 113, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 114 may contain instructions 115 (for example, program logic) that can be executed by the processor 113 to perform various functions of the vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108. In addition to the instructions 115, the memory 114 may store data such as road maps, route information, the position, direction, and speed of the vehicle, other such vehicle data, and other information. Such information may be used by the vehicle 100 and the computer system 112 while the vehicle 100 operates in an autonomous, semi-autonomous, and/or manual mode. The user interface 116 is configured to provide information to or receive information from a user of the vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as the wireless communication system 146, the in-vehicle computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the vehicle 100 based on input received from the various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering system 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of the foregoing components may be installed separately from or associated with the vehicle 100. For example, the memory 114 may exist partially or completely separately from the vehicle 100. The foregoing components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the foregoing components are merely an example. In actual applications, components in the foregoing modules may be added or removed according to actual requirements, and FIG. 1 should not be construed as a limitation on the embodiments of this application. A vehicle traveling on a road, such as the vehicle 100 above, may identify objects in its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the vehicle is to be adjusted.
Optionally, the vehicle 100 or a computing device associated with the vehicle 100, such as the computer system 112, the computer vision system 140, or the memory 114 in FIG. 1, may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, or ice on the road). Optionally, the identified objects depend on one another's behavior, so all the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 100 can adjust its speed based on the predicted behavior of the identified object. In other words, the vehicle 100 can determine, based on the predicted behavior of the object, what steady state the vehicle needs to adjust to (for example, accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects. In addition to providing instructions to adjust the speed of the vehicle, the computing device may provide instructions to modify the steering angle of the vehicle 100, so that the vehicle 100 follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the vehicle 100 (for example, cars in adjacent lanes on the road).
With reference to the foregoing description, an embodiment of this application provides a vehicle warning method, which may be applied to the vehicle 100 shown in FIG. 1. Refer to FIG. 2, which is a schematic flowchart of the vehicle warning method according to an embodiment of this application. The vehicle warning method provided in this embodiment may include: A1. Obtain the driver's line of sight and environment information around the vehicle. A2. Output warning information when it is determined, based on the driver's line of sight, that a first condition is met, where the first condition includes that the driver's line of sight does not match the driving intention of the vehicle, and the driving intention of the vehicle is determined based on the environment information around the vehicle. For example, when the driver's line of sight is inconsistent with the travel direction of the vehicle, it may be considered that the driver's line of sight does not match the driving intention of the vehicle. For another example, when the field of view of the driver's line of sight falls outside an object that the vehicle has determined requires attention, it may be considered that the driver's line of sight does not match the driving intention of the vehicle. Other scenarios in which "the driver's line of sight does not match the driving intention of the vehicle" may also exist and are not exhaustively listed here.
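For illustration only, the following sketch outlines one possible way steps A1 and A2 could be organized in software. The callables `read_gaze`, `read_environment`, `infer_intention`, and `matches` are hypothetical placeholders for the perception and planning components, not interfaces defined in this application.

```python
import time

def warning_loop(read_gaze, read_environment, infer_intention, matches,
                 first_duration_s=2.0, check_period_s=0.1, warn=print):
    """A1: obtain the driver's gaze and the surrounding environment;
    A2: warn once the gaze/intention mismatch has persisted for first_duration_s."""
    mismatch_since = None
    while True:
        gaze = read_gaze()                                  # step A1
        intention = infer_intention(read_environment())     # intention from surroundings
        if matches(gaze, intention):                        # step A2: first condition check
            mismatch_since = None
        else:
            now = time.monotonic()
            mismatch_since = mismatch_since if mismatch_since is not None else now
            if now - mismatch_since >= first_duration_s:
                warn("driver attention does not match driving intention")
                mismatch_since = None                       # avoid repeating the same alert
        time.sleep(check_period_s)
```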
In this embodiment of this application, another trigger scenario for the warning information is provided: when it is determined that the driver's line of sight does not match the driving intention of the vehicle, that is, when it is determined that the object the driver is paying attention to does not match the driving intention determined by the vehicle's intelligent driving system, the warning information is output to the driver. In addition, in determining whether to output the warning information, not only the driver's line of sight inside the vehicle but also the environment information around the vehicle is considered, which helps output the warning information more accurately.
A specific implementation of the vehicle warning method provided in the embodiments of this application is described in detail below. Refer to FIG. 3, which is another schematic flowchart of the vehicle warning method according to an embodiment of this application. The vehicle warning method provided in this embodiment may include the following steps.
301. Obtain the driver's line of sight.
In this embodiment of this application, to obtain the driver's line of sight, one or more first sensors may be deployed inside the vehicle, and the field of view (FOV) of the one or more first sensors must at least cover the driver's head. The first sensor may be specifically a camera, a radar, or another sensor capable of capturing video information of the driver; for example, the camera may be an infrared camera or another type of camera. For example, any first sensor may be installed at any of the following positions: on the steering wheel of the vehicle, on the interior rear-view mirror, on an A-pillar of the vehicle, or at another position inside the vehicle. The A-pillars of the vehicle are the two pillars located at the front left and front right, connecting the roof of the vehicle and the front compartment. It should be noted that the installation position of the first sensor may be flexibly set according to the actual product; the examples here are merely intended to facilitate understanding of this solution and are not intended to limit this solution.
The vehicle may collect information about the driver at at least one moment through the one or more internal first sensors. The information about the driver at the at least one moment may be an image of the driver at one moment, a video of the driver over a plurality of moments, or at least one group of point cloud data in one-to-one correspondence with the at least one moment, where each group of point cloud data reflects the driver's behavior at one moment. The information about the driver at the at least one moment includes information about the driver's head at the at least one moment. Based on the collected video information of the driver, at least gaze tracking can be performed on the driver to obtain the driver's line of sight. For example, the vehicle may input the collected at least one piece of video information of the driver into a first neural network, perform gaze tracking on the driver through the first neural network, and obtain first information output by the first neural network, where the first information indicates the driver's line of sight at the at least one moment.
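As an illustrative sketch only, a "first neural network" of this kind could take a driver-facing camera frame and regress a gaze direction and gaze-region center. The architecture, input size, and output parameterization below are assumptions for the example and are not specified by this application.

```python
import torch
from torch import nn

class GazeNet(nn.Module):
    """Illustrative stand-in for the 'first neural network': maps a driver-facing
    camera frame to a gaze direction (yaw, pitch) plus a 2-D gaze-region centre."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 4)  # yaw, pitch, centre_x, centre_y

    def forward(self, frames):        # frames: (batch, 3, H, W), normalised
        return self.head(self.backbone(frames))

# Usage sketch: one 224x224 RGB frame of the driver.
model = GazeNet().eval()
with torch.no_grad():
    gaze = model(torch.rand(1, 3, 224, 224))
print(gaze.shape)  # torch.Size([1, 4])
```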
The driver's line of sight at a moment may include position information, in a first coordinate system, of the observation range of the driver's line of sight. The first coordinate system may be a coordinate system established with the first sensor as the origin, a coordinate system established with the center of the vehicle as the origin, a coordinate system established with the center of the interior rear-view mirror as the origin, or the like. The first coordinate system may be a two-dimensional coordinate system, a three-dimensional coordinate system, or another type of coordinate system; the specific first coordinate system to be used may be determined in combination with the actual application scenario, and is not limited here.
Optionally, based on the collected at least one piece of video information of the driver, the vehicle may further obtain any one or more of the following information about the driver: the driver's fatigue level, whether the driver exhibits abnormal behavior, the driver's user identity (ID), or other information, which may be flexibly determined in combination with the actual application scenario and is not exhaustively listed here. The driver's abnormal behavior may include the driver making or answering a phone call, the driver's hands leaving the steering wheel, or other abnormal behavior of the driver, which is not limited here.
302. Obtain environment information around the vehicle.
In this embodiment of this application, one or more second sensors may be deployed on the exterior of the vehicle, and the one or more second sensors are configured to collect environment information around the vehicle. The second sensor may be specifically a camera, a radar, or another sensor capable of collecting environment information around the vehicle, and the position of each second sensor may be flexibly set in combination with the actual product form, which is not limited here. The environment information around the vehicle may be video information of the environment around the vehicle, or at least one group of point cloud data in one-to-one correspondence with at least one moment, where each group of point cloud data indicates the environment around the vehicle at one moment, or the environment information around the vehicle may be other types of data, which are not exhaustively listed here.
303. Determine the driving intention of the vehicle based on the environment information around the vehicle.
In some embodiments of this application, the vehicle may determine the driving intention of the vehicle based on the environment information around the vehicle. The driving intention of the vehicle may include the travel direction of the vehicle and/or one or more objects that the driver needs to pay attention to during travel. The one or more objects that the driver needs to pay attention to during travel include objects outside the vehicle, such as other vehicles around the ego vehicle, pedestrians, traffic lights, traffic signs, or other objects around the ego vehicle. The objects that the driver needs to pay attention to during travel also include objects on the ego vehicle, such as the interior rear-view mirror, the left exterior rear-view mirror, the right exterior rear-view mirror, or other objects on the ego vehicle, which may be specifically determined in combination with the actual application scenario and are not limited here.
For example, the driving intention of the vehicle may include that the ego vehicle travels straight at 50 km/h and a vehicle ahead on the right is about to merge into the ego vehicle's lane, so the vehicle ahead on the right needs to be watched while the ego vehicle travels straight. For another example, the driving intention of the vehicle may include that the ego vehicle turns left at an intersection at 30 km/h, and during the left turn, attention needs to be paid to a vehicle on the ego vehicle's left that is traveling straight toward it from the opposite side of the intersection. For another example, the driving intention of the vehicle may include waiting at an intersection until the traffic light turns green before proceeding, in which case the target requiring attention is the traffic light. It should be noted that the examples of "the driving intention of the vehicle" here are merely intended to facilitate understanding of this solution and are not intended to limit this solution.
Step 303 may include: the vehicle determines the driving behavior of the vehicle based on the surrounding environment information, and determines one or more objects that need attention during travel. Specifically, after determining at least one object around the ego vehicle based on the environment information around the vehicle, the vehicle may determine, based on the driving behavior planned by the vehicle, the one or more objects that the driver most needs to pay attention to at the current moment. For a more intuitive understanding of this solution, refer to FIG. 4 to FIG. 6, which are three schematic diagrams of objects that the driver needs to pay attention to according to embodiments of this application. Referring first to FIG. 4, the driving behavior planned by the ego vehicle is to travel straight, and the ego vehicle determines, based on the surrounding environment information, that there is a vehicle ahead of it and that the vehicle on its left is about to cut in; in this case, the object that the driver most needs to pay attention to at the current moment is the vehicle on the left of the ego vehicle.
Referring next to FIG. 5, the driving behavior planned by the ego vehicle is a left turn, and the ego vehicle determines, based on the surrounding environment information, that there are no pedestrians or non-motorized vehicles around it; in this case, the objects that the driver most needs to pay attention to at the current moment are the traffic light and the vehicle on the right.
Referring next to FIG. 6, the ego vehicle determines, based on the surrounding environment information, that there is a pedestrian crossing the road near the ego vehicle; in this case, the object that the driver most needs to pay attention to at the current moment is the pedestrian crossing the road. It should be understood that the examples in FIG. 4 to FIG. 6 are merely intended to facilitate understanding of the concept of "one or more objects that need attention while the vehicle is traveling" and are not intended to limit this solution.
Optionally, step 303 may further include: when determining, based on the environment information around the vehicle, that there is no object requiring attention at the current moment, the vehicle may determine the travel direction of the vehicle as the expected direction of the driver's line of sight. For example, when the ego vehicle is traveling straight on a highway and there are no other objects around it, it may be considered that there is no object requiring attention, and the travel direction of the vehicle may be determined as the expected direction of the driver's line of sight. It should be understood that this example is merely intended to demonstrate the feasibility of this solution and is not intended to limit this solution.
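For illustration only, the following sketch shows one way step 303 could be organized: selecting the attention objects relevant to the planned maneuver and falling back to the travel direction when nothing requires attention. The object fields and selection rule are assumptions introduced for the example, not definitions from this application.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "vehicle", "pedestrian", "traffic_light"
    relevant_to: set   # planned manoeuvres for which the object matters (assumption)

@dataclass
class DrivingIntention:
    travel_direction: float   # heading in degrees
    attention_objects: list   # objects the driver should be watching

def determine_intention(planned_manoeuvre, travel_direction, detected_objects):
    """Step 303 sketch: pick the objects relevant to the planned manoeuvre; if the
    list is empty, the travel direction alone represents the expected gaze direction."""
    relevant = [o for o in detected_objects if planned_manoeuvre in o.relevant_to]
    return DrivingIntention(travel_direction, relevant)

# Example: left turn at an intersection (cf. FIG. 5).
objs = [DetectedObject("traffic_light", {"left_turn", "straight"}),
        DetectedObject("vehicle_right", {"left_turn"})]
print(determine_intention("left_turn", 90.0, objs).attention_objects)
```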
It should be noted that the execution order of step 301 and steps 302 to 303 is not limited in this embodiment of this application: step 301 may be performed first and then steps 302 to 303; steps 302 to 303 may be performed first and then step 301; or step 301 and steps 302 to 303 may be performed simultaneously.
304. Determine whether the driver's line of sight meets a first condition; if yes, proceed to step 305; if no, it may be determined that no warning information needs to be output to the driver. The first condition includes that the driver's line of sight does not match the driving intention of the vehicle.
In this embodiment of this application, after obtaining the driver's line of sight, the vehicle may determine whether the driver's line of sight meets the first condition; if yes, step 305 may be entered; if no, it may be determined that no warning information needs to be output to the driver. Optionally, the vehicle may periodically trigger the determination of whether warning information needs to be output to the driver, that is, trigger the execution of step 301 every second time period; in that case, if the determination result is no, it may be determined that no warning information needs to be output to the driver within the current period.
In one implementation, the length of the second time period may be a preset fixed value; for example, it may be 2 minutes, 5 minutes, 10 minutes, or another length. In another implementation, the second time period is variable, and the factors determining its length may include any one or more of the following: the current travel speed of the vehicle, the complexity of the driving behavior planned by the vehicle, the safety level of the environment around the vehicle, the safety level of the driver's driving behavior, or other factors. For example, the slower the current travel speed of the vehicle, the larger the second time period may be, and the faster the current travel speed, the smaller it may be; the more complex the planned driving behavior, the smaller the second time period may be, and the simpler the planned driving behavior, the larger it may be; the safer the environment around the vehicle, the larger the second time period may be, and the less safe the environment, the smaller it may be; the higher the safety level of the driver's driving behavior, the larger the second time period may be, and the lower the safety level, the smaller it may be. The specific value may be flexibly set in combination with the actual application scenario and is not limited here.
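A minimal sketch of the variable second time period follows, combining the factors listed above into a single check interval. The normalization ranges, weights, and bounds are assumptions for the example, not values given by this application.

```python
# Illustrative sketch only: combine the factors listed above into a check period.

def second_time_period(speed_kmh, manoeuvre_complexity, env_safety, driver_safety,
                       min_s=30.0, max_s=600.0):
    """manoeuvre_complexity, env_safety and driver_safety are expected in [0, 1];
    lower speed/complexity and higher safety yield a longer interval between checks."""
    speed_factor = max(0.0, 1.0 - min(speed_kmh, 120.0) / 120.0)  # slower -> closer to 1
    score = (0.25 * speed_factor + 0.25 * (1.0 - manoeuvre_complexity)
             + 0.25 * env_safety + 0.25 * driver_safety)
    return min_s + (max_s - min_s) * score

print(second_time_period(speed_kmh=100, manoeuvre_complexity=0.8,
                         env_safety=0.3, driver_safety=0.5))
```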
The first condition includes that the driver's line of sight does not match the driving intention of the vehicle. A case in which the driver's line of sight does not match the driving intention of the vehicle may include: the driver's line of sight falls outside the object requiring attention. In one implementation, the case in which the driver's line of sight falls outside the object requiring attention may include: at a first moment, the driver's line of sight falls outside the object requiring attention. Because there may be one or more objects requiring attention at a first moment, as long as the driver's line of sight falls within any one of the one or more objects requiring attention, the driver's line of sight may be considered to fall within the objects requiring attention; if the driver's line of sight falls outside all the objects requiring attention, the driver's line of sight may be considered to fall outside the objects requiring attention. Specifically, in one case, step 303 is an optional step. If step 303 is not performed, step 304 may include: the vehicle may obtain second information from the first information, where the second information indicates the driver's line of sight at the first moment and may include position information, in the first coordinate system, of the observation range of the driver's line of sight at the first moment. A second neural network is deployed on the vehicle; the second information and the environment information around the vehicle at the first moment are input into the second neural network, and the second neural network outputs first prediction information, where the first prediction information indicates whether the driver's line of sight falls outside the object requiring attention at the first moment. The second neural network may be a convolutional neural network, a residual neural network, another type of neural network, or the like.
In another case, if step 303 is performed and, in step 303, the vehicle determines, based on the surrounding environment information, the objects that the driver needs to pay attention to at a first moment during travel, step 304 may include: the vehicle may obtain second information and third information, where the second information may include position information, in the first coordinate system, of the observation range of the driver's line of sight at the first moment, and the third information may include position information, in a second coordinate system, of each object that the driver needs to pay attention to at the first moment. The second coordinate system and the first coordinate system may be different coordinate systems or the same coordinate system. For example, both the first coordinate system and the second coordinate system may be coordinate systems established with the center of the vehicle as the origin; for another example, the first coordinate system may be a coordinate system established with the first camera as the origin, and the second coordinate system may be a coordinate system established with the second camera as the origin. It should be noted that the specific coordinate systems used as the first coordinate system and the second coordinate system may be flexibly determined in combination with the actual application scenario and are not limited here.
The vehicle may determine, based on the second information and the third information, whether the driver's line of sight at the first moment falls outside the object requiring attention. Specifically, if the first coordinate system and the second coordinate system are different coordinate systems, in one implementation, the vehicle may input the second information and the third information into a third neural network, and the third neural network outputs second prediction information, where the second prediction information indicates whether the driver's line of sight at the first moment falls outside the object requiring attention.
In another implementation, the vehicle may perform a mapping operation based on the second information and/or the third information, so as to map the observation range of the driver's line of sight at the first moment and the object requiring attention at the first moment into the same target coordinate system. The vehicle may then determine, based on the observation range of the driver's line of sight at the first moment and the object requiring attention at the first moment in the target coordinate system, whether the driver's line of sight falls outside the object requiring attention.
The mapping operation may be: mapping the position information, in the first coordinate system, of the observation range of the driver's line of sight at the first moment to position information in the second coordinate system, in which case the target coordinate system is the second coordinate system; or mapping the position information, in the second coordinate system, of the object requiring attention at the first moment to position information in the first coordinate system, in which case the target coordinate system is the first coordinate system; or mapping both the position information, in the first coordinate system, of the observation range of the driver's line of sight at the first moment and the position information, in the second coordinate system, of the object requiring attention at the first moment into the same third coordinate system, in which case the target coordinate system is the third coordinate system, and so on. The specific mapping operation to be used may be flexibly set according to the actual situation and is not limited here.
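For illustration only, the sketch below expresses both the gaze observation region and an attention object in one target coordinate system using rigid-body transforms. The extrinsic matrices and point values are placeholders; in practice they would come from sensor calibration rather than from this application.

```python
import numpy as np

def to_homogeneous(points):
    """Append a column of ones to an (N, 3) array of points."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def transform(points_xyz, T_src_to_target):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    return (to_homogeneous(points_xyz) @ T_src_to_target.T)[:, :3]

# Assumed extrinsics: first-sensor frame -> vehicle frame, second-sensor frame -> vehicle frame.
T_cam1_to_vehicle = np.eye(4)
T_cam2_to_vehicle = np.eye(4)
T_cam2_to_vehicle[:3, 3] = [1.5, 0.0, 1.2]   # placeholder translation

gaze_region_cam1 = np.array([[0.2, 0.1, 5.0], [0.4, 0.1, 5.0]])        # first coordinate system
object_corners_cam2 = np.array([[2.0, -1.0, 10.0], [2.5, -1.0, 10.0]])  # second coordinate system

gaze_in_vehicle = transform(gaze_region_cam1, T_cam1_to_vehicle)
object_in_vehicle = transform(object_corners_cam2, T_cam2_to_vehicle)
print(gaze_in_vehicle, object_in_vehicle, sep="\n")
```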
Specifically, in one implementation, the vehicle may obtain the area (or volume) of the intersection between the area (or volume) of the observation range of the driver's line of sight at the first moment and the area (or volume) of the object requiring attention at the first moment, and determine whether the area (or volume) of the intersection is greater than or equal to an area (or volume) threshold. If yes, it is determined that the driver's line of sight at the first moment falls within the object requiring attention; if no, it is determined that the driver's line of sight at the first moment falls outside the object requiring attention.
In another implementation, the vehicle may obtain a first ratio of the area (or volume) of the intersection between the area (or volume) of the observation range of the driver's line of sight at the first moment and the area (or volume) of the object requiring attention at the first moment to the area (or volume) of the observation range of the line of sight at the first moment. If the first ratio is greater than or equal to a first threshold, it is determined that the driver's line of sight at the first moment falls within the object requiring attention; if the first ratio is less than the first threshold, it is determined that the driver's line of sight at the first moment falls outside the object requiring attention.
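A minimal sketch of the two checks above follows, simplifying both the gaze observation range and the attention object to axis-aligned 2-D rectangles in the target coordinate system; this simplification and the threshold values are assumptions, since the application does not restrict the shapes involved.

```python
# Axis-aligned rectangles given as (x_min, y_min, x_max, y_max) in the target coordinate system.

def intersection_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def rect_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def gaze_within_object_by_area(gaze, obj, area_threshold):
    """First variant: compare the intersection area with an absolute threshold."""
    return intersection_area(gaze, obj) >= area_threshold

def gaze_within_object_by_ratio(gaze, obj, first_threshold):
    """Second variant: compare intersection / gaze-range area with a first threshold."""
    return intersection_area(gaze, obj) / rect_area(gaze) >= first_threshold

gaze_region = (0.0, 0.0, 2.0, 1.0)
attention_obj = (1.0, 0.0, 3.0, 1.0)
print(gaze_within_object_by_area(gaze_region, attention_obj, area_threshold=0.5))
print(gaze_within_object_by_ratio(gaze_region, attention_obj, first_threshold=0.5))
```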
在另一种实现方式中,车辆可以获取驾驶员在第一时刻的视线的观察范围的中心,和,第一时刻需要关注的物体的T个顶点,车辆可以计算由上述中心、原点和每个顶点构成的夹角的角度,重复执行前述操作,直至得到与T个顶点对应的T个角度,若前述多个角度均小于第一角度阈值,则确定驾驶员在第一时刻的视线位于需要关注的物体之内;若前述多个角度中存在至少一个角度大于第一角度阈值,则确定驾驶员在第一时刻的视线位于需要关注的物体之外。In another implementation, the vehicle can obtain the center of the observation range of the driver's line of sight at the first moment, and T vertices of the object that needs to be paid attention to at the first moment. The vehicle can calculate the angle formed by the above center, the origin and each vertex, and repeat the above operation until T angles corresponding to the T vertices are obtained. If the above multiple angles are all less than the first angle threshold, it is determined that the driver's line of sight at the first moment is within the object that needs attention; if there is at least one angle among the above multiple angles that is greater than the first angle threshold, it is determined that the driver's line of sight at the first moment is outside the object that needs attention.
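An illustrative sketch of the angle-based check above, assuming the angle for each vertex is the angle at the origin between the ray toward the gaze-range center and the ray toward that vertex; the vector representation and the 10-degree default threshold are assumptions, not values from the text.

```python
import math
import numpy as np

def angle_at_origin(p: np.ndarray, q: np.ndarray) -> float:
    """Angle in degrees between the rays origin->p and origin->q."""
    cos_a = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    return math.degrees(math.acos(float(np.clip(cos_a, -1.0, 1.0))))

def gaze_covers_object(gaze_center: np.ndarray,
                       object_vertices: np.ndarray,
                       first_angle_threshold_deg: float = 10.0) -> bool:
    """The gaze is treated as within the object only if the angle computed for
    every one of the T vertices stays below the first angle threshold."""
    angles = [angle_at_origin(gaze_center, vertex) for vertex in object_vertices]
    return all(a < first_angle_threshold_deg for a in angles)
```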
需要说明的是,车辆在将驾驶员在第一时刻的视线的观察范围和第一时刻需要关注的物体映射至同一坐标系后,可以采用多种方式来确定驾驶员在第一时刻的视线是否位于需要关注的物体之外,此处举例仅为证明本方案的可实现性,不用于限定本方案。若第一坐标系和第二坐标系为相同的坐标系,车辆可以根据前述同一坐标系下,驾驶员在第一时刻的视线的观察范围和第一时刻需要关注的物体,判断驾驶员的视线是否位于需要关注的物体之外,前述步骤的具体实现方式可以参阅上述描述,此处不再进行赘述。It should be noted that after the vehicle maps the driver's line of sight at the first moment and the object that needs to be paid attention to at the first moment to the same coordinate system, a variety of methods can be used to determine whether the driver's line of sight at the first moment is outside the object that needs to be paid attention to. The example here is only used to prove the feasibility of this solution and is not used to limit this solution. If the first coordinate system and the second coordinate system are the same coordinate system, the vehicle can determine whether the driver's line of sight is outside the object that needs to be paid attention to based on the driver's line of sight at the first moment and the object that needs to be paid attention to at the first moment under the same coordinate system. The specific implementation method of the above steps can refer to the above description and will not be repeated here.
为更直观地理解本方案,请参阅图7a,图7a为本申请实施例提供的驾驶员的视线位于需要关注的物体之内和驾驶员的视线位于需要关注的物体之外的一种示意图。图7a包括上和下两个子示意图,图7a中的灰色椭圆区域代表驾驶员的视线的视野范围,虚线框的区域代表需要关注的物体,图7a中的上子示意图代表驾驶员的视线位于需要关注的物体之内,图7a的下子示意图代表驾驶员的视线位于需要关注的物体之外,应理解,图7a中的示例仅为方便理解本方案,不用于限定本方案。To understand this solution more intuitively, please refer to FIG. 7a, which is a schematic diagram of the driver's line of sight being within the object that needs attention and the driver's line of sight being outside the object that needs attention provided by the embodiment of the present application. FIG. 7a includes two upper and lower sub-schematic diagrams. The gray elliptical area in FIG. 7a represents the field of vision of the driver's line of sight, and the area in the dotted box represents the object that needs attention. The upper sub-schematic diagram in FIG. 7a represents that the driver's line of sight is within the object that needs attention, and the lower sub-schematic diagram in FIG. 7a represents that the driver's line of sight is outside the object that needs attention. It should be understood that the example in FIG. 7a is only for the convenience of understanding this solution and is not used to limit this solution.
在另一种实现方式中,车辆在获取到第三信息之后,也即获取到驾驶员在第一时刻每个需要关注的物体在第二坐标系下的位置信息之后,可以将驾驶员的眼睛作为虚拟相机所在位置,将前述每个需要关注的物体在第二坐标系下的位置信息映射至与该虚拟相机对应的坐标系下,得到需要关注的物体在与该虚拟相机对应的坐标系下的位置信息。可选地,第二坐标系可以为三维坐标系,该虚拟相机对应的坐标系可以为二维坐标系。为进一步了解本方案,上述映射操作采用的公式可以如下:
In another implementation, after the vehicle obtains the third information, that is, after obtaining the position information of each object that the driver needs to pay attention to in the second coordinate system at the first moment, the driver's eyes can be used as the location of the virtual camera, and the position information of each object that needs to be paid attention to in the second coordinate system can be mapped to the coordinate system corresponding to the virtual camera, thereby obtaining the position information of the object that needs to be paid attention to in the coordinate system corresponding to the virtual camera. Optionally, the second coordinate system can be a three-dimensional coordinate system, and the coordinate system corresponding to the virtual camera can be a two-dimensional coordinate system. To further understand this solution, the formula used in the above mapping operation can be as follows:
$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

其中,Xw、Yw和Zw代表需要关注的物体中的任意一个点(为方便描述,后续称为"目标点")在第二坐标系下的坐标,u和v代表将目标点映射至与该虚拟相机对应的坐标系下的坐标,Zc代表目标点在虚拟相机坐标系下的深度(尺度因子),K代表相机的内参矩阵,[R T]代表相机的外参矩阵,R代表旋转矩阵,T代表平移矩阵,应理解,此处举例仅为方便理解本方案,不用于限定本方案。 Wherein, Xw, Yw and Zw represent the coordinates of any point in the object of interest (hereinafter referred to as the "target point" for the convenience of description) in the second coordinate system, u and v represent the coordinates of the target point mapped to the coordinate system corresponding to the virtual camera, Zc represents the depth (scale factor) of the target point in the virtual-camera coordinate system, K represents the intrinsic parameter matrix of the camera, [R T] represents the extrinsic parameter matrix of the camera, R represents the rotation matrix, and T represents the translation matrix. It should be understood that the examples here are only for the convenience of understanding this solution and are not used to limit this solution.
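The same projection can be sketched in code. This is an illustrative example only, assuming the standard pinhole model written above; the numeric intrinsic parameters and the identity extrinsic parameters are placeholders, not calibration values from the disclosure.

```python
import numpy as np

def project_to_virtual_camera(K: np.ndarray, R: np.ndarray, T: np.ndarray,
                              point_world: np.ndarray):
    """Project a point (Xw, Yw, Zw) in the second coordinate system onto the image
    plane of the virtual camera placed at the driver's eyes, returning (u, v)."""
    point_camera = R @ point_world + T   # extrinsic part: world -> camera frame
    uvw = K @ point_camera               # intrinsic part: camera frame -> pixels (up to scale)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Placeholder (not calibrated) parameters: focal lengths and principal point.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)          # virtual camera aligned with the second coordinate system
T = np.zeros(3)
u, v = project_to_virtual_camera(K, R, T, np.array([2.0, 1.0, 10.0]))
```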
为更直观地理解本方案,请参阅图7b和图7c,图7b和图7c为本申请实施例提供的将需要关注的物体在第二坐标系映射至虚拟相机所对应的坐标系中的两种示意图。在图7b中自车周围环境中需要关注的物体(也即图7b中的车辆)在第二坐标系下为三维的车辆,图7b中以第二坐标系的原点为车辆的前视传感器为例;将前述车辆映射至虚拟相机所对应的坐标系下,虚拟相机是基于驾驶员的眼睛的位置确定的,每个需要关注的物体在虚拟相机所对应的坐标系下的位置为驾驶员的视线范围的期望位置。进而车辆可以根据每个需要关注的物体在该虚拟相机所对应的坐标系下的位置信息,确定驾驶员的视线是否位于需要关注的物体之外。To understand the present solution more intuitively, please refer to Figures 7b and 7c, which are two schematic diagrams of mapping objects that require attention in the second coordinate system to the coordinate system corresponding to the virtual camera provided in an embodiment of the present application. In Figure 7b, the object that requires attention in the surrounding environment of the vehicle (that is, the vehicle in Figure 7b) is a three-dimensional vehicle in the second coordinate system. In Figure 7b, the origin of the second coordinate system is taken as the forward-looking sensor of the vehicle as an example; the aforementioned vehicle is mapped to the coordinate system corresponding to the virtual camera, and the virtual camera is determined based on the position of the driver's eyes. The position of each object that requires attention in the coordinate system corresponding to the virtual camera is the expected position of the driver's field of vision. The vehicle can then determine whether the driver's line of sight is outside the object that requires attention based on the position information of each object that requires attention in the coordinate system corresponding to the virtual camera.
再参阅图7c,图7c包括上和下两个子示意图,图7c的上子示意图示出的是车辆的顶部的前视传感器采集到的自车周围的环境信息,也即图7c中车辆周围的物体在第二坐标系中的位置;图7c的下子示意图示出的是将自车周围需要关注的物体映射至人眼所对应的坐标系(也即基于驾驶员的眼睛确定的虚拟相机所对应的坐标系)中的位置。需要说明的是,图7b和图7c仅为方便理解"将驾驶员的眼睛作为虚拟相机所在位置,将每个需要关注的物体在第二坐标系下的位置信息映射至与该虚拟相机对应的坐标系下"这一概念所做的示例,不用于限定本方案。Referring again to FIG. 7c, FIG. 7c includes two sub-schematic diagrams, upper and lower. The upper sub-schematic diagram of FIG. 7c shows the environmental information around the vehicle collected by the forward-looking sensor on the top of the vehicle, that is, the positions of the objects around the vehicle in FIG. 7c in the second coordinate system; the lower sub-schematic diagram of FIG. 7c shows the positions obtained by mapping the objects that need attention around the vehicle into the coordinate system corresponding to the human eye (that is, the coordinate system corresponding to the virtual camera determined based on the driver's eyes). It should be noted that FIG. 7b and FIG. 7c are examples only for the convenience of understanding the concept of "taking the driver's eyes as the location of the virtual camera, and mapping the position information of each object that needs attention in the second coordinate system to the coordinate system corresponding to the virtual camera", and are not used to limit this solution.
车辆可以根据上述每个需要关注的物体在该虚拟相机所对应的坐标系下的位置信息和驾驶员的视线的观察范围在第一坐标系下的位置信息(也即第二信息),判断驾驶员在第一时刻的视线是否位于需要关注的物体之外。The vehicle can determine whether the driver's line of sight at the first moment is outside the object that needs attention based on the position information of each object that needs attention in the coordinate system corresponding to the virtual camera and the position information of the driver's line of sight in the first coordinate system (that is, the second information).
可选地,驾驶员的视线位于需要关注的物体之外还可以包括:驾驶员在第一时刻的视线位于需要关注的物体之内,且驾驶员的视线在第一时间段内的第二移动参数与第一移动参数不匹配,第一时间段位于第一时刻之后。具体的,车辆在确定驾驶员在第一时刻的视线位于需要关注的物体之内的情况下,还可以根据车辆周围的环境信息,确定需要关注的物体在第一时间段内的第一移动参数;获取驾驶员的视线在第一时间段内的第二移动参数;判断第一移动参数和第二移动参数是否匹配,若判断结果为是,则确定驾驶员的视线位于需要关注的物体之内;若判断结果为否,则确定驾驶员的视线位于需要关注的物体之外。Optionally, the driver's line of sight being outside the object requiring attention may also include: the driver's line of sight at the first moment is within the object requiring attention, and the second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, and the first time period is after the first moment. Specifically, when the vehicle determines that the driver's line of sight at the first moment is within the object requiring attention, it may also determine the first movement parameter of the object requiring attention within the first time period based on environmental information around the vehicle; obtain the second movement parameter of the driver's line of sight within the first time period; determine whether the first movement parameter and the second movement parameter match, and if the judgment result is yes, determine that the driver's line of sight is within the object requiring attention; if the judgment result is no, determine that the driver's line of sight is outside the object requiring attention.
其中,第一移动参数可以包括需要关注的物体在第一时间段内的第一移动方向;对应的,第二移动参数可以包括驾驶员的视线在第一时间段内的第二移动方向。例如,第一移动方向可以为需要关注的物体的中心在第一时间段内的移动方向,第二移动方向可以为驾驶员的视线的观察范围的中心在第一时间段内的移动方向,或者,第一移动方向可以为需要关注的物体中任意一个点在第一时间段内的移动方向,第二移动方向可以为驾驶员的视线的观察范围内的任意一个点在第一时间段内的移动方向。The first movement parameter may include a first movement direction of the object to be concerned about within the first time period; correspondingly, the second movement parameter may include a second movement direction of the driver's line of sight within the first time period. For example, the first movement direction may be a movement direction of the center of the object to be concerned about within the first time period, and the second movement direction may be a movement direction of the center of the driver's line of sight within the first time period, or the first movement direction may be a movement direction of any point in the object to be concerned about within the first time period, and the second movement direction may be a movement direction of any point within the driver's line of sight within the first time period.
车辆判断第一移动参数和第二移动参数是否匹配的具体实现方式可以包括:判断第一移动方向和第二移动方向之间的夹角是否小于或等于第一角度阈值,若判断结果为是,则可以确定第一移动参数和第二移动参数匹配;若判断结果为否,则可以确定第一移动参数和第二移动参数不匹配。例如,第一角度阈值的取值小于90度,例如,第一角度阈值的取值可以为30度、45度、60度或其他取值等等,此处举例仅为方便理解本方案,不用于限定本方案。The specific implementation method of the vehicle determining whether the first movement parameter and the second movement parameter match may include: determining whether the angle between the first movement direction and the second movement direction is less than or equal to the first angle threshold, if the determination result is yes, it can be determined that the first movement parameter and the second movement parameter match; if the determination result is no, it can be determined that the first movement parameter and the second movement parameter do not match. For example, the value of the first angle threshold is less than 90 degrees, for example, the value of the first angle threshold can be 30 degrees, 45 degrees, 60 degrees or other values, etc. The examples here are only for the convenience of understanding this solution and are not used to limit this solution.
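A minimal sketch of the direction-matching check described above, assuming the two movement directions are available as 2D or 3D vectors; the vector representation and the 45-degree default threshold are illustrative assumptions only.

```python
import math
import numpy as np

def movement_directions_match(first_direction: np.ndarray,
                              second_direction: np.ndarray,
                              first_angle_threshold_deg: float = 45.0) -> bool:
    """True if the angle between the object's movement direction and the gaze
    movement direction over the first time period is within the threshold."""
    cos_a = np.dot(first_direction, second_direction) / (
        np.linalg.norm(first_direction) * np.linalg.norm(second_direction))
    angle = math.degrees(math.acos(float(np.clip(cos_a, -1.0, 1.0))))
    return angle <= first_angle_threshold_deg
```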
为更直观地理解本方案,请参阅图8,图8为本申请实施例提供的第一移动参数和第二移动参数的一种示意图。图8中以第一移动参数和第二移动参数均为移动方向为例,图8中需要关注的物体在第一时间段内的移动方向为向左移动,驾驶员的视线在第一时间段内的移动方向为向右上方移动,则第一移动参数和第二移动参数不匹配,应理解,图8中的示例仅为方便理解本方案,不用于限定本方案。For a more intuitive understanding of this solution, please refer to FIG8, which is a schematic diagram of the first movement parameter and the second movement parameter provided in the embodiment of the present application. In FIG8, the first movement parameter and the second movement parameter are both movement directions. In FIG8, the movement direction of the object to be concerned in the first time period is to the left, and the movement direction of the driver's line of sight in the first time period is to the upper right. Then, the first movement parameter and the second movement parameter do not match. It should be understood that the example in FIG8 is only for the convenience of understanding this solution and is not used to limit this solution.
可选地,第一移动参数还可以包括如下任一种或多种信息:需要关注的物体在第一时间段内的第一移动距离、第一移动速度或其他移动信息等等,此处不做穷举。对应的,第二移动参数还可以包括如下任一种或多种信息:驾驶员的视线在第一时间段内的第二移动距离、第二移动速度或其他移动信息等等,此处不做穷举。Optionally, the first movement parameter may also include any one or more of the following information: a first movement distance, a first movement speed, or other movement information of the object of interest within a first time period, etc., which are not exhaustively listed here. Correspondingly, the second movement parameter may also include any one or more of the following information: a second movement distance, a second movement speed, or other movement information of the driver's line of sight within the first time period, etc., which are not exhaustively listed here.
则车辆判断第一移动参数和第二移动参数是否匹配的具体实现方式可以包括:车辆判断第一移动方向和第二移动方向之间的夹角是否小于或等于第一角度阈值,还可以包括:车辆判断第一移动距离和第二移动距离之间的差值是否小于或等于距离阈值,和/或,判断第一移动速度和第二移动速度之间的差值是否小于或等于速度阈值,和/或,判断需要关注的物体和驾驶员的视线在第一时间段内的其他移动参数之间的差值是否小于第二阈值。若前述判断操作的判断结果均为是,则可以确定第一移动参数和第二移动参数匹配;若任一个判断操作的判断结果为否,则可以确定第一移动参数和第二移动参数不匹配。The specific implementation method of the vehicle determining whether the first movement parameter and the second movement parameter match may include: the vehicle determining whether the angle between the first movement direction and the second movement direction is less than or equal to the first angle threshold, and may also include: the vehicle determining whether the difference between the first movement distance and the second movement distance is less than or equal to the distance threshold, and/or determining whether the difference between the first movement speed and the second movement speed is less than or equal to the speed threshold, and/or determining whether the differences between the other movement parameters of the object requiring attention and those of the driver's line of sight within the first time period are less than a second threshold. If the judgment results of the aforementioned judgment operations are all yes, it can be determined that the first movement parameter and the second movement parameter match; if the judgment result of any judgment operation is no, it can be determined that the first movement parameter and the second movement parameter do not match.
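A sketch of how the combined checks above could be wired together, assuming the direction angle, distance difference and speed difference have already been computed elsewhere; every default threshold value is a placeholder, not a value from the disclosure.

```python
def movement_parameters_match(direction_angle_deg: float,
                              distance_difference: float,
                              speed_difference: float,
                              angle_threshold_deg: float = 45.0,
                              distance_threshold: float = 5.0,
                              speed_threshold: float = 2.0) -> bool:
    """Every configured check must pass for the first and second movement parameters
    to be considered a match; a single failed check means they do not match."""
    return (direction_angle_deg <= angle_threshold_deg
            and distance_difference <= distance_threshold
            and speed_difference <= speed_threshold)
```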
本申请实施例中,在考量驾驶员的视线是否位于需要关注的物体之外时,不仅考虑单个第一时刻中,驾驶员的视线是否位于需要关注的物体之外,还会考虑第一时刻之后的一个第一时间段内,驾驶员的视线的移动参数是否与需要关注的物体的移动参数一致,有利于提高“驾驶员的视线是否位于需要关注的物体之外”这一判断过程的准确度,也有利于提高车辆行驶过程的安全度。此外,将第一移动参数和第二移动参数均确定为移动方向,也即提供了一种易于实现且准确度较高的实现方案。In the embodiment of the present application, when considering whether the driver's line of sight is outside the object that needs attention, not only whether the driver's line of sight is outside the object that needs attention in a single first moment is considered, but also whether the movement parameters of the driver's line of sight in a first time period after the first moment are consistent with the movement parameters of the object that needs attention, which is conducive to improving the accuracy of the judgment process of "whether the driver's line of sight is outside the object that needs attention", and is also conducive to improving the safety of the vehicle driving process. In addition, the first movement parameter and the second movement parameter are both determined as the movement direction, which provides an implementation solution that is easy to implement and has high accuracy.
在另一种实现方式中,驾驶员的视线位于需要关注的物体之外包括:在第一时刻以及第一时刻之后的连续的多个时刻,驾驶员的视线均位于需要关注的物体之外。具体的,在一种实现方式中,车辆可以针对第一时刻以及第一时刻之后的连续的多个时刻中的每个时刻,均判断驾驶员的视线是否位于需要关注的物体之外,若第一时刻以及第一时刻之后的连续的多个时刻中任意一个时刻驾驶员的视线位于需要关注的物体之外,则确定驾驶员的视线位于需要关注的物体之外;若第一时刻以及第一时刻之后的连续的多个时刻中的每个时刻驾驶员的视线均位于需要关注的物体之内,则确定驾驶员的视线位于需要关注的物体之内。车辆判断任意一个时刻驾驶员的视线是否位于需要关注的物体之外的具体实现方式,可以参阅上述描述,此处不做赘述。In another implementation, the driver's line of sight being outside the object that needs attention includes: at a first moment and at multiple consecutive moments after the first moment, the driver's line of sight is outside the object that needs attention. Specifically, in one implementation, the vehicle can determine whether the driver's line of sight is outside the object that needs attention for each of the first moment and multiple consecutive moments after the first moment. If the driver's line of sight is outside the object that needs attention at any moment in the first moment and multiple consecutive moments after the first moment, it is determined that the driver's line of sight is outside the object that needs attention; if the driver's line of sight is inside the object that needs attention at each moment in the first moment and multiple consecutive moments after the first moment, it is determined that the driver's line of sight is inside the object that needs attention. The specific implementation method of the vehicle determining whether the driver's line of sight is outside the object that needs attention at any moment can be referred to the above description, which will not be repeated here.
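An illustrative helper for the first implementation above, assuming a per-moment predicate (for example, one of the geometric checks sketched earlier) is available for the first moment and the consecutive moments after it; the names are hypothetical.

```python
from typing import Callable, Sequence

def gaze_outside_over_window(moments: Sequence[float],
                             gaze_outside_at: Callable[[float], bool]) -> bool:
    """Apply the per-moment judgment to the first moment and the consecutive moments
    after it; the gaze is treated as outside the object if it is outside at any moment."""
    return any(gaze_outside_at(t) for t in moments)
```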
在另一种实现方式中,车辆从第一信息中获取第三信息,第三信息包括在第一时刻以及第一时刻之后的连续的多个时刻,驾驶员的视线的观察范围在第一坐标系下的位置信息;车辆上可以部署第三神经网络,将第一时刻以及第一时刻之后的连续的多个时刻的车辆周围的环境信息和第三信息输入第三神经网络,由第三神经网络输出第三预测信息,第三预测信息指示第一时刻以及第一时刻之后的连续的多个时刻中是否存在任意一个时刻驾驶员的视线位于需要关注的物体之外,从而确定驾驶员的视线是否位于需要关注的物体之外。In another implementation, the vehicle obtains third information from the first information, the third information including position information of the driver's line of sight in the first coordinate system at a first moment and multiple consecutive moments after the first moment; a third neural network can be deployed on the vehicle, and the environmental information around the vehicle at the first moment and multiple consecutive moments after the first moment and the third information are input into the third neural network, and the third neural network outputs third prediction information, the third prediction information indicates whether there is any moment in the first moment and multiple consecutive moments after the first moment when the driver's line of sight is outside the object that needs attention, thereby determining whether the driver's line of sight is outside the object that needs attention.
本申请实施例中,在车辆行驶的过程中需要根据周围的物体来确定驾驶行为,也即智能驾驶系统和驾驶员在车辆行驶过程中均需要实时观察周围的物体,将驾驶员的视线位于需要关注的物体之外确定为驾驶员的视线与车辆的行驶意图不匹配的一种情况,符合人工驾驶的逻辑,也即本方案与实际应用场景的贴合度较高,有利于准确的确定驾驶员的视线是否与车辆的行驶意图匹配。In the embodiment of the present application, it is necessary to determine the driving behavior based on the surrounding objects during the driving of the vehicle, that is, the intelligent driving system and the driver both need to observe the surrounding objects in real time during the driving of the vehicle, and determine that the driver's line of sight is outside the object that needs attention as a situation where the driver's line of sight does not match the driving intention of the vehicle, which is in line with the logic of manual driving, that is, this solution has a high degree of fit with the actual application scenario, which is conducive to accurately determining whether the driver's line of sight matches the driving intention of the vehicle.
可选地,若执行步骤303,且在步骤303中车辆根据周围的环境信息,确定驾驶员在行驶过程中的第一时刻不存在需要关注的某个物体的情况下,车辆规划的行驶方向确定为驾驶员的视线的期望方向;则步骤304可以包括:判断驾驶员的视线的方向是否与车辆的行驶方向匹配;若判断结果为否,则可以确定驾驶员的视线满足第一条件,若判断结果为是,则可以确定驾驶员的视线不满足第一条件。可选地,驾驶员的视线的方向与车辆的行驶方向不匹配的情况可以包括驾驶员的视线方向与车辆的行驶方向之间的夹角大于或等于第二角度阈值。例如,第二角度阈值的取值可以为45度、50度或其他取值等。示例性地,车辆行驶的方向是向前,若驾驶员的视线的方向是向左,则车辆可以确定驾驶员的视线的方向与车辆的行驶意图不匹配,也即确定满足第一条件;若驾驶员的视线的方向是向前,则车辆可以确定驾驶员的视线的方向与车辆的行驶意图匹配,也即确定不满足第一条件,应理解,此处举例仅为方便理解本方案,不用于限定本方案。Optionally, if step 303 is executed, and in step 303, the vehicle determines that there is no object that the driver needs to pay attention to at the first moment of the driving process according to the surrounding environmental information, the driving direction planned by the vehicle is determined as the expected direction of the driver's line of sight; then step 304 may include: determining whether the direction of the driver's line of sight matches the driving direction of the vehicle; if the judgment result is no, it can be determined that the driver's line of sight meets the first condition, and if the judgment result is yes, it can be determined that the driver's line of sight does not meet the first condition. Optionally, the situation that the direction of the driver's line of sight does not match the driving direction of the vehicle may include that the angle between the driver's line of sight and the driving direction of the vehicle is greater than or equal to the second angle threshold. For example, the value of the second angle threshold may be 45 degrees, 50 degrees or other values. Exemplarily, if the vehicle's driving direction is forward and the direction of the driver's line of sight is to the left, then the vehicle can determine that the direction of the driver's line of sight does not match the driving intention of the vehicle, that is, it is determined that the first condition is met; if the direction of the driver's line of sight is forward, then the vehicle can determine that the direction of the driver's line of sight matches the driving intention of the vehicle, that is, it is determined that the first condition is not met. It should be understood that the example here is only for the convenience of understanding this solution and is not used to limit this solution.
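A sketch of the direction comparison used when no object requires attention, assuming the gaze direction and the planned driving direction are expressed as headings in degrees; the heading representation and the 45-degree default value of the second angle threshold are assumptions for illustration.

```python
def gaze_matches_driving_direction(gaze_heading_deg: float,
                                   planned_heading_deg: float,
                                   second_angle_threshold_deg: float = 45.0) -> bool:
    """Compare the gaze heading with the planned driving heading (degrees,
    0 = straight ahead); an angular difference at or above the second angle
    threshold is treated as a mismatch, i.e. the first condition is met."""
    diff = abs(gaze_heading_deg - planned_heading_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # wrap into [0, 180]
    return diff < second_angle_threshold_deg
```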
可选地,第一条件还可以包括驾驶员存在异常行为,则步骤304中,车辆还可以根据步骤301中获取的驾驶员在至少一个时刻的信息,确定驾驶员是否存在异常行为,若确定驾驶员存在异常行为,也可以触发进入步骤305,也即向驾驶员输出告警信息。例如,异常行为可以包括:疲劳驾驶、接打电话或驾驶员的其他异常行为等,此处不做穷举。Optionally, the first condition may also include abnormal behavior of the driver. In step 304, the vehicle may also determine whether the driver has abnormal behavior based on the information of the driver at at least one time obtained in step 301. If it is determined that the driver has abnormal behavior, it may also trigger to enter step 305, that is, output warning information to the driver. For example, abnormal behavior may include: fatigue driving, making phone calls, or other abnormal behaviors of the driver, etc., which are not exhaustive here.
305、输出告警信息。305. Output warning information.
本申请实施例中,在一种实现方式中,车辆在确定驾驶员的视线与车辆的行驶意图不匹配的情况下,可以直接触发进入步骤305,也即触发对驾驶员输出告警信息;在另一种实现方式中,车辆在确定驾驶员的视线与车辆的行驶意图不匹配的时长达到第一时长时,确定触发进入步骤305。其中,第一时长可以为预先设定在车辆中的,例如,第一时长的长度可以为15秒、20秒、25秒或其他时长等等,此处不做穷举。In the embodiment of the present application, in one implementation, when the vehicle determines that the driver's line of sight does not match the vehicle's driving intention, it can directly trigger to enter step 305, that is, trigger to output warning information to the driver; in another implementation, when the vehicle determines that the driver's line of sight does not match the vehicle's driving intention for a first time, it determines to trigger to enter step 305. The first time can be pre-set in the vehicle, for example, the length of the first time can be 15 seconds, 20 seconds, 25 seconds or other time, etc., which are not exhaustive here.
可选地,车辆中输出告警信息的方式可以包括至少两种,车辆可以根据驾驶员的视线与车辆的行驶意图不匹配的累计时长,确定具体采用哪种告警方式。例如,当车辆确定驾驶员的视线与车辆的行驶意图不匹配的累计时长达到第一时长时,可以确定采用第一类型的告警方式;当车辆确定驾驶员的视线与车辆的行驶意图不匹配的累计时长达到第二时长时,可以确定采用第二类型的告警方式;第二时长的取值可以大于第一时长。Optionally, the vehicle may output warning information in at least two ways, and the vehicle may determine which warning method to use based on the accumulated time that the driver's line of sight does not match the vehicle's driving intention. For example, when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, it can be determined to use the first type of warning method; when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a second duration, it can be determined to use the second type of warning method; the value of the second duration can be greater than the first duration.
可选地,车辆输出告警信息的至少两种方式可以包括:第一类型的告警方式、第二类型的告警方式和第三类型的告警方式,对应的,当车辆确定驾驶员的视线与车辆的行驶意图不匹配的累计时长达到第一时长时,可以确定采用第一类型的告警方式;当车辆确定驾驶员的视线与车辆的行驶意图不匹配的累计时长达到第二时长时,可以确定采用第二类型的告警方式;当车辆确定驾驶员的视线与车辆的行驶意图不匹配的累计时长达到第三时长时,可以确定采用第三类型的告警方式。第一类型的告警方式可以包括视觉类型的告警信息,第二类型的告警方式可以包括视觉类型的告警信息和声觉类型的告警信息,第三类型的告警方式均可以包括视觉类型的告警信息、声觉类型的告警信息和触觉类型的告警信息。Optionally, at least two ways in which the vehicle outputs warning information may include: a first type of warning method, a second type of warning method, and a third type of warning method. Correspondingly, when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, it may be determined to use the first type of warning method; when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a second duration, it may be determined to use the second type of warning method; when the vehicle determines that the cumulative duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a third duration, it may be determined to use the third type of warning method. The first type of warning method may include visual type warning information, the second type of warning method may include visual type warning information and acoustic type warning information, and the third type of warning method may include visual type warning information, acoustic type warning information, and tactile type warning information.
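A non-authoritative sketch of the tiered triggering logic described above; the default durations simply reuse the 20/40/60-second example given below for the K1 row of Table 1 and are otherwise arbitrary placeholders.

```python
def select_alarm_level(mismatch_seconds: float,
                       first_duration: float = 20.0,
                       second_duration: float = 40.0,
                       third_duration: float = 60.0) -> int:
    """Return 0 (no alarm) or the alarm level reached by the cumulative mismatch
    duration: 1 = visual, 2 = visual + audible, 3 = visual + audible + haptic."""
    if mismatch_seconds >= third_duration:
        return 3
    if mismatch_seconds >= second_duration:
        return 2
    if mismatch_seconds >= first_duration:
        return 1
    return 0
```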
例如,视觉类型的告警信息可以包括:通过车辆的仪表盘上的提示灯向驾驶员输出告警信息,或者,可以通过抬头显示系统(head up display,HUD)向驾驶员输出告警信息,或者,还可以通过其他方式输出视觉类型的告警信息等等,此处不做穷举。触觉类型的告警信息可以包括:拉紧安全带、方向盘发出振动或其他方式的触觉类型的告警信息等等,此处举例仅为方便理解本方案,不用于限定本方案。For example, visual warning information may include: outputting warning information to the driver through a warning light on the dashboard of the vehicle, or outputting warning information to the driver through a head-up display (HUD), or outputting visual warning information through other means, etc., which are not exhaustive here. Tactile warning information may include: tightening the seat belt, vibrating the steering wheel, or other tactile warning information, etc., which are examples here only for the convenience of understanding this solution and are not used to limit this solution.
可选地,第一时长的取值可以与驾驶员的驾驶行为的安全度具有关联关系,第二时长和/或第三时长的取值也可以与驾驶员的驾驶行为的安全度具有关联关系。例如,驾驶员的驾驶行为的安全度越高,第一时长的取值可以越长,驾驶员的驾驶行为的安全度越低,第一时长的取值可以越短。对应的,驾驶员的驾驶行为的安全度越高,第二时长和/或第三时长的取值可以越长,驾驶员的驾驶行为的安全度越低,第二时长和/或第三时长的取值可以越短。Optionally, the value of the first duration may be associated with the safety of the driver's driving behavior, and the values of the second duration and/or the third duration may also be associated with the safety of the driver's driving behavior. For example, the higher the safety of the driver's driving behavior, the longer the value of the first duration may be, and the lower the safety of the driver's driving behavior, the shorter the value of the first duration may be. Correspondingly, the higher the safety of the driver's driving behavior, the longer the value of the second duration and/or the third duration may be, and the lower the safety of the driver's driving behavior, the shorter the value of the second duration and/or the third duration may be.
具体的,车辆可以根据驾驶员的如下任一种或多种驾驶行为信息,确定驾驶员的驾驶行为的安全度:前向预警累计次数(Forward collision Warning,FCW)、急加速次数、急减速次数、跟车时与前车的距离、打方向盘的缓急程度、车辆的智能驾驶系统接管用户驾驶的平均次数或其他能够反映驾驶员的驾驶行为的安全度的信息等,此处不进行穷举。Specifically, the vehicle can determine the safety of the driver's driving behavior based on any one or more of the following driving behavior information: the cumulative number of forward collision warnings (FCW), the number of sudden accelerations, the number of sudden decelerations, the distance to the vehicle in front when following, the speed of steering, the average number of times the vehicle's intelligent driving system takes over the user's driving, or other information that can reflect the safety of the driver's driving behavior, etc., which are not listed exhaustively here.
更具体的,车辆可以对驾驶员的至少一种驾驶行为信息中每种驾驶行为信息进行无量纲化处理后进行加权求和,得到驾驶员的驾驶行为的安全指数,该安全指数指示了驾驶员的驾驶行为的安全度;可选地,安全指数越高,驾驶员的驾驶行为的安全度可以越低。为更直观地理解本方案,如下通过表1展示驾驶员的驾驶行为的安全指数和第一时长、第二时长以及第三时长的关联关系。
More specifically, the vehicle may perform dimensionless processing on each type of at least one type of driving behavior information of the driver and then perform weighted summation to obtain a safety index of the driver's driving behavior, which indicates the safety of the driver's driving behavior; optionally, the higher the safety index, the lower the safety of the driver's driving behavior. To more intuitively understand the present solution, the relationship between the safety index of the driver's driving behavior and the first duration, the second duration, and the third duration is shown in Table 1 below.
表1Table 1
其中,表1中的K1、K2和K3分别代表不同的安全指数,对于表1中的第一行,当安全指数为K1时,驾驶员的视线与车辆的行驶意图不匹配的累计时长达到20秒,触发车辆输出第一类型的警示信息,驾驶员的视线与车辆的行驶意图不匹配的累计时长达到40秒,触发车辆输出第二类型的警示信息,驾驶员的视线与车辆的行驶意图不匹配的累计时长达到60秒,触发车辆输出第三类型的警示信息。对于表1中的第二行和第三行的理解,可以参阅上述对表1的第一行的解释,此处不再赘述,应理解,表1中的示例仅为方便理解本方案,不用于限定本方案。Among them, K1, K2 and K3 in Table 1 represent different safety indexes respectively. For the first row in Table 1, when the safety index is K1, the cumulative time that the driver's line of sight does not match the vehicle's driving intention reaches 20 seconds, triggering the vehicle to output the first type of warning information, the cumulative time that the driver's line of sight does not match the vehicle's driving intention reaches 40 seconds, triggering the vehicle to output the second type of warning information, and the cumulative time that the driver's line of sight does not match the vehicle's driving intention reaches 60 seconds, triggering the vehicle to output the third type of warning information. For the understanding of the second and third rows in Table 1, please refer to the above explanation of the first row of Table 1, which will not be repeated here. It should be understood that the examples in Table 1 are only for the convenience of understanding this solution and are not used to limit this solution.
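As an illustrative sketch only, the safety index and its link to the trigger durations could be organized as below; the min-max normalization choice, the weights, and the value ranges are assumptions, and only the K1 durations (20/40/60 seconds) are taken from the description of Table 1.

```python
import numpy as np

def safety_index(behavior_values: np.ndarray,
                 value_ranges: np.ndarray,
                 weights: np.ndarray) -> float:
    """Min-max (dimensionless) normalization of each driving-behavior statistic
    (e.g. FCW count, hard accelerations), followed by a weighted sum; a higher
    index is taken here to indicate less safe driving behavior."""
    mins, maxs = value_ranges[:, 0], value_ranges[:, 1]
    normalized = (behavior_values - mins) / np.maximum(maxs - mins, 1e-9)
    return float(np.dot(weights, normalized))

# Only the K1 row of Table 1 is spelled out in the text (20 s / 40 s / 60 s);
# the remaining rows would be configured analogously.
DURATIONS_BY_SAFETY_BUCKET = {
    "K1": (20.0, 40.0, 60.0),
}
```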
本申请实施例中,驾驶员的视线与车辆的行驶意图不匹配的时长达到第一时长时,向驾驶员输出告警信息,也即第一时长的取值可以影响输出告警信息的频率,而驾驶员的驾驶行为的安全度会影响第一时长的取值,有利于降低向安全度高的驾驶员输出告警信息的频率,从而避免打扰到用户;也有利于提高向安全度低的驾驶员输出告警信息的频率,以提高驾驶过程的安全度。In an embodiment of the present application, when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, a warning message is output to the driver, that is, the value of the first duration can affect the frequency of outputting the warning message, and the safety of the driver's driving behavior will affect the value of the first duration, which is beneficial to reducing the frequency of outputting warning messages to drivers with high safety levels, thereby avoiding disturbing the user; it is also beneficial to increase the frequency of outputting warning messages to drivers with low safety levels, thereby improving the safety of the driving process.
306、向驾驶员显示需要关注的物体。 306. Display objects that require attention to the driver.
本申请的一些实施例中,车辆还可以向驾驶员展示需要关注的物体,例如,车辆可以通过HUD向驾驶员突出显示车辆确定的需要关注的物体;可选地,车辆可以在通过HUD向驾驶员显示导航路线时,突出显示需要关注的物体。突出显示的方式包括如下任一种或多种:在需要关注的物体旁边增设提示文本、将需要关注的物体框起来或其他突出显示需要关注的物体的方式等等,此处不做穷举。又例如,车辆可以通过语音播放的形式向用户提示需要关注的物体等等,车辆也可以采用其他方式向用户展示需要关注的物体,此处不做穷举。本申请实施例中,还可以向驾驶员展示需要关注的物体,从而可以协助驾驶员学习智能驾驶系统的驾驶思路,有利于提高驾驶员的驾驶行为的安全度。In some embodiments of the present application, the vehicle can also display objects that need attention to the driver. For example, the vehicle can highlight the objects that the vehicle has determined to need attention to the driver through the HUD; optionally, the vehicle can highlight the objects that need attention when displaying the navigation route to the driver through the HUD. The highlighting method includes any one or more of the following: adding prompt text next to the object that needs attention, framing the object that needs attention, or other methods of highlighting the object that needs attention, etc., which are not exhaustive here. For another example, the vehicle can prompt the user of the objects that need attention in the form of voice playback, etc., and the vehicle can also use other methods to show the user the objects that need attention, which are not exhaustive here. In the embodiment of the present application, the driver can also be shown objects that need attention, so as to assist the driver in learning the driving ideas of the intelligent driving system, which is conducive to improving the safety of the driver's driving behavior.
本申请实施例中,提供了告警信息的另一种触发场景,在确定驾驶员的视线与车辆的行驶意图不匹配的情况下,也即在确定驾驶员关注的物体与车辆的智能驾驶系统确定的行驶意图不匹配的情况下,向驾驶员输出告警信息;此外,在“是否输出告警信息”的确定过程中,不仅考虑了车辆内部的驾驶员的视线,还考虑了车辆周围的环境信息,有利于更精准的输出告警信息。In an embodiment of the present application, another triggering scenario for warning information is provided. When it is determined that the driver's line of sight does not match the vehicle's driving intention, that is, when it is determined that the object the driver is paying attention to does not match the driving intention determined by the vehicle's intelligent driving system, a warning message is output to the driver; in addition, in the determination process of "whether to output warning information", not only the driver's line of sight inside the vehicle is taken into account, but also the environmental information around the vehicle is taken into account, which is conducive to more accurate output of warning information.
在图1至图8所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关设备。具体参阅图9,图9为本申请实施例提供的车辆告警装置的一种结构示意图。车辆告警装置900可以包括获取模块901和告警模块902,其中,获取模块901,用于获取驾驶员的视线和车辆周围的环境信息;告警模块902,用于在根据驾驶员的视线确定满足第一条件的情况下,输出告警信息,其中,第一条件包括驾驶员的视线与车辆的行驶意图不匹配,车辆的行驶意图基于车辆周围的环境信息确定。On the basis of the embodiments corresponding to Figures 1 to 8, in order to better implement the above-mentioned scheme of the embodiment of the present application, the following also provides related equipment for implementing the above-mentioned scheme. Please refer to Figure 9 in detail, which is a structural schematic diagram of a vehicle warning device provided in the embodiment of the present application. The vehicle alarm device 900 may include an acquisition module 901 and an alarm module 902, wherein the acquisition module 901 is used to obtain the driver's line of sight and the environmental information around the vehicle; the alarm module 902 is used to output an alarm message when it is determined that the first condition is met according to the driver's line of sight, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle.
可选地,请参阅图10,图10为本申请实施例提供的车辆告警装置的另一种结构示意图。车辆告警装置900还包括:处理模块903,用于根据车辆周围的环境信息,确定驾驶员在行驶过程中需要关注的物体,其中,车辆的行驶意图包括驾驶员在行驶过程中需要关注的物体,驾驶员的视线与车辆的行驶意图不匹配的情况包括:驾驶员的视线位于需要关注的物体之外。Optionally, please refer to Figure 10, which is another structural schematic diagram of the vehicle warning device provided in an embodiment of the present application. The vehicle warning device 900 also includes: a processing module 903, which is used to determine the objects that the driver needs to pay attention to during driving according to the environmental information around the vehicle, wherein the driving intention of the vehicle includes the objects that the driver needs to pay attention to during driving, and the situation where the driver's line of sight does not match the driving intention of the vehicle includes: the driver's line of sight is outside the object that needs attention.
可选地,请参阅图10,处理模块903,还用于根据车辆周围的环境信息,确定需要关注的物体在第一时间段内的第一移动参数,其中,驾驶员的视线位于需要关注的物体之外还包括:驾驶员在第一时刻的视线位于需要关注的物体之内,且驾驶员的视线在第一时间段内的第二移动参数与第一移动参数不匹配,第一时间段位于第一时刻之后。Optionally, referring to Figure 10, the processing module 903 is also used to determine the first movement parameter of the object that needs attention within the first time period based on the environmental information around the vehicle, wherein the driver's line of sight is outside the object that needs attention and also includes: the driver's line of sight at the first moment is within the object that needs attention, and the second movement parameter of the driver's line of sight in the first time period does not match the first movement parameter, and the first time period is after the first moment.
可选地,第一移动参数包括需要关注的物体在第一时间段内的第一移动方向,第二移动参数包括驾驶员的视线在第一时间段内的第二移动方向,第二移动参数与第一移动参数不匹配的情况包括:第一移动方向和第二移动方向之间的差异满足第二条件。Optionally, the first movement parameter includes a first movement direction of the object of interest within a first time period, the second movement parameter includes a second movement direction of the driver's line of sight within the first time period, and the situation where the second movement parameter does not match the first movement parameter includes: the difference between the first movement direction and the second movement direction satisfies the second condition.
可选地,请参阅图10,车辆告警装置900还包括:展示模块904,用于向驾驶员展示需要关注的物体。Optionally, referring to FIG. 10 , the vehicle warning device 900 further includes: a display module 904 for displaying objects requiring attention to the driver.
可选地,告警模块902,具体用于当驾驶员的视线与车辆的行驶意图不匹配的时长达到第一时长时,输出告警信息,第一时长的取值与驾驶员的驾驶行为的安全度具有关联关系。Optionally, the warning module 902 is specifically used to output a warning message when the duration of the mismatch between the driver's line of sight and the vehicle's driving intention reaches a first duration, and the value of the first duration is correlated with the safety of the driver's driving behavior.
需要说明的是,车辆告警装置900中各模块/单元之间的信息交互、执行过程等内容,与本申请中图2至图8对应的各个方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。It should be noted that the information interaction, execution process, etc. between the modules/units in the vehicle alarm device 900 are based on the same concept as the various method embodiments corresponding to Figures 2 to 8 in the present application. The specific contents can be found in the description of the method embodiments shown in the previous part of the present application, and will not be repeated here.
本申请实施例还提供了一种车辆,结合上述对图1的描述,请参阅图11,图11为本申请实施例提供的车辆的另一种结构示意图,其中,车辆100上可以部署有图9和图10对应实施例中所描述的车辆告警装置900,用于实现图2至图8对应实施例中车辆的功能。由于在部分实施例中,车辆100还可以包括通信功能,则车辆100除了包括图1中所示的组件,还可以包括:接收器1101和发射器1102,其中,处理器113可以包括应用处理器1131和通信处理器1132。在本申请的一些实施例中,接收器1101、发射器1102、处理器113和存储器114可通过总线或其它方式连接。The embodiment of the present application also provides a vehicle. In combination with the above description of FIG. 1 , please refer to FIG. 11 . FIG. 11 is another structural schematic diagram of the vehicle provided in the embodiment of the present application, wherein the vehicle 100 may be deployed with the vehicle alarm device 900 described in the corresponding embodiments of FIG. 9 and FIG. 10 , for realizing the functions of the vehicle in the corresponding embodiments of FIG. 2 to FIG. 8 . Since in some embodiments, the vehicle 100 may also include a communication function, the vehicle 100 may include, in addition to the components shown in FIG. 1 , a receiver 1101 and a transmitter 1102, wherein the processor 113 may include an application processor 1131 and a communication processor 1132. In some embodiments of the present application, the receiver 1101, the transmitter 1102, the processor 113 and the memory 114 may be connected via a bus or other means.
处理器113控制车辆的操作。具体的应用中,车辆100的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。 The processor 113 controls the operation of the vehicle. In a specific application, the various components of the vehicle 100 are coupled together through a bus system, wherein the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus. However, for the sake of clarity, various buses are referred to as bus systems in the figure.
接收器1101可用于接收输入的数字或字符信息,以及产生与车辆的相关设置以及功能控制有关的信号输入。发射器1102可用于通过第一接口输出数字或字符信息;发射器1102还可用于通过第一接口向磁盘组发送指令,以修改磁盘组中的数据;发射器1102还可以包括显示屏等显示设备。The receiver 1101 can be used to receive input digital or character information and generate signal input related to the relevant settings and function control of the vehicle. The transmitter 1102 can be used to output digital or character information through the first interface; the transmitter 1102 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; the transmitter 1102 can also include a display device such as a display screen.
本申请实施例中,应用处理器1131,用于执行图2对应实施例中的车辆执行的车辆告警方法。具体的,应用处理器1131用于执行如下步骤:获取驾驶员的视线和车辆周围的环境信息;在根据驾驶员的视线确定满足第一条件的情况下,输出告警信息,其中,第一条件包括驾驶员的视线与车辆的行驶意图不匹配,车辆的行驶意图基于车辆周围的环境信息确定。需要说明的是,对于应用处理器1131执行车辆告警方法的具体实现方式以及带来的有益效果,均可以参考图2至图8对应的各个方法实施例中的叙述,此处不再一一赘述。In an embodiment of the present application, the application processor 1131 is used to execute the vehicle alarm method executed by the vehicle in the embodiment corresponding to Figure 2. Specifically, the application processor 1131 is used to execute the following steps: obtain the driver's line of sight and the environmental information around the vehicle; when it is determined that the first condition is met based on the driver's line of sight, output an alarm message, wherein the first condition includes that the driver's line of sight does not match the vehicle's driving intention, and the vehicle's driving intention is determined based on the environmental information around the vehicle. It should be noted that for the specific implementation method of the application processor 1131 executing the vehicle alarm method and the beneficial effects brought about, reference can be made to the descriptions in the various method embodiments corresponding to Figures 2 to 8, and they will not be repeated here one by one.
本申请实施例中还提供一种计算机可读存储介质,该计算机可读存储介质中存储有用于车辆告警的程序,当其在计算机上运行时,使得计算机执行如前述图2至图8所示实施例描述的方法中车辆所执行的步骤。A computer-readable storage medium is also provided in an embodiment of the present application, in which a program for vehicle warning is stored. When the program is run on a computer, the computer executes the steps executed by the vehicle in the method described in the embodiments shown in the aforementioned Figures 2 to 8.
本申请实施例中还提供一种包括计算机程序产品,当其在计算机上运行时,使得计算机执行如前述图2至图8所示实施例描述的方法中车辆所执行的步骤。Also provided in an embodiment of the present application is a computer program product, which, when executed on a computer, enables the computer to execute the steps executed by the vehicle in the method described in the embodiments shown in the aforementioned Figures 2 to 8.
本申请实施例中还提供一种电路系统,所述电路系统包括处理电路,所述处理电路配置为执行如前述图2至图8所示实施例描述的方法中车辆所执行的步骤。A circuit system is also provided in an embodiment of the present application, wherein the circuit system includes a processing circuit, and the processing circuit is configured to execute the steps performed by the vehicle in the method described in the embodiments shown in Figures 2 to 8 above.
本申请实施例提供的车辆告警装置或车辆具体可以为芯片,芯片包括:处理单元和通信单元,所述处理单元例如可以是处理器,所述通信单元例如可以是输入/输出接口、管脚或电路等。该处理单元可执行存储单元存储的计算机执行指令,以使服务器内的芯片执行上述图2至图8所示实施例描述的车辆告警方法。可选地,所述存储单元为所述芯片内的存储单元,如寄存器、缓存等,所述存储单元还可以是所述无线接入设备端内的位于所述芯片外部的存储单元,如只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。The vehicle warning device or vehicle provided in the embodiment of the present application may be a chip, and the chip includes: a processing unit and a communication unit, wherein the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin or a circuit, etc. The processing unit may execute the computer execution instructions stored in the storage unit so that the chip in the server executes the vehicle alarm method described in the embodiments shown in Figures 2 to 8 above. Optionally, the storage unit is a storage unit in the chip, such as a register, a cache, etc. The storage unit may also be a storage unit located outside the chip in the wireless access device end, such as a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM), etc.
具体的,请参阅图12,图12为本申请实施例提供的芯片的一种结构示意图,所述芯片可以表现为神经网络处理器NPU 120,NPU 120作为协处理器挂载到主CPU(Host CPU)上,由Host CPU分配任务。NPU的核心部分为运算电路1203,通过控制器1205控制运算电路1203提取存储器中的矩阵数据并进行乘法运算。Specifically, please refer to FIG. 12, which is a schematic diagram of a structure of a chip provided in an embodiment of the present application, wherein the chip may be a neural network processor NPU 120, which is mounted on the host CPU (Host CPU) as a coprocessor and is assigned tasks by the Host CPU. The core part of the NPU is the operation circuit 1203; the controller 1205 controls the operation circuit 1203 to extract matrix data in the memory and perform multiplication operations.
在一些实现中,运算电路1203内部包括多个处理单元(Process Engine,PE)。在一些实现中,运算电路1203是二维脉动阵列。运算电路1203还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路1203是通用的矩阵处理器。In some implementations, the operation circuit 1203 includes multiple processing units (Process Engine, PE) inside. In some implementations, the operation circuit 1203 is a two-dimensional systolic array. The operation circuit 1203 can also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition. In some implementations, the operation circuit 1203 is a general-purpose matrix processor.
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路从权重存储器1202中取矩阵B相应的数据,并缓存在运算电路中每一个PE上。运算电路从输入存储器1201中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)1208中。For example, assume there is an input matrix A, a weight matrix B, and an output matrix C. The operation circuit takes the corresponding data of matrix B from the weight memory 1202 and caches it on each PE in the operation circuit. The operation circuit takes the matrix A data from the input memory 1201 and performs matrix operation with matrix B, and the partial result or final result of the matrix is stored in the accumulator 1208.
统一存储器1206用于存放输入数据以及输出数据。权重数据直接通过存储单元访问控制器(Direct Memory Access Controller,DMAC)1205被搬运到权重存储器1202中。输入数据也通过DMAC被搬运到统一存储器1206中。The unified memory 1206 is used to store input data and output data. The weight data is directly transferred to the weight memory 1202 through the direct memory access controller (DMAC) 1205. The input data is also transferred to the unified memory 1206 through the DMAC.
BIU为Bus Interface Unit,即总线接口单元1210,用于AXI总线与DMAC和取指存储器(Instruction Fetch Buffer,IFB)1209的交互。BIU stands for Bus Interface Unit, that is, the bus interface unit 1210, which is used for the interaction between the AXI bus and the DMAC and the instruction fetch buffer (IFB) 1209.
总线接口单元1210(Bus Interface Unit,简称BIU),用于取指存储器1209从外部存储器获取指令,还用于存储单元访问控制器1205从外部存储器获取输入矩阵A或者权重矩阵B的原数据。The bus interface unit 1210 (Bus Interface Unit, BIU for short) is used for the instruction fetch memory 1209 to obtain instructions from the external memory, and is also used for the storage unit access controller 1205 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
DMAC主要用于将外部存储器DDR中的输入数据搬运到统一存储器1206或将权重数据搬运到权重存储器1202中或将输入数据搬运到输入存储器1201中。DMAC is mainly used to transfer input data in the external memory DDR to the unified memory 1206 or to transfer weight data to the weight memory 1202 or to transfer input data to the input memory 1201.
向量计算单元1207包括多个运算处理单元,在需要的情况下,对运算电路的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。主要用于神经网络中非卷积/全连接层网络计算,如Batch Normalization(批归一化),像素级求和,对特征平面进行上采样等。The vector calculation unit 1207 includes multiple operation processing units, which further process the output of the operation circuit when necessary, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, etc. It is mainly used for non-convolutional/fully connected layer network calculations in neural networks, such as Batch Normalization, pixel-level summation, upsampling of feature planes, etc.
在一些实现中,向量计算单元1207能将经处理的输出的向量存储到统一存储器1206。例如,向量计算单元1207可以将线性函数和/或非线性函数应用到运算电路1203的输出,例如对卷积层提取的特征平面进行线性插值,再例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元1207生成归一化的值、像素级求和的值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路1203的激活输入,例如用于在神经网络中的后续层中的使用。In some implementations, the vector calculation unit 1207 can store the processed output vector to the unified memory 1206. For example, the vector calculation unit 1207 can apply a linear function and/or a nonlinear function to the output of the operation circuit 1203, such as linear interpolation of the feature plane extracted by the convolution layer, and then, for example, a vector of accumulated values to generate an activation value. In some implementations, the vector calculation unit 1207 generates a normalized value, a pixel-level summed value, or both. In some implementations, the processed output vector can be used as an activation input to the operation circuit 1203, such as for use in a subsequent layer in a neural network.
控制器1205连接的取指存储器(instruction fetch buffer)1209,用于存储控制器1205使用的指令;An instruction fetch buffer 1209 connected to the controller 1205 is used to store instructions used by the controller 1205;
统一存储器1206,输入存储器1201,权重存储器1202以及取指存储器1209均为On-Chip存储器。外部存储器私有于该NPU硬件架构。Unified memory 1206, input memory 1201, weight memory 1202 and instruction fetch memory 1209 are all on-chip memories. External memories are private to the NPU hardware architecture.
其中,图2至图8示出的各个方法实施例中提及的神经网络中各层的运算可以由运算电路1203或向量计算单元1207执行。Among them, the operations of each layer in the neural network mentioned in each method embodiment shown in Figures 2 to 8 can be performed by the operation circuit 1203 or the vector calculation unit 1207.
其中,上述任一处提到的处理器,可以是一个通用中央处理器,微处理器,ASIC,或一个或多个用于控制上述第一方面方法的程序执行的集成电路。The processor mentioned in any of the above places may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the above-mentioned first aspect method.
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。It should also be noted that the device embodiments described above are merely schematic, wherein the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment. In addition, in the drawings of the device embodiments provided by the present application, the connection relationship between the modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、ROM、RAM、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。Through the description of the above implementation mode, the technicians in the relevant field can clearly understand that the present application can be implemented by means of software plus necessary general hardware, and of course, it can also be implemented by special hardware including special integrated circuits, special CPUs, special memories, special components, etc. In general, all functions completed by computer programs can be easily implemented by corresponding hardware, and the specific hardware structure used to implement the same function can also be various, such as analog circuits, digital circuits or special circuits. However, for the present application, software program implementation is a better implementation mode in more cases. Based on such an understanding, the technical solution of the present application is essentially or the part that contributes to the prior art can be embodied in the form of a software product, which is stored in a readable storage medium, such as a computer floppy disk, U disk, mobile hard disk, ROM, RAM, disk or optical disk, etc., including a number of instructions to enable a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods described in each embodiment of the present application.
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。In the above embodiments, all or part of the embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented by software, all or part of the embodiments may be implemented in the form of a computer program product.
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。 The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the process or function described in the embodiment of the present application is generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website site, a computer, a server, or a data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) mode to another website site, computer, server, or data center. The computer-readable storage medium may be any available medium that a computer can store or a data storage device such as a server or a data center that includes one or more available media integrations. The available medium may be a magnetic medium, (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)), etc.

Claims (15)

  1. A vehicle alarm method, characterized in that the method comprises:
    obtaining a driver's line of sight and environmental information around a vehicle; and
    outputting warning information when it is determined, based on the driver's line of sight, that a first condition is satisfied, wherein the first condition comprises that the driver's line of sight does not match a driving intention of the vehicle, and the driving intention of the vehicle is determined based on the environmental information around the vehicle.
  2. The method according to claim 1, characterized in that the method further comprises:
    determining, based on the environmental information around the vehicle, an object that the driver needs to pay attention to during driving, wherein the driving intention of the vehicle comprises the object that the driver needs to pay attention to during driving, and the case in which the driver's line of sight does not match the driving intention of the vehicle comprises: the driver's line of sight falls outside the object that needs attention.
  3. The method according to claim 2, characterized in that the method further comprises:
    determining, based on the environmental information around the vehicle, a first movement parameter of the object that needs attention within a first time period, wherein the case in which the driver's line of sight falls outside the object that needs attention further comprises: the driver's line of sight at a first moment is within the object that needs attention, and a second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, the first time period being after the first moment.
  4. The method according to claim 3, characterized in that the first movement parameter comprises a first movement direction of the object that needs attention within the first time period, the second movement parameter comprises a second movement direction of the driver's line of sight within the first time period, and the case in which the second movement parameter does not match the first movement parameter comprises: a difference between the first movement direction and the second movement direction satisfies a second condition.
  5. The method according to claim 2, characterized in that the method further comprises:
    displaying the object that needs attention to the driver.
  6. The method according to any one of claims 1 to 5, characterized in that the outputting warning information when it is determined, based on the driver's line of sight, that the first condition is satisfied comprises:
    outputting the warning information when a duration for which the driver's line of sight does not match the driving intention of the vehicle reaches a first duration, wherein a value of the first duration is related to a safety degree of the driver's driving behavior.
  7. A vehicle alarm apparatus, characterized in that the apparatus comprises:
    an obtaining module, configured to obtain a driver's line of sight and environmental information around a vehicle; and
    an alarm module, configured to output warning information when it is determined, based on the driver's line of sight, that a first condition is satisfied, wherein the first condition comprises that the driver's line of sight does not match a driving intention of the vehicle, and the driving intention of the vehicle is determined based on the environmental information around the vehicle.
  8. The apparatus according to claim 7, characterized in that the apparatus further comprises:
    a processing module, configured to determine, based on the environmental information around the vehicle, an object that the driver needs to pay attention to during driving, wherein the driving intention of the vehicle comprises the object that the driver needs to pay attention to during driving, and the case in which the driver's line of sight does not match the driving intention of the vehicle comprises: the driver's line of sight falls outside the object that needs attention.
  9. The apparatus according to claim 8, characterized in that
    the processing module is further configured to determine, based on the environmental information around the vehicle, a first movement parameter of the object that needs attention within a first time period, wherein the case in which the driver's line of sight falls outside the object that needs attention further comprises: the driver's line of sight at a first moment is within the object that needs attention, and a second movement parameter of the driver's line of sight within the first time period does not match the first movement parameter, the first time period being after the first moment.
  10. The apparatus according to claim 9, characterized in that the first movement parameter comprises a first movement direction of the object that needs attention within the first time period, the second movement parameter comprises a second movement direction of the driver's line of sight within the first time period, and the case in which the second movement parameter does not match the first movement parameter comprises: a difference between the first movement direction and the second movement direction satisfies a second condition.
  11. The apparatus according to claim 8, characterized in that the apparatus further comprises:
    a display module, configured to display the object that needs attention to the driver.
  12. The apparatus according to any one of claims 7 to 11, characterized in that
    the alarm module is specifically configured to output the warning information when a duration for which the driver's line of sight does not match the driving intention of the vehicle reaches a first duration, wherein a value of the first duration is related to a safety degree of the driver's driving behavior.
  13. A vehicle, characterized by comprising a processor, wherein the processor is coupled to a memory, the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method according to any one of claims 1 to 6 is implemented.
  14. A computer-readable storage medium, comprising a program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 6.
  15. A circuit system, characterized in that the circuit system comprises a processing circuit, and the processing circuit is configured to perform the method according to any one of claims 1 to 6.
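
Purely as an assumed, non-authoritative illustration of claims 3, 4 and 6 above, the sketch below compares the movement direction of a tracked object with the movement direction of the driver's line of sight over the same time window, and only warns once the mismatch has lasted for a first duration that depends on a driving-safety score. The angle threshold, the linear mapping from the safety score to the duration, and all identifiers are assumptions, not values taken from the claims.

    # Illustrative sketch only -- thresholds and mappings are assumptions.
    import math
    from typing import List, Tuple

    Point = Tuple[float, float]


    def movement_direction(track: List[Point]) -> float:
        """Overall heading of a track in radians, from its first to its last sample."""
        (x0, y0), (x1, y1) = track[0], track[-1]
        return math.atan2(y1 - y0, x1 - x0)


    def directions_mismatch(obj_track: List[Point], gaze_track: List[Point],
                            angle_threshold_rad: float = math.pi / 4) -> bool:
        """'Second condition': the angular difference between the two headings exceeds a threshold."""
        diff = abs(movement_direction(obj_track) - movement_direction(gaze_track))
        diff = min(diff, 2 * math.pi - diff)  # wrap the difference into [0, pi]
        return diff > angle_threshold_rad


    def first_duration_s(safety_score: float, base_s: float = 2.0) -> float:
        """Map a driving-safety score in [0, 1] to the mismatch duration tolerated before warning."""
        return base_s * (0.5 + safety_score)  # a safer driving history tolerates a longer mismatch


    def should_warn(mismatch_duration_s: float, safety_score: float,
                    obj_track: List[Point], gaze_track: List[Point]) -> bool:
        return (directions_mismatch(obj_track, gaze_track)
                and mismatch_duration_s >= first_duration_s(safety_score))

A system built along these lines would typically accumulate the mismatch duration frame by frame from the driver-monitoring camera and the perception stack before calling should_warn.
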
PCT/CN2023/126656 2022-10-31 2023-10-26 Vehicle alarm method and related device WO2024093768A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211366481.9A CN115675504A (en) 2022-10-31 2022-10-31 Vehicle warning method and related equipment
CN202211366481.9 2022-10-31

Publications (1)

Publication Number Publication Date
WO2024093768A1 (en)

Family

ID=85047145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/126656 WO2024093768A1 (en) 2022-10-31 2023-10-26 Vehicle alarm method and related device

Country Status (2)

Country Link
CN (1) CN115675504A (en)
WO (1) WO2024093768A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115675504A (en) * 2022-10-31 2023-02-03 华为技术有限公司 Vehicle warning method and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017011186A1 (en) * 2016-12-12 2018-06-14 Scania Cv Ab Method, control unit and system for reducing false alarms
KR20180129044A (en) * 2017-05-25 2018-12-05 (주)에이다스원 Driver assistance apparatus in vehicle and method for guidance a safety driving thereof
US20190088130A1 (en) * 2017-09-18 2019-03-21 Anuj Kapuria Monitoring drivers and external environment for vehicles
CN112654547A (en) * 2020-09-25 2021-04-13 华为技术有限公司 Driving reminding method, device and system
US20210357670A1 (en) * 2019-06-10 2021-11-18 Huawei Technologies Co., Ltd. Driver Attention Detection Method
CN115675504A (en) * 2022-10-31 2023-02-03 华为技术有限公司 Vehicle warning method and related equipment

Also Published As

Publication number Publication date
CN115675504A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
WO2021052213A1 (en) Method and device for adjusting accelerator pedal characteristic
WO2021102955A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
WO2022204925A1 (en) Image obtaining method and related equipment
WO2021057344A1 (en) Data presentation method and terminal device
WO2022205243A1 (en) Method and apparatus for obtaining lane change area
WO2022204855A1 (en) Image processing method and related terminal device
WO2021036592A1 (en) Adaptive adjustment method and device for rear-view mirror
CN112512887B (en) Driving decision selection method and device
WO2024093768A1 (en) Vehicle alarm method and related device
US20240137721A1 (en) Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle
WO2021217575A1 (en) Identification method and identification device for object of interest of user
EP4180297A1 (en) Automatic driving control method and apparatus
WO2022061702A1 (en) Method, apparatus, and system for driving alerts
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
WO2022057745A1 (en) Assisted-driving control method and apparatus
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
WO2024092559A1 (en) Navigation method and corresponding device
CN114572219B (en) Automatic overtaking method and device, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
EP4292896A1 (en) Vehicle traveling control method, electronic device, storage medium, chip and vehicle
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115221260B (en) Data processing method, device, vehicle and storage medium
WO2023050058A1 (en) Method and apparatus for controlling angle of view of vehicle-mounted camera, and vehicle
WO2024108380A1 (en) Automatic parking method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 23884700
Country of ref document: EP
Kind code of ref document: A1