WO2023004736A1 - Vehicle control method and device thereof - Google Patents
Vehicle control method and device thereof
- Publication number: WO2023004736A1 (application PCT/CN2021/109557)
- Authority: WO (WIPO, PCT)
- Prior art keywords: sector, vehicle, vision, field, information
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/10—Interpretation of driver requests or demands
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0953—Predicting travel path or likelihood of collision the prediction being responsive to vehicle dynamic parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/09—Taking automatic action to avoid collision, e.g. braking and steering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/10—Path keeping
- B60W30/12—Lane keeping
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/14—Adaptive cruise control
- B60W30/143—Speed control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/18163—Lane change; Overtaking manoeuvres
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/10—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/10—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
- B60W40/105—Speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0098—Details of control systems ensuring comfort, safety or stability not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2510/00—Input parameters relating to a particular sub-units
- B60W2510/20—Steering systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2520/00—Input parameters relating to overall vehicle dynamics
- B60W2520/10—Longitudinal speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/225—Direction of gaze
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
- B60W2552/50—Barriers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2720/00—Output or target parameters relating to overall vehicle dynamics
- B60W2720/10—Longitudinal speed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Definitions
- the embodiments of the present application relate to the field of smart vehicles, and more specifically, relate to a vehicle control method and device thereof.
- the vehicle's own information, vehicle environment information, driver information, etc. can be obtained through sensors and other devices, and the vehicle can then be controlled based on this information.
- Embodiments of the present application provide a vehicle control method and device thereof, which can improve the stability of field of view detection, thereby improving the accuracy of vehicle control.
- a vehicle control method comprising: obtaining gaze information of a driver, and obtaining a control strategy at least according to the gaze information.
- the gaze information above includes the driver's attention sector, and the attention sector includes at least one vision sector among the plurality of vision sectors of the vehicle.
- the attention sector is used to represent the driver's field of view, which improves the stability of field-of-view detection and thereby the accuracy of vehicle control.
- the vision sectors divide the driver's entire possible field of view into multiple areas, that is, multiple sectors, and the attention sector is the area where the driver's gaze falls.
- because whole sectors are used, the detection results of the field of view are relatively stable and will not jump repeatedly due to slight movements such as the driver tilting his head, thereby improving the accuracy of vehicle control.
- Gaze information can be obtained by using sensing devices such as cameras and eye trackers to obtain the driver's line-of-sight direction, and then determining the above-mentioned attention sector according to the line-of-sight direction.
- when the control strategy is obtained according to the gaze information, it can be obtained according to the gaze information and the self-vehicle information of the vehicle, where the self-vehicle information includes at least one of the following: steering wheel angle, angular velocity, turn signal or vehicle speed.
- Self-vehicle information can be understood as chassis information or vehicle information.
- when the control strategy is obtained according to the gaze information and the self-vehicle information, this can be done as follows: a trained neural network model is used to process the gaze information and the self-vehicle information to obtain the driver's driving intention, and the control strategy is obtained according to the driving intention.
- the driving intention may be lane keeping, turning or changing lanes, and may also be acceleration, deceleration or parking, etc.; it should be understood that acceleration, deceleration, parking and the like may also be regarded as cases included in lane keeping.
- the above-mentioned plurality of vision sectors includes the following: the left window vision sector, the left rearview mirror vision sector, the front window vision sector, the interior rearview mirror vision sector, the right window vision sector and the right rearview mirror vision sector.
- the front window vision sector may include a front window left vision sector and a front window right vision sector. Since the field of view through the front window is relatively large, the driver may not pay attention to the entire front window area. For example, when turning right, the driver looks to the right and only looks out through the right area of the front window. Therefore, to further improve the accuracy of determining the attention sector, the front window vision sector can be divided in two, into the front window left vision sector and the front window right vision sector.
- the attention sector may be obtained at least according to blind spots and/or obstacles.
- the attention sector may also be obtained at least according to the driver's line-of-sight direction.
- the above control strategy includes at least one of the following: an anti-collision warning strategy, an automatic emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy or a lane centering assist strategy.
- a display unit (display device) having a display function, such as a human-computer interaction interface or a display screen, may also be used to present the attention sector.
- the above method further includes using a display unit to display the above attention sector.
- a reaction time can also be introduced, and the above control strategy can be obtained in combination with the reaction time.
- the control strategy can be obtained according to the reaction time and the attention sector, according to the reaction time and the driving intention, or according to the reaction time, the driving intention and the attention sector.
- the introduction of the reaction time takes the driver's attention into account in addition to the above-mentioned consideration of the attention sector, which reduces the driving risk caused by the driver's lack of attention, so it can further improve the accuracy of vehicle control and driving safety.
- in a second aspect, a vehicle control device is provided, which includes units for performing the method in any one of the implementations of the above first aspect.
- in a third aspect, a vehicle control device is provided, which includes: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method in any one of the implementations of the first aspect.
- the device can be set in various equipment or systems that need to control the vehicle.
- the device can also be a chip.
- in a fourth aspect, a computer-readable medium is provided, which stores program code for execution by a device, where the program code includes instructions for executing the method in any one of the implementations of the first aspect.
- in a fifth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, it causes the computer to execute the method in any one of the implementations of the first aspect above.
- in a sixth aspect, a chip is provided, which includes a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory, and executes the method in any one of the implementations of the first aspect above.
- the chip may further include a memory, where the memory stores instructions, the processor is configured to execute the instructions stored in the memory, and, when the instructions are executed, the processor is configured to execute the method in any one of the implementations of the first aspect.
- FIG. 1 is a schematic structural diagram of a vehicle control device according to an embodiment of the present application.
- Fig. 2 is a schematic diagram of a field of vision sector of a vehicle according to an embodiment of the present application.
- Fig. 3 is a schematic diagram of a field of view sector according to an embodiment of the present application.
- Fig. 4 is a schematic diagram of a concerned sector according to an embodiment of the present application.
- Figs. 5 to 10 are application diagrams of the vehicle control scheme of the embodiments of the present application.
- Fig. 11 is a schematic flowchart of a vehicle control method according to an embodiment of the present application.
- Fig. 12 is a schematic diagram of a method for acquiring attention information according to an embodiment of the present application.
- Fig. 13 is a schematic diagram of the vehicle control process of the embodiment of the present application.
- Fig. 14 is a schematic block diagram of a vehicle control device according to an embodiment of the present application.
- Fig. 15 is a schematic diagram of the hardware structure of the vehicle control device according to the embodiment of the present application.
- FIG. 1 is a schematic structural diagram of a vehicle control device according to an embodiment of the present application.
- the vehicle control device 100 may include an acquisition module 110 and a control strategy module 120 .
- the vehicle control device 100 may be a module in a vehicle terminal or a control unit of the vehicle.
- the acquiring module 110 is used to acquire gaze information of the driver, the gaze information includes the driver's attention sector, and the attention sector includes at least one vision sector among the plurality of vision sectors of the vehicle.
- Gaze information can be obtained by using sensing devices such as cameras and eye trackers to obtain the driver's line-of-sight direction, and then determining the above-mentioned attention sector according to the line-of-sight direction.
- the acquiring module 110 can directly acquire the above-mentioned gaze information; it can also first acquire the above-mentioned line-of-sight direction from the sensing device and then obtain the gaze information according to the line-of-sight direction; or it can first acquire an image from the sensing device (in some cases the sensing device is integrated in the acquisition module 110), then extract the line-of-sight direction from the image and obtain the gaze information according to the line-of-sight direction.
- the acquisition module 110 may be the above-mentioned sensing device; or a device capable of acquiring the line-of-sight direction and determining the gaze information according to it; or an interface circuit or reading device capable of reading the gaze information from a storage device; or a communication interface capable of obtaining the gaze information through a network.
- the line-of-sight direction can be understood as the direction of the driver's line of sight, which can be represented by a line or by an angle, such as the angle between the line of sight and the driving direction of the vehicle.
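- as an illustrative sketch (not part of the patent text), a gaze yaw angle can be mapped to a vision sector with a simple lookup; the sector boundaries below are assumed values that a real system would calibrate per vehicle and driver, and the interior-mirror case is simplified to a yaw band:

```python
# Gaze yaw in degrees, measured from the vehicle's driving direction
# (negative = left of straight ahead, positive = right).
VISION_SECTORS = [
    ("left_window",           -110.0, -70.0),
    ("left_rearview_mirror",   -70.0, -40.0),
    ("front_window_left",      -40.0,   0.0),
    ("front_window_right",       0.0,  40.0),
    ("interior_rearview",       40.0,  55.0),  # simplification: mirror glance as a yaw band
    ("right_rearview_mirror",   55.0,  75.0),
    ("right_window",            75.0, 110.0),
]

def sector_for_gaze(yaw_deg: float) -> str | None:
    """Return the vision sector containing the gaze direction, if any."""
    for name, lo, hi in VISION_SECTORS:
        if lo <= yaw_deg < hi:
            return name
    return None

def attention_sector(gaze_samples_deg: list[float]) -> set[str]:
    """Union of whole vision sectors hit by recent gaze samples; using whole
    sectors keeps the result stable under slight head movements."""
    return {s for y in gaze_samples_deg if (s := sector_for_gaze(y)) is not None}
```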
- the vision sectors divide the driver's entire possible field of view into multiple areas, that is, multiple sectors, and the attention sector is the area where the driver's gaze falls.
- because whole sectors are used, the detection results of the field of view are relatively stable and will not jump repeatedly due to slight movements such as the driver tilting his head, thereby improving the accuracy of vehicle control.
- the above-mentioned vision sectors may include at least one of the following: the left window vision sector, the left rearview mirror vision sector, the front window vision sector, the interior rearview mirror vision sector, the right window vision sector and the right rearview mirror vision sector.
- the front window vision sector may further include the front window left vision sector and the front window right vision sector.
- the left window vision sector is the field of view that the driver can see through the left window; the left rearview mirror vision sector is the field of view that the driver can see in the left rearview mirror; the front window vision sector is the field of view that the driver can see through the front window; the interior rearview mirror vision sector is the field of view that the driver can see in the interior rearview mirror; the right window vision sector is the field of view that the driver can see through the right window; and the right rearview mirror vision sector is the field of view that the driver can see in the right rearview mirror.
- since the field of view through the front window is relatively large, the driver may not pay attention to the entire front window area. Therefore, the front window vision sector can be divided in two, into the front window left vision sector and the front window right vision sector.
- the control strategy module 120 is configured to acquire a control strategy according to the gaze information.
- the control strategy module 120 can control the vehicle according to the attention sector in the gaze information; the control may be assisted driving control or automatic driving control, for example, controlling the vehicle to accelerate, decelerate, change lanes, turn, park, avoid obstacles or issue various warnings.
- the above control strategies may include at least one of the following: an anti-collision warning strategy, an automatic emergency braking (autonomous emergency braking, AEB) strategy, an adaptive cruise control (adaptive cruise control, ACC) strategy, a lane departure warning (lane departure warning, LDW) strategy, a lane keeping assist (lane keeping assist, LKA) strategy or a lane centering assist (lane centering control, LCC) strategy, etc.
- the anti-collision warning strategy refers to giving a warning when the vehicle is at risk of collision. For example, whether the driver has noticed an obstacle can be determined according to the attention sector, so as to determine whether a warning is required. Consider three situations: in the first, the obstacle is not in the driver's attention sector and is on the vehicle's driving track; in the second, the obstacle is not in the driver's attention sector and is not on the vehicle's driving track; in the third, the obstacle is in the driver's attention sector and is not on the vehicle's driving track. The collision risk of situation 1 is clearly much higher than that of situations 2 and 3, and situation 3 will not affect the driving of the vehicle, so different levels of warning can be issued for situations 1 and 2, while no warning is issued for situation 3.
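- purely as an illustrative sketch of the three-situation logic above (the function and level names are assumptions, not the patent's terminology):

```python
def collision_warning_level(in_attention_sector: bool, on_driving_track: bool) -> str:
    """Warning level for one obstacle, following the three situations above."""
    if on_driving_track and not in_attention_sector:
        return "high"   # situation 1: unseen obstacle on the vehicle's driving track
    if not on_driving_track and not in_attention_sector:
        return "low"    # situation 2: unseen obstacle off the driving track
    return "none"       # situation 3 (and, as in the Fig. 6 example, obstacles
                        # the driver has already noticed are not warned about)
```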
- control strategies such as the automatic emergency braking strategy, adaptive cruise control strategy, lane departure warning strategy or lane keeping assist strategy can also refer to the above situations to determine the control actions for the vehicle; these are not described one by one here.
- control strategy module 120 can also obtain the driver's driving intention according to gaze information and vehicle information.
- the driving intention may be lane keeping, turning, or changing lanes, and may also be acceleration, deceleration, or parking.
- Self-vehicle information can be understood as chassis information or vehicle information.
- the self-vehicle information may include at least one of the following: steering wheel angle, angular velocity, turn signal or vehicle speed.
- for example, if the driver's attention sector is the left rearview mirror vision sector and the left window vision sector, and the left turn signal is flashing, it can be inferred that the driver's driving intention is to change lanes to the left.
- if the driver's attention sector is the front window left vision sector and the left window vision sector, and the left turn signal is flashing, it can be inferred that the driver's driving intention is to turn left.
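- purely as an illustration of the two examples above, a simple rule-based approximation could look as follows (the patent obtains the driving intention with a trained neural network model; the sector names and the mirrored right-side rules are assumptions):

```python
def infer_intention(attention: set[str], left_signal: bool, right_signal: bool) -> str:
    if left_signal and {"left_rearview_mirror", "left_window"} <= attention:
        return "lane_change_left"     # example 1 above
    if left_signal and {"front_window_left", "left_window"} <= attention:
        return "turn_left"            # example 2 above
    if right_signal and {"right_rearview_mirror", "right_window"} <= attention:
        return "lane_change_right"    # assumed mirror image of example 1
    if right_signal and {"front_window_right", "right_window"} <= attention:
        return "turn_right"           # assumed mirror image of example 2
    return "lane_keeping"
```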
- the control strategy module 120 may also obtain the above-mentioned control strategy according to the driving intention and the gaze information. For example, assuming that the driver's driving intention is lane keeping, stationary obstacles in other lanes no longer need to be considered.
- the control strategy can be obtained only according to the driving intention, or can be obtained based on the driving intention and gaze information, or can be further combined with other factors such as the predicted driving trajectory of the vehicle to obtain the control strategy.
- a model such as a trained neural network model can be used to process the gaze information and the self-vehicle information to obtain the above driving intention.
- the neural network model can be understood as a model that establishes a correspondence between input and output: the input is the gaze information and the self-vehicle information, and the output is the driving intention. In other words, the neural network model establishes a mapping between the input data and the label; here it establishes the mapping between the gaze information and the driving intention, and between the self-vehicle information and the driving intention. Training makes this mapping more accurate.
- the neural network model can adopt a convolutional neural network (convolutional neural network, CNN), a recurrent neural network (recurrent neural networks, RNN) or a long short-term memory (long short-term memory, LSTM) neural network, etc.
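- a minimal sketch of such a model, assuming an LSTM whose per-time-step input concatenates a one-hot encoding of the attention sector with the self-vehicle signals; all sizes and the three-class intention set are assumptions for illustration:

```python
import torch
import torch.nn as nn

class IntentionModel(nn.Module):
    def __init__(self, n_sectors: int = 7, n_ego: int = 4, n_intents: int = 3):
        super().__init__()
        # n_ego: steering wheel angle, angular velocity, turn signal, vehicle speed
        self.lstm = nn.LSTM(input_size=n_sectors + n_ego, hidden_size=64,
                            batch_first=True)
        self.head = nn.Linear(64, n_intents)  # e.g. lane keeping / turning / lane change

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_sectors + n_ego)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # intention logits at the last time step
```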
- the training data includes input data and labels.
- the input data includes the above-mentioned gaze information and self-vehicle information, and the labels include the above-mentioned driving intention.
- Each input data corresponds to a label.
- the parameters of the neural network model (such as the initial neural network model) are updated by using the above training data to obtain a trained neural network model, which can be used in the above vehicle control method.
- the neural network model often involves a training process and an inference process.
- in the training process, the initial neural network model (which can be understood here as an untrained neural network model) is trained using labeled training data, that is, the parameters of the neural network model are updated.
- in the inference process, the trained neural network model (that is, the neural network model whose parameters have been updated) is used to process the data to be processed (such as the above-mentioned gaze information and self-vehicle information) to obtain an inference result, and the inference result is the driving intention corresponding to the data to be processed.
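- a sketch of the corresponding training step, assuming the IntentionModel from the sketch above and intention labels given as class indices:

```python
import torch
import torch.nn as nn

model = IntentionModel()          # hypothetical model from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x: torch.Tensor, label: torch.Tensor) -> float:
    """One parameter update from a labeled batch (x: inputs, label: intentions)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), label)
    loss.backward()
    optimizer.step()
    return loss.item()
```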
- the vehicle control device 100 may also include a display unit, which may be used to display the attention sector, that is, to present the attention sector on the display unit.
- the display unit may be, for example, a human-computer interaction interface or a vehicle-mounted display screen.
- the human-computer interaction interface may also be called a human machine interface (human machine interface, HMI), a user interface or an interactive interface.
- the control strategy module 120 may also obtain the control strategy according to the reaction time together with the aforementioned attention sector, driving intention and other information.
- the reaction time is the time the driver needs to take countermeasures when encountering an emergency. For example, in emergency braking, the time between the driver seeing an obstacle and executing the emergency braking is the reaction time.
- the reaction time is related not only to the driver's own responsiveness but also to the driver's attention level; therefore, the driver's reaction time can be inferred by detecting the driver's attention level. For example, if the driver is concentrating on driving, the attention level is high and the reaction time is relatively short, so the driver can quickly deal with an emergency; if the driver is distracted, the attention level is low and the reaction time is relatively long, and the driver may not respond in time to an unexpected situation.
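- as an illustration only, an attention score could be turned into a reaction-time estimate by simple interpolation; the 0.7 s and 2.5 s bounds below are assumed values, not from the patent:

```python
def estimated_reaction_time(attention_score: float) -> float:
    """Map an attention score in [0, 1] to a reaction time in seconds."""
    best, worst = 0.7, 2.5                     # assumed bounds
    a = min(max(attention_score, 0.0), 1.0)    # clamp to [0, 1]
    return worst - a * (worst - best)          # high attention -> short reaction time
```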
- the attention information can be obtained according to the own vehicle information and driver state monitoring information.
- the driver status monitoring information can be obtained by using the driver monitoring system (driver monitoring system, DMS) to obtain the driver's facial image information.
- the introduction of the reaction time takes the driver's attention into account in addition to the above-mentioned consideration of the attention sector, which reduces the driving risk caused by the driver's lack of attention, so it can further improve the accuracy of vehicle control and driving safety.
- Fig. 2 is a schematic diagram of a field of vision sector of a vehicle according to an embodiment of the present application.
- the vision sectors of the vehicle include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (3) the front window left vision sector, (4) the interior rearview mirror vision sector, (5) the front window right vision sector, (6) the right rearview mirror vision sector and (7) the right window vision sector.
- Fig. 3 is a schematic diagram of a field of view sector according to an embodiment of the present application.
- as shown in (a) of Fig. 3, the vision sectors of the vehicle include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (4) the interior rearview mirror vision sector, (6) the right rearview mirror vision sector, (7) the right window vision sector and (8) the front window vision sector.
- as shown in (b) of Fig. 3, the vision sectors of the vehicle include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (3) the front window left vision sector, (4) the interior rearview mirror vision sector, (5) the front window right vision sector, (6) the right rearview mirror vision sector and (7) the right window vision sector. The difference between (b) and (a) is that in (b) the front window sector is divided into left and right areas, which is more precise.
- as shown in (c) of Fig. 3, the vision sectors of the vehicle include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (4) the interior rearview mirror vision sector, (6) the right rearview mirror vision sector, (7) the right window vision sector and (8) the front window vision sector. The difference between (c) and (a) is that (c) also shows two blind spots, that is, the two blind spots on both sides of the vehicle front caused by the vehicle's own structure; this can also improve accuracy.
- as shown in (d) of Fig. 3, the vision sectors of the vehicle include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (3) the front window left vision sector, (4) the interior rearview mirror vision sector, (5) the front window right vision sector, (6) the right rearview mirror vision sector and (7) the right window vision sector. The difference between (d) and (b) is that (d) also shows the two blind spots.
- a blind spot is not a vision sector, but it may fall inside a certain vision sector, because the driver has no view in the blind spot.
- the front window vision sector (8) in (c) of Fig. 3 can be regarded as the result of removing the two blind spots from the front window vision sector (8) in (a).
- Fig. 3 only gives examples of several ways of dividing the vision sectors; other division manners are possible in practice.
- in addition, small round mirrors are installed on the side rearview mirrors of some vehicles; the field of view the driver obtains from these small round mirrors can be divided into a left rearview mirror round-mirror vision sector and a right rearview mirror round-mirror vision sector.
- large vehicles have relatively many blind spots, so for each vision sector mentioned above, the coverage angle of a given vision sector of a large vehicle will differ accordingly.
- a corresponding camera can also be installed on the vehicle; in this case, the field of view displayed on the camera's display screen can likewise be treated as a vision sector. These cases are not listed one by one here.
- the attention sector is the vision sector (or sectors) included in the driver's actual field of view, so the attention sector includes at least one vision sector among the above-mentioned plurality of vision sectors.
- for example, the driver's attention sector may include the front window right vision sector and the left window vision sector.
- the driver may also observe a rearview mirror, in which case the driver's attention sector may include, for example, the right rearview mirror vision sector.
- a vision sector may or may not include a blind area. Although a blind area can fall inside a certain vision sector, the driver cannot actually observe it, so to improve accuracy the blind area can be further considered when determining the attention sector, that is, the attention sector can be obtained according to blind area information.
- for example, the attention sector can be cropped to remove the blind areas within it. When the attention sector no longer includes blind areas, the accuracy of the gaze information is improved.
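- a sketch of such cropping, assuming each vision sector, blind area and occluded region is approximated as a yaw-angle interval in degrees:

```python
def subtract(intervals, holes):
    """Remove every hole interval from the given intervals."""
    result = list(intervals)
    for h_lo, h_hi in holes:
        nxt = []
        for lo, hi in result:
            if h_hi <= lo or hi <= h_lo:   # no overlap with this hole
                nxt.append((lo, hi))
                continue
            if lo < h_lo:                  # keep the part left of the hole
                nxt.append((lo, h_lo))
            if h_hi < hi:                  # keep the part right of the hole
                nxt.append((h_hi, hi))
        result = nxt
    return result

# Example: crop a front-window span by one blind spot and one occluded region.
cropped = subtract([(-40.0, 40.0)], [(-35.0, -30.0), (10.0, 18.0)])
# -> [(-40.0, -35.0), (-30.0, 10.0), (18.0, 40.0)]
```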
- the attention sector may also take the influence of obstacles into account, that is, the attention sector may be obtained according to obstacle information.
- the determination of the attention sector, considering the influence of blind spots and obstacles, is introduced below in conjunction with Fig. 4.
- Fig. 4 is a schematic diagram of a concerned sector according to an embodiment of the present application.
- as shown in (a) of Fig. 4, the vehicle's vision sectors include: (1) the left window vision sector, (2) the left rearview mirror vision sector, (3) the front window left vision sector, (4) the interior rearview mirror vision sector, (5) the front window right vision sector, (6) the right rearview mirror vision sector and (7) the right window vision sector; two blind spots are also shown.
- in (a), the driver's field of view is the area between straight line A and straight line B. This field of view actually includes the two blind spots, (3) the front window left vision sector and (5) the front window right vision sector, and a small part of (5) the front window right vision sector is blocked by an obstacle.
- after removing the blind areas and the part blocked by the obstacle, the attention sector shown in (b) of Fig. 4 is obtained: the driver's field of view is the area between straight line C and straight line D, and the attention sector only includes (3) the front window left vision sector and (5) the front window right vision sector.
- the vehicle control solutions of the embodiments of the present application can be applied to various scenarios such as automatic driving and assisted driving.
- it can be combined with an advanced driver assistance system (advanced driver assistance system, ADAS) to realize some assisted driving functions.
- Figs. 5 to 10 are application diagrams of the vehicle control scheme of the embodiments of the present application; specifically, they are examples of combining gaze information and driving intention to determine a control strategy.
- the bold parallel dotted lines in the figures indicate lane lines, and an attention sector is represented by two straight lines.
- for example, the area between line A and line B is an attention sector, and the area between line E and line F is also an attention sector.
- the predicted driving trajectory of the vehicle is represented by two curves.
- for example, the area between curve C and curve D is the area the vehicle passes through when driving along the predicted driving trajectory.
- curves G and H also represent a predicted driving trajectory of the vehicle; the difference is that curves C and D represent a predicted driving trajectory obtained with the driving intention taken into account, while curves G and H represent a predicted driving trajectory obtained without considering the driving intention.
- an object drawn with an arrow is moving relative to the ground, and the direction of the arrow is its direction of motion; an object drawn without an arrow is stationary relative to the ground.
- for example, an obstacle with an arrow is a moving obstacle, and an obstacle without an arrow is a stationary obstacle.
- as shown in Fig. 5, the driver's attention sector is the area between line A and line B, that is, the attention sector includes the front window vision sector (or the front window left vision sector and the front window right vision sector).
- the driving intention of the driver is lane keeping.
- Curve C and curve D represent the predicted driving trajectory of the vehicle.
- obstacle #1 is a moving object within the attention sector, and obstacle #2 is a moving object outside the attention sector.
- the moving direction of obstacle #1 is consistent with that of the vehicle, while the moving direction of obstacle #2 intersects the predicted driving trajectory of the vehicle. In this scenario, obstacle #2 may pose a danger, so an anti-collision warning is required, that is, the obtained control strategy is: perform anti-collision warning.
- the warning signal can be presented in various ways, such as sound, flashing warning lights or on-screen presentation, which is not limited here.
- for example, the driver can be reminded of the collision risk by a voice prompt, by flashing a warning light, or by presenting the warning on the human-computer interaction interface or the vehicle-mounted display screen; these warning methods can also be combined.
- as shown in Fig. 6, the driver's attention sector is the area between line A and line B, that is, the attention sector includes the front window left vision sector and the left window vision sector.
- the driving intention of the driver is lane keeping.
- Curve C and curve D represent the predicted driving trajectory of the vehicle.
- both obstacle #1 and obstacle #2 are moving objects in the attention sector; the moving direction of obstacle #1 is consistent with that of the vehicle, and the moving direction of obstacle #2 intersects the predicted driving trajectory of the vehicle.
- since the driver has noticed both obstacles, the anti-collision warning may be omitted in this case, that is, the obtained control strategy is: do not perform anti-collision warning.
- as shown in Fig. 7, the driver's attention sector is the area between line A and line B, that is, the attention sector includes the front window right vision sector.
- the driver's driving intention is to turn right.
- Curve C and curve D represent the predicted driving trajectory of the vehicle.
- obstacle #1 is a moving object in the attention sector, while obstacle #2 and obstacle #3 are moving objects outside the attention sector, with obstacle #2 located in a blind area.
- the moving direction of obstacle #1 is consistent with that of the vehicle, while the moving directions of obstacle #2 and obstacle #3 intersect the predicted driving trajectory of the vehicle.
- the obtained control strategy is: anti-collision warning.
- as shown in Fig. 8, the driver's attention sector is the area between line A and line B together with the area between line E and line F, that is, the attention sector includes the front window vision sector (or the front window left vision sector and the front window right vision sector) and the left rearview mirror vision sector.
- the driver's driving intention is to change lanes to the left.
- Curve C and curve D represent the predicted driving trajectory of the vehicle.
- obstacle #1 is a moving object within the attention sector, and obstacle #2 is a moving object outside the attention sector; the moving directions of obstacle #1 and obstacle #2 are consistent with that of the vehicle.
- in this scenario, the obtained control strategy is: perform anti-collision warning.
- if the driver's attention sector also includes the left window vision sector, obstacle #2 becomes a moving obstacle within the attention sector; since the driver has then noticed both obstacles, the anti-collision warning may be omitted, that is, the obtained control strategy is: do not perform anti-collision warning.
- as shown in Fig. 9, the driver's attention sector is the area between line A and line B, that is, the attention sector includes the front window vision sector (or the front window left vision sector and the front window right vision sector).
- curves C and D represent predicted driving trajectory #1 of the vehicle, and curves G and H represent predicted driving trajectory #2 of the vehicle; trajectory #1 is obtained taking the driver's driving intention into account, while trajectory #2 does not consider the driving intention and therefore deviates.
- obstacle #1 is a moving object within the attention sector, and obstacle #2 is a stationary object outside the attention sector. In this scenario, the driver has already paid attention to obstacle #1.
- if the driving intention is considered (trajectory #1), the obtained control strategy is: do not perform anti-collision warning, and no automatic emergency braking is needed.
- if the driving intention is not considered (trajectory #2), the obtained control strategy is: perform anti-collision warning or automatic emergency braking, which results in wrong control and affects the driving experience.
- Fig. 9 mainly shows the difference that considering the driving intention makes to the control strategy.
- as shown in Fig. 10, the driver's attention sector is the area between line A and line B, that is, the attention sector includes the front window vision sector (or the front window left vision sector and the front window right vision sector).
- the driver's driving intention is to change lanes to the left.
- Curve C and curve D represent the predicted driving trajectory of the vehicle.
- obstacle #1 is a moving object in the attention sector, and obstacle #1 is not on the predicted driving trajectory of the vehicle. In this scenario, the driver's driving trajectory will shift toward the left lane, so no lane keeping assist warning or lane departure warning is needed, that is, the obtained control strategy is: do not perform lane keeping assist warning or lane departure warning.
- if the driving intention were not considered, then, since the driver's attention sector is directly ahead, the predicted driving trajectory would very likely remain in the current lane, that is, the vehicle would be predicted to keep driving in the lane, which would cause the above warning strategies to issue false warnings and affect the driving experience.
- FIGS. 5 to 10 are only examples of some driving scenarios; the solutions may also be applied to other control strategies, which are not listed one by one.
- Fig. 11 is a schematic flowchart of a vehicle control method according to an embodiment of the present application. Each step in Fig. 11 is introduced below.
- the attention sector is obtained at least according to blind spot information and/or obstacle information. That is to say, blind spots and/or obstacles are taken into account in the process of determining the attention sector and corresponding adjustments are made; in other words, the influence of blind spots and/or obstacles is eliminated when the attention sector is determined. This can improve the accuracy of the attention sector, that is, the accuracy of the gaze information, thereby improving the accuracy of vehicle control.
- control strategy may include at least one of the following: an anti-collision warning strategy, an automatic emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy, or a lane centering assist strategy.
- the driver's driving intention can be obtained according to gaze information and vehicle information.
- the driving intention may be lane keeping, turning, or changing lanes, and may also be acceleration, deceleration, or parking.
- control strategy can be obtained according to driving intention and gaze information. For example, assuming that the driver's driving intention is lane keeping, stationary obstacles in other lanes can no longer be considered.
- the control strategy can be obtained only according to the driving intention, or can be obtained based on the driving intention and gaze information, or can be further combined with other factors such as the predicted driving trajectory of the vehicle to obtain the control strategy.
- the driver's driving intention can be obtained first according to the gaze information and the vehicle information, and then the control strategy can be obtained according to the driving intention.
- the driving intention can be obtained by using models such as a trained neural network model.
- the trained neural network model can be used to process the gaze information and the vehicle information to obtain the above-mentioned driving intention.
- the neural network model can be understood as a model that establishes a correspondence between input and output: the input is the gaze information and the self-vehicle information, and the output is the driving intention. In other words, the neural network model establishes a mapping between the input data and the label; here it establishes the mapping between the gaze information and the driving intention, and between the self-vehicle information and the driving intention. Training makes this mapping more accurate.
- the neural network model can use convolutional neural network, deep neural network, recurrent neural network or long short-term memory neural network, etc.
- the training data includes input data and labels.
- the input data includes the above-mentioned gaze information and self-vehicle information, and the labels include the above-mentioned driving intention.
- Each input data corresponds to a label.
- the parameters of the neural network model (such as the initial neural network model) are updated by using the above training data to obtain a trained neural network model, which can be used in the above vehicle control method.
- the control strategy can also be obtained according to the reaction time and the attention sector, according to the reaction time and the driving intention, or according to the reaction time, the driving intention and the attention sector.
- for the introduction of the reaction time, refer to the description above.
- the reaction time can be obtained by using the attention information; the method for obtaining the attention information is introduced below in conjunction with Fig. 12.
- Fig. 12 is a schematic diagram of a method for acquiring attention information according to an embodiment of the present application.
- the neural network model can be used to process the self-vehicle information and driver state monitoring information to obtain attention information.
- specifically, the neural network model represented by RNN in the figure can be used to process the self-vehicle information such as the steering wheel angle, vehicle speed and heading angle (i.e., the steering angle of the vehicle head), and the neural network model represented by NN in the figure can be used to process the driver state monitoring information; the results of both are input into the fully connected (fully connected, FC) layer represented by FC in the figure to obtain the attention information.
- the attention information is information indicating the degree of attention of the driver.
- the neural network used to process the information of the self-vehicle may adopt, for example, a recurrent neural network such as LSTM, and the neural network used to process the driver's state monitoring information may, for example, adopt a multilayer perceptron (multilayer perceptron, MLP) neural network.
- the above-mentioned neural network model can also be trained by using the training data, and the process can refer to the introduction about the training of the neural network model above.
- the RNN, NN, and FC shown above can be regarded as jointly forming an attention model.
- the input of the attention model is the self-vehicle information and the driver's state monitoring information, and the output is the attention information.
- the attention model is used to process the self-vehicle information and driver state monitoring information to obtain attention information.
- for example, the attention model includes an LSTM, an MLP and an FC layer, where the LSTM processes the self-vehicle information and inputs its result to the FC layer, the MLP processes the driver state monitoring information and inputs its result to the FC layer, and the FC layer continues to process the results from the LSTM and the MLP to obtain the attention information.
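- a minimal sketch of this LSTM + MLP + FC arrangement, with all dimensions assumed for illustration:

```python
import torch
import torch.nn as nn

class AttentionModel(nn.Module):
    def __init__(self, n_ego: int = 3, n_dms: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_ego, hidden_size=32, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(n_dms, 32), nn.ReLU())
        self.fc = nn.Linear(32 + 32, 1)

    def forward(self, ego_seq: torch.Tensor, dms: torch.Tensor) -> torch.Tensor:
        # ego_seq: (batch, time, n_ego), e.g. steering wheel angle, speed, heading angle
        # dms:     (batch, n_dms), e.g. features from the driver monitoring system
        out, _ = self.lstm(ego_seq)
        fused = torch.cat([out[:, -1], self.mlp(dms)], dim=-1)
        return torch.sigmoid(self.fc(fused))   # attention score in (0, 1)
```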
- the attention sector is used to represent the driver's field of view, which improves the stability of field-of-view detection and thereby the accuracy of vehicle control.
- the method shown in FIG. 11 may further include: displaying the aforementioned sector of interest on a display unit.
- the display unit may be a vehicle-mounted display screen or a human-computer interaction interface.
- Fig. 13 is a schematic diagram of the vehicle control process of the embodiment of the present application.
- Fig. 13 can be regarded as a specific example of vehicle control using the method shown in Fig. 11, mainly taking a collision warning control strategy as an example.
- the acquisition module acquires the line-of-sight direction, blind zone information, and obstacle information, and obtains the gaze information from them; the gaze information includes the sector of interest.
- this process can be regarded as a specific example of step 1101, that is, the sector of interest is obtained by combining the line-of-sight direction, the blind zone information, and the obstacle information.
- the acquisition module here can be regarded as an example of the acquisition module 110 shown in FIG. 1.
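As an illustration of the line-of-sight part of this step only, the sketch below maps a gaze angle to the vision sector containing it. The angular boundaries are assumptions, since the application gives no numeric sector limits; blind zone and obstacle handling are sketched separately later in this document.

```python
# Minimal sketch: sector boundaries are assumed purely for illustration.
from typing import Optional

# Hypothetical layout, in degrees; 0 = straight ahead, positive to the right.
VISION_SECTORS = {
    "left_window":        (-110.0, -30.0),
    "front_window_left":  (-30.0, 0.0),
    "front_window_right": (0.0, 30.0),
    "right_window":       (30.0, 110.0),
}

def sector_for_gaze(gaze_deg: float) -> Optional[str]:
    """Map a line-of-sight direction to the vision sector containing it."""
    for name, (lo, hi) in VISION_SECTORS.items():
        if lo <= gaze_deg < hi:
            return name
    return None  # e.g., gaze on a mirror, handled separately

print(sector_for_gaze(-12.0))  # front_window_left
```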
- the neural network model analyzes the driving intention based on the self-vehicle information and the gaze information to obtain the driver's driving intention, where the self-vehicle information includes the steering wheel angle, the angular velocity, the turn signal, and the vehicle speed, and the driving intention includes lane keeping, turning, and lane changing.
- FIG. 13 is just an example, so the above self-vehicle information, driving intention, and the like may also be composed in other ways, without limitation.
- the driving intention may also include parking and the like.
- the neural network model here can be regarded as an example of the driving intention analysis module 130 shown in FIG. 1, that is, a trained neural network model is used to process the self-vehicle information and the gaze information to obtain the driver's driving intention.
- the vehicle's control unit predicts the vehicle's driving trajectory and collision risk based on the driving intention and the gaze information, and obtains the control strategy.
- the control unit can be regarded as a specific example of the control strategy module 120 shown in FIG. 1.
- the neural network model processes the self-vehicle information and the gaze information to obtain the driving intention, and the control unit then obtains the control strategy based on the driving intention and the gaze information.
- the above process can be regarded as a specific example of step 1102.
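The following sketch condenses that decision flow into a single hypothetical function. The strategy names, inputs, and rules are illustrative stand-ins consistent with the scenarios of Fig. 5 to Fig. 10, not an API from the application.

```python
# Coarse sketch of the decision flow in Fig. 13; all names are illustrative.
def control_strategy(intention: str, sectors_of_interest: set[str],
                     obstacle_sector: str, obstacle_on_trajectory: bool) -> str:
    """Combine driving intention and gaze information into a control strategy."""
    # Warn only about obstacles that threaten the predicted trajectory and
    # that the driver has not looked at (compare Fig. 5 vs. Fig. 6).
    if obstacle_on_trajectory and obstacle_sector not in sectors_of_interest:
        return "collision_warning"
    # An intended lane change suppresses the lane departure warning (Fig. 8).
    if intention == "lane_change_left":
        return "suppress_lane_departure_warning"
    return "no_action"

print(control_strategy("lane_keeping", {"front_window"}, "left_window", True))
# -> collision_warning
```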
- FIG. 13 can be regarded as a specific example of vehicle control using the method shown in FIG. 11, so there may be other examples.
- for example, Fig. 13 may also include the reaction time, which can be obtained from the attention information; the control unit can then predict the vehicle's driving trajectory and collision risk based on the reaction time, the driving intention, and the gaze information, and obtain the control strategy.
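One plausible way to use the reaction time is sketched below, under the assumption that a time-to-collision estimate for the obstacle is available; the threshold structure and the margin value are illustrative, not taken from the application.

```python
# Sketch: warn earlier when the estimated reaction time is longer.
def should_warn(ttc_s: float, reaction_time_s: float,
                braking_margin_s: float = 1.0) -> bool:
    """True if the time to collision leaves too little room to react and brake."""
    return ttc_s < reaction_time_s + braking_margin_s

print(should_warn(ttc_s=2.0, reaction_time_s=0.7))  # False: attentive driver
print(should_warn(ttc_s=2.0, reaction_time_s=1.8))  # True: distracted driver
```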
- FIG. 13 may also include an attention model, which is used to process the self-vehicle information and the driver state monitoring information to obtain the attention information, and so on; the variants are not listed here one by one.
- Fig. 14 is a schematic block diagram of a vehicle control device according to an embodiment of the present application.
- the apparatus 2000 shown in FIG. 14 includes an acquisition unit 2001 and a processing unit 2002.
- the acquisition unit 2001 and the processing unit 2002 can be used to implement the vehicle control method of the embodiments of the present application; specifically, the acquisition unit 2001 may perform step 1101 above, and the processing unit 2002 may perform step 1102 above.
- the apparatus 2000 may be the vehicle control device 100 shown in FIG. 1.
- the acquisition unit 2001 may include the acquisition module 110 shown in FIG. 1.
- the processing unit 2002 may include the control strategy module 120 shown in FIG. 1.
- the processing unit 2002 may also include the driving intention analysis module 130.
- the apparatus 2000 may further include a display unit 2003, and the display unit 2003 is configured to display the sector of interest described above.
- the display unit 2003 can also be used to present the warning signal to the driver in the form of an image.
- the display unit 2003 can also be integrated into the processing unit 2002.
- it should be understood that the processing unit 2002 in the apparatus 2000 may correspond to the processor 3002 in the apparatus 3000 described below.
- Fig. 15 is a schematic diagram of the hardware structure of the vehicle control device according to the embodiment of the present application.
- the vehicle control device 3000 shown in FIG. 15 (which may specifically be a computer device) includes a memory 3001, a processor 3002, a communication interface 3003, and a bus 3004.
- the memory 3001, the processor 3002, and the communication interface 3003 are communicatively connected to each other through the bus 3004.
- the memory 3001 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
- the memory 3001 can store programs; when the programs stored in the memory 3001 are executed by the processor 3002, the processor 3002 and the communication interface 3003 are configured to perform the steps of the vehicle control method of the embodiments of the present application.
- the processor 3002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement the functions required by the units in the vehicle control device of the embodiments of the present application, or to perform the vehicle control method of the method embodiments of the present application.
- the processor 3002 may also be an integrated circuit chip with signal processing capability; during implementation, the steps of the vehicle control method of the present application can be completed by integrated logic circuits of hardware in the processor 3002 or by instructions in the form of software.
- the processor 3002 may also be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or performed by such a processor.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the steps of the methods disclosed with reference to the embodiments of the present application may be performed directly by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor.
- the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory 3001; the processor 3002 reads the information in the memory 3001 and, in combination with its hardware, implements the functions required by the units included in the vehicle control device of the embodiments of the present application, or performs the vehicle control method of the method embodiments of the present application.
- the communication interface 3003 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the apparatus 3000 and other devices or communication networks.
- for example, the gaze information described above can be obtained through the communication interface 3003.
- the bus 3004 may include a path for transferring information between the components of the apparatus 3000 (for example, the memory 3001, the processor 3002, and the communication interface 3003).
- although the apparatus 3000 shown in FIG. 15 shows only a memory, a processor, and a communication interface, those skilled in the art should understand that, in a specific implementation process, the apparatus 3000 also includes other components necessary for normal operation; that, according to specific needs, the apparatus 3000 may further include hardware components implementing other additional functions; and that the apparatus 3000 may alternatively include only the components necessary for implementing the embodiments of the present application, without including all the components shown in FIG. 15.
- the disclosed systems, methods, and devices may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the units is merely a logical function division; in actual implementation, there may be other division methods.
- for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
- the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- if the functions described above are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
- based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage medium includes any medium that can store program code, such as a universal serial bus flash disk (UFD, also referred to as a USB flash drive), a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- General Health & Medical Sciences (AREA)
- Traffic Control Systems (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
Abstract
A vehicle control method and an apparatus using the method. The control method includes: acquiring gaze information of a driver, where the gaze information includes a sector of interest of the driver, and the sector of interest includes at least one of a plurality of vision sectors of a vehicle (1101); and acquiring a control strategy of the vehicle at least based on the gaze information (1102). The control method uses the sector of interest to represent the driver's field of vision, which makes field-of-vision detection more stable and thereby improves the accuracy of vehicle control.
Description
Embodiments of the present application relate to the field of intelligent vehicles, and more specifically, to a vehicle control method and an apparatus thereof.
With the development of intelligent vehicle technology, self-vehicle information, environment information of the vehicle, driver information, and the like can be acquired through sensors and other devices, and the vehicle can then be controlled based on this information.
A class of collision warning solutions that take the driver's field of vision into account has emerged. In such collision warning, the warning strategy is derived jointly from the distance between the self-vehicle and vehicles within the angular range of the driver's vision and the distance between the self-vehicle and vehicles outside that angular range. However, even slight movements such as a small turn of the driver's head cause the angular range of vision to change constantly, so the warning strategy also changes excessively frequently, and the warning accuracy is too low.
Therefore, how to improve the stability of field-of-vision detection and thereby improve the accuracy of vehicle control is a technical problem to be solved urgently.
Summary
Embodiments of the present application provide a vehicle control method and an apparatus thereof, which can improve the stability of field-of-vision detection and thereby improve the accuracy of vehicle control.
According to a first aspect, a vehicle control method is provided, including: acquiring gaze information of a driver, and acquiring a control strategy at least based on the gaze information. The gaze information includes a sector of interest of the driver, and the sector of interest includes at least one of a plurality of vision sectors of a vehicle.
In the technical solutions of the present application, the sector of interest is used to represent the driver's field of vision, which makes field-of-vision detection more stable and thereby improves the accuracy of vehicle control.
In the prior art, only the angular range of vision is considered, and behaviors as slight as the driver rolling the eyes may change this angular range, making it very unstable and introducing considerable difficulty and error into subsequent vehicle control. In the embodiments of the present application, by contrast, the driver's entire possible field of vision is divided into multiple regions, that is, multiple vision sectors, and the sector of interest is the region on which the driver's gaze falls. The resulting field-of-vision detection is relatively stable and does not jump around because of slight movements such as the driver tilting the head, which improves the accuracy of vehicle control.
The gaze information may be obtained by using a perception device such as a camera or an eye tracker to acquire the driver's line-of-sight direction, and then determining the sector of interest from the line-of-sight direction.
With reference to the first aspect, in some implementations of the first aspect, when the control strategy is obtained from the gaze information, it may be obtained from the gaze information and self-vehicle information of the vehicle, where the self-vehicle information includes at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed. The self-vehicle information can be understood as chassis information or information about the vehicle.
With reference to the first aspect, in some implementations of the first aspect, obtaining the control strategy from the gaze information and the self-vehicle information of the vehicle may be done by performing the following steps: processing the gaze information and the self-vehicle information by using a trained neural network model to obtain the driver's driving intention, and obtaining the control strategy from the driving intention. The driving intention may be lane keeping, turning, or lane changing; it may also be accelerating, decelerating, parking, or the like, although it should be understood that accelerating, decelerating, parking, and the like can also be regarded as cases included in lane keeping.
With reference to the first aspect, in some implementations of the first aspect, the plurality of vision sectors include several of the following: a left side-window vision sector, a left rearview-mirror vision sector, a front-window vision sector, an interior rearview-mirror vision sector, a right side-window vision sector, and a right rearview-mirror vision sector.
With reference to the first aspect, in some implementations of the first aspect, the front-window vision sector may include a front-window left vision sector and a front-window right vision sector. The viewing region of the front window is relatively large, and the driver does not necessarily pay attention to the whole of it; for example, when turning right, the driver looks to the right and sees out only through the right part of the front window. Therefore, to further improve the accuracy of determining the sector of interest, the front-window vision sector may be divided into a front-window left vision sector and a front-window right vision sector, that is, split in two.
With reference to the first aspect, in some implementations of the first aspect, the sector of interest may be obtained at least based on blind zones and/or obstacles.
With reference to the first aspect, in some implementations of the first aspect, the sector of interest may be obtained at least based on the driver's line-of-sight direction.
With reference to the first aspect, in some implementations of the first aspect, the control strategy includes at least one of the following: a collision warning strategy, an autonomous emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy, or a lane centering control strategy.
Optionally, the sector of interest may also be presented by using a display unit (display apparatus) with a display function, such as a human-machine interaction interface or a display screen. With reference to the first aspect, in some implementations of the first aspect, the method further includes displaying the sector of interest on a display unit.
Optionally, a reaction time may also be introduced and taken into account when acquiring the control strategy. With reference to the first aspect, in some implementations of the first aspect, the control strategy may be obtained based on the reaction time and the sector of interest, based on the reaction time and the driving intention, or based on the reaction time, the driving intention, and the sector of interest. In addition to the above consideration of the sector of interest, introducing the reaction time also takes the driver's attention into account, reducing the driving risk caused by an insufficient level of driver attention, and can therefore further improve the accuracy of vehicle control and the safety of driving.
According to a second aspect, a vehicle control apparatus is provided, including units for performing the method of any implementation of the first aspect.
According to a third aspect, a vehicle control apparatus is provided, including: a memory, configured to store a program; and a processor, configured to execute the program stored in the memory, where when the program stored in the memory is executed, the processor is configured to perform the method of any implementation of the first aspect. The apparatus may be provided in various devices or systems requiring vehicle control. The apparatus may also be a chip.
According to a fourth aspect, a computer-readable medium is provided, storing program code for execution by a device, where the program code includes instructions for performing the method of any implementation of the first aspect.
According to a fifth aspect, a computer program product including instructions is provided, which, when run on a computer, causes the computer to perform the method of any implementation of the first aspect.
According to a sixth aspect, a chip is provided, including a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory to perform the method of any implementation of the first aspect.
Optionally, as an implementation, the chip may further include a memory storing instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to perform the method of any implementation of the first aspect.
Fig. 1 is a schematic structural diagram of a vehicle control device according to an embodiment of the present application.
Fig. 2 is a schematic diagram of vision sectors of a vehicle according to an embodiment of the present application.
Fig. 3 is a schematic diagram of vision sectors according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a sector of interest according to an embodiment of the present application.
Fig. 5 to Fig. 10 are schematic diagrams of applications of the vehicle control solution according to embodiments of the present application.
Fig. 11 is a schematic flowchart of a vehicle control method according to an embodiment of the present application.
Fig. 12 is a schematic diagram of a method for acquiring attention information according to an embodiment of the present application.
Fig. 13 is a schematic diagram of a vehicle control process according to an embodiment of the present application.
Fig. 14 is a schematic block diagram of a vehicle control apparatus according to an embodiment of the present application.
Fig. 15 is a schematic diagram of the hardware structure of a vehicle control apparatus according to an embodiment of the present application.
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a vehicle control device according to an embodiment of the present application. As shown in Fig. 1, the vehicle control device 100 may include an acquisition module 110 and a control strategy module 120. The vehicle control device 100 may be a module in a vehicle-mounted terminal or a control unit of the vehicle.
The acquisition module 110 is configured to acquire gaze information of the driver, where the gaze information includes a sector of interest of the driver, and the sector of interest includes at least one of a plurality of vision sectors of the vehicle.
The gaze information may be obtained by using a perception device such as a camera or an eye tracker to acquire the driver's line-of-sight direction and then determining the sector of interest from the line-of-sight direction. In this process, the acquisition module 110 may directly acquire the gaze information; it may first acquire the line-of-sight direction from the perception device and then derive the gaze information from the line-of-sight direction; or it may acquire images containing the line-of-sight direction (that is, the case where the perception device is also integrated in the acquisition module 110), extract the line-of-sight direction from the images, and derive the gaze information from it. In other words, the acquisition module 110 may be the perception device described above, a device capable of acquiring the line-of-sight direction and determining the gaze information from it, an interface circuit or reading apparatus capable of reading the gaze information from a storage device, or a communication interface capable of acquiring the gaze information over a network.
The line-of-sight direction can be understood as the orientation of the line of sight; it can be represented by a line or by an angle, for example, the angle between the line of sight and the vehicle's driving direction.
In the prior art, only the angular range of vision is considered, and behaviors as slight as the driver rolling the eyes may change this angular range, making it very unstable and introducing considerable difficulty and error into subsequent vehicle control. In the embodiments of the present application, by contrast, the driver's entire possible field of vision is divided into multiple regions, that is, multiple vision sectors, and the sector of interest is the region on which the driver's gaze falls. The resulting field-of-vision detection is relatively stable and does not jump around because of slight movements such as the driver tilting the head, which improves the accuracy of vehicle control.
In some implementations, the vision sectors may include at least one of the following: a left side-window vision sector, a left rearview-mirror vision sector, a front-window vision sector, an interior rearview-mirror vision sector, a right side-window vision sector, and a right rearview-mirror vision sector. The front-window vision sector may further include a front-window left vision sector and a front-window right vision sector. The left side-window vision sector is the viewing region the driver can see through the left side window; the left rearview-mirror vision sector is the viewing region the driver can see in the left rearview mirror; the front-window vision sector is the viewing region the driver can see through the front window; the interior rearview-mirror vision sector is the viewing region the driver can see in the interior rearview mirror; the right side-window vision sector is the viewing region the driver can see through the right side window; and the right rearview-mirror vision sector is the viewing region the driver can see in the right rearview mirror. The viewing region of the front window is relatively large, and the driver does not necessarily pay attention to the whole of it; for example, when turning right, the driver looks to the right and sees out only through the right part of the front window. Therefore, to further improve the accuracy of determining the sector of interest, the front-window vision sector may be divided into a front-window left vision sector and a front-window right vision sector, that is, split in two.
For ease of understanding, the vision sectors and the sector of interest are introduced below with reference to Fig. 2 to Fig. 4 and are not repeated here.
The control strategy module 120 is configured to acquire a control strategy based on the gaze information.
In some implementations, the control strategy module 120 may control the vehicle based on the sector of interest in the gaze information, whether as assisted driving control or as autonomous driving control; for example, it may control the vehicle to accelerate, decelerate, change lanes, turn, park, avoid obstacles, or issue various warnings.
The control strategy may include at least one of the following: a collision warning strategy, an autonomous emergency braking (AEB) strategy, an adaptive cruise control (ACC) strategy, a lane departure warning (LDW) strategy, a lane keeping assist (LKA) strategy, or a lane centering control (LCC) strategy.
A collision warning strategy issues a warning when the vehicle is at risk of collision; for example, whether the driver risks colliding with an obstacle, and hence whether a warning is needed, can be determined from the sector of interest. Consider three cases: in case 1, the obstacle is not in the driver's sector of interest and is on the vehicle's driving trajectory; in case 2, the obstacle is not in the driver's sector of interest and is not on the vehicle's driving trajectory; in case 3, the obstacle is in the driver's sector of interest and is not on the vehicle's driving trajectory. The collision risk in case 1 is clearly far higher than in cases 2 and 3, and case 3 has no effect on the vehicle's travel, so warnings of different levels can be issued for cases 1 and 2, while no warning is issued for case 3.
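These three cases can be written as a small rule, sketched below; the numeric warning levels are illustrative assumptions, since the text only says that the levels for cases 1 and 2 differ.

```python
# Sketch of the three warning cases discussed above; levels are illustrative.
def warning_level(in_sector_of_interest: bool, on_trajectory: bool) -> int:
    """0 = no warning; higher values mean a more urgent warning."""
    if in_sector_of_interest:
        return 0          # case 3: the driver has already seen the obstacle
    if on_trajectory:
        return 2          # case 1: unseen obstacle on the driving trajectory
    return 1              # case 2: unseen obstacle off the driving trajectory

print([warning_level(s, t) for s, t in
       [(False, True), (False, False), (True, False)]])  # [2, 1, 0]
```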
Other control strategies, such as the autonomous emergency braking strategy, the adaptive cruise control strategy, the lane departure warning strategy, and the lane keeping assist strategy, can likewise determine the control action on the vehicle with reference to the cases above; they are not expanded one by one.
In some implementations, the control strategy module 120 may also derive the driver's driving intention from the gaze information and the self-vehicle information. The driving intention may be lane keeping, turning, or lane changing, and may also be accelerating, decelerating, parking, or the like.
The self-vehicle information can be understood as chassis information or information about the vehicle. The self-vehicle information may include at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed. For example, if the driver's sectors of interest are the left rearview-mirror vision sector and the left side-window vision sector and the left turn signal is blinking, it can be inferred that the driver's driving intention is to change lanes to the left. If the driver's sectors of interest are the front-window left vision sector and the left side-window vision sector and the left turn signal is blinking, it can be inferred that the driver's driving intention is to turn left.
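Purely for illustration, the two inference examples above can be written as hand-made rules, as sketched below; the sector names are assumptions, and the application itself uses a trained neural network model rather than fixed rules.

```python
# Hand-written rules mirroring the two examples above; illustrative only.
def infer_intention(sectors_of_interest: set[str], left_signal_on: bool) -> str:
    if left_signal_on and {"left_mirror", "left_window"} <= sectors_of_interest:
        return "lane_change_left"
    if left_signal_on and {"front_window_left", "left_window"} <= sectors_of_interest:
        return "turn_left"
    return "lane_keeping"

print(infer_intention({"left_mirror", "left_window"}, True))        # lane_change_left
print(infer_intention({"front_window_left", "left_window"}, True))  # turn_left
```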
In some other implementations, the control strategy module 120 may also derive the control strategy from the driving intention. For example, if the driver's driving intention is lane keeping, stationary obstacles in other lanes need no longer be considered. The control strategy may be obtained from the driving intention alone, from the driving intention and the gaze information, or further in combination with other factors such as the vehicle's predicted driving trajectory.
Optionally, a model such as a trained neural network model may be used to process the gaze information to obtain the driving intention. The neural network model can be understood as a model establishing a correspondence between inputs and outputs, with the gaze information and the self-vehicle information as input and the driving intention as output. That is, the model establishes a mapping between input data and labels, here between the gaze information and the driving intention and between the self-vehicle information and the driving intention, and training makes this mapping more accurate. The neural network model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a long short-term memory (LSTM) network, or the like. The training data includes input data and labels: the input data includes the gaze information and the self-vehicle information, the labels include the driving intention, and each input data item corresponds to one label. During training, the parameters of the neural network model (for example, an initial neural network model) are updated with the training data to obtain a trained neural network model, which can then be used in the vehicle control method described above.
It should be noted that a neural network model usually involves a training process and an inference process. In the training process, labeled training data is used to train an initial neural network model (understood here as a model that has not been trained), that is, to update its parameters. In the inference process, the trained neural network model (that is, the model whose parameters have been updated) processes the data to be handled (for example, the acquired gaze information and self-vehicle information) to produce an inference result, namely the driving intention corresponding to that data. Optionally, the vehicle control device 100 may further include a display unit configured to display, that is, present, the sector of interest; the display unit may be, for example, a human-machine interaction interface or a vehicle-mounted display screen. The human-machine interaction interface may also be called a human-machine interface (HMI), a user interface, or an interactive interface.
In still other implementations, the control strategy module 120 may derive the control strategy from the reaction time together with information such as the sector of interest and the driving intention. The reaction time is the time the driver takes to respond to an emergency; for example, when the driver brakes urgently, the time from seeing the obstacle to executing the emergency braking is the reaction time. Besides the driver's inherent response sensitivity, the reaction time also depends on the driver's level of attention, so the reaction time can be inferred by detecting the driver's level of attention. For example, if the driver is concentrating on driving, the attention level is high and the reaction time relatively short, so the driver can handle a sudden situation quickly; if the driver is looking around, distracted, or drowsy, the attention level is low and the reaction time relatively long, so the driver may not react in time to a sudden situation.
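A toy version of this inference is sketched below; the application gives no numeric values, so the alert (0.7 s) and distracted (2.5 s) endpoints are assumptions chosen only to make the example concrete.

```python
# Illustrative mapping only: endpoint values are assumed, not from the text.
def estimate_reaction_time(attention: float) -> float:
    """Linearly interpolate the reaction time from an attention level in [0, 1]."""
    attention = max(0.0, min(1.0, attention))
    return 2.5 - attention * (2.5 - 0.7)

print(estimate_reaction_time(1.0))  # ~0.7 s  -> driver concentrating
print(estimate_reaction_time(0.2))  # ~2.14 s -> driver distracted
```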
Optionally, the attention information may be obtained from the self-vehicle information and driver state monitoring information. The driver state monitoring information may be obtained by acquiring images of the driver's face with a driver monitoring system (DMS). The method for obtaining the attention information is introduced in detail below and not expanded here.
In addition to the above consideration of the sector of interest, introducing the reaction time also takes the driver's attention into account, reducing the driving risk caused by an insufficient level of driver attention, and can therefore further improve the accuracy of vehicle control and the safety of driving.
Fig. 2 is a schematic diagram of vision sectors of a vehicle according to an embodiment of the present application. As shown in Fig. 2, the vision sectors of the vehicle include: ① a left side-window vision sector, ② a left rearview-mirror vision sector, ③ a front-window left vision sector, ④ an interior rearview-mirror vision sector, ⑤ a front-window right vision sector, ⑥ a right rearview-mirror vision sector, and ⑦ a right side-window vision sector.
Fig. 3 is a schematic diagram of vision sectors according to an embodiment of the present application. As shown in (a) of Fig. 3, the vision sectors of the vehicle include: ① a left side-window vision sector, ② a left rearview-mirror vision sector, ④ an interior rearview-mirror vision sector, ⑥ a right rearview-mirror vision sector, ⑦ a right side-window vision sector, and ⑧ a front-window vision sector. As shown in (b) of Fig. 3, the vision sectors of the vehicle include: ① a left side-window vision sector, ② a left rearview-mirror vision sector, ③ a front-window left vision sector, ④ an interior rearview-mirror vision sector, ⑤ a front-window right vision sector, ⑥ a right rearview-mirror vision sector, and ⑦ a right side-window vision sector; (b) differs from (a) in that the front-window sector in (b) is divided into left and right regions, which is more precise. As shown in (c) of Fig. 3, the vision sectors of the vehicle are the same as in (a); (c) differs from (a) in that (c) also shows two blind zones, namely the two blind zones on the two front sides caused by the vehicle's own structure, which can likewise improve accuracy. As shown in (d) of Fig. 3, the vision sectors of the vehicle are the same as in (b); (d) differs from (b) in that (d) also shows the two blind zones.
It should be understood that a blind zone is not a vision sector, but it may be contained within a vision sector, because the driver has no vision in a blind zone. The front-window vision sector ⑧ in (c) of Fig. 3 can be regarded as the front-window vision sector ⑧ in (a) with the two blind zones removed.
Fig. 3 gives only a few examples of dividing the vision sectors; other divisions are possible in practice. For example, some vehicles have small convex mirrors mounted on the two side rearview mirrors, in which case the views the driver obtains from these mirrors can be divided into a left-mirror convex-mirror vision sector and a right-mirror convex-mirror vision sector. As another example, large vehicles have relatively many blind zones, so the coverage angle of a given vision sector of a large vehicle will differ. As yet another example, to address the blind zone problem, cameras can be mounted on the vehicle, and the view shown on a camera's display screen can also be treated as a vision sector. These are not listed one by one.
The sector of interest consists of the vision sectors included in the driver's actual field of vision, so the sector of interest includes at least one of the plurality of vision sectors. For example, if the driver wants to turn right and the line-of-sight direction is to the front right, the driver's sector of interest will include the front-window right vision sector and the right side-window vision sector. The driver may also check the rearview mirror, in which case the driver's sector of interest will include the right rearview-mirror vision sector.
As shown in Fig. 3, a vision sector may or may not include blind zones. Therefore, when determining the sector of interest, blind zones can be considered further: although a blind zone may be contained within a vision sector, the driver cannot actually observe what is in it, so to improve accuracy the sector of interest may be derived from blind zone information. The sector of interest can be trimmed to remove the blind zones within it. When the sector of interest excludes blind zones, the accuracy of the gaze information is improved.
In addition, the influence of obstacles can also be considered for the sector of interest; that is, the sector of interest may be derived from obstacle information. The determination of the sector of interest when the influences of blind zones and obstacles are both considered is introduced below with reference to Fig. 4.
Fig. 4 is a schematic diagram of a sector of interest according to an embodiment of the present application. As shown in (a) of Fig. 4, the viewing regions of the vehicle include: ① a left side-window vision sector, ② a left rearview-mirror vision sector, ③ a front-window left vision sector, ④ an interior rearview-mirror vision sector, ⑤ a front-window right vision sector, ⑥ a right rearview-mirror vision sector, and ⑦ a right side-window vision sector, and two blind zones are shown. Suppose the driver's field of vision is the region between line A and line B; it can be seen from (a) of Fig. 4 that this field of vision actually includes the two blind zones, the front-window left vision sector ③, and the front-window right vision sector ⑤, and that a small part of the front-window right vision sector ⑤ is occluded by an obstacle. In this case, the blind zones and the occluded part can be removed, giving the sector of interest shown in (b) of Fig. 4: the driver's field of vision becomes the region between line C and line D, and the sector of interest includes only the front-window left vision sector ③ and the front-window right vision sector ⑤.
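The trimming described for Fig. 4 amounts to subtracting angular intervals (blind zones and obstacle shadows) from the viewing interval. A sketch with assumed angles follows; all numeric values are illustrative.

```python
# Interval arithmetic sketch for trimming a sector of interest.
# Intervals are (start_deg, end_deg) with start < end; angles are assumed.
def subtract(interval, holes):
    """Remove blind zones / occluded spans ('holes') from a viewing interval."""
    pieces = [interval]
    for h_lo, h_hi in holes:
        next_pieces = []
        for lo, hi in pieces:
            if h_hi <= lo or h_lo >= hi:      # no overlap: keep unchanged
                next_pieces.append((lo, hi))
                continue
            if lo < h_lo:                      # part left of the hole survives
                next_pieces.append((lo, h_lo))
            if h_hi < hi:                      # part right of the hole survives
                next_pieces.append((h_hi, hi))
        pieces = next_pieces
    return pieces

# Field of vision between lines A and B, minus two blind zones and one
# obstacle shadow (all angles hypothetical).
print(subtract((-40.0, 40.0), [(-35.0, -25.0), (25.0, 35.0), (10.0, 15.0)]))
# [(-40.0, -35.0), (-25.0, 10.0), (15.0, 25.0), (35.0, 40.0)]
```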
As described above, the vehicle control solution of the embodiments of the present application can be applied to various scenarios such as autonomous driving and assisted driving. For example, it can be combined with an advanced driver assistance system (ADAS) to implement some assisted driving functions. For ease of understanding, Fig. 5 to Fig. 10 are introduced below.
Fig. 5 to Fig. 10 are schematic diagrams of applications of the vehicle control solution according to embodiments of the present application; specifically, they are examples of determining the control strategy by combining gaze information and driving intention.
For ease of understanding, some elements in the figures are introduced first. The bold parallel dashed lines denote lane lines. A sector of interest is represented by two straight lines; for example, the region between line A and line B is a sector of interest, and the region between line E and line F is also a sector of interest. A predicted driving trajectory of the vehicle is represented by two curves; for example, the region between curve C and curve D is the region the vehicle passes through when traveling along the predicted trajectory, and curve G and curve H also represent a predicted driving trajectory, but curves C and D are the predicted trajectory obtained when the driving intention is considered, while curves G and H are the predicted trajectory obtained without considering the driving intention. An object with an arrow is moving relative to the ground, the arrow indicating the direction of motion; an object without an arrow is stationary relative to the ground. For example, an obstacle with an arrow is a moving obstacle, and an obstacle without an arrow is a stationary obstacle.
In the collision warning scenario shown in Fig. 5, the driver's sector of interest is the region between line A and line B in Fig. 5; that is, the sector of interest includes the front-window vision sector (or the front-window left and right vision sectors). The driver's driving intention is lane keeping. Curve C and curve D represent the vehicle's predicted driving trajectory. Obstacle #1 is a moving object within the sector of interest and obstacle #2 is a moving object outside it; obstacle #1 moves in the same direction as the vehicle, while obstacle #2's direction of motion crosses the vehicle's predicted trajectory. In this scenario obstacle #2 may pose a danger, so a collision warning is needed; that is, the resulting control strategy is: issue a collision warning.
It should be noted that in the application scenarios shown in Fig. 5 to Fig. 10, the warning signal may be presented in multiple ways, such as sound, a blinking warning light, or an on-screen image, without limitation. For example, in the scenario shown in Fig. 5, once the collision warning control strategy above is obtained, the driver can be reminded of the collision risk by, for example, an audible prompt, a blinking warning light, or a presentation on the human-machine interaction interface or the vehicle-mounted display screen; these reminder methods can also be combined, for example, a blinking warning light together with an audible alarm. They are not listed one by one.
In the collision warning scenario shown in Fig. 6, the driver's sector of interest is the region between line A and line B in Fig. 6; that is, the sector of interest includes the front-window left vision sector and the left side-window vision sector. The driver's driving intention is lane keeping. Curve C and curve D represent the vehicle's predicted driving trajectory. Obstacle #1 and obstacle #2 are both moving objects within the sector of interest; obstacle #1 moves in the same direction as the vehicle, while obstacle #2's direction of motion crosses the vehicle's predicted trajectory. In this scenario, although obstacle #2 may pose a danger, the driver has already noticed it, so no collision warning is needed at this time; that is, the resulting control strategy is: do not issue a collision warning.
In the collision warning scenario shown in Fig. 7, the driver's sector of interest is the region between line A and line B in Fig. 7; that is, the sector of interest includes the front-window right vision sector. The driver's driving intention is to turn right. Curve C and curve D represent the vehicle's predicted driving trajectory. Obstacle #1 is a moving object within the sector of interest; obstacle #2 and obstacle #3 are moving objects outside it, with obstacle #2 located in a blind zone. Obstacle #1 moves in the same direction as the vehicle, while the directions of motion of obstacle #2 and obstacle #3 cross the vehicle's predicted trajectory. In this scenario, even though the driver has noticed obstacle #1 and has braked to avoid it, obstacle #2 and obstacle #3 may still pose a danger, so a collision warning is still needed; that is, the resulting control strategy is: issue a collision warning.
In the collision warning scenario shown in Fig. 8, the driver's sectors of interest are the region between line A and line B and the region between line E and line F in Fig. 8; that is, they include the front-window vision sector (or the front-window left and right vision sectors) and the left rearview-mirror vision sector. The driver's driving intention is to change lanes to the left. Curve C and curve D represent the vehicle's predicted driving trajectory. Obstacle #1 is a moving object within the sectors of interest and obstacle #2 is a moving object outside them; both move in the same direction as the vehicle. In this scenario, even though the driver has noticed obstacle #1, obstacle #2 may still pose a danger, so a collision warning is still needed; that is, the resulting control strategy is: issue a collision warning. If, in this scenario, the driver's sectors of interest also included the left side-window vision sector, obstacle #2 would become a moving obstacle within the sectors of interest; the driver would have noticed both obstacles, and no collision warning would be needed; that is, the resulting control strategy would be: do not issue a collision warning.
Furthermore, if Fig. 8 is taken as a lane departure warning scenario, then since it is already known that the driver's driving intention is to change lanes to the left, no lane departure warning is needed; that is, the resulting control strategy is: do not issue a lane departure warning.
In the collision warning or longitudinal assisted driving scenario shown in Fig. 9, the driver's sector of interest is the region between line A and line B in Fig. 9; that is, the sector of interest includes the front-window vision sector (or the front-window left and right vision sectors). Curve C and curve D represent the vehicle's predicted driving trajectory #1, and curve G and curve H represent the vehicle's predicted driving trajectory #2; trajectory #1 is obtained taking the driver's driving intention into account, while trajectory #2 does not consider the driving intention and is therefore biased. Obstacle #1 is a moving object within the sector of interest, and obstacle #2 is a stationary object outside it. In this scenario the driver has noticed obstacle #1; although the driver has not noticed obstacle #2, obstacle #2 is not on the vehicle's predicted trajectory #1, so no collision warning or autonomous emergency braking is needed; that is, the resulting control strategy is: do not issue a collision warning, or do not perform autonomous emergency braking. If the driving intention were not considered in this scenario, obstacle #2 would become an obstacle that is outside the sector of interest and on the predicted trajectory, and the resulting control strategy would be: issue a collision warning or perform autonomous emergency braking, leading to erroneous control and degrading the driving experience.
Fig. 9 mainly illustrates the difference that considering or not considering the driving intention makes to the control strategy.
In the lateral assisted driving scenario shown in Fig. 10, the driver's sector of interest is the region between line A and line B in Fig. 10; that is, the sector of interest includes the front-window vision sector (or the front-window left and right vision sectors). The driver's driving intention is to change lanes to the left. Curve C and curve D represent the vehicle's predicted driving trajectory. Obstacle #1 is a moving object within the sector of interest and is not on the vehicle's predicted trajectory. In this scenario the vehicle's trajectory will drift toward the left lane, so no lane keeping assist warning or lane departure warning is needed any more; that is, the resulting control strategy is: do not issue a lane keeping assist warning, or do not issue a lane departure warning. If the driving intention were not considered, then since the driver's sector of interest is straight ahead, the predicted trajectory would very likely remain within the lane, that is, the vehicle would be predicted to keep its lane, and the warning strategy above would become: issue a warning, degrading the driving experience. In addition, in this scenario, an autonomous lane change can also be performed based on the driving intention of changing lanes to the left, improving the driving experience.
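The suppression logic of this scenario can be sketched as below; the lateral offsets, prediction horizon, and lane half-width are illustrative numbers, not values from the application.

```python
# Sketch: the driving intention biases the predicted lateral motion and
# suppresses lane-related warnings; all numbers are assumed.
def predict_lateral_offset(current_offset_m: float, intention: str,
                           horizon_s: float = 2.0) -> float:
    """Shift the predicted offset toward the target lane for a lane change."""
    drift = {"lane_change_left": -1.8, "lane_change_right": 1.8}.get(intention, 0.0)
    return current_offset_m + drift * (horizon_s / 2.0)

def lane_departure_warning(predicted_offset_m: float, intention: str,
                           half_lane_m: float = 1.75) -> bool:
    if intention.startswith("lane_change"):
        return False   # intended departure: suppress the warning (Fig. 10)
    return abs(predicted_offset_m) > half_lane_m

off = predict_lateral_offset(0.0, "lane_change_left")
print(off, lane_departure_warning(off, "lane_change_left"))  # -1.8 False
```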
It should be understood that Fig. 5 to Fig. 10 are only examples of some driving scenarios; the solution can also be applied to other control strategies, which are not listed one by one.
Fig. 11 is a schematic flowchart of a vehicle control method according to an embodiment of the present application. The steps of Fig. 11 are introduced below.
1101. Acquire gaze information of the driver, where the gaze information includes a sector of interest.
For explanations of the gaze information and the sector of interest and the manner of acquiring the gaze information, refer to the related introductions above; they are not repeated.
In some implementations, the sector of interest is obtained at least based on blind zone information and/or obstacle information. That is, the factors of blind zones and/or obstacles are considered, and corresponding adjustments are made, in the process of determining the sector of interest; in other words, the influence of blind zones and/or obstacles is eliminated when the sector of interest is determined. This improves the accuracy of the sector of interest, that is, the accuracy of the gaze information, and thereby the accuracy of vehicle control.
1102. Acquire a control strategy at least based on the gaze information.
For the control strategy, refer to the related introduction above; it is not repeated.
Optionally, the control strategy may include at least one of the following: a collision warning strategy, an autonomous emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy, or a lane centering control strategy.
In some implementations, the driver's driving intention may be derived from the gaze information and the self-vehicle information. The driving intention may be lane keeping, turning, or lane changing, and may also be accelerating, decelerating, parking, or the like.
In some other implementations, the control strategy may be derived from the driving intention and the gaze information. For example, if the driver's driving intention is lane keeping, stationary obstacles in other lanes need no longer be considered. The control strategy may be obtained from the driving intention alone, from the driving intention and the gaze information, or further in combination with other factors such as the vehicle's predicted driving trajectory.
In still other implementations, the driver's driving intention may first be derived from the gaze information and the self-vehicle information, and the control strategy then derived from the driving intention.
The driving intention may be obtained with a model such as a trained neural network model; for example, a trained neural network model may process the gaze information and the self-vehicle information to obtain the driving intention. The neural network model can be understood as a model establishing a correspondence between inputs and outputs, with the gaze information and the self-vehicle information as input and the driving intention as output. That is, the model establishes a mapping between input data and labels, here between the gaze information and the driving intention and between the self-vehicle information and the driving intention, and training makes this mapping more accurate.
The neural network model may be a convolutional neural network, a deep neural network, a recurrent neural network, a long short-term memory network, or the like. The training data includes input data and labels: the input data includes the gaze information and the self-vehicle information, the labels include the driving intention, and each input data item corresponds to one label. During training, the parameters of the neural network model (for example, an initial neural network model) are updated with the training data to obtain a trained neural network model, which can then be used in the vehicle control method described above.
In still other implementations, the control strategy may be obtained based on the reaction time and the sector of interest, based on the reaction time and the driving intention, or based on the reaction time, the driving intention, and the sector of interest. For an introduction to the reaction time, refer to the description above.
As described above, the reaction time can be obtained by using the attention information; the method for obtaining the attention information is introduced below with reference to Fig. 12.
Fig. 12 is a schematic diagram of a method for acquiring attention information according to an embodiment of the present application. As shown in Fig. 12, a neural network model can be used to process the self-vehicle information and the driver state monitoring information to obtain the attention information.
Optionally, the neural network model represented by RNN in the figure can be used to process self-vehicle information such as the steering wheel angle, the vehicle speed, and the steering angle (that is, the steering angle of the vehicle head), and the neural network model represented by NN in the figure can be used to process the driver state monitoring information; the results of the above processing are input into the fully connected (FC) layer represented by FC in the figure to obtain the attention information. The attention information is information indicating the driver's level of attention.
In some implementations, the neural network used to process the self-vehicle information may adopt, for example, a recurrent neural network such as an LSTM, and the neural network used to process the driver state monitoring information may adopt, for example, a multilayer perceptron (MLP).
The above neural network model can likewise be trained with training data; for the process, refer to the introduction to the training of the neural network model above.
It should be understood that the RNN, NN, and FC illustrated above can be regarded as jointly forming an attention model whose input is the self-vehicle information and the driver state monitoring information and whose output is the attention information; in other words, the attention model is used to process the self-vehicle information and the driver state monitoring information to obtain the attention information. In a specific example, the attention model includes an LSTM, an MLP, and an FC layer, where the LSTM is used to process the self-vehicle information and input the obtained result to the FC layer; the MLP is used to process the driver state monitoring information and input the obtained result to the FC layer; and the FC layer is used to further process the results from the LSTM and the MLP to obtain the attention information.
The method shown in Fig. 11 uses the sector of interest to represent the driver's field of vision, which makes field-of-vision detection more stable and thereby improves the accuracy of vehicle control.
Optionally, the method shown in Fig. 11 may further include: displaying the sector of interest on a display unit. The display unit may be a vehicle-mounted display screen, a human-machine interaction interface, or the like.
Fig. 13 is a schematic diagram of a vehicle control process according to an embodiment of the present application. Fig. 13 can be regarded as a specific example of vehicle control using the method shown in Fig. 11, mainly taking a collision warning control strategy as an example.
As shown in Fig. 13, the acquisition module acquires the line-of-sight direction, blind zone information, and obstacle information, and obtains the gaze information from them; the gaze information includes the sector of interest. This process can be regarded as a specific example of step 1101, that is, the sector of interest is obtained by combining the line-of-sight direction, the blind zone information, and the obstacle information. The acquisition module here can be regarded as an example of the acquisition module 110 shown in Fig. 1.
As shown in Fig. 13, the neural network model analyzes the driving intention based on the self-vehicle information and the gaze information to obtain the driver's driving intention, where the self-vehicle information includes the steering wheel angle, the angular velocity, the turn signal, and the vehicle speed, and the driving intention includes lane keeping, turning, and lane changing. It should be understood that Fig. 13 is merely an example, so the self-vehicle information, the driving intention, and the like may also be composed in other ways, without limitation; for example, the driving intention may also include parking and the like. The neural network model here can be regarded as an example of the driving intention analysis module 130 shown in Fig. 1, that is, a trained neural network model is used to process the self-vehicle information and the gaze information to obtain the driver's driving intention.
As shown in Fig. 13, the control unit of the vehicle predicts the vehicle's driving trajectory and collision risk based on the driving intention and the gaze information, and obtains the control strategy; the control unit can be regarded as a specific example of the control strategy module 120 shown in Fig. 1.
In Fig. 13, the neural network model processes the self-vehicle information and the gaze information to obtain the driving intention, and the control unit then obtains the control strategy based on the driving intention and the gaze information; the above process can be regarded as a specific example of step 1102.
As described above, Fig. 13 is one specific example of vehicle control using the method shown in Fig. 11, so other examples are possible. For example, Fig. 13 may also include the reaction time, which can be obtained from the attention information; the control unit can then predict the vehicle's driving trajectory and collision risk based on the reaction time, the driving intention, and the gaze information, and obtain the control strategy. As another example, Fig. 13 may also include an attention model, which is used to process the self-vehicle information and the driver state monitoring information to obtain the attention information, and so on; the variants are not listed here one by one.
Fig. 14 is a schematic block diagram of a vehicle control apparatus according to an embodiment of the present application. The apparatus 2000 shown in Fig. 14 includes an acquisition unit 2001 and a processing unit 2002.
The acquisition unit 2001 and the processing unit 2002 can be used to implement the vehicle control method of the embodiments of the present application; specifically, the acquisition unit 2001 may perform step 1101 above, and the processing unit 2002 may perform step 1102 above.
The apparatus 2000 may be the vehicle control device 100 shown in Fig. 1; the acquisition unit 2001 may include the acquisition module 110 shown in Fig. 1, and the processing unit 2002 may include the control strategy module 120 shown in Fig. 1. The processing unit 2002 may also include the driving intention analysis module 130.
The apparatus 2000 may further include a display unit 2003, where the display unit 2003 is configured to display the sector of interest described above. The display unit 2003 can also be used to present the warning signal to the driver in the form of an image. The display unit 2003 can also be integrated into the processing unit 2002.
It should be understood that the processing unit 2002 in the apparatus 2000 may correspond to the processor 3002 in the apparatus 3000 described below.
Fig. 15 is a schematic diagram of the hardware structure of a vehicle control apparatus according to an embodiment of the present application. The vehicle control apparatus 3000 shown in Fig. 15 (the apparatus 3000 may specifically be a computer device) includes a memory 3001, a processor 3002, a communication interface 3003, and a bus 3004, where the memory 3001, the processor 3002, and the communication interface 3003 are communicatively connected to each other through the bus 3004.
The memory 3001 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 3001 can store programs; when the programs stored in the memory 3001 are executed by the processor 3002, the processor 3002 and the communication interface 3003 are configured to perform the steps of the vehicle control method of the embodiments of the present application.
The processor 3002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement the functions required by the units in the vehicle control apparatus of the embodiments of the present application, or to perform the vehicle control method of the method embodiments of the present application.
The processor 3002 may also be an integrated circuit chip with signal processing capability. During implementation, the steps of the vehicle control method of the present application can be completed by integrated logic circuits of hardware in the processor 3002 or by instructions in the form of software. The processor 3002 may also be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of the present application may be performed directly by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 3001; the processor 3002 reads the information in the memory 3001 and, in combination with its hardware, implements the functions required by the units included in the vehicle control apparatus of the embodiments of the present application, or performs the vehicle control method of the method embodiments of the present application.
The communication interface 3003 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the apparatus 3000 and other devices or communication networks. For example, the gaze information described above can be obtained through the communication interface 3003.
The bus 3004 may include a path for transferring information between the components of the apparatus 3000 (for example, the memory 3001, the processor 3002, and the communication interface 3003).
It should be noted that although the apparatus 3000 shown in Fig. 15 shows only a memory, a processor, and a communication interface, those skilled in the art should understand that, in a specific implementation process, the apparatus 3000 also includes other components necessary for normal operation; that, according to specific needs, the apparatus 3000 may further include hardware components implementing other additional functions; and that the apparatus 3000 may alternatively include only the components necessary for implementing the embodiments of the present application, without including all the components shown in Fig. 15.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different means to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, methods, and apparatuses may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a universal serial bus flash disk (UFD, also referred to as a USB flash drive), a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (21)
- 1. A vehicle control method, comprising: acquiring gaze information of a driver, wherein the gaze information comprises a sector of interest of the driver, and the sector of interest comprises at least one of a plurality of vision sectors of a vehicle; and acquiring a control strategy of the vehicle at least based on the gaze information.
- 2. The method according to claim 1, wherein the acquiring a control strategy of the vehicle at least based on the gaze information comprises: acquiring the control strategy based on the gaze information and self-vehicle information of the vehicle, wherein the self-vehicle information comprises at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed.
- 3. The method according to claim 2, wherein the acquiring the control strategy based on the gaze information and self-vehicle information of the vehicle comprises: processing the gaze information and the self-vehicle information by using a trained neural network model to obtain a driving intention of the driver, wherein the driving intention is lane keeping, turning, or lane changing; and obtaining the control strategy based on the driving intention.
- 4. The method according to any one of claims 1 to 3, wherein the plurality of vision sectors comprise: a left side-window vision sector, a left rearview-mirror vision sector, a front-window vision sector, an interior rearview-mirror vision sector, a right side-window vision sector, and a right rearview-mirror vision sector.
- 5. The method according to claim 4, wherein the front-window vision sector comprises a front-window left vision sector and a front-window right vision sector.
- 6. The method according to any one of claims 1 to 5, wherein the sector of interest is obtained at least based on blind zone information and/or obstacle information.
- 7. The method according to any one of claims 1 to 6, wherein the sector of interest is obtained at least based on a line-of-sight direction of the driver.
- 8. The method according to any one of claims 1 to 7, wherein the control strategy comprises at least one of the following: a collision warning strategy, an autonomous emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy, or a lane centering control strategy.
- 9. The method according to any one of claims 1 to 8, further comprising: displaying the sector of interest on a display unit.
- 10. A vehicle control apparatus, comprising: an acquisition unit, configured to acquire gaze information of a driver, wherein the gaze information comprises a sector of interest of the driver, and the sector of interest comprises at least one of a plurality of vision sectors of a vehicle; and a processing unit, configured to acquire a control strategy of the vehicle at least based on the gaze information.
- 11. The apparatus according to claim 10, wherein the processing unit is specifically configured to: acquire the control strategy based on the gaze information and self-vehicle information of the vehicle, wherein the self-vehicle information comprises at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed.
- 12. The apparatus according to claim 11, wherein the processing unit is specifically configured to: process the gaze information and the self-vehicle information by using a trained neural network model to obtain a driving intention of the driver, wherein the driving intention is lane keeping, turning, or lane changing; and obtain the control strategy based on the driving intention.
- 13. The apparatus according to any one of claims 10 to 12, wherein the plurality of vision sectors comprise: a left side-window vision sector, a left rearview-mirror vision sector, a front-window vision sector, an interior rearview-mirror vision sector, a right side-window vision sector, and a right rearview-mirror vision sector.
- 14. The apparatus according to claim 13, wherein the front-window vision sector comprises a front-window left vision sector and a front-window right vision sector.
- 15. The apparatus according to any one of claims 10 to 14, wherein the sector of interest is obtained at least based on blind zone information and/or obstacle information.
- 16. The apparatus according to any one of claims 10 to 15, wherein the sector of interest is obtained at least based on a line-of-sight direction of the driver.
- 17. The apparatus according to any one of claims 10 to 16, wherein the control strategy comprises at least one of the following: a collision warning strategy, an autonomous emergency braking strategy, an adaptive cruise control strategy, a lane departure warning strategy, a lane keeping assist strategy, or a lane centering control strategy.
- 18. The apparatus according to any one of claims 10 to 17, further comprising: a display unit, configured to display the sector of interest.
- 19. A computer-readable storage medium, storing program code for execution by a device, wherein the program code comprises instructions for performing the method according to any one of claims 1 to 9.
- 20. A vehicle control apparatus, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the method according to any one of claims 1 to 9.
- 21. A computer program product, wherein when the computer program product is executed on a computer, the computer is caused to perform the method according to any one of claims 1 to 9.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21951332.2A EP4365051A4 (en) | 2021-07-30 | 2021-07-30 | VEHICLE CONTROL METHOD AND ASSOCIATED APPARATUS |
CN202180006540.9A CN114765974A (zh) | 2021-07-30 | 2021-07-30 | Vehicle control method and apparatus thereof
PCT/CN2021/109557 WO2023004736A1 (zh) | 2021-07-30 | 2021-07-30 | Vehicle control method and apparatus thereof
US18/425,750 US20240166200A1 (en) | 2021-07-30 | 2024-01-29 | Vehicle control method and apparatus thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/109557 WO2023004736A1 (zh) | 2021-07-30 | 2021-07-30 | Vehicle control method and apparatus thereof
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/425,750 Continuation US20240166200A1 (en) | 2021-07-30 | 2024-01-29 | Vehicle control method and apparatus thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023004736A1 true WO2023004736A1 (zh) | 2023-02-02 |
Family
ID=82364791
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/109557 WO2023004736A1 (zh) | 2021-07-30 | 2021-07-30 | Vehicle control method and apparatus thereof
Country Status (4)
Country | Link |
---|---|
US (1) | US20240166200A1 (zh) |
EP (1) | EP4365051A4 (zh) |
CN (1) | CN114765974A (zh) |
WO (1) | WO2023004736A1 (zh) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130151030A1 (en) * | 2011-12-09 | 2013-06-13 | Denso Corporation | Driving condition determination apparatus |
WO2013115241A1 (ja) * | 2012-01-31 | 2013-08-08 | 株式会社デンソー | 車両の運転手の注意を喚起する装置及びその方法 |
CN105083291A (zh) * | 2014-04-25 | 2015-11-25 | 歌乐株式会社 | 基于视线检测的驾驶员辅助系统 |
CN108447303A (zh) * | 2018-03-20 | 2018-08-24 | 武汉理工大学 | 基于人眼视觉与机器视觉耦合的外周视野危险识别方法 |
CN109094457A (zh) * | 2018-07-16 | 2018-12-28 | 武汉理工大学 | 一种考虑驾驶员外周视野的车辆防碰撞预警系统及方法 |
CN109774470A (zh) * | 2017-11-15 | 2019-05-21 | 欧姆龙株式会社 | 旁视判定装置、旁视判定方法及存储介质 |
CN113128250A (zh) * | 2019-12-27 | 2021-07-16 | 罗伯特·博世有限公司 | 用于提高车辆驾驶安全性的方法和装置、控制器、车辆以及计算机可读存储介质 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107323338B (zh) * | 2017-07-03 | 2020-04-03 | 北京汽车研究总院有限公司 | Vehicle turn signal control system, control method, and vehicle
CN111709264A (zh) * | 2019-03-18 | 2020-09-25 | 北京市商汤科技开发有限公司 | Driver attention monitoring method and apparatus, and electronic device
CN111723828B (zh) * | 2019-03-18 | 2024-06-11 | 北京市商汤科技开发有限公司 | Gaze region detection method and apparatus, and electronic device
-
2021
- 2021-07-30 CN CN202180006540.9A patent/CN114765974A/zh active Pending
- 2021-07-30 EP EP21951332.2A patent/EP4365051A4/en active Pending
- 2021-07-30 WO PCT/CN2021/109557 patent/WO2023004736A1/zh active Application Filing
-
2024
- 2024-01-29 US US18/425,750 patent/US20240166200A1/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130151030A1 (en) * | 2011-12-09 | 2013-06-13 | Denso Corporation | Driving condition determination apparatus |
WO2013115241A1 (ja) * | 2012-01-31 | 2013-08-08 | 株式会社デンソー | 車両の運転手の注意を喚起する装置及びその方法 |
CN105083291A (zh) * | 2014-04-25 | 2015-11-25 | 歌乐株式会社 | 基于视线检测的驾驶员辅助系统 |
CN109774470A (zh) * | 2017-11-15 | 2019-05-21 | 欧姆龙株式会社 | 旁视判定装置、旁视判定方法及存储介质 |
CN108447303A (zh) * | 2018-03-20 | 2018-08-24 | 武汉理工大学 | 基于人眼视觉与机器视觉耦合的外周视野危险识别方法 |
CN109094457A (zh) * | 2018-07-16 | 2018-12-28 | 武汉理工大学 | 一种考虑驾驶员外周视野的车辆防碰撞预警系统及方法 |
CN113128250A (zh) * | 2019-12-27 | 2021-07-16 | 罗伯特·博世有限公司 | 用于提高车辆驾驶安全性的方法和装置、控制器、车辆以及计算机可读存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4365051A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP4365051A1 (en) | 2024-05-08 |
EP4365051A4 (en) | 2024-08-14 |
CN114765974A (zh) | 2022-07-19 |
US20240166200A1 (en) | 2024-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11977675B2 (en) | Primary preview region and gaze based driver distraction detection | |
CN111880533B (zh) | 驾驶场景重构方法、装置、系统、车辆、设备及存储介质 | |
US10457294B1 (en) | Neural network based safety monitoring system for autonomous vehicles | |
JP6725568B2 (ja) | 車両制御装置、車両、車両制御方法およびプログラム | |
US11072324B2 (en) | Vehicle and control method thereof | |
US20220396287A1 (en) | Adaptive trust calibration | |
US12097892B2 (en) | System and method for providing an RNN-based human trust model | |
US20230202520A1 (en) | Travel controller and method for travel control | |
CN115520100A (zh) | 汽车电子后视镜系统及车辆 | |
US12017679B2 (en) | Adaptive trust calibration | |
US11912277B2 (en) | Method and apparatus for confirming blindspot related to nearby vehicle | |
US20210291736A1 (en) | Display control apparatus, display control method, and computer-readable storage medium storing program | |
US11580861B2 (en) | Platooning controller, system including the same, and method thereof | |
JP2015219721A (ja) | 動作支援システム及び物体認識装置 | |
WO2023004736A1 (zh) | 车辆控制方法及其装置 | |
US20190315349A1 (en) | Collision determination apparatus and method | |
US11420639B2 (en) | Driving assistance apparatus | |
JP7244562B2 (ja) | 移動体の制御装置及び制御方法並びに車両 | |
JP7281728B2 (ja) | 駐車支援装置 | |
CN114511834A (zh) | 一种确定提示信息的方法、装置、电子设备及存储介质 | |
US20240246419A1 (en) | Vehicle display control device, vehicle, vehicle display control method, and non-transitory storage medium | |
US20240336140A1 (en) | Driving assistance apparatus, driving assistance method, and non-transitory recording medium | |
JP7541497B2 (ja) | 車両制御装置、情報処理装置、それらの動作方法及びプログラム | |
JP7554699B2 (ja) | 画像処理装置および画像処理方法、車両用制御装置、プログラム | |
KR20230075032A (ko) | 차량의 사고 이벤트를 분석하기 위한 전자 장치 및 그 동작방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21951332 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021951332 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2021951332 Country of ref document: EP Effective date: 20240131 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |