WO2023279285A1 - Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle (方向盘接管检测方法、方向盘接管检测系统以及车辆)

Info

Publication number: WO2023279285A1
Authority: WO (WIPO (PCT))
Application number: PCT/CN2021/104971
Prior art keywords: steering wheel, driver, vehicle, data, torque value
Other languages: English (en), French (fr)
Inventors: 陈亦伦, 李帅君, 方晔阳
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Related applications: CN202180084876.7A (publication CN116670003A); PCT/CN2021/104971 (publication WO2023279285A1)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system

Definitions

  • the present application relates to the field of intelligent driving, and more specifically, to a steering wheel takeover detection method, a steering wheel takeover detection system, and a vehicle.
  • the torque sensor is one of the important sensors for determining whether the driver turns the steering wheel: it detects whether the driver applies torque to the steering wheel.
  • Intelligent vehicles can maximize the freedom of the driver's operations, and autonomously complete operations such as environment perception, route planning, and vehicle control.
  • when the vehicle steers automatically, the steering wheel and the steering shaft can be driven by the electric motor, which also causes the torque sensor to generate an output.
  • smart vehicles can judge whether the driver takes over the steering wheel through the amount of steering wheel torque detected by the torque sensor. For example, when the torque detected by the torque sensor is greater than a set threshold, it is determined that the driver wants to take over the steering wheel. However, when the automatic driving system controls the steering of the vehicle, or on certain special road sections (for example, an uneven road surface), the power steering system may also generate a torque greater than the set threshold, which may cause the automatic driving system to misjudge that the driver has taken over the steering wheel. In that case, if the automatic driving system determines that the driver has taken over the vehicle and hands over control of the vehicle, the driver usually cannot take over the vehicle in time, which may cause great danger.
  • the present application provides a steering wheel takeover detection method, a steering wheel takeover detection system, and a vehicle, which help to improve the accuracy of steering wheel takeover detection, thereby helping to improve driving safety.
  • in a first aspect, a method for detecting a steering wheel takeover is provided. The method is applied to a vehicle and includes: the vehicle acquires data collected by multiple sensors; the vehicle extracts features from the data collected by each of the multiple sensors to obtain multiple feature data; the vehicle fuses the multiple feature data to obtain fused data; and the vehicle determines whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor.
  • the vehicle can combine the data of multiple sensors and torque sensors to determine whether the driver takes over the steering wheel, which helps to improve the accuracy of steering wheel takeover detection, thereby helping to improve driving safety.
  • the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
  • before acquiring the data collected by the plurality of sensors, the method further includes: the vehicle determining that it is in an automatic driving state.
  • before the vehicle acquires the data collected by the multiple sensors, it may first determine that it is in an automatic driving state. In this way, the vehicle triggers the processing of the data collected by the multiple sensors only in the automatic driving state, which helps to save the computing overhead of the vehicle.
  • the vehicle being in an automatic driving state may include that the steering wheel of the vehicle is controlled by an advanced driving assistant system (ADAS) instead of the driver.
  • for example, the vehicle is in automatic parking assist (APA), remote parking assist (RPA), or automatic valet parking (AVP); or the vehicle is in navigation cruise assist (NCA) at level L2 or above; or the vehicle is in integrated cruise assist (ICA) at level L3.
  • the vehicle determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver turns the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, the vehicle determines that the driver takes over the steering wheel; the method further includes: the vehicle exiting the automatic driving state.
  • when the vehicle determines that the inference result is that the driver turns the steering wheel and the steering wheel torque value detected by the torque sensor is greater than or equal to the preset threshold, it can determine that the driver takes over the steering wheel, which helps to improve the accuracy of steering wheel takeover detection. At the same time, the vehicle can exit the automatic driving state (handing control of the vehicle over to the driver), thus helping to improve driving safety.
  • the method further includes: the vehicle prompts the user to take over the steering wheel.
  • the user when the vehicle exits the automatic driving state, the user can be prompted to take over the steering wheel, which can improve the user's attention, thereby helping to improve driving safety.
  • for example, the vehicle may prompt the user to take over the vehicle through a human machine interface (HMI), sound, or ambient light.
  • the vehicle determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver does not turn the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the vehicle determines that the driver has not taken over the steering wheel; or, when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, the vehicle determines that the driver has not taken over the steering wheel.
  • when the inference result is that the driver did not turn the steering wheel (or that the driver touched the steering wheel by mistake) and the steering wheel torque value is less than or equal to the preset threshold, the vehicle can determine that the driver did not take over the steering wheel and continue to control the vehicle, which helps to improve the accuracy of steering wheel takeover detection, thereby helping to improve driving safety.
  • the vehicle determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the vehicle determines that the driver takes over the steering wheel; or, when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the vehicle determines that the driver does not take over the steering wheel.
  • when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the vehicle can determine that the driver takes over the steering wheel and hand control of the vehicle over to the driver, which helps to improve the accuracy of steering wheel takeover detection and thereby driving safety.
  • likewise, when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the vehicle can determine that the driver did not take over the steering wheel and continue to control the vehicle, which helps to improve the accuracy of steering wheel takeover detection, thereby helping to improve driving safety.
  • the vehicle can also prompt the user not to touch the steering wheel by mistake.
  • the confidence level of the reasoning result is greater than the confidence level of the torque value of the steering wheel detected by the torque sensor.
  • each feature data in the plurality of feature data is feature data in the first coordinate system.
  • the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  • the method can be performed by a computing platform in the vehicle.
  • in a second aspect, a steering wheel takeover detection system is provided, which includes multiple sensors and a computing platform. The multiple sensors are used to collect multiple data and send the multiple data to the computing platform. The computing platform is used to: extract features from the data collected by each of the multiple sensors to obtain multiple feature data; fuse the multiple feature data to obtain fused data; and determine whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor.
  • the computing platform is further configured to determine that the vehicle is in an automatic driving state before acquiring the plurality of data.
  • the computing platform is specifically used to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold; and control the vehicle to exit the automatic driving state.
  • the computing platform is further configured to send a first instruction to the first prompting device, where the first instruction is used to instruct the first prompting device to prompt the user to take over the steering wheel.
  • the computing platform is specifically used to: determine that the driver has not taken over the steering wheel when the inference result is that the driver does not turn the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver has not taken over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold.
  • the computing platform is specifically used to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver does not take over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold.
  • the multiple sensors include at least two sensors of a driver camera, a time-of-flight camera, and a capacitive steering wheel.
  • each feature data in the plurality of feature data is feature data in the first coordinate system.
  • the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  • the computing platform is located in a cloud server.
  • in a third aspect, a steering wheel takeover detection device is provided, which includes: an acquisition unit, configured to acquire multiple data collected by multiple sensors; a feature extraction unit, configured to extract features from the data collected by each of the multiple sensors to obtain multiple feature data; a data fusion unit, configured to fuse the multiple feature data to obtain fused data; and a determination unit, configured to determine whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor.
  • the determining unit is further configured to determine that the vehicle is in an automatic driving state before the acquiring unit acquires the plurality of data.
  • the determining unit is specifically configured to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold. The device further includes a sending unit, configured to send an instruction to the automatic driving system, where the instruction is used by the automatic driving system to control the vehicle to exit the automatic driving state.
  • the device further includes: a sending unit, configured to send a first instruction to a first prompt unit, where the first instruction is used to instruct the first prompt unit to prompt the user to take over the steering wheel.
  • the determining unit is specifically configured to: determine that the driver has not taken over the steering wheel when the inference result is that the driver does not turn the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver has not taken over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold.
  • the determining unit is specifically configured to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver does not take over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold.
  • the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
  • each feature data in the plurality of feature data is feature data in the first coordinate system.
  • the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  • the device may reside in the vehicle's computing platform.
  • in a fourth aspect, a device is provided, which includes a processing unit and a storage unit. The storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device performs the method in any one of the possible implementations of the first aspect.
  • the above-mentioned processing unit may be a processor
  • the above-mentioned storage unit may be a memory
  • the memory may be a storage unit (such as a register or a cache) inside the chip, or a storage unit located outside the chip in the smart device (for example, a read-only memory or a random-access memory).
  • a system is provided, which includes a sensor and a steering wheel takeover detection device, where the steering wheel takeover detection device may be the steering wheel takeover detection device described in any one of the implementations of the third aspect above.
  • the steering wheel takeover detection device is located in a cloud server.
  • the system further includes a device for receiving an instruction from the cloud server.
  • a vehicle including the steering wheel takeover detection system described in the second aspect above, the device described in the third aspect or the device described in the fourth aspect.
  • a computer program product is provided. The computer program product includes computer program code; when the computer program code is run on a computer, the computer is caused to execute the method in the first aspect above.
  • a computer-readable medium is provided. The computer-readable medium stores program code; when the program code is run on a computer, the computer is caused to execute the method in the first aspect above.
  • Fig. 1 is a schematic functional block diagram of a vehicle provided by an embodiment of the present application.
  • Fig. 2 is a schematic diagram of sensing ranges of various sensors.
  • Fig. 3 is a schematic block diagram of a system architecture provided by an embodiment of the present application.
  • Fig. 4 is a schematic block diagram of another system architecture provided by an embodiment of the present application.
  • FIG. 5 is a system architecture diagram provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of the deep layer aggregation (DLA) network architecture.
  • Fig. 7 is a schematic diagram of the structure of UNET.
  • Fig. 8 is a schematic diagram of capacitive steering wheel detection.
  • FIG. 9 is a schematic diagram of data fusion through concat superposition.
  • Fig. 10 is a schematic diagram of sending the fused data into the neural network for reasoning.
  • Fig. 11 is a schematic diagram of the decoder decoding the result output by the neural network.
  • Fig. 12 is a schematic diagram of prompting the user through the large central control screen provided by the embodiment of the present application.
  • Fig. 13 is another schematic diagram of prompting the user through the large central control screen provided by the embodiment of the present application.
  • FIG. 14 is a system architecture diagram for determining whether the driver takes over the vehicle through the data collected by the capacitive steering wheel and the torque sensor provided by the embodiment of the present application.
  • FIG. 15 is a system architecture diagram for determining whether the driver takes over the vehicle through the data collected by the driver's camera, the TOF camera and the torque sensor provided by the embodiment of the present application.
  • Fig. 16 is a schematic flowchart of a steering wheel takeover detection method provided by an embodiment of the present application.
  • Fig. 17 is a schematic structural diagram of a steering wheel takeover detection system provided by an embodiment of the present application.
  • Fig. 18 is a schematic block diagram of a device provided by an embodiment of the present application.
  • FIG. 1 is a schematic functional block diagram of a vehicle 100 provided by an embodiment of the present application.
  • Vehicle 100 may be configured in a fully or partially autonomous driving mode.
  • the vehicle 100 can obtain its surrounding environment information through the perception system 120, and obtain an automatic driving strategy based on the analysis of the surrounding environment information to realize fully automatic driving, or present the analysis results to the user to realize partially automatic driving.
  • the perception system 120 may include several kinds of sensors that sense information about the environment around the vehicle 100 .
  • the perception system 120 may include one or more of a global positioning system 121 (the global positioning system may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, a camera device 126, and a capacitive steering wheel 127.
  • the camera device 126 may include a driver's camera and a TOF camera.
  • Computing platform 150 may include at least one processor 151 that may execute instructions 153 stored in a non-transitory computer-readable medium such as memory 152 .
  • computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of vehicle 100 in a distributed manner.
  • the processor 151 may be any conventional processor, such as a central processing unit (CPU). Alternatively, the processor 151 may include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
  • one or more of these components described above may be installed separately from or associated with the vehicle 100 .
  • memory 152 may exist partially or completely separate from vehicle 100 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as limiting the embodiment of the present application.
  • the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, or the like, which is not specifically limited in this embodiment of the present application.
  • the vehicle 100 may include an ADAS.
  • the ADAS utilizes various sensors on the vehicle (including but not limited to lidar, millimeter-wave radar, camera, ultrasonic sensor, global positioning system, and inertial measurement unit) to obtain information about the surroundings of the vehicle, and analyzes and processes the acquired information to realize functions such as obstacle perception, target recognition, vehicle positioning, path planning, and driver monitoring/reminding, thereby improving the safety, automation, and comfort of driving.
  • FIG. 2 shows a schematic diagram of the sensing ranges of various sensors.
  • the sensors may include, for example, laser radars, millimeter-wave radars, cameras, and ultrasonic sensors as shown in Figure 1.
  • millimeter-wave radars can be divided into long-range radars and medium/short-range radars.
  • for example, the farthest sensing distance of the lidar is about 150 meters; that of the long-range millimeter-wave radar is about 250 meters; that of the medium/short-range millimeter-wave radar is about 120 meters; that of the camera is about 200 meters; and that of the ultrasonic radar is about 5 meters.
  • the ADAS system generally includes three main functional modules: perception module, decision-making module and execution module.
  • the perception module perceives the surrounding environment of the vehicle body through sensors, and inputs corresponding real-time data to the decision-making processing center.
  • the perception module mainly includes the vehicle camera, ultrasonic radar, millimeter-wave radar, lidar, and the like; the decision-making module uses computing devices and algorithms to make corresponding decisions based on the information obtained by the perception module; the execution module takes corresponding actions after receiving the decision signal from the decision-making module, such as driving, changing lanes, steering, braking, and warning.
  • under different levels of autonomous driving (L0-L5), based on information obtained by artificial intelligence algorithms and multiple sensors, ADAS can achieve different levels of automatic driving assistance.
  • the above-mentioned levels of autonomous driving (L0-L5) are based on the Society of Automotive Engineers (SAE) grading standard, where L0 is no automation; L1 is driving support; L2 is partial automation; L3 is conditional automation; L4 is high automation; and L5 is full automation.
  • functions realized by ADAS mainly include but are not limited to: adaptive cruise, automatic emergency braking, automatic parking, blind spot monitoring, front crossroad traffic warning/braking, rear crossroad traffic warning/braking, front vehicle collision warning, lane departure warning, lane keeping assist, rear collision avoidance warning, traffic sign recognition, traffic jam assist, and highway assist.
  • automatic parking can include APA, RPA, and AVP. For APA, the driver does not need to manipulate the steering wheel, but still needs to operate the accelerator and brake in the vehicle; for RPA, the driver parks the vehicle remotely; for AVP, the vehicle can be parked without a driver.
  • APA is approximately at level L1, RPA at levels L2-L3, and AVP at level L4.
  • the vehicle in the automatic driving state mentioned in the embodiment of the present application may mean that the steering wheel of the vehicle is controlled by the ADAS system rather than the driver.
  • the vehicle is in APA, RPA or AVP; or, the vehicle is in NCA at level L2 and above; or, the vehicle is in ICA at level L3.
  • in the embodiments of the present application, the vehicle can determine whether the driver takes over the vehicle through the data collected by the driver's camera, the TOF camera, the capacitive steering wheel, and the torque sensor, which helps to improve the accuracy of steering wheel takeover detection, thereby helping to improve driving safety.
  • FIG. 3 shows a schematic block diagram of a system architecture provided by an embodiment of the present application.
  • the system can be installed in a vehicle, and the system includes sensors and a computing platform.
  • the sensors can be one or more sensors in the perception system 120 shown in FIG. 1 (e.g., the camera device 126, the capacitive steering wheel, and the torque sensor), and the computing platform can be the computing platform 150 shown in FIG. 1.
  • Computing platform 150 may include an ADAS system.
  • the torque sensor can input the detected torque value into the computing platform.
  • the camera device and the capacitive steering wheel can input the collected data into the computing platform, so that the computing platform can output the driver's action characteristics (for example, the driver turns the steering wheel, the driver taps the steering wheel, and the driver accidentally touches the steering wheel).
  • the computing platform can combine the torque value detected by the torque sensor and the driver's action characteristics to determine whether the driver takes over the vehicle. If it is determined that the driver has taken over the vehicle, the ADAS system can hand over control of the vehicle; if it is determined that the driver has not taken over the vehicle or has merely touched the steering wheel by mistake, the ADAS system can continue to control the vehicle.
  • FIG. 4 shows another schematic block diagram of the system architecture provided by the embodiment of the present application.
  • the system includes sensors, ADAS systems and cloud servers, where the sensors and ADAS systems may be located in the vehicle, and the cloud server may include a steering wheel takeover detection device.
  • the vehicle can send the data collected by the torque sensor, camera device and capacitive steering wheel to the cloud server through the network.
  • the steering wheel takeover detection device of the cloud server can output the driver's action characteristics through the data collected by the camera device and the capacitive steering wheel.
  • the cloud server can combine the driver's action characteristics and the steering wheel torque value detected by the torque sensor to determine whether the driver takes over the vehicle.
  • the cloud server can send the result of whether the driver takes over the vehicle to the vehicle through the network.
  • if the result sent by the cloud server indicates that the driver has taken over the vehicle, the ADAS system can hand over control of the vehicle; if the result indicates that the driver did not take over the vehicle or merely touched the steering wheel by mistake, the ADAS system can continue to control the vehicle.
  • the vehicle can also send the data collected by the camera and the capacitive steering wheel to the cloud server.
  • the steering wheel takeover detection device of the cloud server can determine the driver's action characteristics through the data collected by the camera device and the capacitive steering wheel, and then send the driver's action characteristics to the vehicle.
  • the vehicle can combine the torque value detected by the torque sensor and the driver's action characteristics to determine whether the driver takes over the vehicle.
  • FIG. 5 shows a system architecture diagram provided by an embodiment of the present application.
  • the computing platform obtains data collected by multiple sensors (including driver camera, TOF camera and capacitive steering wheel).
  • the computing platform encodes the data collected by each sensor through the corresponding encoder to obtain the characteristic data of each sensor.
  • the computing platform fuses multiple feature data and sends them to the neural network for inference, so that the driver's action features can be obtained.
  • the computing platform can combine the torque value of the steering wheel detected by the torque sensor and the driver's action characteristics to determine whether the driver takes over the vehicle. For example, if the driver's action characteristic is that the driver turns the steering wheel and the torque value detected by the torque sensor is greater than the preset threshold, the ADAS system in the computing platform can determine that the driver takes over the vehicle, and the ADAS system hands over control of the vehicle. As another example, if the driver's action characteristic is that the driver lightly touches the steering wheel and the torque value detected by the torque sensor is less than the preset threshold, the ADAS system can determine that the driver has not taken over the vehicle and can continue to control the vehicle.
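  • as an illustration of this decision logic, the following is a minimal Python sketch; the class labels, function name, and threshold value are illustrative assumptions rather than values from the embodiment, and it reflects the description that the inference result carries a higher confidence than the raw torque reading.

        # Minimal sketch (assumed names and threshold) of combining the
        # neural-network inference result with the torque-sensor reading.
        TORQUE_THRESHOLD_NM = 2.0  # hypothetical preset threshold

        def driver_takes_over(inference_result: str, torque_nm: float) -> bool:
            """Return True if the driver is judged to have taken over the wheel."""
            if inference_result == "turns_wheel":
                # Driver actively turning: takeover, even if torque is below threshold.
                return True
            if inference_result in ("light_touch", "accidental_touch", "no_turn"):
                # Torque above the threshold is attributed to ADAS steering or
                # the road surface, so control is not handed over.
                return False
            # Unknown inference result: fall back to the torque threshold alone.
            return torque_nm >= TORQUE_THRESHOLD_NM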
  • the ADAS system may be located in the computing platform.
  • the ADAS system can also be located outside the computing platform; in that case, after the computing platform determines that the driver takes over the vehicle, it can send an instruction to the ADAS system, where the instruction is used to indicate that the driver takes over the vehicle; in response to receiving the instruction, the ADAS system may surrender control of the vehicle.
  • the input in the embodiment of the present application may be the data collected by the cockpit sensor, and the cockpit sensor may include sensors such as a driver's camera, a TOF camera, and a capacitive steering wheel.
  • the data collected by the driver's camera may be image data.
  • for example, data of W×H×3×K can be obtained, where W represents the width of the image captured by the camera, H represents the height of the image, 3 represents the three RGB color channels, and K represents K frames of data.
  • the computing platform can use deep layer aggregation (DLA), visual geometry group (VGG), ResNet and other feature extraction networks to extract feature data.
  • Feature extraction networks usually include structures such as fully connected layers, convolutional layers, and pooling layers. Feature extraction networks can be trained on labeled data.
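  • as an illustration, the following is a minimal sketch of per-frame feature extraction for the driver-camera data, using a torchvision ResNet-18 as a stand-in for the DLA/VGG/ResNet networks mentioned above; the frame count, image size, and choice of backbone are assumptions.

        # Sketch only: features for driver-camera data shaped K x 3 x H x W.
        import torch
        import torchvision.models as models

        backbone = models.resnet18(weights=None)  # would be trained on marked data
        backbone.fc = torch.nn.Identity()         # drop classifier, keep 512-d features

        frames = torch.randn(8, 3, 224, 224)      # K = 8 RGB frames (assumed size)
        with torch.no_grad():
            feature_data = backbone(frames)       # -> (8, 512) per-frame feature data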
  • the encoder may be trained by using marked data.
  • the marked data includes the data collected by the driver's camera at the same time, and the marked result is the driver's action state at this time.
  • the driver's action state can be divided into three categories: the driver actively holds the steering wheel to control the steering of the vehicle; the driver lightly touches the steering wheel; and the driver touches the steering wheel by mistake (for example, while drinking water or picking up things).
  • for example, when the driver actively holds the steering wheel to control the steering of the vehicle, the data collected by the driver's camera at this time can be formed into a data set, and the data set can include the image data collected by the driver's camera. This data set is labeled as the driver actively holding the steering wheel to control the steering of the vehicle, and the labeled data set can then be used as the marked data for that action state.
  • the process of feature extraction of the data collected by the camera will be described below in combination with the DLA network architecture shown in FIG. 6 .
  • the number on each grid indicates the downsampling factor, that is, the factor by which it is reduced relative to the original input.
  • the dotted line represents the 2x upsampling process, which doubles the corresponding feature size.
  • the thick arrow indicates that the features are transferred to the corresponding square box for aggregation, so as to fuse multiple features of the same dimension.
  • Each square box includes operations such as convolution and batch normalization on feature data.
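  • a toy sketch of one such aggregation step follows, under assumed channel counts: the deeper feature is upsampled 2x (the dotted line) and then fused with the same-dimension feature by convolution and batch normalization (the square box).

        import torch
        import torch.nn as nn

        class AggregationNode(nn.Module):
            """One DLA-style fusion of a shallow feature with an upsampled deep one."""
            def __init__(self, channels: int = 64):
                super().__init__()
                self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                                      align_corners=False)
                self.conv = nn.Conv2d(2 * channels, channels, 3, padding=1)
                self.bn = nn.BatchNorm2d(channels)

            def forward(self, shallow, deep):
                deep = self.up(deep)                   # dotted line: 2x upsampling
                x = torch.cat([shallow, deep], dim=1)  # thick arrow: aggregation
                return torch.relu(self.bn(self.conv(x)))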
  • the data collected by the TOF camera may be depth data.
  • for example, data of W×H×D×K can be obtained, where W and H represent the width and height dimensions of the image, D represents the depth dimension, and K represents K frames of data.
  • feature extraction can be performed on depth information.
  • the space can be divided into small grids, and the number and characteristics of each pixel depth falling in these grids can be counted.
  • the pixels are encoded into features in BEV space (for example, the feature of each grid in BEV space is replaced with the average depth of the points falling in that grid), and the statistical results are sent to a feature extraction network such as a U-shaped network (UNET) to extract the features of the TOF camera data.
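  • the grid statistics described here can be sketched as follows; the grid size, cell size, and the choice of mean depth plus point count as the per-cell features are illustrative assumptions.

        import numpy as np

        def depth_to_bev(points_xy: np.ndarray, depths: np.ndarray,
                         grid=(64, 64), cell=0.1) -> np.ndarray:
            """points_xy: (N, 2) ground-plane coordinates; depths: (N,) depths."""
            bev_sum = np.zeros(grid)
            bev_cnt = np.zeros(grid)
            ix = np.clip((points_xy[:, 0] / cell).astype(int), 0, grid[0] - 1)
            iy = np.clip((points_xy[:, 1] / cell).astype(int), 0, grid[1] - 1)
            np.add.at(bev_sum, (ix, iy), depths)   # accumulate depth per cell
            np.add.at(bev_cnt, (ix, iy), 1)        # count points per cell
            mean_depth = np.divide(bev_sum, bev_cnt, out=np.zeros(grid),
                                   where=bev_cnt > 0)
            return np.stack([mean_depth, bev_cnt])  # (2, 64, 64) BEV features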
  • the feature extraction network here can be trained using labeled data. It should be understood that for the process of obtaining marked data, reference may be made to the descriptions in the foregoing embodiments, and details are not repeated here.
  • FIG. 7 shows a schematic diagram of the UNET structure, and data convolution processing is performed in each square in the distance information grid.
  • the black downward solid arrow on the left represents the 2×2 maximum pooling operation, which reduces the feature dimension by half.
  • the upward solid arrow on the right represents the upsampling process, which expands the feature dimension by 2 times.
  • Dashed lines represent the copying process. The features passed by the dotted line are superimposed with the upsampled features on the right as the input features of the convolution.
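  • one UNET level as just described can be sketched as follows, with assumed channel counts: 2×2 max pooling halves the spatial size on the way down, upsampling doubles it on the way up, and the copied encoder feature is concatenated with the upsampled feature as the input of the next convolution.

        import torch
        import torch.nn as nn

        pool = nn.MaxPool2d(2)                            # left solid arrow: 2x2 max pool
        up = nn.Upsample(scale_factor=2, mode="nearest")  # right solid arrow: upsample

        enc = torch.randn(1, 32, 64, 64)  # encoder feature at this level
        deep = pool(enc)                  # deeper levels would process this further
        dec_in = torch.cat([enc, up(deep)], dim=1)  # dashed copy + upsample -> conv input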
  • the data collected by the capacitive steering wheel can indicate the driver's grip on the steering wheel, including the way of grip, the resolution of left and right hands, etc.
  • the position and state of the driver's grip on the steering wheel can be obtained.
  • the area where the driver grasps the steering wheel can be encoded and transformed into the image coordinate system to construct a one-dimensional feature. Specifically, the place where the steering wheel is grasped is set to 1*coefficient, and the place where the steering wheel is not grasped is set to 0, and the coefficient is determined by the grip strength.
  • Fig. 8 shows a schematic diagram of capacitive steering wheel detection. If the driver grasps the left and right handles on the lower part of the steering wheel, the corresponding covered coding grids (the grids filled with black) are marked as 1*coefficient, and the remaining positions are set to 0. If the driver's hands hold the steering wheel tightly, the coefficient is set to 1; if the driver does not hold the steering wheel, the coefficient is set to 0. The coefficient of each coding grid is set in proportion to the driver's grip strength on the steering wheel.
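  • this 1*coefficient encoding can be sketched as below; the number of coding grids and the helper name are hypothetical.

        import numpy as np

        N_CELLS = 32  # hypothetical number of coding grids around the wheel rim

        def encode_grip(grasped_cells, coefficient: float) -> np.ndarray:
            """grasped_cells: indices covered by the hands; coefficient: 0 (no
            grip) to 1 (tight grip), proportional to grip strength."""
            feature = np.zeros(N_CELLS)
            feature[list(grasped_cells)] = 1.0 * coefficient
            return feature

        # Driver lightly holding the two lower handles (assumed cell indices):
        grip_feature = encode_grip(range(10, 14), coefficient=0.4)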
  • FIG. 9 shows a schematic diagram of data fusion through concat superposition.
  • after feature extraction, the corresponding sensor features are obtained, and feature fusion is then performed.
  • the fusion method can be superimposed by concat to obtain the fused feature data.
  • the concat superposition method is shown in Figure 9.
  • Superposition is performed on the third dimension of each feature.
  • for example, the feature data extracted from the data collected by the driver's camera is W×H×D1, the feature data extracted from the data collected by the TOF camera is W×H×D2, and the feature data extracted from the data collected by the capacitive steering wheel is W×H×D3; the fused data is then W×H×(D1+D2+D3).
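  • a sketch of this concat fusion, with assumed dimensions D1 = 128, D2 = 64, and D3 = 16:

        import torch

        cam_feat = torch.randn(64, 64, 128)  # driver camera:    W x H x D1
        tof_feat = torch.randn(64, 64, 64)   # TOF camera:       W x H x D2
        cap_feat = torch.randn(64, 64, 16)   # capacitive wheel: W x H x D3

        # Superpose along the third dimension: W x H x (D1 + D2 + D3).
        fused = torch.cat([cam_feat, tof_feat, cap_feat], dim=2)  # 64 x 64 x 208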
  • Fig. 10 shows a schematic diagram of sending fused data into a neural network for reasoning.
  • the network can be a temporal convolutional network (TCN), a sequential neural network, which can be trained using labeled data to obtain the best results.
  • the structure of the network is shown in Figure 10: X_0, X_1, ..., X_T represent the inputs from time 0 to time T, and Y_T represents the output at time T.
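  • a minimal causal temporal-convolution block in the spirit of such a network is sketched below; the layer widths, kernel size, and dilations are illustrative assumptions (208 matches the fused dimension assumed above).

        import torch
        import torch.nn as nn

        class CausalConvBlock(nn.Module):
            def __init__(self, c_in: int, c_out: int, k: int = 3, dilation: int = 1):
                super().__init__()
                self.pad = (k - 1) * dilation  # left-pad so Y_t only sees X_0..X_t
                self.conv = nn.Conv1d(c_in, c_out, k, dilation=dilation)
                self.act = nn.ReLU()

            def forward(self, x):                        # x: (batch, channels, T)
                x = nn.functional.pad(x, (self.pad, 0))  # causal (left-only) padding
                return self.act(self.conv(x))

        tcn = nn.Sequential(CausalConvBlock(208, 64),
                            CausalConvBlock(64, 64, dilation=2))

        x = torch.randn(1, 208, 16)  # X_0 ... X_T as channel vectors, T = 16
        y_T = tcn(x)[..., -1]        # Y_T: the output at the last time step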
  • marked data may be used to train the neural network.
  • the marked data includes data collected by multiple sensors (including the driver's camera, the TOF camera, and the capacitive steering wheel) at the same moment, and the marked result is the driver's action state at that moment.
  • the driver's action state can be divided into three categories: the driver actively holds the steering wheel to control the steering of the vehicle, the driver lightly touches the steering wheel, and the driver accidentally touches the steering wheel.
  • for example, when the driver actively holds the steering wheel to control the steering of the vehicle, the data collected by the driver's camera, the TOF camera, and the capacitive steering wheel can be combined into a data set, which can include the image data collected by the cameras and the driver's grip data collected by the capacitive steering wheel. This data set is labeled as the driver actively holding the steering wheel to control the steering of the vehicle, and the labeled data set can then be used as the marked data for that action state.
  • Fig. 11 shows a schematic diagram of the decoder decoding the result output by the neural network used for inference.
  • the decoder can use a fully connected layer neural network to decode the output of the previous sequential neural network.
  • the decoder mainly completes the classification task and recognizes the driver's action characteristics.
  • the output here can include the driver turning the steering wheel, tapping the steering wheel, and the driver touching the steering wheel by mistake.
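  • the decoder can be sketched as a small fully connected head; the hidden width and the assumption that the sequential network outputs a 64-dimensional Y_T are illustrative, and the class names mirror the three action states above.

        import torch
        import torch.nn as nn

        ACTIONS = ["turns_wheel", "light_touch", "accidental_touch"]

        decoder = nn.Sequential(
            nn.Linear(64, 32),            # 64 assumed to match the TCN output width
            nn.ReLU(),
            nn.Linear(32, len(ACTIONS)),  # logits over the three action states
        )

        y_T = torch.randn(1, 64)
        action = ACTIONS[decoder(y_T).argmax(dim=1).item()]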
  • the above descriptions of the driver's action state are based on three classification results (the driver turns the steering wheel, lightly touches the steering wheel, or touches the steering wheel by mistake), but the embodiments of the present application are not limited thereto.
  • for example, the neural network can also be trained with data collected by the multiple sensors while the driver grips the steering wheel tightly, with the results labeled accordingly.
  • the classification result output by the decoder may also include the classification result that the driver is holding the steering wheel tightly.
  • the computing platform can judge the driver's state by combining the driver's action state and the torque value of the steering wheel output by the torque sensor.
  • the status of the driver may include the driver taking over the steering wheel and the driver not taking over the steering wheel.
  • the computing platform determines that the driver takes over the vehicle, and the ADAS system surrenders control of the vehicle.
  • the computing platform can also send an instruction to a prompting device, where the instruction is used to indicate that the driver takes over the vehicle; in response to receiving the instruction, the prompting device can prompt the user to take over the vehicle through a human machine interface (HMI), sound, ambient light, or other means.
  • Fig. 12 shows a schematic diagram of prompting the user through the large screen of the central control provided by the embodiment of the present application.
  • for example, the vehicle is in NCA at autonomous driving level L2 or above.
  • when the computing platform detects that the driver turns the steering wheel and the torque value output by the torque sensor is greater than or equal to the preset threshold, the computing platform can send an instruction to the cockpit domain controller (CDC); in response to receiving the instruction, the CDC can control the central control screen to prompt the user: "It has been detected that you have actively taken over the vehicle, and automatic driving has exited. Please pay attention."
  • in some cases, the computing platform can also determine that the driver has not taken over the vehicle, and then control the ADAS system to continue to control the vehicle.
  • in other cases, the computing platform can determine that the driver touched the steering wheel by mistake, then control the ADAS system to maintain control of the vehicle, and actively control the rotation of the steering wheel to prevent accidents.
  • the computing platform may also send an instruction to the prompting device, where the instruction is used to instruct the prompting device to prompt the user to touch the steering wheel by mistake.
  • for example, the prompting device may prompt the user, through the HMI, sound, or other means, that the steering wheel has just been touched by mistake.
  • Fig. 13 shows a schematic diagram of prompting the user through the large screen of the central control provided by the embodiment of the present application.
  • for example, the vehicle is in NCA at autonomous driving level L2. If the computing platform detects that the driver touches the steering wheel by mistake at this time, the computing platform can send an instruction to the cockpit domain controller (CDC); in response to receiving the instruction, the CDC can control the vehicle voice assistant to prompt the user: "For the safety of automatic driving, please do not touch the steering wheel by mistake."
  • in some cases, the computing platform can also determine that the driver takes over the vehicle, and the computing platform can then control the ADAS system to hand over control of the vehicle.
  • in other cases, the computing platform can also determine that the driver touched the steering wheel by mistake; the computing platform can then control the ADAS system to maintain control of the vehicle, and can actively turn the steering wheel to control the vehicle and prevent accidents.
  • in the above manner, the computing platform can determine whether the driver takes over the vehicle, which helps to improve the accuracy of the vehicle's judgment of whether the driver takes over the vehicle. When the driver accidentally touches the steering wheel, the computing platform can control the ADAS system to maintain control of the vehicle and correct the misoperation of the steering wheel, thus ensuring driving safety during automatic driving.
  • the above describes the process of determining whether the driver takes over the vehicle through the data collected by the driver camera, TOF camera, capacitive steering wheel and torque sensor in conjunction with FIGS. 5 to 13 .
  • the following introduces, with reference to Figure 14, the system architecture for determining whether the driver takes over the vehicle through the data collected by the capacitive steering wheel and the torque sensor, and, with reference to Figure 15, the system architecture for determining whether the driver takes over the vehicle through the data collected by the driver's camera, the TOF camera, and the torque sensor.
  • FIG. 14 shows another system architecture diagram provided by the embodiment of the present application.
  • the data collected by the capacitive steering wheel and the torque sensor is used to determine whether the driver takes over the vehicle.
  • the capacitive steering wheel can directly output the driver's steering wheel grip posture.
  • a driver's typical operating habit is to lightly rest the hands on the lower part of the steering wheel, where they do not rotate with the steering wheel. Therefore, using the output of the capacitive steering wheel, the neural network can learn whether the driver has turned the steering wheel within a period of time, to determine whether the steering at this moment is caused by the ADAS system or by the driver turning the steering wheel.
  • the steering wheel grip posture can be obtained by extracting features from the data collected by the capacitive steering wheel and inputting the feature data into the neural network.
  • steering wheel grip gestures may include the driver tapping the steering wheel, the driver not touching the steering wheel, and the driver turning the steering wheel.
  • the computing platform can determine whether the output of the torque sensor comes from the ADAS system or the driver's operation, and then determine whether the driver takes over the vehicle.
  • FIG. 15 shows another system architecture diagram provided by the embodiment of the present application.
  • in FIG. 15, the data collected by the driver's camera, the TOF camera, and the torque sensor is used to determine whether the driver takes over the vehicle.
  • the extracted features are superimposed to form a fused multi-dimensional feature.
  • the fused feature data is sent to the neural network trained with marked data, which can output the driver's action features, for example, whether the driver is holding the steering wheel, whether there is an action to turn the steering wheel actively, etc.
  • the computing platform can determine whether the torque sensor output comes from the ADAS system or the driver's operation, and then determine whether the driver takes over the vehicle.
  • the neural network in the system architecture diagram shown in Figure 5 and Figure 15 can output the specific behavior of the driver, including whether the driver accidentally touches the steering wheel.
  • when the output indicates that the driver touched the steering wheel by mistake, the ADAS system can continue to control the vehicle, thus helping to ensure driving safety.
  • the computing platform can also determine whether the driver takes over the vehicle through the data collected by the TOF camera and the torque sensor; or through the data collected by the driver's camera and the torque sensor; or through the data collected by the driver's camera, the capacitive steering wheel, and the torque sensor; or through the data collected by the TOF camera, the capacitive steering wheel, and the torque sensor.
  • for the specific process, reference may be made to the descriptions in the foregoing embodiments; details are not repeated here.
  • FIG. 16 shows a schematic flow chart of a steering wheel takeover detection method 1600 provided by an embodiment of the present application.
  • the method can be executed by a computing platform.
  • the method 1600 includes:
  • the computing platform acquires data collected by multiple sensors.
  • the plurality of sensors includes at least two sensors of a driver camera, a time-of-flight camera, and a capacitive steering wheel.
  • before acquiring the data collected by the multiple sensors, the method further includes: the computing platform determining that the vehicle is in an automatic driving state.
  • the computing platform determining that the vehicle is in the automatic driving state includes determining that the vehicle is executing an intelligent driving function such as APA, RPA, AVP, NCA, or ICA.
  • the computing platform extracts features from the data collected by each of the multiple sensors to obtain multiple feature data.
  • FIG. 6 to FIG. 8 respectively show the processes of the encoders extracting feature data from the data collected by the driver's camera, the TOF camera, and the capacitive steering wheel.
  • each feature data in the plurality of feature data is feature data in the first coordinate system.
  • the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  • the computing platform fuses the plurality of feature data to obtain fused data.
  • FIG. 9 shows a process of performing data fusion on multiple feature data.
  • the computing platform determines whether the driver takes over the steering wheel according to the reasoning result obtained by reasoning the fused data and the steering wheel torque value detected by the torque sensor.
  • the computing platform determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver turns the steering wheel and the steering wheel torque value is greater than or equal to the preset threshold, the computing platform determines that the driver takes over the steering wheel; the method further includes: exiting the automatic driving state.
  • the method further includes: when the computing platform controls the vehicle to exit the automatic driving state, sending an instruction to the prompting device, where the instruction is used to instruct the prompting device to prompt the user to take over the steering wheel.
  • the computing platform determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver does not turn the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the computing platform determines that the driver has not taken over the steering wheel; or, when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, the computing platform determines that the driver has not taken over the steering wheel.
  • the computing platform determining whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor includes: when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the computing platform determines that the driver takes over the steering wheel; or, when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the computing platform determines that the driver did not take over the steering wheel.
  • FIG. 17 shows a schematic block diagram of a steering wheel takeover detection system 1700 provided by an embodiment of the present application.
  • the system 1700 includes a plurality of sensors 1701 and a computing platform 1702, wherein,
  • the multiple sensors 1701 are used to collect multiple data and send the multiple data to the computing platform;
  • the computing platform 1702 is used to: extract features from the data collected by each of the multiple sensors to obtain multiple feature data; fuse the multiple feature data to obtain fused data; and determine whether the driver takes over the steering wheel according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor.
  • the computing platform 1702 is also configured to determine that the vehicle is in an automatic driving state before acquiring the plurality of data.
  • the computing platform 1702 is specifically configured to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold; and control the vehicle to exit the automatic driving state.
  • the computing platform 1702 is further configured to send a first instruction to the first prompting device, where the first instruction is used to instruct the first prompting device to prompt the user to take over the steering wheel.
  • the computing platform 1702 is specifically configured to: determine that the driver has not taken over the steering wheel when the inference result is that the driver does not turn the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver has not taken over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold.
  • the computing platform 1702 is specifically configured to: determine that the driver takes over the steering wheel when the inference result is that the driver turns the steering wheel and the steering wheel torque value is less than or equal to a preset threshold; or determine that the driver does not take over the steering wheel when the inference result is that the driver touches the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold.
  • the plurality of sensors includes at least two sensors of a driver camera, a time-of-flight camera, and a capacitive steering wheel.
  • each feature data in the plurality of feature data is feature data in the first coordinate system.
  • the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  • the computing platform is located in a cloud server.
  • Fig. 18 is a schematic block diagram of an apparatus 1800 provided by an embodiment of the present application.
  • the apparatus 1800 includes an acquisition unit 1801, a feature extraction unit 1802, a data fusion unit 1803, an inference unit 1804, and a determination unit 1805, where: the acquisition unit 1801 is used to acquire the data collected by multiple sensors; the feature extraction unit 1802 is used to extract features from the data collected by each of the multiple sensors to obtain multiple feature data; the data fusion unit 1803 is used to fuse the multiple feature data to obtain fused data; the inference unit 1804 is used to perform inference on the fused data to obtain an inference result; and the determination unit 1805 is used to determine, according to the inference result and the steering wheel torque value detected by the torque sensor, whether the driver takes over the steering wheel.
  • the embodiment of the present application also provides a device, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device executes the detection method.
  • the above-mentioned processing unit may be the processor 151 shown in FIG. 1, and the above-mentioned storage unit may be the memory 152 shown in FIG. 1, where the memory 152 may be a storage unit inside a chip (for example, a register or a cache), or a storage unit in the vehicle located outside the chip (for example, a read-only memory or a random access memory).
  • the embodiment of the present application also provides a system, the system includes a sensor and a steering wheel takeover detection device, and the steering wheel takeover detection device may be the above-mentioned device 1800 .
  • the embodiment of the present application also provides a vehicle, including the above-mentioned steering wheel takeover detection system 1700 or the above-mentioned device 1800 .
  • the embodiment of the present application also provides a computer program product, the computer program product including: computer program code, when the computer program code is run on the computer, the computer is made to execute the above method.
  • the embodiment of the present application also provides a computer-readable medium, the computer-readable medium stores program codes, and when the computer program codes are run on a computer, the computer is made to execute the above method.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 151 or instructions in the form of software.
  • the methods disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor 151 .
  • the software module can be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor 151 reads the information in the memory 152, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
  • the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions described above are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

The present application provides a steering wheel takeover detection method, a steering wheel takeover detection system, and a vehicle. The method includes: the vehicle acquires data collected by multiple sensors; the vehicle extracts features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data; the vehicle fuses the multiple pieces of feature data to obtain fused data; and the vehicle determines, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel. Embodiments of the present application help improve the accuracy of steering wheel takeover detection and thereby help improve driving safety.

Description

Steering Wheel Takeover Detection Method, Steering Wheel Takeover Detection System, and Vehicle
Technical Field
The present application relates to the field of intelligent driving and, more specifically, to a steering wheel takeover detection method, a steering wheel takeover detection system, and a vehicle.
Background
In a conventional vehicle, the torque sensor is one of the key sensors for determining whether the driver is turning the steering wheel: it detects whether the driver applies torque to the steering wheel. An intelligent vehicle can largely free the driver from manual operation and autonomously perform environment perception, route planning, vehicle control, and similar tasks. While the autonomous driving system controls the vehicle, an electric motor can drive the steering wheel and the steering shaft, which causes the torque sensor to produce an output.
Current intelligent vehicles may judge whether the driver has taken over the steering wheel from the amount of steering wheel torsion detected by the torque sensor; for example, when the detected torque exceeds a set threshold, it is determined that the driver intends to take over. However, while the autonomous driving system is steering the vehicle, or on certain special road sections (for example, an uneven road surface), the power steering system may also produce a torque exceeding the set threshold, which can cause the autonomous driving system to misjudge that the driver has taken over the steering wheel. If the autonomous driving system then decides that the driver has taken over and hands over control of the vehicle, the driver is usually unable to take over in time, which may create considerable danger.
Therefore, how to improve the accuracy of steering wheel takeover detection has become an urgent problem to be solved.
Summary
The present application provides a steering wheel takeover detection method, a steering wheel takeover detection system, and a vehicle, which help improve the accuracy of steering wheel takeover detection and thereby help improve driving safety.
According to a first aspect, a steering wheel takeover detection method is provided. The method is applied to a vehicle and includes: the vehicle acquires data collected by multiple sensors; the vehicle extracts features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data; the vehicle fuses the multiple pieces of feature data to obtain fused data; and the vehicle determines, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
In the embodiments of the present application, the vehicle can combine the data from multiple sensors and the torque sensor to determine whether the driver has taken over the steering wheel, which helps improve the accuracy of steering wheel takeover detection and thereby helps improve driving safety.
With reference to the first aspect, in some implementations of the first aspect, the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
With reference to the first aspect, in some implementations of the first aspect, before the data collected by the multiple sensors is acquired, the method further includes: the vehicle determines that the vehicle is in an autonomous driving state.
In the embodiments of the present application, before acquiring the data collected by the multiple sensors, the vehicle may first determine that it is in an autonomous driving state, so that processing of the collected data is triggered only in the autonomous driving state, which helps save the vehicle's computing overhead.
In some possible implementations, the vehicle being in an autonomous driving state may include the steering wheel of the vehicle being controlled by an advanced driving assistant system (ADAS) rather than by the driver. For example, the vehicle is in auto parking assist (APA), remote parking assist (RPA), or auto valet parking (AVP); or the vehicle is in navigation cruise assistant (NCA) at level L2 or above; or the vehicle is in integrated cruise assist (ICA) at level L3.
With reference to the first aspect, in some implementations of the first aspect, the vehicle determining, based on the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor, whether the driver has taken over the steering wheel includes: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, the vehicle determines that the driver has taken over the steering wheel; the method further includes: the vehicle exits the autonomous driving state.
In the embodiments of the present application, when the inference result is that the driver is turning the steering wheel and the torque value detected by the torque sensor is greater than or equal to the preset threshold, the vehicle can determine that the driver has taken over the steering wheel, which helps improve the accuracy of takeover detection. At the same time, the vehicle can exit the autonomous driving state (handing control of the vehicle to the driver), which helps improve driving safety.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: the vehicle prompts the user to take over the steering wheel.
In the embodiments of the present application, the vehicle can prompt the user to take over the steering wheel when exiting the autonomous driving state, which raises the user's attention and thereby helps improve driving safety.
In some possible implementations, the vehicle can prompt the user to take over through a human machine interface (HMI), a sound, an ambient light, or the like.
With reference to the first aspect, in some implementations of the first aspect, the determining includes: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, the vehicle determines that the driver has not taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, the vehicle determines that the driver has not taken over the steering wheel.
In the embodiments of the present application, when the inference result is that the driver is not turning the steering wheel (or that the driver touched the steering wheel by mistake) and the steering wheel torque value is less than or equal to the preset threshold, the vehicle can determine that the driver has not taken over the steering wheel and can therefore continue to control the vehicle, which helps improve the accuracy of takeover detection and thereby helps improve driving safety.
With reference to the first aspect, in some implementations of the first aspect, the determining includes: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, the vehicle determines that the driver has taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the vehicle determines that the driver has not taken over the steering wheel.
In the embodiments of the present application, when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to the preset threshold, the vehicle can determine that the driver has taken over the steering wheel and hand control of the vehicle to the driver, which helps improve the accuracy of takeover detection and thereby helps improve driving safety.
When the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the vehicle can determine that the driver has not taken over the steering wheel and continue to control the vehicle, which helps improve the accuracy of takeover detection and thereby helps improve driving safety. Optionally, the vehicle can also remind the user not to touch the steering wheel by mistake.
In some possible implementations, the confidence of the inference result is higher than the confidence of the steering wheel torque value detected by the torque sensor.
With reference to the first aspect, in some implementations of the first aspect, each of the multiple pieces of feature data is feature data in a first coordinate system.
With reference to the first aspect, in some implementations of the first aspect, the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
In some possible implementations, the method may be executed by a computing platform in the vehicle.
According to a second aspect, a steering wheel takeover detection system is provided. The system includes multiple sensors and a computing platform, where the multiple sensors are configured to collect multiple pieces of data and send them to the computing platform; and the computing platform is configured to extract features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data, fuse the multiple pieces of feature data to obtain fused data, and determine, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is further configured to determine, before acquiring the multiple pieces of data, that the vehicle is in an autonomous driving state.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, determine that the driver has taken over the steering wheel; and control the vehicle to exit the autonomous driving state.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is further configured to send a first instruction to a first prompting apparatus, where the first instruction instructs the first prompting apparatus to prompt the user to take over the steering wheel.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is specifically configured to: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has not taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
With reference to the second aspect, in some implementations of the second aspect, the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
With reference to the second aspect, in some implementations of the second aspect, each of the multiple pieces of feature data is feature data in a first coordinate system.
With reference to the second aspect, in some implementations of the second aspect, the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
With reference to the second aspect, in some implementations of the second aspect, the computing platform is located in a cloud server.
According to a third aspect, a steering wheel takeover detection apparatus is provided. The apparatus includes: an acquisition unit, configured to acquire multiple pieces of data collected by multiple sensors; a feature extraction unit, configured to extract features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data; a data fusion unit, configured to fuse the multiple pieces of feature data to obtain fused data; and a determination unit, configured to determine, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
With reference to the third aspect, in some implementations of the third aspect, the determination unit is further configured to determine, before the acquisition unit acquires the multiple pieces of data, that the vehicle is in an autonomous driving state.
With reference to the third aspect, in some implementations of the third aspect, the determination unit is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, determine that the driver has taken over the steering wheel; the apparatus further includes a sending unit, configured to send an instruction to the autonomous driving system, where the instruction is used by the autonomous driving system to control the vehicle to exit the autonomous driving state.
With reference to the third aspect, in some implementations of the third aspect, the apparatus further includes: a sending unit, configured to send a first instruction to a first prompting unit, where the first instruction instructs the first prompting unit to prompt the user to take over the steering wheel.
With reference to the third aspect, in some implementations of the third aspect, the determination unit is specifically configured to: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has not taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
With reference to the third aspect, in some implementations of the third aspect, the determination unit is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
With reference to the third aspect, in some implementations of the third aspect, the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
With reference to the third aspect, in some implementations of the third aspect, each of the multiple pieces of feature data is feature data in a first coordinate system.
With reference to the third aspect, in some implementations of the third aspect, the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
In some possible implementations, the apparatus may be located in a computing platform of the vehicle.
According to a fourth aspect, an apparatus is provided. The apparatus includes a processing unit and a storage unit, where the storage unit is configured to store instructions and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs any one of the possible methods of the first aspect.
Optionally, the above processing unit may be a processor and the above storage unit may be a memory, where the memory may be a storage unit inside a chip (for example, a register or a cache), or a storage unit in the smart device located outside the chip (for example, a read-only memory or a random access memory).
According to a fifth aspect, a system is provided. The system includes a sensor and a steering wheel takeover detection apparatus, where the steering wheel takeover detection apparatus may be the steering wheel takeover detection apparatus according to any one of the implementations of the third aspect.
In some possible implementations, the steering wheel takeover detection apparatus is located in a cloud server.
In some possible implementations, the system further includes an apparatus for receiving instructions from the cloud server.
According to a sixth aspect, a vehicle is provided, including the steering wheel takeover detection system according to the second aspect, the apparatus according to the third aspect, or the apparatus according to the fourth aspect.
According to a seventh aspect, a computer program product is provided. The computer program product includes computer program code that, when run on a computer, causes the computer to execute the method in the first aspect.
It should be noted that the above computer program code may be stored entirely or partially on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor; the embodiments of the present application impose no specific limitation on this.
According to an eighth aspect, a computer-readable medium is provided. The computer-readable medium stores program code that, when run on a computer, causes the computer to execute the method in the first aspect.
Brief Description of Drawings
FIG. 1 is a schematic functional block diagram of a vehicle according to an embodiment of the present application.
FIG. 2 is a schematic diagram of the sensing ranges of various sensors.
FIG. 3 is a schematic block diagram of a system architecture according to an embodiment of the present application.
FIG. 4 is a schematic block diagram of another system architecture according to an embodiment of the present application.
FIG. 5 is a system architecture diagram according to an embodiment of the present application.
FIG. 6 is a schematic diagram of a deep layer aggregation (DLA) network architecture.
FIG. 7 is a schematic diagram of a UNET structure.
FIG. 8 is a schematic diagram of capacitive steering wheel detection.
FIG. 9 is a schematic diagram of data fusion through concat stacking.
FIG. 10 is a schematic diagram of feeding the fused data into a neural network for inference.
FIG. 11 is a schematic diagram of decoding the output of the neural network with a decoder.
FIG. 12 is a schematic diagram of prompting the user through the central control screen according to an embodiment of the present application.
FIG. 13 is another schematic diagram of prompting the user through the central control screen according to an embodiment of the present application.
FIG. 14 is a system architecture diagram for determining whether the driver has taken over the vehicle from the data collected by the capacitive steering wheel and the torque sensor according to an embodiment of the present application.
FIG. 15 is a system architecture diagram for determining whether the driver has taken over the vehicle from the data collected by the driver camera, the TOF camera, and the torque sensor according to an embodiment of the present application.
FIG. 16 is a schematic flowchart of a steering wheel takeover detection method according to an embodiment of the present application.
FIG. 17 is a schematic structural diagram of a steering wheel takeover detection system according to an embodiment of the present application.
FIG. 18 is a schematic block diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application are described below with reference to the accompanying drawings.
FIG. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of the present application. The vehicle 100 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 100 may obtain information about its surroundings through a perception system 120 and derive an autonomous driving strategy from an analysis of that information to achieve fully autonomous driving, or present the analysis results to the user to achieve partially autonomous driving.
The perception system 120 may include several sensors that sense information about the environment around the vehicle 100. For example, the perception system 120 may include one or more of a global positioning system 121 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, a camera device 126, and a capacitive steering wheel 127.
In the embodiments of the present application, the camera device 126 may include a driver camera and a TOF camera.
Some or all functions of the vehicle 100 are controlled by a computing platform 150. The computing platform 150 may include at least one processor 151, and the processor 151 may execute instructions 153 stored in a non-transitory computer-readable medium such as a memory 152. In some embodiments, the computing platform 150 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
The processor 151 may be any conventional processor, such as a central processing unit (CPU). Alternatively, the processor 151 may further include a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SOC), an application-specific integrated circuit (ASIC), or a combination thereof.
Optionally, one or more of the above components may be installed separately from or associated with the vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in actual application, components in the above modules may be added or deleted according to actual needs, and FIG. 1 should not be construed as limiting the embodiments of the present application.
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, or the like; the embodiments of the present application impose no particular limitation.
The vehicle 100 may include an ADAS, which uses a variety of sensors on the vehicle (including but not limited to lidar, millimeter-wave radar, camera devices, ultrasonic sensors, a global positioning system, and an inertial measurement unit) to obtain information from around the vehicle and analyzes and processes that information to implement functions such as obstacle perception, target recognition, vehicle positioning, path planning, and driver monitoring/reminding, thereby improving the safety, automation, and comfort of driving.
FIG. 2 is a schematic diagram of the sensing ranges of various sensors, which may include, for example, the lidar, millimeter-wave radar, camera devices, and ultrasonic sensors shown in FIG. 1, where millimeter-wave radar can be divided into long-range radar and medium/short-range radar. Currently, for example, the maximum sensing distance of lidar is about 150 meters; that of long-range millimeter-wave radar is about 250 meters; that of medium/short-range millimeter-wave radar is about 120 meters; that of a camera is about 200 meters; and that of ultrasonic radar is about 5 meters.
In terms of logical functions, an ADAS generally includes three main functional modules: a perception module, a decision module, and an execution module. The perception module senses the environment around the vehicle body through sensors and feeds the corresponding real-time data to the decision-layer processing center; it mainly includes on-board cameras, ultrasonic radar, millimeter-wave radar, lidar, and the like. The decision module makes corresponding decisions using computing devices and algorithms based on the information obtained by the perception module. The execution module takes corresponding actions after receiving a decision signal from the decision module, such as driving, lane changing, steering, braking, and warning.
Under different levels of autonomous driving (L0-L5), based on artificial intelligence algorithms and the information obtained by multiple sensors, the ADAS can provide driving assistance at different levels of automation. The above levels (L0-L5) are based on the grading standard of the Society of Automotive Engineers (SAE): L0 is no automation; L1 is driver assistance; L2 is partial automation; L3 is conditional automation; L4 is high automation; and L5 is full automation. At levels L1 to L3, the tasks of monitoring road conditions and responding are shared by the driver and the system, and the driver needs to take over the dynamic driving task. Levels L4 and L5 allow the driver to fully assume the role of a passenger. At present, functions that an ADAS can implement mainly include but are not limited to: adaptive cruise control, automatic emergency braking, automatic parking, blind spot monitoring, front cross-traffic alert/braking, rear cross-traffic alert/braking, forward collision warning, lane departure warning, lane keeping assist, rear collision warning, traffic sign recognition, traffic jam assist, and highway assist. It should be understood that the above functions can have specific modes at different autonomous driving levels (L0-L5); the higher the level, the more intelligent the corresponding mode. For example, automatic parking may include APA, RPA, and AVP. With APA, the driver does not need to operate the steering wheel but still needs to control the accelerator and brake in the vehicle; with RPA, the driver can remotely park the vehicle from outside it using a terminal (for example, a mobile phone); with AVP, the vehicle can complete parking without a driver. In terms of the corresponding autonomous driving levels, APA is roughly at L1, RPA at L2-L3, and AVP at L4.
It should be understood that the vehicle being in an autonomous driving state in the embodiments of the present application may mean that the steering wheel of the vehicle is controlled by the ADAS rather than by the driver. For example, the vehicle is in APA, RPA, or AVP; or the vehicle is in NCA at level L2 or above; or the vehicle is in ICA at level L3.
In the embodiments of the present application, the vehicle can determine whether the driver has taken over the vehicle from the data collected by the driver camera, the TOF camera, the capacitive steering wheel, and the torque sensor, which helps improve the accuracy of steering wheel takeover detection and thereby helps improve driving safety.
FIG. 3 is a schematic block diagram of a system architecture according to an embodiment of the present application. As shown in FIG. 3, the system may be provided in a vehicle and includes sensors and a computing platform. For example, the sensors may be one or more sensors in the perception system 120 shown in FIG. 1 (for example, the camera device 126, the capacitive steering wheel, and the torque sensor), and the computing platform may be the computing platform 150 shown in FIG. 1. The computing platform 150 may include the ADAS.
The torque sensor can feed the detected torque value into the computing platform. The camera device and the capacitive steering wheel can feed the collected data into the computing platform, so that the computing platform can output the driver's action features (for example, the driver turning the steering wheel, the driver lightly resting on the steering wheel, or the driver touching the steering wheel by mistake). The computing platform can combine the torque value detected by the torque sensor with the driver's action features to determine whether the driver has taken over the vehicle. If it determines that the driver has taken over, the ADAS can hand over control of the vehicle; if it determines that the driver has not taken over or has operated the steering wheel by mistake, the ADAS can continue to control the vehicle.
FIG. 4 is another schematic block diagram of a system architecture according to an embodiment of the present application. As shown in FIG. 4, the system includes sensors, an ADAS, and a cloud server, where the sensors and the ADAS may be located in the vehicle and the cloud server may include a steering wheel takeover detection apparatus. The vehicle can send the data collected by the torque sensor, the camera device, and the capacitive steering wheel to the cloud server over a network. The steering wheel takeover detection apparatus of the cloud server can output the driver's action features from the data collected by the camera device and the capacitive steering wheel, and can then combine those action features with the steering wheel torque value detected by the torque sensor to determine whether the driver has taken over the vehicle. The cloud server can send the result to the vehicle over the network. If the result indicates that the driver has taken over, the ADAS can hand over control of the vehicle; if it indicates that the driver has not taken over or has operated the steering wheel by mistake, the ADAS can continue to control the vehicle.
In an embodiment, the vehicle may also send the data collected by the camera device and the capacitive steering wheel to the cloud server. The steering wheel takeover detection apparatus of the cloud server can determine the driver's action features from that data and send them back to the vehicle. The vehicle can then combine the torque value detected by the torque sensor with the driver's action features to determine whether the driver has taken over the vehicle.
FIG. 5 is a system architecture diagram according to an embodiment of the present application. The computing platform acquires the data collected by multiple sensors (including the driver camera, the TOF camera, and the capacitive steering wheel). The computing platform encodes the data collected by each sensor with a corresponding encoder to obtain each sensor's feature data. The computing platform fuses the multiple pieces of feature data and feeds them into a neural network for inference to obtain the driver's action features.
The computing platform can combine the steering wheel torque value detected by the torque sensor with the driver's action features to determine whether the driver has taken over the vehicle. For example, if the driver's action feature is that the driver is turning the steering wheel and the torque value detected by the torque sensor is greater than a preset threshold, the ADAS in the computing platform can determine that the driver has taken over the vehicle and hand over control. As another example, if the driver's action feature is that the driver is lightly resting on the steering wheel and the detected torque value is less than the preset threshold, the ADAS can determine that the driver has not taken over and continue to control the vehicle. As yet another example, if the driver's action feature is that the driver touched the steering wheel by mistake and the detected torque value is less than the preset threshold, the ADAS can determine that the driver has not taken over and continue to control the vehicle.
In the embodiments of the present application, the ADAS may be located inside the computing platform. Alternatively, the ADAS may be located outside the computing platform; in that case, after determining that the driver has taken over the vehicle, the computing platform can send an instruction to the ADAS indicating that the driver has taken over, and in response to receiving the instruction, the ADAS can hand over control of the vehicle.
The data processing process shown in FIG. 5 is described in detail below.
The input in the embodiments of the present application may be data collected by cabin sensors, which may include the driver camera, the TOF camera, the capacitive steering wheel, and other sensors.
The data collected by the driver camera may be image data. Depending on the sensor, data of size W×H×3×K can be obtained, where W is the width of the captured image, H is its height, 3 denotes the three RGB channels, and K denotes K frames of data.
For the driver camera, features can be extracted from the image data. The computing platform may use a feature extraction network such as deep layer aggregation (DLA), visual geometry group (VGG), or ResNet. A feature extraction network usually includes structures such as fully connected layers, convolutional layers, and pooling layers, and can be trained with annotated data.
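As an intuition aid only, the following is a minimal PyTorch sketch of such an image encoder; the two-layer convolutional backbone and all sizes are illustrative stand-ins for the DLA, VGG, or ResNet networks named above, not the networks used in the embodiments.

    import torch
    import torch.nn as nn

    # Minimal stand-in for the image feature extractor described above; a real
    # system would use DLA, VGG, or ResNet. All layer sizes are illustrative.
    class ImageEncoder(nn.Module):
        def __init__(self, out_channels: int = 32):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # W/2 x H/2
                nn.BatchNorm2d(16),
                nn.ReLU(inplace=True),
                nn.Conv2d(16, out_channels, kernel_size=3, stride=2, padding=1),  # W/4 x H/4
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (K, 3, H, W), i.e. K frames of W x H x 3 driver-camera data
            return self.backbone(frames)  # (K, C, H/4, W/4) feature maps

    encoder = ImageEncoder()
    frames = torch.randn(4, 3, 224, 224)  # K = 4 frames
    print(encoder(frames).shape)          # torch.Size([4, 32, 56, 56])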
It should be understood that in the embodiments of the present application, the encoder can be trained with annotated data. The annotated data includes the data collected by the driver camera at a given moment, and the annotation result is the driver's action state at that moment. For example, the driver's action states may be divided into three categories: the driver actively holding the steering wheel to steer the vehicle; the driver's hand lightly resting on the steering wheel (the vehicle controls steering, but the driver can take over at any time); and the driver touching the steering wheel by mistake (for example, while drinking water or picking something up). When the driver actively holds the steering wheel to steer the vehicle, the data collected by the driver camera at that time can form a data set containing the captured image data. That data set is labeled as "driver actively holding the steering wheel to steer the vehicle", and the labeled data set then serves as annotated data for that state.
The process of extracting features from the data collected by the camera is described below with reference to the DLA network architecture shown in FIG. 6. The number in each cell indicates the downsampling factor, that is, the reduction factor relative to the original input. A dashed line indicates a 2× upsampling process that doubles the corresponding feature size. A thick arrow indicates passing features to the corresponding square box for aggregation, so that multiple features of the same dimension are fused. Each square box includes operations such as convolution and batch normalization on the feature data.
The data collected by the TOF camera may be depth data. Depending on the sensor, data of size W×H×D×K can be obtained, where W and H are the width and height of the image, D denotes the depth dimension, and K denotes K frames of data.
For the TOF camera, features can be extracted from the depth information. The space can be divided into small cells, and the number and characteristics of the pixel depths falling into each cell can be counted. The pixels are encoded into features in the BEV space (for example, the mean depth of the points in each cell replaces that cell's feature in the BEV space), and the statistics are fed into a feature extraction neural network such as a U-shaped network (UNET) to extract features from the TOF camera data. This feature extraction network can be trained with annotated data; the process of obtaining annotated data is as described in the above embodiments and is not repeated here.
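As a reading aid, the following is a minimal NumPy sketch of the gridding step just described, rasterizing a single depth frame into a BEV-style grid of per-cell mean depths; the grid size, the zero-depth convention for missing returns, and the function name are all illustrative assumptions.

    import numpy as np

    def depth_to_bev_grid(depth: np.ndarray, grid_shape=(16, 16)) -> np.ndarray:
        # Split the frame into cells and take the mean depth of the pixels
        # falling into each cell; zero-depth pixels are treated as no return.
        h, w = depth.shape
        gh, gw = grid_shape
        grid = np.zeros(grid_shape, dtype=np.float32)
        for i in range(gh):
            for j in range(gw):
                cell = depth[i * h // gh:(i + 1) * h // gh,
                             j * w // gw:(j + 1) * w // gw]
                valid = cell[cell > 0]
                grid[i, j] = valid.mean() if valid.size else 0.0
        return grid  # per-cell mean depths, fed to a UNET-style extractor

    depth_frame = np.random.rand(120, 160).astype(np.float32)  # one W x H depth frame
    print(depth_to_bev_grid(depth_frame).shape)                # (16, 16)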
FIG. 7 is a schematic diagram of the UNET structure; each block in the rasterized distance information performs convolution on the data. In the UNET feature extraction network, the solid downward arrows on the left represent 2×2 max-pooling operations that halve the feature dimensions; the upward solid arrows on the right represent upsampling processes that double the feature dimensions; and the dashed lines represent copy processes, where the copied features are stacked with the upsampled features on the right and used as the input features of the convolution.
The data collected by the capacitive steering wheel can indicate the driver's grip state on the steering wheel, including the manner of grip and left/right-hand resolution.
For the capacitive steering wheel, the position and state of the driver's grip can be obtained. The region where the driver grips the steering wheel can be encoded and converted into the image coordinate system to construct a one-dimensional feature. Specifically, a gripped location is set to 1 × coefficient and an ungripped location is set to 0, where the coefficient is determined by the grip force.
FIG. 8 is a schematic diagram of capacitive steering wheel detection. If the driver grips the left and right handles on the lower part of the steering wheel, the corresponding covered encoding cells (the cells filled in black) are marked as 1 × coefficient and the remaining positions are set to 0. If the driver's hands grip the steering wheel tightly, the coefficient is set to 1; if the driver does not hold the steering wheel, the coefficient is set to 0. The coefficient of an encoding cell is determined in proportion to the force with which the driver grips the steering wheel.
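To make the encoding concrete, the following is a hedged sketch of this one-dimensional grip feature in NumPy; the cell count, the segment indices, and the function name are hypothetical.

    import numpy as np

    def encode_grip(grip_segments, n_cells: int = 32) -> np.ndarray:
        """grip_segments: list of (start_cell, end_cell, force) tuples, force in [0, 1]."""
        feature = np.zeros(n_cells, dtype=np.float32)
        for start, end, force in grip_segments:
            feature[start:end + 1] = 1.0 * force  # gripped cells: 1 * coefficient
        return feature                            # ungripped cells stay 0

    # Both hands lightly resting on the lower left/right handles (small coefficient):
    lightly_resting = encode_grip([(20, 23, 0.3), (26, 29, 0.3)])
    # One hand firmly gripping the left side of the wheel (coefficient 1):
    firm_grip = encode_grip([(4, 9, 1.0)])
    print(lightly_resting.nonzero()[0], firm_grip.max())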
The above describes, with reference to FIG. 6 to FIG. 8, the feature extraction processes for the data collected by the driver camera, the TOF camera, and the capacitive steering wheel. The following describes, with reference to FIG. 9 to FIG. 11, the data fusion of the feature data, neural network inference, and the decoder's output of the driver's action features.
FIG. 9 is a schematic diagram of data fusion through concat stacking. After feature extraction, the corresponding sensor features are obtained for the different sensor data and then fused; the fusion can be concat stacking, which yields the fused feature data. As shown in FIG. 9, the stacking is performed on the third dimension of each feature: for example, if the feature data extracted from the driver camera data is W×H×D1, that from the TOF camera data is W×H×D2, and that from the capacitive steering wheel data is W×H×D3, the fused data can be W×H×(D1+D2+D3).
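The stacking itself is a one-line operation; a minimal sketch with illustrative channel counts follows.

    import torch

    W, H = 16, 16
    camera_feat = torch.randn(W, H, 8)  # W x H x D1 from the driver camera
    tof_feat = torch.randn(W, H, 4)     # W x H x D2 from the TOF camera
    wheel_feat = torch.randn(W, H, 2)   # W x H x D3 from the capacitive wheel

    # Stack on the third dimension (dim=2), as described for FIG. 9.
    fused = torch.cat([camera_feat, tof_feat, wheel_feat], dim=2)
    print(fused.shape)  # torch.Size([16, 16, 14]), i.e. W x H x (D1 + D2 + D3)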
FIG. 10 is a schematic diagram of feeding the fused data into the neural network used for inference. As shown in FIG. 10, the network can be a temporal convolutional network (TCN) and can be trained with annotated data to obtain the best results. The structure of the network is shown in FIG. 10: X0, X1, ..., XT represent the inputs from time 0 to time T, and YT represents the output at time T.
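As a sketch of what such a temporal network can look like, the following builds a two-layer causal convolution stack in PyTorch; the channel sizes, dilations, and depth are illustrative assumptions rather than the trained network of the embodiments.

    import torch
    import torch.nn as nn

    # Two causal (left-padded) 1-D convolution layers over the fused features
    # X_0 .. X_T; the last time step of the output plays the role of Y_T.
    class CausalConv1d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left-pad so no future leaks in
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

        def forward(self, x):                        # x: (batch, channels, T + 1)
            x = nn.functional.pad(x, (self.pad, 0))  # pad only on the left (past)
            return torch.relu(self.conv(x))

    tcn = nn.Sequential(
        CausalConv1d(14, 32, dilation=1),  # 14 = fused feature channels per step
        CausalConv1d(32, 32, dilation=2),
    )
    x = torch.randn(1, 14, 20)             # X_0 .. X_T with T = 19
    y_t = tcn(x)[:, :, -1]                 # Y_T: the output at the last time step
    print(y_t.shape)                       # torch.Size([1, 32])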
It should be understood that in the embodiments of the present application, the neural network can be trained with annotated data. The annotated data includes the data collected at the same moment by multiple sensors (including the driver camera, the TOF camera, and the capacitive steering wheel), and the annotation result is the driver's action state at that moment. For example, the driver's action states may be divided into three categories: the driver actively holding the steering wheel to steer the vehicle, the driver's hand lightly resting on the steering wheel, and the driver touching the steering wheel by mistake. When the driver actively holds the steering wheel to steer the vehicle, the data collected at that time by the driver camera, the TOF camera, and the capacitive steering wheel can form a data set, which can include the image data captured by the cameras and the grip-manner data collected by the capacitive steering wheel. That data set is labeled as "driver actively holding the steering wheel to steer the vehicle", and the labeled data set then serves as annotated data for that state.
FIG. 11 is a schematic diagram of decoding the output of the inference neural network with a decoder. The decoder can use a fully connected neural network to decode the output of the preceding temporal network. The decoder mainly performs a classification task, recognizing the driver's action features; the output here can include the driver turning the steering wheel, lightly resting on the steering wheel, and touching the steering wheel by mistake.
It should be understood that the above describes the driver's action states with the three classification results of turning the steering wheel, lightly resting on the steering wheel, and touching the steering wheel by mistake, but the embodiments of the present application are not limited thereto. For example, the neural network can also be trained with data collected by multiple sensors annotated as the driver tightly gripping the steering wheel, so that the decoder's classification results can also include the driver tightly gripping the steering wheel.
The computing platform can judge the driver's state by combining the driver's action state with the steering wheel torque value output by the torque sensor. The driver's state can include the driver having taken over the steering wheel and the driver not having taken over the steering wheel.
For example, if the driver's action feature is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to the preset threshold, the computing platform determines that the driver has taken over the vehicle, and the ADAS hands over control of the vehicle.
In an embodiment, the computing platform can also send an instruction to a prompting apparatus indicating that the driver has taken over the vehicle; in response to receiving the instruction, the prompting apparatus can prompt the user to take over the vehicle through a human machine interface (HMI), a sound, an ambient light, or the like.
FIG. 12 is a schematic diagram of prompting the user through the central control screen according to an embodiment of the present application. The vehicle is in NCA at autonomous driving level L2 or above. If the computing platform detects at this time that the driver is turning the steering wheel and the torque value output by the torque sensor is greater than or equal to the preset threshold, the computing platform can send an instruction to the cockpit domain controller (CDC); in response to receiving the instruction, the CDC can have the central control screen prompt the user: "Your active takeover of the vehicle has been detected. Autonomous driving has exited; please stay attentive."
As another example, if the driver's action feature is that the driver is lightly resting on the steering wheel (or the driver is not operating the steering wheel) and the torque value output by the torque sensor is less than the preset threshold, the computing platform can likewise determine that the driver has not taken over the vehicle and accordingly have the ADAS continue to control the vehicle.
As another example, if the driver's action feature is that the driver touched the steering wheel by mistake (for example, while bending down to pick something up or while drinking water) and the torque value output by the torque sensor is less than the preset threshold, the computing platform can determine that the driver touched the steering wheel by mistake, have the ADAS maintain control of the vehicle, and actively steer the wheel back to prevent an accident.
In an embodiment, the computing platform can also send an instruction to the prompting apparatus instructing it to inform the user of the mistaken touch. For example, the prompting apparatus can remind the user via the HMI, a sound, or the like that they just touched the steering wheel by mistake.
FIG. 13 is a schematic diagram of prompting the user through the central control screen according to an embodiment of the present application. The vehicle is in NCA at autonomous driving level L2. If the computing platform detects at this time that the driver touched the steering wheel by mistake, it can send an instruction to the cockpit domain controller CDC; in response to receiving the instruction, the CDC can have the in-vehicle voice assistant prompt the user by sound: "For the safety of autonomous driving, please do not touch the steering wheel by mistake."
As another example, if the driver's action feature is that the driver is turning the steering wheel and the torque value output by the torque sensor is less than the preset threshold, the computing platform can still determine that the driver has taken over the vehicle and have the ADAS hand over control of the vehicle.
As another example, if the driver's action feature is that the driver touched the steering wheel by mistake (for example, while bending down to pick something up or while drinking water) and the torque value output by the torque sensor is greater than the preset threshold, the computing platform can still determine that the driver touched the steering wheel by mistake, have the ADAS maintain control of the vehicle, and actively steer the wheel back to prevent an accident.
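Taken together, the branches above make the inference result the dominant signal and the torque threshold secondary. A minimal Python sketch of this decision logic follows; the action labels, the threshold value, and the function name are illustrative assumptions, not from the embodiments.

    from enum import Enum

    class Action(Enum):
        TURNING = "driver turning the steering wheel"
        LIGHT_TOUCH = "driver lightly resting on the steering wheel"
        MISTOUCH = "driver touched the steering wheel by mistake"
        NO_ACTION = "driver not operating the steering wheel"

    TORQUE_THRESHOLD = 2.0  # N*m; illustrative stand-in for the preset threshold

    def driver_took_over(action: Action, torque: float,
                         thr: float = TORQUE_THRESHOLD) -> bool:
        if action is Action.TURNING and torque >= thr:
            return True   # turning + high torque: takeover, exit autonomous driving
        if action is Action.TURNING:
            return True   # turning + low torque: still treated as a takeover
        if action is Action.MISTOUCH:
            # Mistaken touch: the ADAS keeps control (and steers the wheel back),
            # whether or not the torque reading crosses the threshold.
            return False
        return False      # light touch or no action with low torque: no takeover

    print(driver_took_over(Action.TURNING, 3.5))      # True
    print(driver_took_over(Action.MISTOUCH, 3.5))     # False
    print(driver_took_over(Action.LIGHT_TOUCH, 0.5))  # False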
In the embodiments of the present application, by combining the driver's action features output by the neural network with the torque value output by the torque sensor, the computing platform can determine whether the driver has taken over the vehicle, which helps improve the vehicle's accuracy in making that judgment. When the driver touches the steering wheel by mistake, the computing platform can have the ADAS maintain control of the vehicle and correct the mistaken steering input, thereby ensuring driving safety during autonomous driving.
The above describes, with reference to FIG. 5 to FIG. 13, the process of determining whether the driver has taken over the vehicle from the data collected by the driver camera, the TOF camera, the capacitive steering wheel, and the torque sensor. The following describes, with reference to FIG. 14, a system architecture that determines whether the driver has taken over the vehicle from the data collected by the capacitive steering wheel and the torque sensor, and, with reference to FIG. 15, a system architecture that uses the data collected by the driver camera, the TOF camera, and the torque sensor.
FIG. 14 shows another system architecture diagram according to an embodiment of the present application, in which whether the driver has taken over the vehicle is determined from the data collected by the capacitive steering wheel and the torque sensor.
The capacitive steering wheel can directly output the driver's steering wheel grip posture. For an autonomous vehicle, while the steering wheel turns on its own, the driver's habit is to rest lightly on the lower part of the steering wheel without turning along with it. Using the output of the capacitive steering wheel, a neural network can therefore learn whether the driver has made a turning action within a period of time, which is used to judge whether the current steering is being performed by the ADAS or caused by the driver turning the steering wheel.
By extracting features from the data collected by the capacitive steering wheel and feeding the feature data into the neural network, the steering wheel grip posture can be obtained; for example, the grip posture can include the driver lightly resting on the steering wheel, the driver not touching the steering wheel, and the driver turning the steering wheel. By combining the grip posture with the torque value output by the torque sensor, the computing platform can judge whether the torque sensor's output comes from the ADAS or from the driver's operation, and thus determine whether the driver has taken over the vehicle.
FIG. 15 shows another system architecture diagram according to an embodiment of the present application, in which whether the driver has taken over the vehicle is determined from the data collected by the driver camera, the TOF camera, and the torque sensor.
Single-frame or multi-frame data is obtained through the in-cabin cameras (including the driver camera and the TOF camera). The extracted features are stacked to form fused multi-dimensional features. The fused feature data is fed into a neural network trained with annotated data, which can output the driver's action features, for example, whether the driver is holding the steering wheel and whether there is an active turning action. By combining the driver's action features with the torque value output by the torque sensor, the computing platform can judge whether the torque sensor's output comes from the ADAS or from the driver's operation, and thus determine whether the driver has taken over the vehicle.
Compared with the system architecture shown in FIG. 14, the neural networks in the architectures shown in FIG. 5 and FIG. 15 can output the driver's specific behavior, including whether the driver touched the steering wheel by mistake. When a mistaken touch is detected, the ADAS can continue to control the vehicle, which helps ensure driving safety.
It should be understood that the processing of the sensor data in FIG. 14 and FIG. 15 above is as described in the above embodiments and is not repeated here.
In an embodiment, the computing platform may also determine whether the driver has taken over the vehicle from the data collected by the TOF camera and the torque sensor; or from the data collected by the driver camera and the torque sensor; or from the data collected by the driver camera, the capacitive steering wheel, and the torque sensor; or from the data collected by the TOF camera, the capacitive steering wheel, and the torque sensor. The specific processes are as described in the above embodiments and are not repeated here.
FIG. 16 is a schematic flowchart of a steering wheel takeover detection method 1600 according to an embodiment of the present application. The method may be executed by a computing platform, and the method 1600 includes:
S1601: The computing platform acquires data collected by multiple sensors.
Optionally, the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
Optionally, before the data collected by the multiple sensors is acquired, the method further includes: the computing platform determines that the vehicle is in an autonomous driving state.
Optionally, the computing platform determining that the vehicle is in an autonomous driving state includes the vehicle performing an intelligent driving function such as APA, RPA, AVP, NCA, or ICA.
S1602: The computing platform extracts features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data.
Exemplarily, FIG. 6 to FIG. 8 respectively show the processes in which encoders extract feature data from the driver camera, the TOF camera, and the capacitive steering wheel.
Optionally, each of the multiple pieces of feature data is feature data in a first coordinate system.
Optionally, the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
S1603: The computing platform fuses the multiple pieces of feature data to obtain fused data.
Exemplarily, FIG. 9 shows the process of fusing multiple pieces of feature data.
S1604: The computing platform determines, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
Optionally, this determination includes: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, the computing platform determines that the driver has taken over the steering wheel; the method further includes exiting the autonomous driving state.
Optionally, the method further includes: when controlling the vehicle to exit the autonomous driving state, the computing platform sends an instruction to a prompting apparatus, where the instruction instructs the prompting apparatus to prompt the user to take over the steering wheel.
Optionally, this determination includes: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, the computing platform determines that the driver has not taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, the computing platform determines that the driver has not taken over the steering wheel.
Optionally, this determination includes: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, the computing platform determines that the driver has taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, the computing platform determines that the driver has not taken over the steering wheel.
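Putting S1601 to S1604 together, a hypothetical end-to-end sketch of the flow is given below; every helper in it is an illustrative stub standing in for the encoders, fusion step, inference network, and decision rule described above, not an API of the embodiments.

    import random

    def read_sensor():            # stub for a cabin sensor (driver camera / TOF / capacitive wheel)
        return [random.random() for _ in range(4)]

    def extract_features(data):   # S1602 stub: per-sensor encoder (DLA / UNET / grip encoding)
        return [x * 2.0 for x in data]

    def fuse(feature_list):       # S1603 stub: concat-style fusion
        return [x for feats in feature_list for x in feats]

    def infer(fused):             # S1604 stub: temporal network + decoder as a toy classifier
        return "turning" if sum(fused) > 12.0 else "light_touch"

    def detect_takeover(torque_value: float, threshold: float = 2.0) -> bool:
        raw = [read_sensor() for _ in range(3)]     # S1601: collect data from three sensors
        feats = [extract_features(d) for d in raw]  # S1602: extract per-sensor features
        action = infer(fuse(feats))                 # S1603 + S1604: fuse, then infer
        if action == "turning" and torque_value >= threshold:
            return True   # inference and torque agree: takeover
        if action == "turning":
            return True   # takeover even though the torque reading is low
        return False      # light touch or mistaken touch: ADAS keeps control

    if detect_takeover(torque_value=3.1):
        print("Driver took over: exit autonomous driving and prompt the user")
    else:
        print("No takeover: the ADAS keeps control of the vehicle")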
FIG. 17 is a schematic block diagram of a steering wheel takeover detection system 1700 according to an embodiment of the present application. As shown in FIG. 17, the system 1700 includes multiple sensors 1701 and a computing platform 1702, where:
the multiple sensors 1701 are configured to collect multiple pieces of data and send them to the computing platform; and
the computing platform 1702 is configured to extract features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data; fuse the multiple pieces of feature data to obtain fused data; and determine, based on an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
Optionally, the computing platform 1702 is further configured to determine, before acquiring the multiple pieces of data, that the vehicle is in an autonomous driving state.
Optionally, the computing platform 1702 is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, determine that the driver has taken over the steering wheel, and control the vehicle to exit the autonomous driving state.
Optionally, the computing platform 1702 is further configured to send a first instruction to a first prompting apparatus, where the first instruction instructs the first prompting apparatus to prompt the user to take over the steering wheel.
Optionally, the computing platform 1702 is specifically configured to: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has not taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
Optionally, the computing platform 1702 is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has taken over the steering wheel; or, when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
Optionally, the multiple sensors include at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
Optionally, each of the multiple pieces of feature data is feature data in a first coordinate system.
Optionally, the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
Optionally, the computing platform is located in a cloud server.
FIG. 18 is a schematic block diagram of an apparatus 1800 according to an embodiment of the present application. The apparatus 1800 includes an acquisition unit 1801, a feature extraction unit 1802, a data fusion unit 1803, an inference unit 1804, and a determination unit 1805, where the acquisition unit 1801 is configured to acquire the data collected by multiple sensors; the feature extraction unit 1802 is configured to extract features from the data collected by each of the multiple sensors to obtain multiple pieces of feature data; the data fusion unit 1803 is configured to fuse the multiple pieces of feature data to obtain fused data; the inference unit 1804 is configured to perform inference on the fused data to obtain an inference result; and the determination unit 1805 is configured to determine, based on the inference result and a steering wheel torque value detected by a torque sensor, whether the driver has taken over the steering wheel.
An embodiment of the present application further provides an apparatus that includes a processing unit and a storage unit, where the storage unit is configured to store instructions and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the detection method.
Optionally, the above processing unit may be the processor 151 shown in FIG. 1, and the above storage unit may be the memory 152 shown in FIG. 1, where the memory 152 may be a storage unit inside a chip (for example, a register or a cache), or a storage unit in the vehicle located outside the above chip (for example, a read-only memory or a random access memory).
An embodiment of the present application further provides a system that includes sensors and a steering wheel takeover detection apparatus, where the steering wheel takeover detection apparatus may be the above apparatus 1800.
An embodiment of the present application further provides a vehicle, including the above steering wheel takeover detection system 1700 or the above apparatus 1800.
An embodiment of the present application further provides a computer program product, the computer program product including computer program code that, when run on a computer, causes the computer to execute the above method.
An embodiment of the present application further provides a computer-readable medium that stores program code which, when run on a computer, causes the computer to execute the above method.
In implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 151 or by instructions in the form of software. The methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of the hardware and software modules in the processor 151. A software module may be located in a storage medium mature in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor 151 reads the information in the memory 152 and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
It should be understood that in the embodiments of the present application, the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
In the embodiments of the present application, "first", "second", and various numeric labels are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of the present application, for example, to distinguish different pipelines, through holes, and the like.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
A person skilled in the art may clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. A steering wheel takeover detection method, wherein the method is applied to a vehicle, and the method comprises:
    acquiring data collected by a plurality of sensors;
    extracting features from the data collected by each of the plurality of sensors to obtain a plurality of pieces of feature data;
    fusing the plurality of pieces of feature data to obtain fused data; and
    determining, according to an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether a driver has taken over the steering wheel.
  2. The method according to claim 1, wherein before the acquiring of the data collected by the plurality of sensors, the method further comprises:
    determining that the vehicle is in an autonomous driving state.
  3. The method according to claim 2, wherein the determining, according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor, whether the driver has taken over the steering wheel comprises:
    when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, determining that the driver has taken over the steering wheel;
    wherein the method further comprises: exiting the autonomous driving state.
  4. The method according to claim 3, wherein the method further comprises:
    prompting a user to take over the steering wheel.
  5. The method according to claim 1 or 2, wherein the determining, according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor, whether the driver has taken over the steering wheel comprises:
    when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determining that the driver has not taken over the steering wheel; or,
    when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, determining that the driver has not taken over the steering wheel.
  6. The method according to claim 1 or 2, wherein the determining, according to the inference result obtained by performing inference on the fused data and the steering wheel torque value detected by the torque sensor, whether the driver has taken over the steering wheel comprises:
    when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determining that the driver has taken over the steering wheel; or,
    when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, determining that the driver has not taken over the steering wheel.
  7. The method according to any one of claims 1 to 6, wherein the plurality of sensors comprise at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
  8. The method according to any one of claims 1 to 7, wherein each piece of feature data in the plurality of pieces of feature data is feature data in a first coordinate system.
  9. The method according to claim 8, wherein the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  10. A steering wheel takeover detection system, wherein the system comprises a plurality of sensors and a computing platform, wherein:
    the plurality of sensors are configured to collect a plurality of pieces of data and send the plurality of pieces of data to the computing platform; and
    the computing platform is configured to extract features from the data collected by each of the plurality of sensors to obtain a plurality of pieces of feature data;
    fuse the plurality of pieces of feature data to obtain fused data; and
    determine, according to an inference result obtained by performing inference on the fused data and a steering wheel torque value detected by a torque sensor, whether a driver has taken over the steering wheel.
  11. The system according to claim 10, wherein the computing platform is further configured to determine, before acquiring the plurality of pieces of data, that the vehicle is in an autonomous driving state.
  12. The system according to claim 11, wherein the computing platform is specifically configured to: when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is greater than or equal to a preset threshold, determine that the driver has taken over the steering wheel; and
    control the vehicle to exit the autonomous driving state.
  13. The system according to claim 12, wherein the computing platform is further configured to send a first instruction to a first prompting apparatus, wherein the first instruction instructs the first prompting apparatus to prompt a user to take over the steering wheel.
  14. The system according to claim 10 or 11, wherein the computing platform is specifically configured to: when the inference result is that the driver is not turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has not taken over the steering wheel; or,
    when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is less than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
  15. The system according to claim 10 or 11, wherein the computing platform is specifically configured to:
    when the inference result is that the driver is turning the steering wheel and the steering wheel torque value is less than or equal to a preset threshold, determine that the driver has taken over the steering wheel; or,
    when the inference result is that the driver touched the steering wheel by mistake and the steering wheel torque value is greater than or equal to the preset threshold, determine that the driver has not taken over the steering wheel.
  16. The system according to any one of claims 10 to 15, wherein the plurality of sensors comprise at least two of a driver camera, a time-of-flight (TOF) camera, and a capacitive steering wheel.
  17. The system according to any one of claims 10 to 16, wherein each piece of feature data in the plurality of pieces of feature data is feature data in a first coordinate system.
  18. The system according to claim 17, wherein the first coordinate system is an image coordinate system or a bird's-eye view (BEV) coordinate system.
  19. The system according to any one of claims 10 to 18, wherein the computing platform is located in a cloud server.
  20. A steering wheel takeover detection system, comprising:
    a memory, configured to store instructions; and
    a processor, configured to read the instructions to execute the method according to any one of claims 1 to 9.
  21. A computer-readable storage medium, wherein the computer-readable medium stores program code, and when the program code is run on a computer, the computer is caused to execute the method according to any one of claims 1 to 9.
  22. A vehicle, comprising the steering wheel takeover detection system according to any one of claims 10 to 20.
PCT/CN2021/104971 2021-07-07 2021-07-07 Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle WO2023279285A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180084876.7A CN116670003A (zh) 2021-07-07 2021-07-07 Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle
PCT/CN2021/104971 WO2023279285A1 (zh) 2021-07-07 2021-07-07 Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/104971 WO2023279285A1 (zh) 2021-07-07 2021-07-07 Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle

Publications (1)

Publication Number Publication Date
WO2023279285A1 2023-01-12

Family

ID=84800117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/104971 WO2023279285A1 (zh) 2021-07-07 2021-07-07 Steering wheel takeover detection method, steering wheel takeover detection system, and vehicle

Country Status (2)

Country Link
CN (1) CN116670003A (zh)
WO (1) WO2023279285A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103204166A (zh) * 2011-11-17 2013-07-17 GM Global Technology Operations LLC System and method for closed-loop driver attention management
CN105264450A (zh) * 2013-04-05 2016-01-20 Google Inc. Systems and methods for transitioning control of an autonomous vehicle to a driver
CN107953891A (zh) * 2016-10-17 2018-04-24 Steering Solutions IP Holding Corporation Sensor fusion for autonomous driving transition control
US20180172528A1 (en) * 2016-12-15 2018-06-21 Hyundai Motor Company Apparatus and method for detecting driver's hands-off
JP2019172113A (ja) * 2018-03-29 2019-10-10 Subaru Corporation Vehicle driving support system
CN110316195A (zh) * 2018-03-29 2019-10-11 Subaru Corporation Vehicle driving assistance system
JP2020032949A (ja) * 2018-08-31 2020-03-05 Toyota Motor Corporation Automatic driving system

Also Published As

Publication number Publication date
CN116670003A (zh) 2023-08-29

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21948783; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 202180084876.7; Country of ref document: CN)
WWE Wipo information: entry into national phase (Ref document number: MX/A/2024/000367; Country of ref document: MX)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112024000246; Country of ref document: BR)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 112024000246; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20240105)