WO2023092611A1 - Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule - Google Patents


Info

Publication number
WO2023092611A1
PCT/CN2021/134178
Authority
WO
WIPO (PCT)
Prior art keywords
information
vehicle
user
state
driving
Prior art date
Application number
PCT/CN2021/134178
Other languages
English (en)
Chinese (zh)
Inventor
格拉多·罗萨诺
周游
陈晓智
张谷力
高健博
Original Assignee
深圳市大疆创新科技有限公司
上汽大众汽车有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司, 上汽大众汽车有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/134178 priority Critical patent/WO2023092611A1/fr
Priority to CN202180101681.9A priority patent/CN117836853A/zh
Publication of WO2023092611A1 publication Critical patent/WO2023092611A1/fr

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • the present application relates to the technical field of automobiles, in particular to an information broadcasting method, a traffic state prompting method, a device and a vehicle.
  • one of the purposes of the present application is to provide an information broadcasting method, a traffic state prompting method, a device, and a vehicle, so as to improve safety control of the vehicle in the human-vehicle shared driving mode.
  • an information broadcasting method which is applied to a vehicle equipped with a camera device, and the camera device faces a user seat inside the vehicle, and the method includes:
  • the specific status information includes one or more of the following: alarm information output by the vehicle; or, when the vehicle receives a user instruction instructing an execution module of the vehicle to change its working state and the user instruction is not executed, the actual working state information of the execution module;
  • the analysis information is obtained based on the working state information, and the analysis information is broadcast.
  • a method for prompting traffic status is provided; the method is applied to a vehicle equipped with a sensor facing the outside of the vehicle and a camera device facing a user seat inside the vehicle. The method includes:
  • an information broadcasting method is provided, the method is applied to a vehicle equipped with a camera device, and the camera device faces a user seat inside the vehicle, and the method includes:
  • the posture and behavior information includes one or more of the user's head posture, face information, line of sight direction, hand position, and behavior information;
  • the state index information being used to indicate whether the user is in a state of attentive driving or not;
  • the presentation parameters include audio presentation parameters and/or visual presentation parameters
  • an information broadcasting device including:
  • memory for storing processor-executable program instructions
  • a traffic state prompting device including:
  • memory for storing processor-executable program instructions
  • an information broadcasting device including:
  • memory for storing processor-executable program instructions
  • a vehicle comprising
  • the device includes a processor and a memory for storing processor-executable program instructions, wherein, when the processor invokes the executable instructions, the operations of the method according to any one of the above-mentioned first to third aspects are implemented.
  • a computer program product including a computer program, and when the computer program is executed by a processor, the steps of the method described in any one of the above first to third aspects are implemented.
  • a machine-readable storage medium stores several computer instructions, and when the computer instructions are executed, the method described in any one of the above first to third aspects is performed.
  • Fig. 1 is a flowchart of an information broadcasting method according to an embodiment of the present application.
  • Fig. 2 is a flow chart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 3 is a flowchart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 4 is a flowchart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 5 is a flowchart of a method for prompting traffic status according to an embodiment of the present application.
  • Fig. 6 is a flow chart of a method for prompting traffic status according to another embodiment of the present application.
  • Fig. 7 is a flowchart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 8 is a flowchart of a method for prompting traffic status according to another embodiment of the present application.
  • Fig. 9 is a flowchart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 10 is a flowchart of an information broadcasting method according to another embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of an information broadcasting device according to an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a traffic state prompting device according to an embodiment of the present application.
  • Fig. 13 is a schematic structural diagram of an information broadcasting device according to an embodiment of the present application.
  • Fig. 14 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
  • L2 and L3 autonomous driving both belong to the human-vehicle co-driving mode.
  • Level 2 autonomous driving is partially automated driving: the driver may take their hands and feet off the controls but must still keep their eyes on the direction of travel and concentrate on the road conditions.
  • Level 3 autonomous driving is conditionally automated driving: it can automatically control the vehicle under most road conditions, but the driver must be ready to take over the vehicle at any time.
  • Vehicle-mounted sensors include, for example, vision sensors, distance sensors, lidar, millimeter-wave radar, and the like.
  • the vehicle's automatic driving module needs to remind the user to take over the vehicle when it finds, through the collected environmental observation data, that the road conditions are complex and exceed the operational design threshold of its automatic driving level.
  • both the human and the vehicle's automatic driving module have control over the vehicle, so effectively ensuring the safe driving of the vehicle is an urgent problem.
  • the present application proposes a solution that focuses on the state of the driver inside the vehicle in the automatic driving scenario.
  • DMS Driver Monitoring System
  • Status monitoring, including driver fatigue monitoring, distraction monitoring, eye tracking, and monitoring of other dangerous behaviors such as making phone calls, eating, and chatting.
  • Face recognition including driver identification, feature recognition and emotion recognition.
  • DMS can collect the driver's image through the camera device configured in the vehicle, and then use different trained neural networks to perform behavior detection and face detection on the image respectively.
  • Behavior detection can monitor whether the driver is making phone calls, smoking and other behaviors.
  • Face detection can include head pose detection, eye tracking, eye state detection, etc. Through head posture detection, it can be judged whether the driver is chatting with the passenger in the passenger seat, turning their head to look behind, or looking down at a mobile phone. With eye tracking, it is possible to detect whether the driver's gaze is focused on the road ahead. Through eye state detection, it can be judged whether the driver's eyes are closed or unfocused, so as to judge whether the driver is asleep or fatigued.
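As an illustrative sketch (not the patent's implementation), eye-state detection from facial feature points is commonly done with an eye aspect ratio: the ratio of the eye's vertical opening to its width. The six-landmark ordering and the 0.2 threshold below are assumptions.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered outer corner,
    two upper-lid points, inner corner, two lower-lid points (as in the
    common 68-point face model). Small values indicate a closed eye."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def eyes_closed(left_eye, right_eye, threshold=0.2):
    """Average both eyes' aspect ratios; the threshold is an
    illustrative assumption, tuned per camera setup in practice."""
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < threshold
```

A fatigue monitor would typically require the ratio to stay below the threshold for several consecutive frames before judging the driver asleep, rather than reacting to a single blink.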
  • the current DMS can detect the status and behavior of the driver, and remind the driver based on the detected information.
  • the DMS only focuses on the driver, and its function is limited to reminding the driver of their status; it does not achieve better vehicle control, so the safe driving of the vehicle cannot be guaranteed.
  • control of the timing of takeover between the driver and the vehicle directly affects safety in the co-driving mode.
  • Drivers are required to concentrate on the road conditions and be ready to take over the vehicle at any time.
  • This application addresses the above problem of safe vehicle driving and provides an information broadcasting scheme that can ensure safe driving by combining the DMS with automatic driving technology. Applying the DMS to automatic driving scenarios improves safety in the human-vehicle shared driving mode.
  • the built-in automatic driving module of the vehicle makes decisions based on road condition information. These decisions can include turning, changing lanes, keeping straight, speeding up, slowing down, turning on lights, and more.
  • the user may not understand the decisions made by the automatic driving module, so that the user cannot have a good grasp of the running state of the vehicle.
  • in the human-vehicle shared driving mode, if the user cannot grasp the running status of the vehicle well, the user will be unable to take over the vehicle at the right moment, which affects the safety of the shared driving mode.
  • for this reason, this application combines the DMS to propose an information broadcasting method.
  • a vehicle equipped with a DMS usually carries a camera device, and this camera device faces a user seat inside the vehicle, such as the driver's seat, the passenger seat, or the back seats.
  • the camera device is used to collect images, and the images may include images of the user on the facing user's seat.
  • some posture and behavior information of the user can be obtained, such as expression information, head posture, line of sight direction, hand position, behavior information and so on.
  • a method for broadcasting information provided by the present application can be applied to a vehicle equipped with a DMS; the vehicle is equipped with a camera device, and the camera device faces a user seat inside the vehicle. The method includes the steps shown in Fig. 1:
  • Step 110 Determine the facial expression information of the user located in the user seat based on the image information collected by the camera device;
  • Step 120 Obtain specific state information of the vehicle
  • the specific state information includes one or more of the following: alarm information output by the vehicle; or, when the vehicle receives a user instruction for instructing the execution module of the vehicle to change the working state and the user instruction is not executed, the actual working state information of the execution module;
  • Step 130 If the expression feature in the expression information matches the preset expression feature representing doubt, obtain analysis information based on the specific state information, and broadcast the analysis information.
  • step 110 and step 120 need not be executed in the order shown; step 110 and step 120 may also be executed at the same time.
  • the camera device can face the driver's seat, and the user in the driver's seat is the driver.
  • the camera device can also be directed towards the passenger seat, the rear seat, etc., so the user in the seat is the passenger.
  • the following description takes the case where the user is the driver as an example.
  • the vehicle-specific state information may be specific state information related to decisions made by the automatic driving module. For example, if the automatic driving module makes a decision to output warning information, the specific state information of the vehicle may include the warning information output by the vehicle. As another example, if the vehicle receives a user instruction used to instruct an execution module of the vehicle to change its working state, but the automatic driving module makes a decision not to execute the user instruction, the specific state information of the vehicle may include the actual working state information of the execution module.
  • the image information collected by the camera device can be input into the trained neural network, which extracts the user's expression features from the image information and matches them against the preset expression features representing doubt; according to the output of the neural network, it can be determined whether the user has produced a puzzled expression.
  • the expression feature can be extracted in many ways.
  • the expression feature can be obtained according to the position of the facial feature points and the distance between the feature points.
  • the expression feature may include the distance between the feature points representing the left and right eyebrows, and may also include the position of the feature points representing the corners of the mouth.
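A minimal sketch of assembling such features from feature-point positions and inter-point distances follows. The landmark names and the two chosen features are assumptions for illustration; a real system would feed many such features (or the raw image) to the trained network.

```python
import math

def point_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def expression_features(landmarks):
    """landmarks: dict of named facial feature points -> (x, y) image
    coordinates. Returns two example features: the distance between the
    inner ends of the left and right eyebrows (brows draw together in a
    puzzled frown) and the average height of the mouth corners relative
    to the mouth center."""
    brow_gap = point_distance(landmarks["left_brow_inner"],
                              landmarks["right_brow_inner"])
    corner_y = (landmarks["mouth_left"][1] + landmarks["mouth_right"][1]) / 2.0
    mouth_corner_offset = corner_y - landmarks["mouth_center"][1]
    return {"brow_gap": brow_gap, "mouth_corner_offset": mouth_corner_offset}
```

In practice such raw distances would be normalized by face size so the features are invariant to how close the user sits to the camera.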
  • the above embodiments apply the DMS to the automatic driving scenario, monitoring both the running state of the vehicle and the state of the user.
  • when the user shows a puzzled expression, analysis information addressing the user's doubt can be output in a timely manner, so that the user can grasp the running status of the vehicle and avoid making wrong decisions. Based on the broadcast analysis information, users can make correct decisions in time, keep the vehicle driving safely, and improve safety in the human-vehicle shared driving mode.
  • Vehicle-specific status information may include alert messages output by the vehicle.
  • the analysis information may be the analysis information of the alarm information, which is used to analyze the cause of the alarm information.
  • the alarm information output by the vehicle may include an icon alarm displayed on the instrument panel, and the icon alarm may include presentation modes such as icon flashing, constant light, and color change.
  • the alarm information output by the vehicle may also include alarm sounds emitted by vehicle components, such as a "beep-beep" alarm sound. The user may not clearly know the meanings represented by the icons on the instrument panel and the alarm sound, or the cause of the alarm information, and it is inconvenient for the user to consult the manual while driving to understand the above alarm information.
  • the analysis information may be the meaning represented by the alarming icon on the instrument panel, or the meaning represented by the icon in the presentation mode, or the meaning represented by the alarm sound.
  • the analysis information could be "the seat belt icon is on; the passenger is not wearing the seat belt correctly".
  • the analysis information may be "the alarm sound means that the trunk door is not closed properly".
  • the analysis information of the alarm information is acquired and broadcast, which saves the user from staring at the icons on the instrument panel for a long time while the vehicle is running and allows the user to know the alarm information in time.
  • through the analysis information, the user can know the alarm information in time and handle the alarm accordingly.
  • the user can stop the vehicle in time according to the alarm sound, close the trunk door and continue driving the vehicle. Users can always grasp the running status of the vehicle, make decisions at the right moment, and improve the safety of people and vehicles in the shared driving mode.
  • the specific state information of the vehicle may also include the actual working state information of the execution module when the vehicle receives a user instruction for instructing the execution module of the vehicle to change the working state and the user instruction is not executed.
  • the analysis information may be the analysis information of the actual working status information of the execution module, and is used to analyze the cause of the actual working status of the execution module.
  • the actual working state information of the execution module is determined based on the instruction input to the execution module by the vehicle's automatic driving module, and the user instruction is different from the instruction input to the execution module by the automatic driving module.
  • for example, the instruction input by the user to the power assembly of the vehicle is different from the instruction input by the automatic driving module to the power assembly; it may also be that the driving mode set by the user for the vehicle is different from the driving mode set by the automatic driving module for the vehicle.
  • the priority of the automatic driving module's instruction is higher than that of the user's instruction, so the user's instruction is not executed, and the working status of the vehicle's execution module does not change as the user expected.
  • the user's input to the power assembly of the vehicle may include a lane change instruction.
  • the sensors mounted on the vehicle facing the outside of the vehicle include vision sensors, distance sensors, lidar, millimeter-wave radar, etc. Based on the environmental observation data collected by these sensors, it may be detected that the gap to the vehicle behind in the next lane is insufficient for the vehicle to change lanes. If the vehicle executed the lane-changing command input by the user, it would likely collide with a vehicle in the next lane. Therefore, based on the above detection results, the automatic driving module of the vehicle makes a decision not to execute the user instruction, and the instruction it inputs to the power components is to keep going straight.
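The gap check and instruction override described above can be sketched as follows. The required-gap formula, thresholds, and instruction names are illustrative assumptions, not the patent's actual decision logic.

```python
def lane_change_safe(gap_to_rear_vehicle_m, rear_vehicle_speed_mps,
                     own_speed_mps, min_gap_m=10.0, time_headway_s=1.5):
    """Return True if the gap to the vehicle behind in the target lane is
    large enough: a fixed minimum gap plus a time-headway margin scaled by
    the closing speed of the rear vehicle."""
    closing_speed = max(0.0, rear_vehicle_speed_mps - own_speed_mps)
    required_gap = min_gap_m + time_headway_s * closing_speed
    return gap_to_rear_vehicle_m >= required_gap

def decide(user_instruction, gap, rear_speed, own_speed):
    """Arbitrate between the user's lane-change instruction and the
    automatic driving module's safety check; when the instruction is
    overridden, also return the analysis text to broadcast."""
    if user_instruction == "change_lane" and not lane_change_safe(gap, rear_speed, own_speed):
        return "keep_straight", "there is a risk of collision when changing lanes"
    return user_instruction, None
```

This illustrates why the automatic driving module's instruction takes priority: the override carries its own explanation, which is what the DMS broadcasts when it later detects a puzzled expression.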
  • when the DMS detects that the user has a puzzled expression, it can broadcast the analysis information of the actual working status of the execution module, such as analysis information explaining that the vehicle has not changed lanes, e.g., "there is a risk of collision when changing lanes", to explain the cause of the execution module's actual working state.
  • for example, the user sets the driving mode of the vehicle to the automatic driving mode.
  • however, the automatic driving module makes a decision not to execute the user instruction and sets the driving mode of the vehicle to the manual driving mode.
  • when the DMS detects that the user has a puzzled expression, it can broadcast the analysis information of the actual working status of the execution module, such as analysis information explaining that the vehicle has not entered the automatic driving mode, e.g., "the road conditions are complicated, please drive in manual driving mode", to explain the cause of the execution module's actual working state.
  • the above-mentioned embodiments record that, when the instrument panel of the vehicle displays an icon alarm, when a vehicle component emits an alarm sound, or when the vehicle does not execute the user's instruction, and the user shows a puzzled expression, the analysis information of the alarm or of the unexecuted user instruction is broadcast. On the one hand, broadcasting the analysis information can eliminate the user's doubts and improve the user experience; on the other hand, the analysis information is broadcast only when the user shows doubt, so frequent broadcasting does not disturb the user.
  • the vehicle may include multiple execution modules, each with its own corresponding actual working state. In this way, when the user has a puzzled expression, it is necessary to determine, from among the multiple execution modules, which execution module's actual working state caused the user's doubt. The step of obtaining the analysis information in step 130 above may then include the steps shown in Fig. 2:
  • Step 210 Determine a target execution module among the plurality of execution modules according to the expression information
  • Step 220 Obtain analysis information according to the actual working state information of the target execution module.
  • the user's gaze direction may be determined based on the expression information, and then the target execution module may be determined from multiple execution modules based on the gaze direction.
  • Both of the above two situations may cause confusion to the user.
  • in addition to determining the target execution module from multiple execution modules according to the expression information, when the vehicle outputs multiple pieces of warning information, it is also possible to determine the target warning information from among them according to the user's expression information, obtain the analysis information of that warning information, and broadcast it.
  • the gaze direction of the user may be determined based on the expression information, and then target alert information is determined from multiple alert information based on the gaze direction.
  • the target alarm information that makes the user doubtful can be determined through the user's gaze direction. For example, when the user's line of sight falls on the instrument panel, it means that the user may be confused about the icon alarm displayed on the instrument panel, thereby determining that the target alarm information is the icon alarm.
  • the analysis information of the alarm information can be acquired and broadcast.
  • the above-mentioned step 130 for obtaining the analysis information may include steps as shown in Figure 3:
  • Step 310 Determine the user's line of sight direction based on the expression information
  • Step 320 Determine a target execution module in the internal environment of the vehicle based on the line-of-sight direction;
  • Step 330 Obtain analysis information according to the actual working status information of the target execution module.
  • based on the image information collected by the camera device, the user's expression information can be obtained.
  • based on the expression information, the three-dimensional vector information of the user's line of sight, that is, the user's gaze direction, can be obtained.
  • the target execution module can be determined in the interior environment of the vehicle. For example, if the user's line of sight intersects the instrument panel inside the vehicle, it is determined that the user is looking at an area on the instrument panel. In this way, analysis information can be obtained and broadcast according to the actual working status information of the dashboard; if there is an icon alarm on the dashboard at this moment, the analysis information of that icon alarm can be broadcast.
  • if multiple icons are displayed, the target icon can be further determined from among them based on the gaze direction, that is, by determining which icon the user's gaze falls on, and the analysis information of the target icon can then be obtained and broadcast.
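Mapping the three-dimensional line-of-sight vector to an in-cabin target can be sketched as a ray-plane intersection followed by a region lookup. The coordinate frame, the panel plane, and the icon rectangles below are assumptions for illustration.

```python
def gaze_target(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray (eye_pos + t * gaze_dir, t > 0) with the
    plane of an in-cabin surface such as the instrument panel.
    Returns the 3D hit point, or None if the gaze never reaches it."""
    denom = sum(n * d for n, d in zip(plane_normal, gaze_dir))
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the panel plane
    t = sum(n * (p - e) for n, p, e in zip(plane_normal, plane_point, eye_pos)) / denom
    if t <= 0:
        return None  # panel is behind the user
    return tuple(e + t * d for e, d in zip(eye_pos, gaze_dir))

def target_icon(hit_point, icon_regions):
    """icon_regions: dict of icon name -> ((xmin, ymin), (xmax, ymax))
    rectangles on the panel plane (using x, y of the hit point)."""
    if hit_point is None:
        return None
    x, y = hit_point[0], hit_point[1]
    for name, ((x0, y0), (x1, y1)) in icon_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

The icon name returned here is what would key the lookup of the analysis information to broadcast.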
  • broadcasting the analysis information may include: controlling a playback device in the vehicle to broadcast the analysis information.
  • the playback device can be, for example, a loudspeaker in the vehicle.
  • the analysis information of the specific state information may be pre-stored in the memory of the vehicle, or the analysis information of the specific state information may be obtained from a connected server in real time through the Internet.
  • the target execution module can be determined from a plurality of execution modules, or the target alarm information can be determined from a plurality of pieces of alarm information.
  • the user's gaze direction may be determined based on the expression information, and then the target execution module may be determined from multiple execution modules based on the gaze direction, or the target alarm information may be determined from multiple pieces of alarm information.
  • the target execution module can be determined in the internal environment of the vehicle.
  • the DMS equipped on the vehicle can also monitor the status of the user, such as the driver, including user fatigue monitoring, distraction monitoring, eye tracking, and monitoring of dangerous behaviors such as making phone calls, eating, chatting, etc.
  • in addition to broadcasting analysis information when the user shows a doubtful expression, the DMS can also be used to detect the user's status and give reminders of different levels according to that status.
  • an information broadcasting method proposed by this application may further include the steps shown in Figure 4:
  • Step 410 Determine the gesture and behavior information of the user based on the image collected by the camera device, the gesture and behavior information including one or more of the user's head posture, face information, line of sight direction, hand position, and behavior information;
  • Step 420 Determine state index information of the user based on the gesture and behavior information, the state index information being used to indicate whether the user is in a state of attentive driving or not;
  • Step 430 Obtain the prompt content information and the presentation parameters of the prompt level corresponding to the state indicator information; the presentation parameters include audio presentation parameters and/or visual presentation parameters;
  • Step 440 Based on the prompt content information and the presentation parameters of the prompt level, generate prompt information of a corresponding degree, and output the prompt information of a corresponding degree.
  • user images can be collected, and user posture and behavior information can be extracted from the user images, including the user's head posture, face information, line of sight direction, hand position, behavior information, etc.
  • based on the user's head posture, it can be judged whether the user is chatting with the passenger in the passenger seat, turning their head to look behind, or looking down at a mobile phone.
  • from the face information, the user's eye state can be obtained, so as to determine whether the user's eyes are closed or unfocused, and thus infer whether the user is asleep or fatigued.
  • based on the user's gaze direction, it can be determined whether the user's gaze is focused on the road ahead.
  • based on the position of the user's hand, it can be determined whether the user's hands remain on the steering wheel.
  • based on the user's behavior information, it can be judged whether the user is making a phone call, eating, chatting, smoking, etc.
  • based on the gesture and behavior information, the user's state index information can be determined. For example, the user's state can be scored according to the gesture and behavior information, and the state index information can be a score value representing the user's state. According to the score value, it can be determined whether the user is in a state of attentive driving or not.
  • the inattentive driving state may include a fatigue state and a distraction state.
  • when the gesture and behavior information includes more than one type of information, such as gaze direction and hand position, the score based on the gaze direction can be added to the score based on the hand position, or the scores obtained from the different types of gesture and behavior information can be weighted and summed, to obtain the user's state index information.
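The weighted summation can be sketched as follows; the feature names, score values, and weights are illustrative assumptions.

```python
def state_index(scores, weights=None):
    """scores: dict mapping a gesture/behavior feature to a score in
    [0, 1], higher meaning more attentive. Returns the weighted average
    as the user's state index; weights default to equal."""
    if weights is None:
        weights = {k: 1.0 for k in scores}
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

# e.g., gaze on the road but one hand off the wheel, with gaze weighted
# twice as heavily as hand position:
idx = state_index({"gaze_direction": 0.9, "hand_position": 0.5},
                  {"gaze_direction": 2.0, "hand_position": 1.0})  # ≈ 0.77
```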
  • the above uses a score value of the user's state to represent the user's state index information; those skilled in the art may also express the user's state index information in other ways, and this application is not limited here.
  • different state index information corresponds to presentation parameters of different prompting degrees. Taking the state index information being a score value of the user's state as an example, several score regions can be divided, each corresponding to the presentation parameters of one prompting degree. In this way, the prompt content information and the presentation parameters corresponding to the score region in which the score value falls can be acquired, and the corresponding prompt information is generated based on them.
  • the presentation parameters include audio presentation parameters, such as pitch, volume, speech rate, playback frequency, and the like. Different score regions can correspond to audio presentation parameters with different prompting levels, and the audio presentation parameters with different prompting levels can be reflected in different pitches, different volumes, different speech speeds, and different playback frequency intervals.
  • the presentation parameters may also include visual presentation parameters, such as brightness, animation type, the area occupied by the broadcast content on the interface, and the like. Different score regions can correspond to visual presentation parameters of different prompting levels, reflected in different brightness, different animation types, and different sizes of the area occupied by the broadcast content on the interface.
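A sketch of mapping score regions to prompt-level presentation parameters follows. The region boundaries and all parameter values are assumptions for illustration; the later description of first-, second-, and third-level prompts suggests this general shape.

```python
def presentation_params(score):
    """Map the state-index score (higher = more attentive) to prompt
    content and audio/visual presentation parameters for one prompt
    level; returns None when the user is attentive and no prompt is
    needed. All thresholds and values are illustrative."""
    if score >= 0.8:          # attentive: no prompt
        return None
    if score >= 0.5:          # mildly distracted: level-1 prompt
        return {"level": 1, "volume": 0.5, "interval_s": 60,
                "box_color": "black_and_white", "expression": "normal",
                "text": "Please concentrate on driving"}
    if score >= 0.3:          # distracted: level-2 prompt
        return {"level": 2, "volume": 0.7, "interval_s": 45,
                "box_color": "orange", "expression": "nervous",
                "text": "Please concentrate on driving"}
    # fatigued: level-3 prompt
    return {"level": 3, "volume": 0.9, "interval_s": 30,
            "box_color": "red", "expression": "pained",
            "text": "Please concentrate on driving", "bold": True}
```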
  • the prompt information can be played through a loudspeaker inside the vehicle.
  • the prompt information may be played at different frequencies and volumes according to the user's state index information. For example, if the user is distracted, a reminder message can be played every minute at a normal volume; if the user is fatigued, the prompt information can be played every 30 seconds at a higher volume.
  • the prompt content can also be displayed through the human-computer interaction interface inside the vehicle.
  • different colors and font sizes may be used to display the prompt content.
  • the visual presentation parameters may include animation types, which may include simulated expressions representing different degrees of severity.
  • a corresponding degree of simulated expression can be displayed on the human-computer interaction interface of the vehicle, such as a display screen inside the vehicle.
  • the simulated expressions representing different degrees of severity may include: one or more of simulated expressions representing normal emotions, simulated expressions representing nervous emotions, and simulated expressions representing painful emotions.
  • the highest level of prompt information may also include tightening the seat belt at the user's location or turning on the double flashing lights of the vehicle.
  • the vehicle can output different levels of prompts, including first-level prompts, second-level prompts, and third-level prompts.
  • A first-level prompt may, for example, display a simulated expression representing a normal emotion on the display screen inside the vehicle and show prompt information such as "Please concentrate on driving" in a black-and-white prompt box; at the same time, the prompt information can also be broadcast in a gentle tone at a low frequency.
  • The vehicle can output a second-level reminder, such as displaying a simulated expression representing nervousness on the display screen inside the vehicle and showing a message such as "Please concentrate on driving" in an orange prompt box.
  • the prompt information can also be broadcast in a tense tone and at a low frequency.
  • The vehicle can output a third-level prompt. Prompt information such as "Please concentrate on driving" is displayed in bold font, and at the same time the prompt information can be broadcast in a serious tone at a high frequency.
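The three prompt levels above can be sketched as a simple lookup; the field values mirror the examples in the text and are illustrative:

```python
# Hypothetical sketch of the first-, second-, and third-level prompts.
LEVELS = {
    1: {"expression": "normal",  "box": "black-and-white", "tone": "gentle",  "frequency": "low"},
    2: {"expression": "nervous", "box": "orange",          "tone": "tense",   "frequency": "low"},
    3: {"expression": "painful", "box": "bold",            "tone": "serious", "frequency": "high"},
}

def render_prompt(level: int) -> str:
    """Describe the output actions for a given prompt level."""
    p = LEVELS[level]
    return (f'show "{p["expression"]}" expression, display "Please concentrate '
            f'on driving" in a {p["box"]} style, broadcast in a {p["tone"]} '
            f'tone at {p["frequency"]} frequency')
```

A real system would drive the display and loudspeaker from these fields rather than return a description string.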
  • The above-mentioned embodiment records that in the automatic driving scene, in addition to broadcasting analysis information in combination with the DMS when the user shows a doubtful expression, the DMS can also be used to detect the user's state and give different levels of reminders according to that state. For progressive automatic driving scenarios, reminding the user to concentrate on driving at all times can effectively control the timing of switching control between human and vehicle, thereby improving safety in the shared driving mode.
  • a vehicle equipped with a DMS is usually equipped with a camera device, and the camera device faces user seats inside the vehicle, such as a driver's seat, a passenger seat, a rear seat, and the like.
  • the camera device is used to collect images, and the images may include images of the user on the facing user's seat.
  • some posture and behavior information of the user can be obtained, such as expression information, head posture, line of sight direction, hand position, behavior information and so on.
  • self-driving cars are generally equipped with sensors facing the outside of the vehicle, and the vehicle's self-driving module makes relevant decisions based on the environmental observation data collected by the sensors. Sensors can include vision sensors, distance sensors, lidar, millimeter-wave radar, and more.
  • A method for prompting traffic status provided in this application can be applied to a vehicle equipped with a DMS, where the vehicle is equipped with a sensor facing the outside of the vehicle and a camera device facing the user's seat inside the vehicle. The method includes the steps shown in Figure 5:
  • Step 510: Based on the road information collected by the sensor, determine that the road where the vehicle is located changes from a non-traffic state to a traffic state;
  • Step 520: Obtain the expected motion state of the vehicle corresponding to the passing state;
  • Step 530: Obtain the current motion state of the vehicle;
  • Step 540: Based on the images collected by the camera device, determine the characteristics of the driving state of the user in the user seat;
  • Step 550: If the current motion state of the vehicle does not reach the expected motion state, and the characteristics of the user's driving state match the state feature indicating inattentive driving, output prompt information indicating that the road is in a passing state.
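Steps 510 to 550 can be sketched as a single decision function; the function name and state labels are hypothetical:

```python
# Minimal sketch of the Figure 5 flow: prompt only when the road has
# become passable, the vehicle has not yet reached the expected motion
# state, and the driver is inattentive.
def maybe_prompt_passing_state(road_passable: bool,
                               expected_state: str,
                               current_state: str,
                               driver_attentive: bool) -> bool:
    """Return True if passing-state prompt information should be output."""
    # Step 510: the road must have changed to a passing state
    if not road_passable:
        return False
    # Steps 520/530: compare the expected and current motion states
    if current_state == expected_state:
        return False
    # Steps 540/550: prompt only when the driver is inattentive
    return not driver_attentive
```

For example, a parked vehicle at a light that has just turned green, with an inattentive driver, triggers the prompt; an attentive driver does not.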
  • the road information collected by the sensor may include traffic light information.
  • the sensor could be a color camera that detects the color displayed by the traffic light by taking an image of the traffic light in front of the car.
  • When the traffic light information indicates that vehicles are allowed to pass, for example when the green light is on, the road is in a passing state. In this way, based on the traffic light information, it can be determined that the road on which the vehicle is located changes from a non-traffic state to a traffic state.
  • the road information collected by the sensor may also include information on surrounding objects.
  • the information of the surrounding objects may include the distance between the surrounding objects and the vehicle, and when the distance between the surrounding objects and the vehicle is greater than a distance threshold, the road is in a passing state.
  • Surrounding objects may be vehicles ahead. When the vehicle in front drives away, the distance between it and the present vehicle gradually increases; once the distance increases sufficiently, the present vehicle can start to move forward. Therefore, when the distance between the vehicle in front and the present vehicle is greater than the distance threshold, the road is in a passing state. In this way, based on the distance between the surrounding objects and the vehicle, it can be determined that the road on which the vehicle is located changes from a non-traffic state to a traffic state.
  • the information on surrounding objects may also include whether there are obstacles within the preset range of the vehicle. For example, some parts of the road have zebra crossings but no traffic lights. When pedestrians cross the road, vehicles need to give way to pedestrians and stop behind the zebra crossing. The vehicle can detect whether there is an obstacle within the preset range through the sensor. If it does not exist, it means that the pedestrian has crossed the road, the road is in the state of traffic, and the vehicle can move forward. In this way, based on whether there is an obstacle within the preset range of the vehicle, it can be determined that the road on which the vehicle is located changes from a non-traffic state to a traffic state.
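The three passing-state criteria above (traffic-light color, distance to the leading vehicle, obstacles within a preset range) can be sketched as follows; the field names, the precedence among the criteria, and the 10-meter threshold are illustrative assumptions:

```python
from typing import Optional

# Hypothetical sketch of determining whether the road is passable from
# the road information collected by the outward-facing sensor.
def road_is_passable(light: Optional[str],
                     front_distance_m: Optional[float],
                     obstacle_in_range: bool,
                     distance_threshold_m: float = 10.0) -> bool:
    if light is not None:
        # Traffic-light criterion: passable when the green light is on
        return light == "green"
    if front_distance_m is not None:
        # Leading-vehicle criterion: passable once it has driven far enough away
        return front_distance_m > distance_threshold_m
    # Obstacle criterion (e.g. a zebra crossing without lights):
    # passable when no obstacle remains within the preset range
    return not obstacle_in_range
```

Detecting the change from a non-passing to a passing state then amounts to this predicate flipping from False to True between consecutive sensor readings.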
  • The expected motion state of the vehicle corresponding to the passing state can then be obtained, which may be, for example, a driving state, together with the current motion state of the vehicle.
  • The gesture and behavior information of the user can be determined based on the images collected by the camera device, and may include one or more of the user's head posture, face information, gaze direction, hand position, and behavior information. Based on the user's head posture, it can be judged whether the user is chatting with the passenger in the passenger seat, turning the head to look behind, or looking down at a mobile phone. Based on the user's face information, the user's eye state can be obtained, so as to determine whether the user's eyes are closed or distracted, and thus infer whether the driver is asleep or fatigued.
  • Prompt information indicating that the road is in a passing state is then output. For example, in the above example, when the road changes from a non-traffic state to a traffic state while the vehicle remains parked, so that the expected driving state is not reached, and the user is driving inattentively, a prompt message indicating that the road is in a traffic state is output.
  • The above-mentioned embodiment records that when the road changes from non-passable to passable, including the case where the traffic light turns green or the vehicle in front has driven away, if the vehicle does not drive off in time and the user is not driving attentively, a prompt message is output to inform the user that the road is open. This prevents the user from failing to notice, due to inattention, that the road is already passable, which would keep the vehicle from driving off in time and cause road congestion.
  • a warning message is sent to surrounding objects.
  • A warning message can also be sent within a preset period of time after outputting the prompt information: if the current motion state of the vehicle has still not reached the expected motion state and the user is still not concentrating on driving, a warning message is sent to surrounding objects, such as other vehicles and pedestrians, to indicate that a special situation has occurred with the present vehicle.
  • the way of issuing the warning information includes turning on the double flashing lights of the vehicle to warn surrounding objects, such as surrounding vehicles.
  • step 550 outputs prompt information, including the steps shown in FIG. 6:
  • Step 610 Obtain the prompt content information and the presentation parameters of the prompt level corresponding to the driving state index information; the presentation parameters include audio presentation parameters and/or visual presentation parameters;
  • Step 620 Based on the prompt content information and the presentation parameters of the prompt level, generate prompt information of a corresponding degree, and output the prompt information of a corresponding degree.
  • the index information of the driving state may be a score value of the state the user is in.
  • The score value can be determined according to the user's posture and behavior information, including the user's head posture, face information, gaze direction, hand position, behavior information, and so on. If the gesture and behavior information includes more than one type of information, such as gaze direction and hand position, the score based on the gaze direction can be added to the score based on the hand position, or the scores based on the different types of gesture and behavior information can be weighted and summed to obtain the user's state indicator information.
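The weighted sum described above can be sketched as follows; the cue names and weight values are made-up examples, not values from this application:

```python
# Hypothetical sketch: combine per-cue scores (each 0-100) into a single
# state index by a weighted sum, normalised by the weights actually used
# so that missing cues do not drag the index down.
WEIGHTS = {"gaze": 0.4, "hand": 0.3, "head": 0.2, "behavior": 0.1}

def state_index(scores: dict) -> float:
    """Weighted sum of the per-cue scores that are present."""
    total_w = sum(WEIGHTS[k] for k in scores)
    return sum(WEIGHTS[k] * v for k, v in scores.items()) / total_w
```

With these weights, an off-road gaze lowers the index more than a hand briefly off the wheel, reflecting its larger weight.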
  • Those skilled in the art may also express the user's state index information in other ways; the present application is not limited in this regard.
  • Different state indicator information corresponds to presentation parameters of different prompting levels. Taking the state indicator information as a score value of the user's state as an example, several score regions can be divided, each corresponding to the presentation parameters of one prompting degree. In this way, the prompt content information and the presentation parameters corresponding to the score region where the score value falls can be acquired, and corresponding prompt information is generated based on the prompt content information and the presentation parameters.
  • the presentation parameters include audio presentation parameters, such as pitch, volume, speech rate, playback frequency, and the like. Different score regions can correspond to audio presentation parameters with different prompting levels, and the audio presentation parameters with different prompting levels can be reflected in different pitches, different volumes, different speech speeds, and different playback frequency intervals.
  • The presentation parameters may also include visual presentation parameters, such as brightness, animation type, the area occupied by the broadcast content on the interface, and the like. Different score regions can correspond to visual presentation parameters with different prompting levels, which can be reflected in different brightness, different animation types, and different sizes of the area occupied by the broadcast content on the interface.
  • the prompt information can be played through a loudspeaker inside the vehicle.
  • The prompt information can be played at different frequencies and volumes according to the user's state index information. For example, if the user is distracted, a reminder message can be played every minute at a normal volume; if the user is fatigued, the prompt information can be played every 30 seconds at a higher volume.
  • the prompt content can also be displayed through the human-computer interaction interface inside the vehicle.
  • different colors and font sizes may be used to display the prompt content.
  • the visual presentation parameters may include animation types, which may include simulated expressions representing different degrees of severity.
  • a corresponding degree of simulated expression can be displayed on the human-computer interaction interface of the vehicle, such as a display screen inside the vehicle.
  • the simulated expressions representing different degrees of severity may include: one or more of simulated expressions representing normal emotions, simulated expressions representing nervous emotions, and simulated expressions representing painful emotions.
  • the highest level of prompt information may also include tightening the seat belt at the user's location or turning on the double flashing lights of the vehicle.
  • the vehicle can output different levels of prompts, including first-level prompts, second-level prompts, and third-level prompts.
  • A first-level prompt may, for example, display a simulated expression representing a normal emotion on the display screen inside the vehicle and show prompt information such as "Please concentrate on driving" in a black-and-white prompt box; at the same time, the prompt information can also be broadcast in a gentle tone at a low frequency.
  • The vehicle can output a second-level reminder, such as displaying a simulated expression representing nervousness on the display screen inside the vehicle and showing a message such as "Please concentrate on driving" in an orange prompt box.
  • the prompt information can also be broadcast in a tense tone and at a low frequency.
  • The vehicle can output a third-level prompt. Prompt information such as "Please concentrate on driving" is displayed in bold font, and at the same time the prompt information can be broadcast in a serious tone at a high frequency.
  • the above-mentioned embodiment records that when the road changes from non-traffic to traffic, if the vehicle does not drive away in time and the user is not driving attentively, different levels of reminders will be given according to the state of the user.
  • reminding the user to concentrate on driving at all times can effectively control the timing of switching between people and vehicles in automatic driving, thereby improving the safety of the human-vehicle shared driving mode.
  • it can also prevent users from not noticing that the road is already passable due to inattention, causing the vehicle to fail to drive away in time and causing road congestion.
  • both the human and the vehicle's automatic driving modules have control authority over the vehicle.
  • Controlling the timing at which the driver and the vehicle take over control directly affects safety in the shared driving mode.
  • Drivers are required to concentrate on the road conditions and be ready to take over the vehicle at any time.
  • the driver will have difficulty maintaining concentration for a long time because there is no need for manual driving.
  • The driver may enter a state of fatigue, or engage in distracting behaviors such as looking down to play with a mobile phone or chatting with passengers.
  • The present application also provides an information broadcasting method, which is applied to a vehicle equipped with a camera device, where the camera device faces the user's seat inside the vehicle. The method includes the steps shown in Figure 7:
  • Step 710: Determine the gesture and behavior information of the user based on the images collected by the camera device, where the gesture and behavior information includes one or more of the user's head posture, face information, line-of-sight direction, hand position, and behavior information;
  • Step 720: Determine state indicator information of the user based on the gesture and behavior information, where the state indicator information is used to indicate that the user is in a state of attentive or inattentive driving;
  • Step 730: Obtain the prompt content information and the presentation parameters of the prompt level corresponding to the state indicator information, where the presentation parameters include audio presentation parameters and/or visual presentation parameters;
  • Step 740: Based on the prompt content information and the presentation parameters of the prompt level, generate prompt information of a corresponding degree, and output the prompt information of the corresponding degree.
  • user images can be collected, and user posture and behavior information can be extracted from the user images, including the user's head posture, face information, line of sight direction, hand position, behavior information, etc.
  • Based on the user's head posture, it can be judged whether the user is chatting with the passenger in the passenger seat, turning the head to look behind, or looking down at a mobile phone.
  • Based on the user's face information, the user's eye state can be obtained, so as to determine whether the user's eyes are closed or distracted, and thus infer whether the driver is asleep or fatigued.
  • Based on the user's gaze direction, it may be determined whether the user's gaze is focused on the road ahead.
  • Based on the position of the user's hand, it may be determined whether the user's hands remain on the steering wheel.
  • Based on the user's behavior information, it can be judged whether the user is making a phone call, eating, chatting, smoking, and so on.
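The per-cue judgments above can be sketched as rule-based checks; the input names, the 45-degree head-yaw threshold, and the behavior categories are illustrative assumptions:

```python
# Hypothetical sketch: derive inattention flags from the gesture and
# behavior information extracted from the in-cabin camera images.
def attentiveness_flags(head_yaw_deg: float, eyes_closed: bool,
                        gaze_on_road: bool, hands_on_wheel: bool,
                        behavior: str) -> list:
    flags = []
    if abs(head_yaw_deg) > 45:          # head turned to chat or look behind
        flags.append("head_turned")
    if eyes_closed:                     # possible sleep or fatigue
        flags.append("eyes_closed")
    if not gaze_on_road:                # gaze not focused on the road ahead
        flags.append("gaze_off_road")
    if not hands_on_wheel:              # hands off the steering wheel
        flags.append("hands_off_wheel")
    if behavior in {"phone", "eating", "smoking", "chatting"}:
        flags.append("distracting_behavior")
    return flags
```

An empty flag list would correspond to attentive driving; the flags (or scores derived from them) feed the state indicator information of Step 720.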
  • Based on the gesture and behavior information, the user's state index information can be determined. For example, the user's state can be scored according to the gesture and behavior information, and the state index information can be the score representing the user's state. According to this score value, it can be determined whether the user is driving attentively or not.
  • the inattentive driving state may include a fatigue state and a distraction state.
  • If the gesture and behavior information includes more than one type of information, such as gaze direction and hand position, the score based on the gaze direction can be added to the score based on the hand position, or the scores obtained from the different types of gesture and behavior information can be weighted and summed to obtain the user's state indicator information.
  • In addition to representing the user's state index information by a score value of the user's state, those skilled in the art can also express it in other ways; the present application is not limited in this regard.
  • Different state indicator information corresponds to presentation parameters of different prompting levels. Taking the state indicator information as a score value of the user's state as an example, several score regions can be divided, each corresponding to the presentation parameters of one prompting degree. In this way, the prompt content information and the presentation parameters corresponding to the score region where the score value falls can be acquired, and corresponding prompt information is generated based on the prompt content information and the presentation parameters.
  • the presentation parameters include audio presentation parameters, such as pitch, volume, speech rate, playback frequency, and the like. Different score regions can correspond to audio presentation parameters with different prompting levels, and the audio presentation parameters with different prompting levels can be reflected in different pitches, different volumes, different speech speeds, and different playback frequency intervals.
  • The presentation parameters may also include visual presentation parameters, such as brightness, animation type, the area occupied by the broadcast content on the interface, and the like. Different score regions can correspond to visual presentation parameters with different prompting levels, which can be reflected in different brightness, different animation types, and different sizes of the area occupied by the broadcast content on the interface.
  • the prompt information can be played through a loudspeaker inside the vehicle.
  • The prompt information may be played at different frequencies and volumes according to the user's state indicator information. For example, if the user is distracted, a reminder message can be played every minute at a normal volume; if the user is fatigued, the prompt information can be played every 30 seconds at a higher volume.
  • the prompt content can also be displayed through the human-computer interaction interface inside the vehicle.
  • different colors and font sizes may be used to display the prompt content.
  • the visual presentation parameters may include animation types, which may include simulated expressions representing different degrees of severity.
  • a corresponding degree of simulated expression can be displayed on the human-computer interaction interface of the vehicle, such as a display screen inside the vehicle.
  • the simulated expressions representing different degrees of severity may include: one or more of simulated expressions representing normal emotions, simulated expressions representing nervous emotions, and simulated expressions representing painful emotions.
  • the highest level of prompt information may also include tightening the seat belt at the user's location or turning on the double flashing lights of the vehicle.
  • the vehicle can output different levels of prompts, including first-level prompts, second-level prompts, and third-level prompts.
  • A first-level prompt may, for example, display a simulated expression representing a normal emotion on the display screen inside the vehicle and show prompt information such as "Please concentrate on driving" in a black-and-white prompt box; at the same time, the prompt information can also be broadcast in a gentle tone at a low frequency.
  • The vehicle can output a second-level reminder, such as displaying a simulated expression representing nervousness on the display screen inside the vehicle and showing a message such as "Please concentrate on driving" in an orange prompt box.
  • the prompt information can also be broadcast in a tense tone and at a low frequency.
  • The vehicle can output a third-level prompt. Prompt information such as "Please concentrate on driving" is displayed in bold font, and at the same time the prompt information can be broadcast in a serious tone at a high frequency.
  • the DMS is used to detect the user's state, and different levels of reminders are made according to the user's state.
  • reminding users to concentrate on driving at all times can effectively control the timing of human-vehicle switching during automatic driving, avoiding safety accidents caused by users failing to take over the vehicle at the right time, thereby improving the safety of the human-vehicle shared driving mode.
  • the present application also provides a method for prompting traffic status.
  • the method is applied to a vehicle equipped with a 4K color camera facing outside the vehicle to capture images in front of the vehicle.
  • The 4K color camera can continuously output color images, which can be input into a convolutional neural network (CNN) to detect traffic lights in the image; the color or status of the traffic lights is detected by the CNN.
  • the 4K color camera is also used to detect the surrounding environment, including detecting the surrounding vehicles, estimating the depth information of the surrounding environment, calculating the three-dimensional position of the surrounding objects, and then calculating the motion state of the surrounding vehicles.
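The traffic-light color detection described above uses a CNN. As a self-contained stand-in for illustration only, the sketch below classifies a cropped traffic-light patch by its dominant color; the thresholds are arbitrary assumptions and a real system would use the trained network instead:

```python
# Illustrative heuristic (NOT the CNN from this application): classify
# the color of a traffic-light patch from the sums of its RGB channels.
def classify_light(pixels) -> str:
    """pixels: iterable of (r, g, b) tuples from the light's bounding box."""
    r = sum(p[0] for p in pixels)
    g = sum(p[1] for p in pixels)
    b = sum(p[2] for p in pixels)
    if r > 1.5 * g:
        return "red"
    if g > 1.5 * r:
        return "green"
    if r > b and g > b:                 # red and green both strong: yellow
        return "yellow"
    return "unknown"
```

Detecting the change from a non-passing to a passing signal then reduces to observing a red-to-green transition across consecutive frames.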
  • the vehicle is also equipped with a DMS grayscale camera facing the interior of the vehicle to capture images of the driver.
  • The DMS grayscale camera can continuously output grayscale images, which can be fed into another CNN to detect faces in the image, estimate the head pose, track the line of sight, detect behavior, and so on. Combining this information, it can further be determined whether the driver is fatigued or inattentive and whether the driver is holding the steering wheel.
  • When it is detected that the traffic light changes from red to green, or from a signal indicating no passing to a signal indicating passing, the vehicle in front has driven away, the vehicle is in a manual driving state, and the driver is not paying attention (for example, drinking water, chatting, or playing with a mobile phone), the vehicle outputs prompt information to remind the driver to drive the vehicle as soon as possible.
  • the prompt information can be output in a variety of ways, such as audio broadcast, display on the human-computer interaction interface, and icon alarm on the instrument panel.
  • The prompt information can be divided into multiple levels, for example a first-level prompt, a second-level prompt, and a third-level prompt. If it is detected that the driver is talking on the phone, drinking water, smoking, or the like, the vehicle can output a first-level reminder, such as displaying a simulated expression representing a normal emotion on the display screen inside the vehicle, showing the reminder message "Please concentrate on driving" in a black-and-white prompt box, and broadcasting the reminder information in a gentle tone at a low frequency.
  • The vehicle can output a second-level prompt, for example displaying a simulated expression representing nervousness on the display screen inside the vehicle, showing a prompt such as "Please concentrate on driving" in an orange prompt box, and broadcasting the prompt information in a tense tone at a low frequency. If it is detected that the driver is not driving, for example bowing the head to pick up a mobile phone or closing the eyes, the vehicle can output a third-level reminder: prompt information such as "Please concentrate on driving" is displayed in bold font, the prompt information is broadcast in a serious tone at a high frequency, and at the same time the seat belt at the user's seat is tightened or the vehicle's double flashing lights are turned on.
  • the above prompt information can be output as an icon alarm on the instrument panel.
  • The driver may not be able to understand the meaning of the alarm without consulting the manual, and looking at the central control screen while driving brings driving risks. Thus, as shown in FIG. 9, the icon on the instrument panel lights up according to the status information of the vehicle to give an icon alarm.
  • the inward-facing DMS grayscale camera mounted on the vehicle can continuously output grayscale images, and CNN can be used to detect faces and head postures in these images, track line of sight, and conduct behavior detection.
  • Based on the three-dimensional position and posture relationship between the driver and the DMS grayscale camera, the three-dimensional vector of the driver's line of sight can be calculated and the icon area the driver is looking at can be located; the analysis information of the icon alarm the driver is looking at is then interpreted through voice broadcast.
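Locating the icon the driver is looking at can be sketched geometrically: intersect the driver's 3-D gaze ray with the instrument-panel plane and look up which icon region contains the hit point. The coordinate convention (panel at the plane x = 0) and the icon regions are hypothetical:

```python
# Geometric sketch: gaze ray vs. instrument-panel plane (assumed x = 0).
# origin and direction are 3-D (x, y, z) tuples in the cabin frame;
# icon_regions maps icon names to (y_min, y_max, z_min, z_max) on the panel.
def gaze_hit(origin, direction, icon_regions):
    if direction[0] >= 0:               # gaze not pointing toward the panel
        return None
    t = -origin[0] / direction[0]       # ray parameter at the panel plane
    y = origin[1] + t * direction[1]
    z = origin[2] + t * direction[2]
    for name, (y0, y1, z0, z1) in icon_regions.items():
        if y0 <= y <= y1 and z0 <= z <= z1:
            return name                 # the icon area being looked at
    return None
```

Once the icon area is located, the corresponding alarm's analysis information can be selected for voice broadcast.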
  • the DMS can also be used to detect the dangerous behavior of the driver during the driving of the vehicle.
  • The 4K camera mounted on the vehicle facing outside can detect the surrounding environment, including detecting surrounding vehicles, estimating the depth information of the surrounding environment, calculating the three-dimensional positions of surrounding objects, and then calculating the motion states of the surrounding vehicles and of the vehicle itself.
  • The DMS grayscale camera mounted on the vehicle facing the interior can continuously output grayscale images, and a CNN can be used to detect faces and head postures in these images, track the line of sight, perform behavior detection, and so on.
  • When dangerous driving behavior is detected, a prompt message is output to remind the driver to drive safely. If the prompt information is ineffective, for example the driver is still driving dangerously after the prompt has been output for 3 seconds, the prompt level is further upgraded: the vehicle's double flashing lights are actively turned on, and the driver is alerted with higher-level prompt information, such as tightening the seat belt or increasing the broadcast volume of the prompt information.
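The escalation rule above can be sketched as follows; time is passed in explicitly to keep the sketch testable, and the action names are hypothetical:

```python
# Hypothetical sketch: if dangerous driving persists for 3 seconds after
# a prompt, raise the prompt level and trigger the stronger actions.
def escalate(prompt_started_at: float, now: float, still_dangerous: bool,
             level: int, max_level: int = 3):
    """Return the (possibly raised) prompt level and the actions to take."""
    actions = []
    if still_dangerous and now - prompt_started_at >= 3.0 and level < max_level:
        level += 1
        actions.append("hazard_lights_on")      # double flashing lights
        if level == max_level:
            actions += ["tighten_seat_belt", "raise_broadcast_volume"]
    return level, actions
```

Each call would be made on the DMS's detection cycle; once the driver returns to safe driving, `still_dangerous` goes False and no further escalation occurs.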
  • the output prompt information can be divided into multi-level prompt information.
  • the specific multi-level prompt information can refer to the above-mentioned embodiments, and the present application will not repeat them here.
  • The above-mentioned embodiment shows that in the automatic driving scene, in combination with the DMS, the driver can be reminded that the road on which the vehicle is located has changed from a non-traffic state to a traffic state, allowing the user to drive the vehicle in time and avoid road congestion.
  • When the user has doubts about the alarm information inside the vehicle, the analysis information can be broadcast for the user in time, improving the safety of automatic driving.
  • The DMS is used to detect the user's driving state at all times, and prompt information of different levels is used to warn the user, so that the user concentrates on driving and is always ready for switching between human and vehicle; this effectively controls the timing of human-vehicle switching in automatic driving and thereby improves the safety of the shared driving mode.
  • the present application also provides a schematic structural diagram of an information broadcasting device as shown in FIG. 11 .
  • the information broadcasting device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and of course may also include hardware required by other services.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to realize the information broadcasting method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of a traffic state prompting device as shown in FIG. 12 .
  • the traffic state prompting device includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and may of course also include hardware required by other services.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to realize the traffic state prompting method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of an information broadcasting device as shown in FIG. 13 .
  • the information broadcasting device includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and may of course also include hardware required by other services.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to realize the information broadcasting method described in any of the above embodiments.
  • the present application also provides a schematic structural diagram of a vehicle as shown in FIG. 14 .
  • the vehicle includes a vehicle body, a power assembly, a camera apparatus facing a user seat inside the vehicle and/or a sensor facing the exterior of the vehicle, and a device
  • the device includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and may of course also include hardware required by other services.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to realize the information broadcasting method or the traffic state prompting method described in any of the above embodiments.
  • the present application also provides a computer program product, including a computer program which, when executed by a processor, can be used to execute the information broadcasting method or the traffic state prompting method described in any of the above embodiments.
  • the present application also provides a computer storage medium storing a computer program which, when executed by a processor, can be used to implement the information broadcasting method or the traffic state prompting method described in any of the above embodiments.
  • as for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts.
  • the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application provides an information broadcasting method and apparatus, a traffic state prompting method and apparatus, and a vehicle. The information broadcasting method is applied to a vehicle equipped with a camera apparatus facing a user seat inside the vehicle. The information broadcasting method comprises: determining, on the basis of image information collected by the camera apparatus, expression information of a user in the user seat; acquiring specific state information of the vehicle, the specific state information comprising one or more of the following: alarm information output by the vehicle, or actual working-state information of an execution module when the vehicle has received a user instruction directing the execution module of the vehicle to change its working state but has not executed the user instruction; and, if an expression feature in the expression information matches a preset expression feature representing perplexity, acquiring analysis information on the basis of the specific state information and broadcasting the analysis information.
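Purely as an illustration, the decision flow summarised in the abstract might be organised as below. All names (`VehicleState`, `extract_expression`, `build_analysis`, `maybe_broadcast`) are assumptions for the sketch, and the expression classifier and analysis step stand in for whatever models the application actually employs.

```python
from dataclasses import dataclass, field

CONFUSED = "confused"  # preset expression feature representing perplexity

@dataclass
class VehicleState:
    """Hypothetical container for the 'specific state information'."""
    alarm_info: list = field(default_factory=list)            # alarms output by the vehicle
    pending_instructions: list = field(default_factory=list)  # user instructions received but not executed

def extract_expression(image):
    """Stand-in for a DMS expression classifier; returns an expression label."""
    return image.get("expression", "neutral")

def build_analysis(state):
    """Turn the specific state information into a plain-language explanation."""
    parts = [f"Alarm: {a}" for a in state.alarm_info]
    parts += [f"Instruction '{i}' was not executed" for i in state.pending_instructions]
    return "; ".join(parts) or "No abnormal state detected"

def maybe_broadcast(image, state, speaker):
    """Broadcast analysis information only when the user looks confused."""
    if extract_expression(image) == CONFUSED:
        speaker(build_analysis(state))
        return True
    return False
```

The key design point reflected here is the trigger condition: the explanation is broadcast only when the detected expression feature matches the preset "perplexity" feature, rather than on every alarm.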
PCT/CN2021/134178 2021-11-29 2021-11-29 Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule WO2023092611A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/134178 WO2023092611A1 (fr) 2021-11-29 2021-11-29 Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule
CN202180101681.9A CN117836853A (zh) 2021-11-29 2021-11-29 一种信息播报方法、通行状态提示方法、装置及车辆

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/134178 WO2023092611A1 (fr) 2021-11-29 2021-11-29 Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule

Publications (1)

Publication Number Publication Date
WO2023092611A1 true WO2023092611A1 (fr) 2023-06-01

Family

ID=86538798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134178 WO2023092611A1 (fr) 2021-11-29 2021-11-29 Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule

Country Status (2)

Country Link
CN (1) CN117836853A (fr)
WO (1) WO2023092611A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3034561A1 (fr) * 2015-03-30 2016-10-07 Peugeot Citroen Automobiles Sa Dispositif d’avertissement d’un conducteur de vehicule du niveau de son etat de somnolence et/ou du niveau de son etat de distraction au moyen d’imagette(s)
US20190135176A1 (en) * 2016-05-19 2019-05-09 Denso Corporation Vehicle-mounted warning system
WO2020055992A1 (fr) * 2018-09-11 2020-03-19 NetraDyne, Inc. Surveillance de véhicule vers l'intérieur/vers l'extérieur pour un rapport à distance et des améliorations d'avertissement dans la cabine
JP2020095502A (ja) * 2018-12-13 2020-06-18 本田技研工業株式会社 情報処理装置及びプログラム


Also Published As

Publication number Publication date
CN117836853A (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
JP7080598B2 (ja) 車両制御装置および車両制御方法
US9747812B2 (en) Saliency based awareness modeling
JP5406321B2 (ja) 車載用画像表示装置
US11685390B2 (en) Assistance method and assistance system and assistance device using assistance method that execute processing relating to a behavior model
WO2015162764A1 (fr) Dispositif d'information monté sur un véhicule et procédé de limitation de fonction pour un dispositif d'information monté sur un véhicule
CN107097793A (zh) 驾驶员辅助设备和具有该驾驶员辅助设备的车辆
US10338583B2 (en) Driving assistance device
JP6062043B2 (ja) 移動体状態通知装置、サーバ装置および移動体状態通知方法
JP2007052719A5 (fr)
JP2010217956A (ja) 情報処理装置及び方法、プログラム、並びに情報処理システム
US20180284766A1 (en) Vehicle drive assistance system
EP4140795A1 (fr) Assistant de transfert pour les transitions entre machine et conducteur
JP2018198071A (ja) 車載用画像表示装置
US11276313B2 (en) Information processing device, drive assist system, and drive assist method
WO2023092611A1 (fr) Procédé et appareil de diffusion d'informations, procédé et appareil d'indication d'état de circulation, et véhicule
JP2017076431A (ja) 車両制御装置
JP6372556B2 (ja) 車載用画像表示装置
WO2022158230A1 (fr) Dispositif de commande de présentation et programme de commande de présentation
JP2014078271A (ja) 車載用画像表示装置
JP2016064829A (ja) 車両制御装置
JP6105513B2 (ja) 車両制御装置、及び車載用画像表示装置
US20240149904A1 (en) Attention attracting system and attention attracting method
US11794768B2 (en) Safety confirmation support device, safety confirmation support method, and safety confirmation support program
Mbelekani et al. User Experience and Behavioural Adaptation Based on Repeated Usage of Vehicle Automation: Online Survey.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21965340

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180101681.9

Country of ref document: CN