WO2021258671A1 - Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium - Google Patents
- Publication number
- WO2021258671A1 (PCT/CN2020/136255)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3664—Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
Definitions
- the present disclosure relates to the field of augmented reality, and in particular to an assisted driving interaction method and device based on a vehicle-mounted digital human, and a storage medium.
- the present disclosure provides an assisted driving interaction method and device based on a vehicle-mounted digital human, and a storage medium.
- an assisted driving interaction method based on a vehicle-mounted digital human.
- the method includes: acquiring perception information of the environment outside the vehicle; and, according to the perception information, generating and displaying an animation in which a digital person displayed on a vehicle-mounted display device in the cabin presents interactive information for assisting driving.
- the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which a digital person displayed on a vehicle-mounted display device installed in the vehicle cabin presents interactive information for assisting driving includes: determining action information for assisting driving that matches the perception information; and, according to the action information, generating and displaying on the vehicle-mounted display device in the cabin an animation of the digital person performing the corresponding action.
- the generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the cabin includes: determining voice information that matches the perception information of the environment outside the vehicle; acquiring a corresponding voice according to the voice information, the voice including a timestamp; and, while playing the voice, generating and displaying the animation of the digital person performing the action at the time corresponding to the timestamp.
- the action includes a plurality of sub-actions, each sub-action matches one phoneme in the voice, and the timestamp includes a timestamp of each phoneme; generating and displaying the animation of the digital person performing the action at the time corresponding to the timestamp includes: determining, according to the timestamp of each phoneme, the execution time of the sub-action matching that phoneme; and, according to the action information, generating and displaying an animation in which the digital person performs the sub-action matching each phoneme at that phoneme's timestamp.
- the generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the cabin includes: calling, from an action model library, at least one frame of action slices of the digital person corresponding to the action information; and sequentially displaying each of the at least one frame of action slices on the display device.
- the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device installed in the vehicle cabin presents interactive information for assisting driving includes: performing first predetermined task processing on the perception information to obtain a first predetermined task processing result; and, in response to the first predetermined task processing result meeting a preset first safe driving warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for the first safe driving warning.
- the method further includes: acquiring vehicle control information; and the generating and displaying, according to the perception information of the environment outside the vehicle, the animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for assisting driving includes: generating and displaying the animation according to both the perception information and the vehicle control information.
- the generating and displaying the animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for assisting driving includes: performing second predetermined task processing according to the perception information of the environment outside the vehicle and the vehicle control information to obtain a second predetermined task processing result; and, in response to the second predetermined task processing result meeting a preset second safe driving warning condition, generating and displaying an animation in which the digital person presents interactive information for the second safe driving warning.
- the method further includes: acquiring status analysis information of the driver in the vehicle; and the generating and displaying the animation of interactive information for assisting driving includes: generating and displaying the animation according to both the perception information of the environment outside the vehicle and the status analysis information of the driver in the vehicle.
- the generating and displaying the animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for assisting driving includes: performing third predetermined task processing according to the perception information of the environment outside the vehicle and the status analysis information of the driver in the vehicle to obtain a third predetermined task processing result; and, in response to the third predetermined task processing result meeting a preset third safe driving warning condition, generating and displaying an animation in which the digital person presents interactive information for the third safe driving warning.
- the generating and displaying, according to the perception information of the environment outside the vehicle, the animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for assisting driving includes: generating and displaying the animation according to the perception information, the acquired vehicle control information, and the acquired status analysis information of the driver in the vehicle.
- the method further includes: acquiring map information covering the environment outside the vehicle; and the generating and displaying the animation of interactive information for assisting driving includes: generating first navigation information according to the perception information of the environment outside the vehicle and the map information; and, according to the first navigation information, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents navigation interactive information.
- the method further includes: acquiring map information covering the environment outside the vehicle and traffic control information; and the generating and displaying the animation of interactive information for assisting driving includes: generating second navigation information according to the perception information, the map information, and the traffic control information; and, according to the second navigation information, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents navigation interactive information.
- an interactive device for assisting driving includes: a first acquisition module configured to acquire perception information of the environment outside the vehicle; and an interaction module configured to generate and display, according to the perception information, an animation in which a digital person displayed on a vehicle-mounted display device installed in the cabin presents interactive information for assisting driving.
- a computer-readable storage medium stores a computer program, and when a processor executes the computer program, the processor is configured to execute the assisted driving interaction method based on a vehicle-mounted digital human according to any one of the embodiments of the first aspect.
- an assisted driving interaction device includes: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to call the stored executable instructions to implement the assisted driving interaction method based on a vehicle-mounted digital human according to any one of the embodiments of the first aspect.
- a vehicle includes: a camera for acquiring images inside and/or outside the vehicle; a vehicle-mounted display device for displaying a digital person and an animation in which the digital person presents interactive information for assisting driving; and the assisted driving interaction device of any one of the embodiments of the second aspect or the fourth aspect.
- a computer program product includes computer-readable code, and when the computer-readable code runs on a processor, the processor is configured to execute the assisted driving interaction method based on a vehicle-mounted digital human according to any one of the embodiments of the first aspect.
- by generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device in the cabin presents interactive information for assisting driving, the human-computer interaction better conforms to human interaction habits and the interaction process is more natural, allowing people to feel the warmth of human-computer interaction, improving driving pleasure, comfort, and a sense of companionship, and helping to reduce driving safety risks.
- FIG. 1 is a flowchart of a method for assisted driving interaction based on a vehicle-mounted digital human shown in an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of an interaction scene between on-vehicle devices according to an embodiment of the present disclosure
- FIG. 3 is a flowchart of an implementation manner of step 102 shown in an embodiment of the present disclosure
- FIG. 4 is a flowchart of an implementation manner of step 202 shown in an embodiment of the present disclosure
- FIG. 5 is a flowchart of an implementation manner of step 303 shown in an embodiment of the present disclosure.
- FIG. 6 is a flowchart of an implementation manner of step 202 shown in an embodiment of the present disclosure.
- FIG. 7 is a flowchart of an implementation manner of step 102 shown in an embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of a scene for generating and displaying an animation according to an embodiment of the present disclosure
- FIG. 9 is a flowchart of a driving assistance interaction method based on a vehicle-mounted digital human shown in an embodiment of the present disclosure.
- FIG. 10 is a flowchart of an implementation manner of step 1021 shown in an embodiment of the present disclosure.
- FIG. 11 is a flowchart of a method for assisted driving interaction based on a vehicle-mounted digital human shown in an embodiment of the present disclosure
- FIG. 12 is a flowchart of an implementation manner of step 1022 shown in an embodiment of the present disclosure.
- FIG. 13 is a schematic diagram of a scene for generating and displaying an animation according to an embodiment of the present disclosure
- FIG. 14 is a flowchart of a driving assistance interaction method based on a vehicle-mounted digital human shown in an embodiment of the present disclosure
- FIG. 15 is a flowchart of step 1023 shown in an embodiment of the present disclosure.
- FIG. 16 is a flowchart of an assisted driving interaction method based on a vehicle-mounted digital human shown in an embodiment of the present disclosure
- FIG. 17 is a flowchart of an implementation manner of step 1024 shown in an embodiment of the present disclosure.
- FIG. 18 is a block diagram of an interactive device for assisting driving according to an embodiment of the present disclosure.
- FIG. 19 is a schematic diagram of the hardware structure of a driving assistance interactive device shown in an embodiment of the present disclosure.
- FIG. 20 is a schematic diagram of the hardware structure of a vehicle according to an embodiment of the present disclosure.
- although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another.
- for example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
- the word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
- FIG. 1 shows a method for assisted driving interaction based on a vehicle-mounted digital human according to an exemplary embodiment, which includes the following steps:
- in step 101, the perception information of the environment outside the vehicle is acquired.
- the perception information of the environment outside the vehicle includes but is not limited to at least one of the following: road line detection information, road line attribute detection information, traffic light detection information, traffic sign detection information, drivable area detection information, and obstacle detection information.
- road lines include but are not limited to at least one of the following: lane lines, stop lines, turning lines, U-turn lines, solid lines, dashed lines, single lines, and double lines.
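By way of a non-limiting illustration, the perception information described above could be held in a simple container; the following Python sketch is hypothetical, and the class and field names are not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative container for the perception information of the environment
# outside the vehicle; every field name here is an assumption for the sketch.
@dataclass
class OutsideEnvironmentPerception:
    road_lines: List[str] = field(default_factory=list)            # e.g. "lane_line", "stop_line"
    road_line_attributes: List[str] = field(default_factory=list)  # e.g. "solid", "dashed"
    traffic_light: Optional[str] = None                            # e.g. "red", "green"
    traffic_signs: List[str] = field(default_factory=list)         # e.g. "speed_limit_60"
    drivable_area: Optional[object] = None                         # polygon/mask; format unspecified
    obstacles: List[dict] = field(default_factory=list)            # e.g. {"type": "pedestrian", "distance_m": 12.5}

perception = OutsideEnvironmentPerception(traffic_light="red",
                                          traffic_signs=["speed_limit_60"])
print(perception.traffic_light)
```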
- the external environment perception information may be sent to the on-board central control device.
- the assisted driving device for the environment outside the vehicle may include, but is not limited to, an Advanced Driver Assistance System (ADAS).
- the vehicle-mounted central control device may obtain the perception information sent by the vehicle-mounted assisted driving device through an in-vehicle communication bus, such as a Controller Area Network (CAN) bus.
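As a non-limiting sketch of receiving such information over a CAN bus, the following Python snippet decodes a hypothetical ADAS status frame; the frame layout (a flags byte followed by a lead-vehicle distance in centimetres) is an assumption, since real ADAS units define their own message layouts:

```python
import struct

# Assumed layout of an ADAS status frame: one flags byte and a little-endian
# 16-bit distance in centimetres. This is illustrative, not a real DBC spec.
def decode_adas_frame(data: bytes) -> dict:
    flags, distance_cm = struct.unpack("<BH", data[:3])
    return {
        "lane_departure": bool(flags & 0x01),
        "forward_collision_warning": bool(flags & 0x02),
        "lead_vehicle_distance_m": distance_cm / 100.0,
    }

# A frame reporting lane departure and a lead vehicle 23.50 m ahead:
frame = struct.pack("<BH", 0x01, 2350)
print(decode_adas_frame(frame))
```

In a real system the raw frames would arrive from a CAN interface driver; the decoding step shown here would then populate the perception information used by step 102.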
- the vehicle-mounted central control device may also acquire the perception information sent by the vehicle-mounted assisted driving device through a wireless communication mode.
- in step 102, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device installed in the cabin presents interactive information for assisting driving is generated and displayed.
- the digital person is a virtual image generated by software and can be displayed on a vehicle-mounted display device in the cabin, where the vehicle-mounted display device may include, but is not limited to, a central control display, a vehicle-mounted tablet device, and the like.
- the interactive information includes but is not limited to at least one of the following: facial expression information, action information, and voice information.
- the architecture connecting the vehicle-mounted central control device and the other vehicle-mounted devices is shown in FIG. 2.
- the vehicle-mounted central control device can communicate with the vehicle's assisted driving device, vehicle-mounted camera, vehicle-mounted display device, and other vehicle-mounted equipment (such as air conditioners, radios, and car windows); it can also be connected through the network to other devices, such as mobile phones carried by people in the vehicle.
- the vehicle-mounted central control device can also obtain at least one of the voice input and the action input of a person in the cabin through the vehicle-mounted audio collection device and the vehicle-mounted camera, and can subsequently control the digital person to perform corresponding interactive feedback.
- after the vehicle-mounted central control device obtains the perception information sent by the assisted driving device, it can, according to the perception information, generate an animation in which the digital person displayed on the vehicle-mounted display device presents interactive information for assisting driving, and the animation is displayed by the vehicle-mounted display device.
- step 102 may include:
- in step 201, action information for assisting driving that matches the perception information of the environment outside the vehicle is determined.
- when the perception information includes various kinds of information, the degree of matching between the same action information and each kind of perception information can be determined respectively, and the overall degree of matching between the action information and the perception information can then be determined, for example, as a weighted average of the matching degrees corresponding to the various kinds of information.
- alternatively, a mapping relationship between different perception information and the matching action information can be established in advance, and the matching action information can be determined according to the mapping relationship.
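The two matching strategies above can be sketched minimally in Python; all table entries, event names, and weights below are illustrative assumptions, not content of the disclosure:

```python
from typing import Dict, Optional

# Strategy 2 input: a pre-established mapping from perception events to
# matching action information (entries are hypothetical examples).
ACTION_MAPPING: Dict[str, str] = {
    "lane_departure": "point_at_lane",
    "red_light": "raise_hand_stop",
    "pedestrian_ahead": "wave_caution",
}

def match_by_table(perception_event: str) -> Optional[str]:
    # Look up the matching action information in the pre-built mapping.
    return ACTION_MAPPING.get(perception_event)

def weighted_matching_degree(scores: Dict[str, float],
                             weights: Dict[str, float]) -> float:
    # Strategy 1: weighted average of the matching degrees between one piece
    # of action information and each kind of perception information.
    total_weight = sum(weights.get(k, 1.0) for k in scores)
    return sum(s * weights.get(k, 1.0) for k, s in scores.items()) / total_weight

print(match_by_table("red_light"))
```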
- in step 202, an animation of the digital person performing the corresponding action is generated and displayed on the vehicle-mounted display device in the cabin according to the action information.
- when the interactive information includes action information, combining the action information matched with the perception information of the environment outside the vehicle allows the digital person to present animations of corresponding actions that assist driving, reducing the safety risk of driving; it also makes the interaction between the digital person and people more natural, making people feel the warmth of human-computer interaction.
- step 202 may include:
- in step 301, the voice information that matches the perception information of the environment outside the vehicle is determined.
- a trained deep learning neural network may be used to determine the voice information that matches the perception information of the outside environment.
- mapping relationship between different perception information of the outside environment and the matching voice information may be established in advance, and the matching voice information may be determined according to the mapping relationship.
- in step 302, a corresponding voice is acquired according to the voice information, and the voice includes a timestamp.
- the voice can be pulled from a voice database according to the voice information, and the pulled voice carries a timestamp, which is used to synchronize the time at which the digital person in the animation performs the corresponding action with the voice.
- the timbre of the pulled voice may be a preset timbre, or a timbre set according to the user's preference.
- for example, if the preset timbre is a child's timbre, the voice corresponding to the child's timbre is pulled.
- in step 303, while the voice is being played, an animation of the digital person performing the action at the time corresponding to the timestamp is generated and displayed according to the action information.
- in this way, the voice can be played and, at the same time, the digital person can perform corresponding actions to assist driving at the time corresponding to the timestamp in the voice; while reducing the safety risk of driving, the interaction between the digital person and people is richer and more natural, making people feel the warmth of human-computer interaction.
- a piece of speech often includes multiple phonemes.
- a phoneme is the smallest phonetic unit divided according to the natural attributes of speech; it is analyzed according to the pronunciation actions in a syllable, and one pronunciation action constitutes a phoneme. For example, the Chinese greeting "你好" ("hello") includes the two phonemes "你" ("ni") and "好" ("hao").
- the timestamp may include the timestamp of each phoneme.
- an action generally includes multiple sub-actions; for example, a beckoning action may include a sub-action of swinging an arm to the left and a sub-action of swinging an arm to the right. To make the displayed digital person more vivid, each sub-action can be matched with a phoneme in the voice.
- step 303 may include:
- in step 401, the execution time of the sub-action matching each phoneme is determined according to the timestamp of that phoneme.
- in step 402, an animation in which the digital person performs the sub-action matching each phoneme at that phoneme's timestamp is generated and displayed according to the action information.
- in this way, the digital person performs the sub-action matching each phoneme at that phoneme's timestamp, which reduces the safety risk of driving, makes the interaction between the digital person and people richer and more natural, and makes people feel the warmth of human-computer interaction.
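Steps 401 and 402 can be sketched as a simple scheduling routine; the phoneme names, timestamps (in seconds), and sub-action names below are illustrative assumptions:

```python
from typing import List, Tuple

# Pair each phoneme's timestamp with the sub-action it drives, in order.
# The zip pairing assumes sub-actions are pre-matched to phonemes one-to-one.
def schedule_sub_actions(phonemes: List[Tuple[str, float]],
                         sub_actions: List[str]) -> List[Tuple[float, str, str]]:
    return [(start, phoneme, action)
            for (phoneme, start), action in zip(phonemes, sub_actions)]

# Two phonemes of a greeting, each driving one sub-action of a beckoning gesture:
plan = schedule_sub_actions([("ni", 0.00), ("hao", 0.35)],
                            ["swing_arm_left", "swing_arm_right"])
print(plan)
```

An animation player would then start each sub-action when voice playback reaches the corresponding start time.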
- step 202 may include:
- in step 501, at least one frame of action slices of the digital person corresponding to the action information is called from the action model library.
- that is, the action corresponding to the action information, in the form of at least one frame of action slices of the digital person, can be called from the action model library.
- in step 502, each of the at least one frame of action slices of the digital person is sequentially displayed on the display device.
- at least one of the digital person's body movements, facial expressions, mouth movements, eye movements, etc. differs between different action slices.
- an animation of the digital person performing the corresponding action can be displayed on the vehicle-mounted display device.
- sequentially displaying action slices called from the action model library can be used to display the corresponding animations, and the action model library can be updated as needed, making the interaction between the digital person and people richer and more natural.
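Steps 501 and 502 can be sketched as follows; the library contents, frame names, and display callback are assumptions made for illustration only:

```python
import time
from typing import Callable, Dict, List

# Illustrative action model library mapping an action name to its frame slices.
ACTION_MODEL_LIBRARY: Dict[str, List[str]] = {
    "wave_caution": ["frame_arm_up", "frame_arm_left", "frame_arm_right"],
}

def play_action(action: str, display: Callable[[str], None],
                frame_interval_s: float = 0.0) -> None:
    # Call the slices for the action and hand them to the display device in order.
    for frame_slice in ACTION_MODEL_LIBRARY.get(action, []):
        display(frame_slice)              # e.g. render the slice on the screen
        if frame_interval_s:
            time.sleep(frame_interval_s)  # pace the frames of the animation

shown: List[str] = []
play_action("wave_caution", shown.append)
print(shown)
```

Updating `ACTION_MODEL_LIBRARY` here corresponds to updating the action model library as needed in the text above.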
- step 102 may include:
- in step 601, first predetermined task processing is performed according to the perception information of the environment outside the vehicle to obtain a first predetermined task processing result.
- the first predetermined task processing may be performed based on the perception information of the environment outside the vehicle, so as to obtain the corresponding processing result.
- the first predetermined task processing includes at least one of the following: lane departure detection processing, speed limit detection processing, traffic light violation detection processing, traffic sign violation detection processing, drivable area detection processing, collision detection processing, and vehicle distance detection processing.
- the lane departure detection process indicates whether the vehicle has deviated from the lane corresponding to the current driving route.
- the speed limit detection process indicates whether the current speed of the vehicle exceeds the maximum speed indicated by the traffic signs appearing on the current route.
- the traffic light violation detection processing indicates whether the vehicle has violated a traffic light indication.
- the drivable area detection processing indicates whether the vehicle is within the drivable area.
- the collision detection processing includes but is not limited to at least one of the following: forward collision detection processing, pedestrian collision detection processing, and urban forward collision detection processing.
- the forward collision detection processing is used to detect whether the distance between the vehicle and other vehicles poses a collision risk; the pedestrian collision detection processing is used to detect whether the distance between the vehicle and a pedestrian poses a collision risk; and the urban forward collision detection processing is used to indicate whether the distance between the vehicle and other vehicles poses a collision risk in a congested urban environment.
- the distance detection process is used to indicate the distance between the vehicle and other vehicles.
- step 602, in response to the first predetermined task processing result meeting the preset first safe driving warning condition, an animation of the digital person displayed on the on-board display device installed in the cabin making the interactive information of the first safe driving warning is generated and displayed.
- the first predetermined task processing result that satisfies the preset first safe driving warning condition includes but is not limited to at least one of the following: the lane departure detection processing indicates lane departure; the speed limit detection processing indicates the vehicle is speeding; the traffic light violation detection processing indicates that the vehicle has violated a traffic light indication; the traffic sign detection processing indicates that the vehicle has violated the content corresponding to a traffic sign; the drivable area detection processing indicates that the vehicle is not in the drivable area; the collision detection processing indicates that the vehicle is about to collide; the vehicle distance detection processing indicates that the distance between the vehicle and other vehicles is less than the threshold.
- accordingly, an animation of the digital person displayed on the vehicle-mounted display device making the interactive information of the first safe driving warning can be generated and displayed.
- the content includes, but is not limited to: playing the voice "Be careful, you are too close to other vehicles", the digital person making a "careful" expression, and the digital person making actions such as "shaking his head and waving his hand", as shown in Figure 8.
- in this way, the digital human can make the animation of the first safe driving warning based on the perception information of the environment outside the vehicle. While reducing the safety risk of driving, the manner of human-computer interaction better conforms to human interaction habits, the interaction process is more natural, and people feel the warmth of human-computer interaction, which enhances driving pleasure, comfort and a sense of companionship.
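Steps 601 and 602 can be sketched as a pair of functions: one that runs the first predetermined task processing on perception information, and one that maps results meeting the warning condition to warning interaction content. The field names, threshold, and priority order are assumptions for illustration only.

```python
# Illustrative sketch of steps 601/602: run first predetermined task processing
# on out-of-vehicle perception info, then emit the first safe-driving warning
# when a result meets the preset condition. Keys and threshold are assumptions.

SAFE_DISTANCE_M = 10.0  # assumed vehicle-distance warning threshold

def first_task_processing(perception):
    """Return detection results for a subset of the first predetermined tasks."""
    return {
        "lane_departure": perception.get("off_lane", False),
        "speeding": perception.get("speed") is not None
                    and perception.get("speed_limit") is not None
                    and perception["speed"] > perception["speed_limit"],
        "too_close": perception.get("front_distance", float("inf")) < SAFE_DISTANCE_M,
    }

def first_warning(results):
    """Return the interaction info for the first safe-driving warning, or None."""
    if results["too_close"]:
        return {"voice": "Be careful, you are too close to other vehicles",
                "expression": "careful", "action": "shake head and wave hand"}
    if results["speeding"]:
        return {"voice": "You are over the speed limit",
                "expression": "careful", "action": "wave hand"}
    if results["lane_departure"]:
        return {"voice": "The lane has deviated",
                "expression": "careful", "action": "wave hand"}
    return None

res = first_task_processing({"speed": 70, "speed_limit": 60, "front_distance": 8.0})
print(first_warning(res)["voice"])  # Be careful, you are too close to other vehicles
```

The returned dictionary stands in for the interaction information (voice, expression, action) that the animation-generation step would render on the on-board display.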
- the foregoing method may further include:
- step 103 vehicle control information is acquired.
- the vehicle control information includes but is not limited to at least one of the following: vehicle turn signal control information, steering wheel control information, accelerator control information, brake control information, and clutch control information.
- the driver's control information on the turn signal, steering wheel, accelerator, brake, clutch, etc. can be obtained through the on-board central control device.
- step 102 may include step 1021:
- Step 1021 according to the external environment perception information and the vehicle control information, generate and display an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the vehicle cabin.
- the external environment perception information and the acquired vehicle control information can be combined to jointly generate and display the interactive information animation of the digital human displayed on the vehicle display device for assisting driving.
- step 1021 may include:
- step 701 a second predetermined task is processed according to the external environment perception information and the vehicle control information to obtain a second predetermined task processing result.
- the second predetermined task processing includes but is not limited to at least one of the following: vehicle turn signal control matching detection processing, steering wheel control matching detection processing, throttle control matching detection processing, brake control matching detection processing, clutch control matching Detection and processing.
- the second predetermined task processing result includes whether the acquired vehicle control information matches the vehicle control information corresponding to the external environment perception information.
- the external environment perception information includes the traffic light being a red light.
- the vehicle control information that matches this external environment perception information should be brake control information. If the actual vehicle control information obtained is brake control information, the processing result of the second predetermined task is determined to be brake control matching; otherwise it is brake control mismatch.
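The matching check in step 701 can be sketched as a lookup of the expected control for the current perception event, compared against the actually acquired control. The event-to-control mapping below is an assumed example (red light → brake) and is not exhaustive.

```python
# Hedged sketch of the second predetermined task processing: does the acquired
# vehicle control information match the control expected for the perception
# information? The EXPECTED_CONTROL table is an illustrative assumption.

EXPECTED_CONTROL = {
    "red_light": "brake",
    "green_light": "accelerate",
}

def second_task_processing(perception_event, actual_control):
    """Return True when the acquired control matches the perception info."""
    expected = EXPECTED_CONTROL.get(perception_event)
    # Events with no expected control are treated as matching (no warning).
    return expected is None or expected == actual_control

print(second_task_processing("red_light", "brake"))       # True  -> brake control matches
print(second_task_processing("red_light", "accelerate"))  # False -> brake control mismatch
```

A `False` result corresponds to the mismatch case that satisfies the second safe driving warning condition in step 702.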
- step 702, in response to the second predetermined task processing result meeting the preset second safe driving warning condition, an animation of the digital person displayed on the on-board display device installed in the cabin making the interactive information of the second safe driving warning is generated and displayed.
- if the second predetermined task processing result is that the acquired vehicle control information does not match the perception information of the environment outside the vehicle, it is determined that the preset second safe driving warning condition is satisfied. Accordingly, an animation of the digital person displayed on the on-board display device installed in the cabin making the interactive information of the second safe driving warning can be generated and displayed.
- for example, when the processing result of the second predetermined task is that the brake control does not match, the generated animation content includes but is not limited to at least one of the following: playing the voice "It's time to brake", making the expression "this is not right", and making a "brake" action.
- in this way, the perception information of the environment outside the vehicle can be combined with the acquired vehicle control information to generate and display an animation of the digital human displayed on the vehicle-mounted display device installed in the vehicle cabin making the interactive information of the second safe driving warning. In addition to reducing the safety risks of driving, this makes safe driving warnings more accurate, and enhances driving pleasure, comfort and a sense of companionship.
- the foregoing method may further include:
- step 104 the status analysis information of the driver in the vehicle is obtained.
- the execution order of step 104 and step 101 is also not limited.
- the status analysis information of the driver in the vehicle includes but is not limited to at least one of the following: human body status analysis information, emotional status analysis information, fatigue status analysis information, distraction status analysis information, dangerous action analysis information, seat belt wearing analysis information, and driver-off-duty analysis information.
- step 102 may include step 1022:
- Step 1022, based on the external environment perception information and the in-vehicle driver status analysis information, generate and display an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the vehicle cabin.
- the external environment perception information and the acquired status analysis information of the driver in the vehicle can be combined to jointly generate and display the interactive information animation of the digital human displayed on the vehicle display device for assisting driving.
- step 1022 may include:
- step 801 a third predetermined task processing is performed according to the external environment perception information and the in-vehicle driver status analysis information to obtain a third predetermined task processing result.
- the third predetermined task processing includes but is not limited to at least one of the following: human body state detection processing, emotional state detection processing, fatigue state detection processing, distraction state detection processing, dangerous action detection processing, seat belt wearing detection processing, and driver-off-duty detection processing.
- step 802, in response to the third predetermined task processing result meeting the preset third safe driving warning condition, an animation of the digital person displayed on the on-board display device installed in the cabin making the interactive information of the third safe driving warning is generated and displayed.
- the status analysis information of the driver in the vehicle satisfies but is not limited to at least one of the following conditions: the human body state detection processing indicates that the sitting posture of the human body is not suitable for driving; the emotional state detection result indicates that the person in the vehicle has a negative emotion; the fatigue state detection result indicates that the person in the vehicle is in a state of fatigue; the distraction state detection result indicates that the person in the vehicle is distracted; the dangerous action detection result indicates that the person in the vehicle has made dangerous actions; the seat belt wearing detection result indicates that the driver is not wearing a seat belt; the driver-off-duty detection processing indicates that the driver is not on duty.
- for example, the first predetermined task processing is performed according to the external environment perception information, and the processing result of the first predetermined task meets the preset first safe driving warning condition. In this case, it is determined that the processing result of the third predetermined task meets the preset third safe driving warning condition. Correspondingly, an animation of the digital person displayed on the on-board display device installed in the vehicle cabin making the interactive information of the third safe driving warning can be generated and displayed.
- an animation of the interactive information of the digital person making the third safe driving warning set on the vehicle display device can be generated and displayed.
- the content of the animation includes, but is not limited to: playing the voice "The lane has deviated; is it because you are tired? Take a break and then set off", the digital person making a "don't be like this" expression, the digital person making a hand-waving action while the voice "The lane has deviated" is played, and making a rest action while the voice "Take a break and then set off" is played, for example, as shown in Figure 13.
- in this way, the perception information of the environment outside the vehicle can be combined with the acquired in-vehicle driver status analysis information to generate and display an animation of the digital person displayed on the on-board display device installed in the cabin making the interactive information of the third safe driving warning. When the driver in the vehicle is in a state that requires driving assistance, the corresponding animation is generated and displayed based on the perception information of the external environment. While reducing the safety risk of driving, this also makes the safe driving warning more accurate, avoids unnecessary driving assistance reminders, and enhances driving pleasure, comfort and a sense of companionship.
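The gating logic described above — firing the third warning only when the environment result meets the warning condition *and* the driver-state analysis indicates assistance is needed — can be sketched as below. The state keys and the warning content are illustrative assumptions.

```python
# Sketch of steps 801/802 under stated assumptions: the third safe-driving
# warning fires only when the outside-environment processing already meets a
# warning condition AND the driver-state analysis indicates assistance is
# needed (e.g. fatigue), which avoids unnecessary reminders.

def third_warning(env_warning_met, driver_state):
    """Return third-warning interaction info, or None when no warning is due."""
    needs_assist = driver_state.get("fatigued") or driver_state.get("distracted")
    if env_warning_met and needs_assist:
        return {"voice": "The lane has deviated, is it because you are tired? "
                         "Take a break and then set off",
                "expression": "don't be like this",
                "actions": ["wave hand", "rest"]}
    return None

print(third_warning(True, {"fatigued": True})["actions"])  # ['wave hand', 'rest']
print(third_warning(True, {"fatigued": False}))            # None
```

Combining the two information sources this way is what makes the warning both more accurate and less intrusive than warning on the environment result alone.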
- the perception information of the environment outside the vehicle may also be combined with the acquired vehicle control information and the acquired status analysis information of the driver in the vehicle, so as to generate a corresponding animation.
- when the in-vehicle driver status analysis information satisfies the above-mentioned conditions, an animation of interactive information used to assist driving made by the digital human displayed on the vehicle-mounted display device installed in the vehicle cabin can be generated and displayed according to steps 701 to 702.
- in this way, the perception information of the environment outside the vehicle can be combined with various kinds of acquired information to jointly determine the finally generated and displayed animation content. While reducing the safety risk of driving, this also makes the safe driving warning more accurate, avoids unnecessary driving assistance reminders, and enhances driving pleasure, comfort and a sense of companionship.
- the foregoing method may further include:
- step 105 map information including the environment outside the vehicle is acquired.
- the execution order of step 105 and step 101 is not limited.
- the map information of the environment outside the vehicle can be obtained from the high-precision map or navigation application through the vehicle network or other communication methods.
- the map information includes but is not limited to at least one of the following: current location information of the vehicle, at least one planned driving route information, traffic sign information in front of the vehicle, speed limit information in front of the vehicle, obstacle information in front of the vehicle, and lane ahead information.
- step 102 may include step 1023: Step 1023, according to the external environment perception information and the map information, generate and display an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the cabin.
- step 1023 may include:
- step 901 first navigation information is generated according to the external environment perception information and the map information.
- step 902 according to the first navigation information, an animation of the digital person making navigation interaction information displayed on the on-board display device installed in the cabin is generated and displayed.
- the first navigation information includes a left turn 50 meters ahead and the need to enter a left-turn lane.
- the animation content may include but is not limited to at least one of the following: playing a voice saying "turn left 50 meters ahead and enter the left-turn lane", the digital person making an expression of "concentration", and the digital person making an action of "waving his left hand".
- in this way, the digital person displayed on the on-board display device can make an animation of the navigation interaction information based on the perception information of the external environment and the map information, and the navigation process is completed by the digital person in an anthropomorphic image. This makes the navigation process more vivid, makes people feel the warmth of human-computer interaction, and enhances driving pleasure, comfort and a sense of companionship.
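Steps 901 and 902 can be sketched as fusing perception and map information into first navigation information, then converting it into the digital human's interaction content. All field names here are illustrative assumptions.

```python
# Minimal sketch of steps 901/902: fuse out-of-vehicle perception info with map
# info into first navigation information, then render it as digital-human
# interaction content. Field names are assumptions for illustration.

def generate_navigation_info(perception, map_info):
    """Produce first navigation information from perception and map info."""
    info = dict(map_info)  # e.g. {"turn": "left", "distance_m": 50, ...}
    # Perception tells us the current lane; the map tells us the required one.
    if perception.get("lane") != map_info.get("required_lane"):
        info["lane_change"] = map_info["required_lane"]
    return info

def navigation_animation(nav):
    """Turn navigation info into voice/expression/action interaction content."""
    voice = f"Turn {nav['turn']} {nav['distance_m']} meters ahead"
    if "lane_change" in nav:
        voice += f" and enter the {nav['lane_change']} lane"
    return {"voice": voice, "expression": "concentration", "action": "wave left hand"}

nav = generate_navigation_info(
    {"lane": "straight"},
    {"turn": "left", "distance_m": 50, "required_lane": "left-turn"})
print(navigation_animation(nav)["voice"])
# Turn left 50 meters ahead and enter the left-turn lane
```

The same structure extends to step 1001/1002 by adding traffic control information as a third input to the fusion step.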
- the foregoing method may further include:
- step 106 map information including the environment outside the vehicle and traffic control information are acquired.
- the execution order of step 106 and step 101 is not limited.
- traffic control information includes but is not limited to: real-time traffic control information, and long-term or short-term traffic control information for a certain place. Real-time traffic control information can be obtained through real-time road condition detection, and long-term or short-term traffic control information for a certain place can be obtained through interaction with the Internet.
- step 102 may include step 1024: Step 1024, according to the external environment perception information, the map information, and the traffic control information, generate and display an animation of interactive information for assisting driving made by the digital human displayed on the vehicle-mounted display device installed in the vehicle cabin.
- step 1024 may include:
- step 1001 second navigation information is generated according to the external environment perception information, the map information, and the traffic control information.
- step 1002 according to the second navigation information, an animation of the digital person making navigation interaction information displayed on the on-board display device installed in the cabin is generated and displayed.
- for example, the second navigation information includes going straight ahead and temporary traffic control caused by road construction 500 meters ahead. In this case, the animation content may include but is not limited to at least one of the following: playing the voices "go straight ahead" and "road construction 500 meters ahead; temporary traffic control will slow down traffic, don't be impatient", the digital human making an "attention" expression, and the digital human making a "wave hands" action while the "don't be impatient" voice is played.
- in this way, the digital person displayed on the vehicle-mounted display device can make an animation of navigation interaction information based on the perception information of the external environment, the map information, and the traffic control information, so that the navigation process is more accurate, makes people feel the warmth of human-computer interaction, and enhances driving pleasure, comfort and a sense of companionship.
- FIG. 18 is a block diagram of an assisted driving interaction device according to an exemplary embodiment of the present disclosure.
- the device includes: a first acquisition module 1110 for acquiring perception information of the environment outside the vehicle; an interaction module 1120 for According to the perception information of the outside environment of the vehicle, an animation of interactive information for assisting driving made by a digital person displayed on an on-board display device installed in the vehicle cabin is generated and displayed.
- the interaction module includes: a first determination sub-module, configured to determine action information matching the perception information of the outside environment for driving assistance; a first interaction sub-module, configured according to the The action information is generated and displayed on the vehicle-mounted display device in the cabin of the digital person to perform the corresponding action animation.
- the first interaction submodule includes: a first determining unit, configured to determine voice information that matches the perception information of the environment outside the vehicle; an acquiring unit, configured to acquire a corresponding voice according to the voice information, the voice including a timestamp; and a first interaction unit, configured to generate and display, according to the action information, an animation of the digital person performing the action at the time corresponding to the timestamp while the voice is played.
- the action includes a plurality of sub-actions, each sub-action matches one phoneme in the voice, and the time stamp includes the time stamp of each phoneme;
- the first interaction submodule includes: a second determining unit, configured to determine the execution time of the sub-action matching each phoneme according to the timestamp of each phoneme; and a second interaction unit, configured to generate and display, according to the action information, an animation of the digital person executing the sub-action matching each phoneme at the timestamp of that phoneme.
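The phoneme-synchronized sub-action scheduling just described can be sketched as follows: each phoneme in the voice carries a timestamp, and the sub-action matched to that phoneme is scheduled at exactly that time. The data shapes (tuples and the phoneme-to-sub-action mapping) are assumptions for illustration.

```python
# Hypothetical sketch of phoneme-synchronized sub-actions: the voice carries a
# timestamp per phoneme, and each matched sub-action is scheduled to execute
# at that phoneme's timestamp, keeping mouth/body motion in sync with speech.

def schedule_sub_actions(phoneme_timestamps, phoneme_to_sub_action):
    """Return (timestamp, sub_action) pairs in playback order."""
    schedule = [(ts, phoneme_to_sub_action[ph])
                for ph, ts in phoneme_timestamps
                if ph in phoneme_to_sub_action]
    return sorted(schedule)

phonemes = [("b", 0.0), ("e", 0.12), ("k", 0.30)]  # (phoneme, timestamp in s)
mapping = {"b": "open_mouth", "e": "smile", "k": "close_mouth"}
print(schedule_sub_actions(phonemes, mapping))
# [(0.0, 'open_mouth'), (0.12, 'smile'), (0.3, 'close_mouth')]
```

At playback time, the renderer would consume this schedule alongside the audio stream, triggering each sub-action animation when the clock reaches its timestamp.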
- the first interaction sub-module includes: a calling unit, configured to call at least one frame of action slices of the digital person corresponding to the interaction information from an action model library; and a display unit, configured to sequentially display each frame of the at least one frame of action slices of the digital person on the display device.
- the interaction module includes: a first processing sub-module, configured to perform first predetermined task processing according to the perception information of the environment outside the vehicle to obtain a first predetermined task processing result; and a second interaction sub-module, configured to, in response to the first predetermined task processing result satisfying the preset first safe driving warning condition, generate and display an animation of the interactive information of the first safe driving warning made by the digital person displayed on the on-board display device installed in the cabin.
- the device further includes: a second acquisition module, configured to acquire vehicle control information; and the interaction module includes: a third interaction sub-module, configured to generate and display, according to the perception information of the environment outside the vehicle and the vehicle control information, an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the cabin.
- the third interaction submodule includes: a first processing unit, configured to perform second predetermined task processing according to the external environment perception information and the vehicle control information to obtain a second predetermined task processing result; and a third interaction unit, configured to, in response to the second predetermined task processing result meeting the preset second safe driving warning condition, generate and display an animation of the digital person displayed on the vehicle-mounted display device installed in the cabin making the interactive information of the second safe driving warning.
- the device further includes: a third acquisition module, configured to acquire status analysis information of the driver in the vehicle;
- the interaction module includes: a fourth interaction sub-module, configured to perceive information based on the environment outside the vehicle Analyze the information with the state of the driver in the vehicle, and generate and display an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the vehicle cabin.
- the fourth interaction submodule includes: a second processing unit, configured to perform third predetermined task processing according to the external environment perception information and the in-vehicle driver status analysis information to obtain a third predetermined task processing result; and a fourth interaction unit, configured to, in response to the third predetermined task processing result meeting the preset third safe driving warning condition, generate and display an animation of the digital person displayed on the vehicle-mounted display device installed in the cabin making the interactive information of the third safe driving warning.
- the interaction module includes: a fifth interaction sub-module, configured to generate and display, based on the external environment perception information, the acquired vehicle control information, and the acquired in-vehicle driver status analysis information, an animation of interactive information used to assist driving made by the digital human displayed on the on-board display device installed in the cabin.
- the device further includes: a fourth acquisition module, configured to acquire map information including the environment outside the vehicle; and the interaction module includes: a first generation sub-module, configured to generate first navigation information according to the perception information of the environment outside the vehicle and the map information; and a fifth interaction sub-module, configured to generate and display, according to the first navigation information, an animation of the digital person displayed on the on-board display device in the cabin making navigation interaction information.
- the device further includes: a fifth acquisition module, configured to acquire map information including the environment outside the vehicle and traffic control information; and the interaction module includes: a second generation sub-module, configured to generate second navigation information according to the external environment perception information, the map information, and the traffic control information; and a sixth interaction sub-module, configured to generate and display, according to the second navigation information, an animation of the digital person displayed on the vehicle-mounted display device installed in the cabin making navigation interaction information.
- for parts of the device embodiments that are not described in detail, reference may be made to the corresponding description in the method embodiments.
- the device embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the present disclosure. Those of ordinary skill in the art can understand and implement them without creative work.
- the embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program. When a processor reads the computer program, the processor executes the assisted driving interaction method based on the vehicle-mounted digital human described in any of the above embodiments.
- the embodiments of the present disclosure also provide a computer program product, including computer-readable code. When the computer-readable code runs on a device, the processor in the device executes instructions to implement the assisted driving interaction method based on the vehicle-mounted digital human provided in any of the above embodiments.
- the computer program product can be specifically implemented by hardware, software, or a combination thereof.
- the computer program product is specifically embodied as a computer storage medium.
- the computer program product is specifically embodied as a software product, such as a software development kit (SDK) and so on.
- the embodiment of the present disclosure also provides a driving assistance interaction device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to call the executable instructions stored in the memory to implement the assisted driving interaction method based on a vehicle-mounted digital human described in any one of the above embodiments.
- FIG. 19 is a schematic diagram of the hardware structure of a driving assistance interaction device provided by an embodiment of the application.
- the driving assistance interactive device 1210 includes a processor 1211, and may also include an input device 1212, an output device 1213, and a memory 1214.
- the input device 1212, the output device 1213, the memory 1214, and the processor 1211 are connected to each other through a bus.
- the memory 1214 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM); the memory 1214 is used to store related instructions and data.
- the input device 1212 is used to input data and/or signals
- the output device 1213 is used to output data and/or signals.
- the output device 1213 and the input device 1212 may be independent devices or a whole device.
- the input device 1212 includes a camera 01 and other input devices 02
- the other input devices 02 include but are not limited to audio capture devices.
- the output device 1213 includes a display device 03 and other output devices 04.
- the other output devices 04 may include but are not limited to audio output devices.
- the digital human is displayed through the display device 03, and the animation of the digital human performing the corresponding action is also shown through the display device 03.
- the processor 1211 may include one or more processors, for example, one or more central processing units (CPU). In the case where the processor 1211 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
- the memory 1214 is used to store program codes and data of the network device.
- the processor 1211 is configured to call the program code and data in the memory 1214 to execute the steps in the foregoing method embodiment. For details, please refer to the description in the method embodiment, which will not be repeated here.
- Fig. 19 only shows a simplified design of a driving assistance interactive device.
- in practical applications, the driving assistance interaction device may also include other necessary components, including but not limited to any number of input/output devices, processors, controllers, memories, etc., and all driving assistance interaction devices that can implement the embodiments of the present application are within the protection scope of this application.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiment of the present disclosure also provides a vehicle, including: a camera for acquiring images inside the vehicle and/or images outside the vehicle; a vehicle-mounted display device for displaying a digital person and an animation of the digital person's interactive information for assisting driving; and the driving assistance interaction device described in any of the above embodiments.
- an embodiment of the present disclosure further provides a vehicle 1300, which includes a vehicle-mounted camera 1310, a vehicle-mounted display device 1320, and a driving assistance interaction device 1330.
- the vehicle-mounted camera 1310 can be used to obtain images inside and/or outside the vehicle
- the vehicle-mounted display device 1320 can be used to display the digital person and the animation of the digital person’s interactive information for assisting driving
- the driving assistance interaction device 1330 can use the driving assistance interaction device described in any of the above embodiments to generate and display, according to the acquired perception information of the external environment, an animation of interactive information for assisting driving made by the digital person displayed on the vehicle-mounted display device installed in the cabin.
Abstract
Description
- FIG. 3 is a flowchart of an implementation of step 102 according to an embodiment of the present disclosure. [Corrected according to Rule 26 30.12.2020]
Claims (18)
- 一种基于车载数字人的辅助驾驶交互方法,包括:An assisted driving interaction method based on a vehicle-mounted digital human, including:获取车外环境感知信息;Obtain the perception information of the environment outside the vehicle;根据所述车外环境感知信息,生成并显示设置在车舱内的车载显示设备上显示的数字人做出用于辅助驾驶的交互信息的动画。According to the perception information of the outside environment of the vehicle, an animation of interactive information for assisting driving made by the digital person displayed on the on-board display device installed in the vehicle cabin is generated and displayed.
- 根据权利要求1所述的方法,其中,所述根据所述车外环境感知信息,生成并显示设置在车舱内的车载显示设备上显示的数字人做出用于辅助驾驶的交互信息的动画,包括:The method according to claim 1, wherein said generating and displaying an animation of interactive information for assisting driving made by a digital person displayed on a vehicle-mounted display device installed in the vehicle cabin based on the perception information of the outside environment of the vehicle ,include:确定与所述车外环境感知信息匹配的用于辅助驾驶的动作信息;Determining the action information used to assist driving that matches the perception information of the outside environment of the vehicle;根据所述动作信息生成并在所述车舱内的车载显示设备上显示数字人执行相应动作的动画。The animation of the digital person performing the corresponding action is generated and displayed on the vehicle-mounted display device in the cabin according to the action information.
- The method according to claim 2, wherein the generating, according to the action information, and displaying on the vehicle-mounted display device in the vehicle cabin, an animation of the digital person performing the corresponding action comprises: determining voice information that matches the perception information of the environment outside the vehicle; acquiring a corresponding voice according to the voice information, the voice including a timestamp; and, while playing the voice, generating and displaying, according to the action information, an animation of the digital person performing the action at the moment corresponding to the timestamp.
- The method according to claim 3, wherein the action includes a plurality of sub-actions, each sub-action matches one phoneme in the voice, and the timestamp includes a timestamp of each phoneme; and the generating and displaying, according to the action information, an animation of the digital person performing the action at the moment corresponding to the timestamp comprises: determining, according to the timestamp of each phoneme, an execution time of the sub-action matching that phoneme; and generating and displaying, according to the action information, an animation in which the digital person performs, at the timestamp of each phoneme, the sub-action matching that phoneme.
- The method according to any one of claims 2-4, wherein the generating, according to the action information, and displaying on the vehicle-mounted display device in the vehicle cabin, an animation of the digital person performing the corresponding action comprises: calling, from an action model library, at least one frame of action slices of the digital person corresponding to the action information; and sequentially displaying each frame of the at least one frame of action slices of the digital person on the display device.
- The method according to any one of claims 1-5, wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: performing first predetermined task processing according to the perception information of the environment outside the vehicle to obtain a first predetermined task processing result; and, in response to the first predetermined task processing result satisfying a preset first safe driving warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information of a first safe driving warning.
- The method according to any one of claims 1-6, further comprising: acquiring vehicle control information; wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: generating and displaying, according to the perception information of the environment outside the vehicle and the vehicle control information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving.
- The method according to claim 7, wherein the generating and displaying, according to the perception information of the environment outside the vehicle and the vehicle control information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: performing second predetermined task processing according to the perception information of the environment outside the vehicle and the vehicle control information to obtain a second predetermined task processing result; and, in response to the second predetermined task processing result satisfying a preset second safe driving warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information of a second safe driving warning.
- The method according to any one of claims 1-8, further comprising: acquiring in-vehicle driver status analysis information; wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: generating and displaying, according to the perception information of the environment outside the vehicle and the in-vehicle driver status analysis information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving.
- The method according to claim 9, wherein the generating and displaying, according to the perception information of the environment outside the vehicle and the in-vehicle driver status analysis information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: performing third predetermined task processing according to the perception information of the environment outside the vehicle and the in-vehicle driver status analysis information to obtain a third predetermined task processing result; and, in response to the third predetermined task processing result satisfying a preset third safe driving warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information of a third safe driving warning.
- The method according to any one of claims 1-10, wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: generating and displaying, according to the perception information of the environment outside the vehicle, the acquired vehicle control information, and the acquired in-vehicle driver status analysis information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving.
- The method according to any one of claims 1-11, further comprising: acquiring map information including the environment outside the vehicle; wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: generating first navigation information according to the perception information of the environment outside the vehicle and the map information; and generating and displaying, according to the first navigation information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents navigation interaction information.
- The method according to any one of claims 1-12, further comprising: acquiring map information including the environment outside the vehicle and traffic control information; wherein the generating and displaying, according to the perception information of the environment outside the vehicle, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving comprises: generating second navigation information according to the perception information of the environment outside the vehicle, the map information, and the traffic control information; and generating and displaying, according to the second navigation information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents navigation interaction information.
- A driving assistance interactive device, comprising: a first acquisition module, configured to acquire perception information of the environment outside the vehicle; and an interaction module, configured to generate and display, according to the perception information of the environment outside the vehicle, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interactive information for assisting driving.
- A computer-readable storage medium storing a computer program, wherein when a processor executes the computer program, the processor is configured to execute the assisted driving interaction method based on a vehicle-mounted digital human according to any one of claims 1-13.
- A driving assistance interactive device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement, when calling the executable instructions stored in the memory, the assisted driving interaction method based on a vehicle-mounted digital human according to any one of claims 1-13.
- A vehicle, comprising: a camera, configured to acquire images inside and/or outside the vehicle; a vehicle-mounted display device, configured to display a digital person and animations in which the digital person presents interactive information for assisting driving; and the driving assistance interactive device according to claim 14 or 16.
- A computer program product comprising computer-readable code, wherein when the computer-readable code runs on a processor, the processor is configured to execute the assisted driving interaction method based on a vehicle-mounted digital human according to any one of claims 1-13.
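The phoneme-level synchronization in claims 3 and 4 — each sub-action executed at the timestamp of its matching phoneme while the voice plays — can be sketched as follows. The phoneme labels, mouth-shape names, and the phoneme-to-sub-action mapping are hypothetical examples, not details disclosed by the application:

```python
# Hypothetical sketch of claims 3-4: schedule each sub-action (e.g. a mouth
# shape) at the timestamp of the phoneme it matches in the played voice.

def schedule_sub_actions(phoneme_timestamps, phoneme_to_sub_action):
    """Return (time, sub_action) pairs in playback order, one per phoneme."""
    schedule = [
        (t, phoneme_to_sub_action.get(p, "neutral"))  # fallback shape assumed
        for p, t in phoneme_timestamps
    ]
    return sorted(schedule)  # animate in timestamp order alongside the voice

# Assumed per-phoneme timestamps (in seconds) for a short utterance.
timestamps = [("w", 0.00), ("eh", 0.12), ("r", 0.25), ("n", 0.33)]
mapping = {"w": "lips_round", "eh": "mouth_open",
           "r": "lips_tense", "n": "mouth_close"}
print(schedule_sub_actions(timestamps, mapping))
```

A renderer would then display each scheduled sub-action frame at its timestamp while the voice plays, which is the claimed audio-visual synchronization reduced to its scheduling step.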
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217043117A KR20220015462A (en) | 2020-06-24 | 2020-12-14 | Assistive driving interaction method and device based on in-vehicle digital human, storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010592118.3A CN111736701A (en) | 2020-06-24 | 2020-06-24 | Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium |
CN202010592118.3 | 2020-06-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021258671A1 true WO2021258671A1 (en) | 2021-12-30 |
Family
ID=72651098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/136255 WO2021258671A1 (en) | 2020-06-24 | 2020-12-14 | Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium |
Country Status (3)
Country | Link |
---|---|
KR (1) | KR20220015462A (en) |
CN (1) | CN111736701A (en) |
WO (1) | WO2021258671A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111736701A (en) * | 2020-06-24 | 2020-10-02 | 上海商汤临港智能科技有限公司 | Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium |
CN113470394A (en) * | 2021-07-05 | 2021-10-01 | 浙江商汤科技开发有限公司 | Augmented reality display method and related device, vehicle and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105679209A (en) * | 2015-12-31 | 2016-06-15 | 戴姆勒股份公司 | In-car 3D holographic projection device |
CN109427199A (en) * | 2017-08-24 | 2019-03-05 | 北京三星通信技术研究有限公司 | For assisting the method and device of the augmented reality driven |
US20190287290A1 (en) * | 2017-04-17 | 2019-09-19 | Intel Corporation | Augmented reality and virtual reality feedback enhancement system, apparatus and method |
CN111105691A (en) * | 2020-01-07 | 2020-05-05 | 重庆渝微电子技术研究院有限公司 | Driving assistance equipment quality detection system |
CN111124198A (en) * | 2018-11-01 | 2020-05-08 | 广州汽车集团股份有限公司 | Animation playing and interaction method, device, system and computer equipment |
CN111736701A (en) * | 2020-06-24 | 2020-10-02 | 上海商汤临港智能科技有限公司 | Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107972674A (en) * | 2017-12-11 | 2018-05-01 | 惠州市德赛西威汽车电子股份有限公司 | It is a kind of to merge navigation and the drive assist system and method for intelligent vision |
CN108859963A (en) * | 2018-06-28 | 2018-11-23 | 深圳奥尼电子股份有限公司 | Multifunctional driver householder method, multifunctional driver auxiliary device and storage medium |
CN110926487A (en) * | 2018-09-19 | 2020-03-27 | 阿里巴巴集团控股有限公司 | Driving assistance method, driving assistance system, computing device, and storage medium |
CN110608751A (en) * | 2019-08-14 | 2019-12-24 | 广汽蔚来新能源汽车科技有限公司 | Driving navigation method and device, vehicle-mounted computer equipment and storage medium |
CN110728256A (en) * | 2019-10-22 | 2020-01-24 | 上海商汤智能科技有限公司 | Interaction method and device based on vehicle-mounted digital person and storage medium |
- 2020
- 2020-06-24 CN CN202010592118.3A patent/CN111736701A/en active Pending
- 2020-12-14 WO PCT/CN2020/136255 patent/WO2021258671A1/en active Application Filing
- 2020-12-14 KR KR1020217043117A patent/KR20220015462A/en not_active Application Discontinuation
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105679209A (en) * | 2015-12-31 | 2016-06-15 | 戴姆勒股份公司 | In-car 3D holographic projection device |
US20190287290A1 (en) * | 2017-04-17 | 2019-09-19 | Intel Corporation | Augmented reality and virtual reality feedback enhancement system, apparatus and method |
CN109427199A (en) * | 2017-08-24 | 2019-03-05 | 北京三星通信技术研究有限公司 | For assisting the method and device of the augmented reality driven |
CN111124198A (en) * | 2018-11-01 | 2020-05-08 | 广州汽车集团股份有限公司 | Animation playing and interaction method, device, system and computer equipment |
CN111105691A (en) * | 2020-01-07 | 2020-05-05 | 重庆渝微电子技术研究院有限公司 | Driving assistance equipment quality detection system |
CN111736701A (en) * | 2020-06-24 | 2020-10-02 | 上海商汤临港智能科技有限公司 | Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111736701A (en) | 2020-10-02 |
KR20220015462A (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150062168A1 (en) | System and method for providing augmented reality based directions based on verbal and gestural cues | |
US10994612B2 (en) | Agent system, agent control method, and storage medium | |
EP3994426B1 (en) | Method and system for scene-aware interaction | |
WO2021258671A1 (en) | Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium | |
JP7250547B2 (en) | Agent system, information processing device, information processing method, and program | |
US10901503B2 (en) | Agent apparatus, agent control method, and storage medium | |
JP2020075720A (en) | Experience provision system, experience provision method, and experience provision program | |
CN112805182A (en) | Agent device, agent control method, and program | |
US20200111489A1 (en) | Agent device, agent presenting method, and storage medium | |
CN115195637A (en) | Intelligent cabin system based on multimode interaction and virtual reality technology | |
JP2010072573A (en) | Driving evaluation device | |
JP2020060861A (en) | Agent system, agent method, and program | |
US11325605B2 (en) | Information providing device, information providing method, and storage medium | |
Sauras-Perez | A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars | |
JP7148296B2 (en) | In-vehicle robot | |
Nakrani | Smart car technologies: a comprehensive study of the state of the art with analysis and trends | |
JP7245695B2 (en) | Server device, information providing system, and information providing method | |
KR20180025379A (en) | System and method for provision of head up display information according to driver's condition and driving condition based on speech recognition | |
JP2020060623A (en) | Agent system, agent method, and program | |
WO2022124164A1 (en) | Attention object sharing device, and attention object sharing method | |
JP2020144285A (en) | Agent system, information processing device, control method for mobile body mounted apparatus, and program | |
CN111308999B (en) | Information providing device and in-vehicle device | |
CN110998688A (en) | Information control device | |
CN115534850B (en) | Interface display method, electronic device, vehicle and computer program product | |
WO2022244178A1 (en) | Device for estimating person being spoken to, method for estimating person being spoken to, and program for estimating person being spoken to |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20217043117 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20942110 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20942110 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2023) |
|