CN111736701A - Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium

Info

Publication number: CN111736701A
Application number: CN202010592118.3A
Authority: CN (China)
Prior art keywords: vehicle, information, animation, digital person, displaying
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曾彬, 周群艳, 李轲, 吴阳平, 许亮, 许亲亲, 林楠
Current Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Related applications: PCT/CN2020/136255 (published as WO2021258671A1); KR1020217043117A (published as KR20220015462A)

Classifications

    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • B60R 11/04: Mounting of cameras operative during drive; arrangement of controls thereof relative to the vehicle
    • B60W 40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W 50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • G01C 21/3664: Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • G06F 3/16: Sound input; sound output
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • B60W 2050/146: Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Transportation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a driving assistance interaction method and apparatus based on a vehicle-mounted digital person, and a storage medium. The method includes: acquiring perception information of the environment outside the vehicle; and generating and displaying, according to the exterior environment perception information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.

Description

Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium
Technical Field
The present disclosure relates to the field of augmented reality, and in particular to a driving assistance interaction method and apparatus based on a vehicle-mounted digital person, and a storage medium.
Background
At present, a robot can be placed in a vehicle so that, after a person enters, the robot interacts with the occupants. However, the way such a robot interacts with occupants is relatively fixed and lacks a human touch.
Disclosure of Invention
The present disclosure provides a driving assistance interaction method and apparatus based on a vehicle-mounted digital person, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a driving assistance interaction method based on a vehicle-mounted digital person, the method including: acquiring perception information of the environment outside the vehicle; and generating and displaying, according to the exterior environment perception information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
In some optional embodiments, generating and displaying the animation according to the exterior environment perception information includes: determining action information for driving assistance that matches the exterior environment perception information; and generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the vehicle cabin.
In some optional embodiments, generating and displaying, according to the action information, the animation of the digital person performing the corresponding action includes: determining voice information matching the exterior environment perception information; acquiring a corresponding voice according to the voice information, the voice carrying a timestamp; and, while the voice is played, generating and displaying, according to the action information, an animation of the digital person performing the action at the moment corresponding to the timestamp.
In some optional embodiments, the action includes a plurality of sub-actions, each sub-action matching a phoneme in the voice, and the timestamp includes a timestamp of each phoneme; generating and displaying the animation of the digital person performing the action at the moment corresponding to the timestamp includes: determining, according to the timestamp of each phoneme, the execution time of the sub-action matching that phoneme; and generating and displaying, according to the action information, an animation of the digital person performing the sub-action matching each phoneme at that phoneme's timestamp.
In some optional embodiments, generating and displaying, according to the action information, the animation of the digital person performing the corresponding action includes: calling, from an action model library, at least one frame of action slice of the digital person corresponding to the action information; and sequentially displaying each frame of the at least one action slice on the display device.
In some optional embodiments, generating and displaying the animation according to the exterior environment perception information includes: performing first predetermined task processing according to the exterior environment perception information to obtain a first predetermined task processing result; and, in response to the first task processing result satisfying a preset first safe-driving early-warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interaction information of a first safe-driving early warning.
In some optional embodiments, the method further includes: acquiring vehicle control information. Generating and displaying the animation according to the exterior environment perception information then includes: generating and displaying, according to the exterior environment perception information and the vehicle control information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
In some optional embodiments, generating and displaying the animation according to the exterior environment perception information and the vehicle control information includes: performing second predetermined task processing according to the exterior environment perception information and the vehicle control information to obtain a second predetermined task processing result; and, in response to the second task processing result satisfying a preset second safe-driving early-warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interaction information of a second safe-driving early warning.
In some optional embodiments, the method further includes: acquiring in-vehicle driver state analysis information. Generating and displaying the animation according to the exterior environment perception information then includes: generating and displaying, according to the exterior environment perception information and the in-vehicle driver state analysis information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
In some optional embodiments, generating and displaying the animation according to the exterior environment perception information and the in-vehicle driver state analysis information includes: performing third predetermined task processing according to the exterior environment perception information and the in-vehicle driver state analysis information to obtain a third predetermined task processing result; and, in response to the third task processing result satisfying a preset third safe-driving early-warning condition, generating and displaying an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interaction information of a third safe-driving early warning.
In some optional embodiments, generating and displaying the animation according to the exterior environment perception information includes: generating and displaying, according to the exterior environment perception information, the acquired vehicle control information, and the acquired in-vehicle driver state analysis information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
In some optional embodiments, the method further includes: obtaining map information including the environment outside the vehicle. Generating and displaying the animation according to the exterior environment perception information then includes: generating first navigation information according to the exterior environment perception information and the map information; and generating and displaying, according to the first navigation information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents navigation interaction information.
In some optional embodiments, the method further includes: acquiring map information of the environment outside the vehicle and traffic control information. Generating and displaying the animation according to the exterior environment perception information then includes: generating second navigation information according to the exterior environment perception information, the map information, and the traffic control information; and generating and displaying, according to the second navigation information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents navigation interaction information.
According to a second aspect of the embodiments of the present disclosure, there is provided a driving assistance interaction apparatus, the apparatus including: a first acquisition module configured to acquire exterior environment perception information; and an interaction module configured to generate and display, according to the exterior environment perception information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the vehicle-mounted digital person-based driving assistance interaction method of any one of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a driving assistance interaction apparatus including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the executable instructions stored in the memory to implement the vehicle-mounted digital person-based driving assistance interaction method of any one of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a vehicle including: a camera for capturing images inside and/or outside the vehicle; a vehicle-mounted display device for displaying a digital person and the animations of interaction information the digital person presents for driving assistance; and the driving assistance interaction apparatus of any one of the second or fourth aspects.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
in the embodiments of the present disclosure, according to the acquired exterior environment perception information, an animation in which a digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance is generated and displayed. Human-computer interaction thus better conforms to human interaction habits, the interaction process is more natural, occupants feel the warmth of human-computer interaction, and driving pleasure, comfort, and a sense of companionship are improved, which helps reduce driving safety risks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 2 is a schematic view of an interaction scenario between vehicle-mounted devices shown in the present disclosure;
FIG. 3 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 4 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 5 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 6 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 7 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method according to the present disclosure;
FIG. 8 is a schematic view of a scene showing the generation and display of an animation according to the present disclosure;
FIG. 9 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 10 is a flow chart of another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 11 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method according to the present disclosure;
FIG. 12 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method according to the present disclosure;
FIG. 13 is a schematic view of another scenario presented in the present disclosure for generating and displaying animations;
FIG. 14 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 15 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 16 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method shown in the present disclosure;
FIG. 17 is a flow chart illustrating another vehicle-mounted digital human-based driving assistance interaction method according to the present disclosure;
FIG. 18 is a block diagram of a driving assistance interaction apparatus shown in the present disclosure;
FIG. 19 is a schematic diagram of a hardware configuration of a driving assistance interaction apparatus shown in the present disclosure;
fig. 20 is a hardware configuration diagram of a vehicle shown in the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
The embodiments of the present disclosure provide a driving assistance interaction method based on a vehicle-mounted digital person, applicable to drivable machine equipment such as intelligent vehicles and intelligent vehicle cabins that simulate vehicle driving. As shown in FIG. 1, which illustrates a vehicle-mounted digital person-based driving assistance interaction method according to an exemplary embodiment, the method includes the following steps:
in step 101, vehicle exterior environment perception information is acquired.
In embodiments of the present disclosure, the exterior environment perception information includes, but is not limited to, at least one of: road line detection information, road line attribute detection information, traffic light detection information, traffic sign detection information, drivable area detection information, and obstacle detection information. A road line includes, but is not limited to, at least one of: lane lines, stop lines, turn lines, drop lines, solid lines, dashed lines, single lines, and double lines.
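To make the structure of this information concrete, the following is a minimal sketch assuming a Python record type; all field names and value formats are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical container for the exterior environment perception
# information enumerated above; every field name here is an assumption.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExteriorPerception:
    road_lines: List[str] = field(default_factory=list)            # e.g. "lane_line", "stop_line"
    road_line_attributes: List[str] = field(default_factory=list)  # e.g. "solid", "dashed", "double"
    traffic_light: Optional[str] = None                            # e.g. "red", "green"
    traffic_signs: List[str] = field(default_factory=list)         # e.g. "speed_limit_60"
    drivable_area: Optional[object] = None                         # polygon/mask from the ADAS unit
    obstacles: List[dict] = field(default_factory=list)            # e.g. {"type": "pedestrian", "distance_m": 12.0}
```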
In some optional embodiments, a vehicle-mounted exterior driving assistance device collects the exterior environment perception information in real time and sends it to the vehicle-mounted central control device. The exterior driving assistance device may include, but is not limited to, an Advanced Driver Assistance System (ADAS).
In one example, the vehicle-mounted central control device may obtain the exterior environment perception information sent by the exterior driving assistance device through an in-vehicle communication bus, such as a Controller Area Network (CAN) bus. In another example, the vehicle-mounted central control device may obtain it through Internet-of-Vehicles communication.
In step 102, according to the exterior environment perception information, an animation in which a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance is generated and displayed.
In the disclosed embodiments, the digital person is an avatar generated by software and can be displayed on a vehicle-mounted display device in the vehicle cabin; the vehicle-mounted display device may include, but is not limited to, a central control display screen or a vehicle-mounted tablet device. The interaction information includes, but is not limited to, at least one of: expression information, action information, and voice information.
For example, as shown in FIG. 2, the vehicle-mounted central control device may communicate with the exterior driving assistance device, the vehicle-mounted camera, the vehicle-mounted display device, and other vehicle-mounted equipment (such as the air conditioner, radio, and windows), and may also connect over a network with other devices, such as a mobile phone carried by a vehicle occupant. The vehicle-mounted central control device can also capture voice input and/or action input from a person entering the vehicle cabin through the vehicle-mounted audio acquisition device and the vehicle-mounted camera, and subsequently control the digital person to give corresponding interactive feedback.
In the embodiments of the present disclosure, after the vehicle-mounted central control device acquires the exterior environment perception information sent by the driving assistance device, it can generate, according to that information, an animation in which the digital person displayed on the vehicle-mounted display device presents interaction information for driving assistance, and the vehicle-mounted display device displays the animation.
In the above embodiment, the animation can be generated and displayed according to the acquired exterior environment perception information, so that human-computer interaction better conforms to human interaction habits and the interaction process is more natural; occupants feel the warmth of human-computer interaction, driving pleasure, comfort, and a sense of companionship are improved, and driving safety risks are reduced.
In some alternative embodiments, such as shown in FIG. 3, step 102 may include:
in step 201, action information for driving assistance that matches the vehicle exterior environment perception information is determined.
In one example, the matching degree between each type of driving assistance action information and the exterior environment perception information may be determined, and the action information with the highest matching degree is taken as the match.
When the exterior environment perception information comprises several pieces of information, the matching degree between a given piece of action information and each piece of perception information can be determined separately, and these per-piece matching degrees are then combined into the overall matching degree, for example by taking their weighted average.
In another example, a mapping relationship between different pieces of vehicle exterior environment perception information and the matching action information may be established in advance, and the matching action information may be determined according to the mapping relationship.
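The two matching strategies just described can be sketched as follows; this is an illustrative example, assuming per-item matching degrees in [0, 1] and hand-picked weights, neither of which is specified by the disclosure:

```python
# Weighted-average matching of one action's per-item scores against
# several pieces of exterior perception information (all values assumed).
def overall_matching_degree(item_scores: dict, weights: dict) -> float:
    total_weight = sum(weights[k] for k in item_scores)
    return sum(item_scores[k] * weights[k] for k in item_scores) / total_weight

# Candidate actions scored against two kinds of perception information.
candidates = {
    "wave_warning": {"traffic_light": 0.9, "obstacle": 0.7},
    "nod_ok":       {"traffic_light": 0.2, "obstacle": 0.1},
}
weights = {"traffic_light": 0.6, "obstacle": 0.4}

best_action = max(candidates, key=lambda a: overall_matching_degree(candidates[a], weights))
print(best_action)  # -> "wave_warning", the highest overall matching degree
```

The precomputed mapping table of the second example would simply replace the scoring step with a dictionary lookup.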
In step 202, animation of the digital person executing corresponding action is generated and displayed on the vehicle-mounted display device in the vehicle cabin according to the action information.
In the above embodiment, when the interaction information includes action information, an animation of the digital person performing the driving assistance action matched to the exterior environment perception information is generated, which reduces driving safety risks while making the interaction between the digital person and occupants more natural, letting them feel the warmth of human-computer interaction.
In some alternative embodiments, in addition to generating and displaying corresponding animations in conjunction with motion information, corresponding animations may be generated and displayed in conjunction with voice information. For example, as shown in FIG. 4, step 202 may include:
in step 301, voice information matching the vehicle external environment perception information is determined.
In one example, a trained deep learning neural network may be employed to determine voice information that matches the exterior environment perception information.
In another example, a mapping relation between different vehicle-exterior environment perception information and the matched voice information can be established in advance, and the matched voice information can be determined according to the mapping relation.
In step 302, a corresponding voice is obtained according to the voice information, where the voice includes a timestamp.
In the embodiments of the present disclosure, the voice may be pulled from a voice database according to the voice information; the pulled voice carries its timestamp, so that the moments at which the digital person in the animation performs the corresponding actions are synchronized with the voice. The timbre of the pulled voice may be a preset timbre, which can be set to one the user likes; for example, if the preset timbre is a child's voice, the voice pulled is the one corresponding to that child timbre.
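A minimal sketch of such a pull, assuming a hypothetical voice-database interface keyed by text and timbre (the disclosure does not define this API):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TimedSpeech:
    audio: bytes
    phoneme_stamps: List[Tuple[str, float]]  # (phoneme, start time in seconds)

class VoiceDB:
    """Hypothetical voice database; entries are keyed by (text, timbre)."""
    def __init__(self, store: Dict[Tuple[str, str], TimedSpeech]):
        self._store = store

    def pull(self, text: str, timbre: str = "default") -> TimedSpeech:
        # Prefer the user's chosen timbre (e.g. "child"), falling back
        # to the default timbre when that variant is absent.
        return self._store.get((text, timbre)) or self._store[(text, "default")]
```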
In step 303, while playing the voice, generating and displaying an animation of the action executed by the digital person at a time corresponding to the time stamp according to the action information.
In the embodiments of the present disclosure, while the voice is played, animations can be generated and displayed in which the digital person performs the corresponding actions at the moments corresponding to the different timestamps in the voice.
In the above embodiment, by combining the action information and voice information matched to the exterior environment perception information, the digital person can not only play the voice but also perform the corresponding driving assistance actions at the moments given by the timestamps in the voice. This reduces driving safety risks while making the interaction between the digital person and occupants richer and more natural, letting them feel the warmth of human-computer interaction.
In some alternative embodiments, a piece of speech often includes multiple phonemes. A phoneme is the smallest phonetic unit divided according to the natural attributes of speech, analyzed according to the pronunciation actions within a syllable: one pronunciation action constitutes one phoneme. For example, the greeting "hello" (in the original Chinese, 你好) includes two phonemes, "you" (你) and "good" (好). When the speech includes multiple phonemes, the timestamp may include a timestamp for each phoneme. An action will typically include multiple sub-actions; for example, a waving action may include a sub-action of the arm swinging to the left and a sub-action of the arm swinging to the right. To make the displayed digital person more vivid, each sub-action may be matched to a phoneme in the speech.
For example, as shown in fig. 5, step 303 may include:
in step 401, according to the time stamp of each phoneme, the execution time of the sub-action matched with each phoneme is determined.
In step 402, according to the action information, an animation of the digital person performing a sub-action matching each phoneme at the time stamp of each phoneme is generated and displayed.
For example, while the phoneme "you" is played, a mouth-shape motion matching "you" is displayed together with a waving motion of the digital person's arm swinging to the left; while the phoneme "good" is played, a mouth-shape motion matching "good" is displayed together with a waving motion of the arm swinging to the right.
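A sketch of this synchronization, under the assumption that phoneme timestamps arrive as (phoneme, start-time) pairs and that a frame-rendering callback exists (both are illustrative):

```python
import time

def play_sub_actions(phoneme_stamps, sub_actions, render_frame):
    """phoneme_stamps: [(phoneme, start time in seconds)]; sub_actions
    maps each phoneme to its matched animation slice."""
    start = time.monotonic()
    for phoneme, stamp in phoneme_stamps:
        # Wait until this phoneme's timestamp, then show its sub-action.
        delay = stamp - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        render_frame(sub_actions[phoneme])

# The "you"/"good" example above: mouth shape plus arm swing per phoneme.
play_sub_actions(
    [("you", 0.0), ("good", 0.4)],
    {"you": "mouth_you + arm_swing_left", "good": "mouth_good + arm_swing_right"},
    render_frame=print,  # stand-in for the real display call
)
```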
In the above embodiment, the action information and voice information matched to the exterior environment perception information are combined with the phonemes in the voice, so that the digital person performs the sub-action matched to each phoneme at that phoneme's timestamp. This reduces driving safety risks while making the interaction between the digital person and occupants richer and more natural, letting them feel the warmth of human-computer interaction.
In some alternative embodiments, such as shown in fig. 6, step 202 may include:
in step 501, at least one frame of motion slice of the digital person corresponding to the motion information is called from the motion model library.
The action corresponding to the action information may be called from an action model library; specifically, at least one frame of action slice of the digital person corresponding to the action information may be called from that library.
In step 502, the motion slices of each frame of the at least one frame of digital person are sequentially displayed on the display device.
In the embodiment of the disclosure, at least one of the limb movement, the facial expression movement, the mouth shape movement, the eye movement and the like of the digital person corresponding to different movement slices is different, and by calling and sequentially displaying the corresponding movement slices, the animation of the digital person executing the corresponding movement can be displayed on the vehicle-mounted display device.
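Steps 501 and 502 can be sketched as follows, assuming the action model library behaves like a mapping from an action identifier to its ordered frame slices (the real library interface is not specified by the disclosure):

```python
import time

def display_action(action_id, model_library, display_frame, fps=25):
    """Fetch the frame slices for one action and show them in order."""
    for frame in model_library[action_id]:  # each frame differs in limb,
        display_frame(frame)                # facial, mouth, or eye motion
        time.sleep(1.0 / fps)               # pace frames to the target rate

# Stand-in usage: a two-frame "wave" slice, printed instead of rendered.
display_action("wave", {"wave": ["wave_frame_0", "wave_frame_1"]}, print)
```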
In the above embodiment, the corresponding animation is displayed by sequentially showing the action slices called from the action model library, and the action model library may be updated as needed, making the interaction between the digital person and occupants richer and more natural.
In some alternative embodiments, such as shown in fig. 7, step 102 may include:
in step 601, a first predetermined task processing is performed according to the vehicle exterior environment perception information to obtain a first predetermined task processing result.
In the embodiments of the present disclosure, after the exterior environment perception information is acquired, first predetermined task processing may be performed on it to obtain a corresponding processing result. The first predetermined task processing includes at least one of: lane departure detection processing, speed limit detection processing, traffic light violation detection processing, traffic sign violation detection processing, drivable area detection processing, collision detection processing, and inter-vehicle distance detection processing.
The lane departure detection process indicates whether the vehicle departs from the lane corresponding to the current travel route. The speed limit detection process indicates whether the vehicle's current speed exceeds the maximum speed indicated by the traffic signs on the current route. The traffic light violation detection process indicates whether the vehicle violates a traffic light indication. The drivable area detection process indicates whether the vehicle is within the drivable area. The collision detection process includes, but is not limited to, at least one of: forward collision detection, pedestrian collision detection, and city forward collision detection. Forward collision detection determines whether the distance between the vehicle and other vehicles poses a collision risk; pedestrian collision detection determines whether the distance between the vehicle and pedestrians poses a collision risk; and city forward collision detection determines the collision risk with other vehicles in a congested environment. The inter-vehicle distance detection process indicates the inter-vehicle distance between the vehicle and other vehicles.
In step 602, in response to that the first task processing result meets a preset first safe driving early warning condition, generating and displaying an animation of interactive information of the first safe driving early warning made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
In the embodiments of the present disclosure, the first task processing result satisfying the preset first safe-driving early-warning condition includes, but is not limited to, at least one of the following: the lane departure detection indicates that a lane departure has occurred; the speed limit detection indicates that the vehicle is speeding; the traffic light violation detection indicates that the vehicle violates a traffic light indication; the traffic sign violation detection indicates that the vehicle violates the content of a traffic sign; the drivable area detection indicates that the vehicle is not within the drivable area; the collision detection indicates that the vehicle is about to collide; and the inter-vehicle distance detection indicates that the inter-vehicle distance between the vehicle and another vehicle is less than a threshold.
For example, if the inter-vehicle distance detection process indicates that the inter-vehicle distance between the vehicle and another vehicle is less than the threshold, an animation in which the digital person displayed on the vehicle-mounted display device presents the interaction information of the first safe-driving early warning may be generated and displayed. The animation content includes, but is not limited to: playing a voice of "caution, too close to other vehicles", the digital person making a "caution" expression, and the digital person making "shaking head, waving hands" actions, as shown in FIG. 8.
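A hedged sketch of evaluating the first safe-driving early-warning condition over such a result; the field names and the distance threshold are assumptions for illustration:

```python
SAFE_DISTANCE_M = 10.0  # assumed threshold; the disclosure leaves it unspecified

def first_warning_triggered(result: dict) -> bool:
    """Any one of the listed detection outcomes satisfies the condition."""
    return (
        result.get("lane_departure", False)
        or result.get("speeding", False)
        or result.get("violates_traffic_light", False)
        or result.get("violates_traffic_sign", False)
        or not result.get("in_drivable_area", True)
        or result.get("collision_imminent", False)
        or result.get("inter_vehicle_distance_m", float("inf")) < SAFE_DISTANCE_M
    )

if first_warning_triggered({"inter_vehicle_distance_m": 6.5}):
    pass  # e.g. play the "too close to other vehicles" warning animation of FIG. 8
```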
In the above embodiment, an animation in which the digital person gives a safe-driving early warning can be generated according to the exterior environment perception information. This reduces driving safety risks while making human-computer interaction better conform to human interaction habits; the interaction process is more natural, occupants feel the warmth of human-computer interaction, and driving pleasure, comfort, and a sense of companionship are improved.
In some alternative embodiments, such as shown in fig. 9, the method may further include:
in step 103, vehicle control information is acquired.
In the embodiments of the present disclosure, the execution order of step 101 and step 103 is not limited. The vehicle control information includes, but is not limited to, at least one of: turn signal control information, steering wheel control information, accelerator control information, brake control information, and clutch control information. A person's control inputs to the turn signal, steering wheel, accelerator, brake, clutch, and the like can be acquired through the vehicle-mounted central control device.
Accordingly, step 102 may include:
and generating and displaying the animation of interactive information for assisting driving, which is made by a digital person displayed on vehicle-mounted display equipment arranged in a vehicle cabin, according to the external environment perception information and the vehicle control information.
In the embodiment of the disclosure, the external environment perception information and the acquired vehicle control information can be combined to jointly generate and display the animation of the interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device.
In some alternative embodiments, such as shown in FIG. 10, step 102 may include:
in step 701, a second predetermined task processing is performed according to the vehicle exterior environment sensing information and the vehicle control information, and a second predetermined task processing result is obtained.
In the disclosed embodiments, the second predetermined task processing includes, but is not limited to, at least one of: turn signal control matching detection, steering wheel control matching detection, accelerator control matching detection, brake control matching detection, and clutch control matching detection.
The second predetermined task processing result comprises whether the acquired vehicle control information is matched with vehicle control information corresponding to the vehicle exterior environment perception information.
For example, suppose the exterior environment perception information indicates that the traffic light is red; the vehicle control information matching this perception should be brake control information. If the acquired actual vehicle control information is brake control information, the second predetermined task processing result is that the brake control matches; otherwise, the brake control does not match.
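The red-light example can be sketched as a lookup from perceived conditions to expected controls; the mapping below is an assumption for illustration:

```python
# Expected control derived from exterior perception (assumed mapping).
EXPECTED_CONTROL = {"red_light": "brake", "green_light": "accelerate"}

def control_matches(perceived: str, actual_control: str) -> bool:
    """Second predetermined task: does the actual control input match
    what the exterior environment calls for?"""
    return EXPECTED_CONTROL.get(perceived) == actual_control

if not control_matches("red_light", "accelerate"):
    pass  # mismatch: satisfies the second safe-driving early-warning condition
```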
In step 702, in response to that the second task processing result meets a preset second safe driving early warning condition, generating and displaying an animation of interactive information of the second safe driving early warning made by a digital person displayed on a vehicle-mounted display device arranged in the vehicle cabin.
In the embodiment of the disclosure, if the second predetermined task processing result is that the acquired vehicle control information is not matched with the external environment perception information, it is determined that the preset second safe driving early warning condition is satisfied, and accordingly, an animation of interactive information of the second safe driving early warning made by a digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin can be generated and displayed.
For example, if the second predetermined task processing result is a brake control mismatch, the generated animation content includes, but is not limited to, at least one of: playing a voice prompting the driver to brake, the digital person making a concerned expression, and the digital person making a "braking" gesture. In the above embodiment, the exterior environment perception information and the acquired vehicle control information can be combined to generate and display an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents the interaction information of the second safe-driving early warning. Besides reducing driving safety risks, this makes the safe-driving early warning more accurate and improves driving pleasure, comfort, and a sense of companionship.
In some alternative embodiments, such as shown in fig. 11, the method may further include:
in step 104, in-vehicle driver state analysis information is acquired.
In the embodiments of the present disclosure, the execution order of step 104 and step 101 is likewise not limited. The in-vehicle driver state analysis information includes, but is not limited to, at least one of: human body state analysis information, emotion state analysis information, fatigue state analysis information, distraction state analysis information, dangerous action analysis information, seat belt wearing analysis information, and driver off-duty analysis information.
Accordingly, step 102 may include:
and generating and displaying animation of interactive information for assisting driving made by a digital person displayed on vehicle-mounted display equipment arranged in a vehicle cabin according to the external environment perception information and the state analysis information of the driver in the vehicle.
In the embodiment of the disclosure, the external environment perception information and the acquired in-vehicle driver state analysis information can be combined to jointly generate and display the animation of interactive information for assisting driving made by a digital person displayed on the vehicle-mounted display device.
In some alternative embodiments, such as shown in fig. 12, step 102 may include:
in step 801, a third predetermined task processing is performed according to the external environment perception information and the in-vehicle driver state analysis information, and a third predetermined task processing result is obtained.
In the disclosed embodiments, the third predetermined task processing includes, but is not limited to, at least one of the following: human body state detection, emotion state detection, fatigue state detection, distraction state detection, dangerous action detection, seat belt wearing detection, and driver off-duty detection.
In step 802, in response to that the third task processing result meets a preset third safe driving early warning condition, generating and displaying an animation of interactive information of the third safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
In the embodiments of the present disclosure, first, the in-vehicle driver state analysis information satisfies a condition including, but not limited to, at least one of: the human body state detection indicates that the occupant's sitting posture is unsuitable for driving; the emotion state detection indicates that the occupant is in a negative emotion; the fatigue state detection indicates that the occupant is in a fatigue state; the distraction state detection indicates that the occupant is distracted; the dangerous action detection indicates that the occupant is performing a dangerous action; the seat belt detection indicates that the driver is not wearing the seat belt; and the off-duty detection indicates that the driver is off duty. Second, first predetermined task processing is performed according to the exterior environment perception information, and its result satisfies the preset first safe-driving early-warning condition. When both hold, the third task processing result is determined to satisfy the preset third safe-driving early-warning condition. Accordingly, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents the interaction information of the third safe-driving early warning can be generated and displayed.
For example, if the third predetermined task processing result indicates that the occupant is in a fatigue state and a lane departure has occurred, an animation in which the digital person displayed on the vehicle-mounted display device presents the interaction information of the third safe-driving early warning may be generated and displayed. The animation content includes, but is not limited to: playing a voice such as "Lane departure detected. Are you tired? Take a break before setting off again", the digital person making a concerned expression, the digital person waving its hands while "lane departure" is played, and the digital person making a resting gesture while "take a break before setting off again" is played, as shown in FIG. 13.
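A sketch of the combined condition of this embodiment, under the assumption that both analyses arrive as flag dictionaries (the keys are illustrative): the warning fires only when a driver-state flag and a first-condition exterior flag hold together, as in the fatigue-plus-lane-departure example above.

```python
def third_warning_triggered(driver_state: dict, first_task_result: dict) -> bool:
    driver_flag = (
        driver_state.get("fatigued", False)
        or driver_state.get("distracted", False)
        or driver_state.get("dangerous_action", False)
        or not driver_state.get("seatbelt_on", True)
    )
    exterior_flag = any(first_task_result.values())  # any first-condition hit
    return driver_flag and exterior_flag

# Fatigue plus lane departure triggers the FIG. 13 style warning animation.
print(third_warning_triggered({"fatigued": True, "seatbelt_on": True},
                              {"lane_departure": True}))  # -> True
```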
In the above embodiment, the exterior environment perception information and the acquired in-vehicle driver state analysis information can be combined to generate and display an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents the interaction information of the third safe-driving early warning. When the driver is in a state requiring driving assistance, corresponding animations are generated and displayed in combination with the exterior environment perception information. This reduces driving safety risks, makes the safe-driving early warning more accurate, avoids unnecessary driving-assistance reminders, and improves driving pleasure, comfort, and a sense of companionship.
In some optional embodiments, the external environment perception information may be combined with the acquired vehicle control information and the acquired in-vehicle driver state analysis information to generate corresponding animations.
In the embodiment of the present disclosure, when the in-vehicle driver state analysis information satisfies the above condition, an animation of interactive information for driving assistance made by a digital person displayed on an in-vehicle display device disposed in the vehicle cabin may be generated and displayed in the manner of steps 701 to 702.
In the above embodiment, the exterior environment perception information can be combined with multiple kinds of acquired information to jointly determine the animation content that is finally generated and displayed. This reduces driving safety risks, makes the safe-driving early warning more accurate, avoids meaningless driving-assistance reminders, and improves driving pleasure, comfort, and a sense of companionship.
In some alternative embodiments, such as shown in fig. 14, the method may further include:
in step 105, map information including the out-of-vehicle environment is acquired.
In the embodiments of the present disclosure, the execution order of step 105 and step 101 is not limited. The map information of the environment outside the vehicle can be acquired from a high-precision map or a navigation application through the Internet of Vehicles or other communication means. The map information includes, but is not limited to, at least one of: the current position of the vehicle, at least one planned driving route, traffic sign information ahead of the vehicle, speed limit information ahead of the vehicle, obstacle information ahead of the vehicle, and lane information ahead of the vehicle.
Accordingly, step 102 may include: generating and displaying, according to the exterior environment perception information and the map information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents navigation interaction information.
For example, as shown in FIG. 15, step 102 may include:
in step 901, first navigation information is generated according to the vehicle external environment perception information and the map information.
In step 902, an animation of navigation interactive information is generated and displayed by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin according to the first navigation information.
For example, if the first navigation information indicates a left turn 50 meters ahead that requires entering the dedicated left-turn lane, the animation content may include, but is not limited to, at least one of: playing a voice of "Left turn in 50 meters; please enter the dedicated left-turn lane", the digital person making a focused expression, and the digital person making a "waving the left hand" gesture.
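Steps 901 and 902 might be sketched as below; the fusion rule, field names, and the voice/expression/gesture triple are illustrative assumptions:

```python
def first_navigation(perception: dict, map_info: dict) -> dict:
    """Fuse exterior perception with map information into a navigation
    instruction for the digital person to act out."""
    if map_info.get("next_turn") == "left" and "left_turn_lane" in perception.get("lanes", []):
        # The map says "left turn in 50 m" and perception confirms the lane exists.
        return {
            "voice": "Left turn in 50 meters; please enter the dedicated left-turn lane.",
            "expression": "focused",
            "gesture": "wave_left_hand",
        }
    return {"voice": "Continue straight.", "expression": "neutral", "gesture": None}

print(first_navigation({"lanes": ["straight", "left_turn_lane"]}, {"next_turn": "left"}))
```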
In the above embodiment, by having the digital person displayed on the vehicle-mounted display device present the navigation interaction animation according to the exterior environment perception information and the map information, the navigation is carried out by an anthropomorphic, visible digital person. The navigation process is more vivid, occupants feel the warmth of human-computer interaction, and driving pleasure, comfort, and a sense of companionship are improved.
In some alternative embodiments, such as shown in fig. 16, the method may further include:
in step 106, map information including the out-of-vehicle environment and traffic control information are acquired.
In the embodiments of the present disclosure, the execution order of step 106 and step 101 is not limited. In addition to map information including the environment outside the vehicle, traffic control information may be acquired. The traffic control information includes, but is not limited to: real-time traffic control information, and traffic control measures implemented at a given location over a long or short term. Real-time traffic control information can be acquired through real-time road condition detection and similar means, while longer- or shorter-term local traffic control information can be acquired through interaction with the Internet.
Accordingly, step 102 may include: generating and displaying, according to the exterior environment perception information, the map information, and the traffic control information, an animation in which the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin presents interaction information for driving assistance.
For example, as shown in fig. 17, step 102 may include:
In step 1001, second navigation information is generated according to the vehicle exterior environment perception information, the map information, and the traffic control information.
In step 1002, according to the second navigation information, an animation of navigation interactive information made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin is generated and displayed.
For example, if the second navigation information indicates going straight ahead with temporary traffic control due to road construction 500 meters ahead, the animation content may include, but is not limited to, at least one of: playing the voice 'Go straight ahead; there is temporary traffic control due to road construction 500 meters ahead, please slow down, don't worry', the digital person making an 'attentive' expression, and the digital person making a 'hand-waving' action while the 'don't worry' part of the voice is played.
In the above embodiment, the digital person displayed on the vehicle-mounted display device can be made to present an animation of navigation interactive information according to the vehicle exterior environment perception information, the map information, and the traffic control information, so that navigation is more accurate, people feel the warmth of human-computer interaction, and driving pleasure, comfort, and the sense of companionship are improved.
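Continuing the hypothetical sketch above, steps 1001 and 1002 might overlay traffic control information on the first navigation information; TrafficControl, its fields, and the 1000-meter relevance threshold are again assumptions, not names or values from this disclosure:

from dataclasses import dataclass

@dataclass
class TrafficControl:          # one traffic control record
    kind: str                  # e.g. "construction"
    distance_m: int
    temporary: bool

def generate_second_navigation_info(nav: dict, controls: list) -> dict:
    # Step 1001: keep only controls close enough to matter.
    nearby = [c for c in controls if c.distance_m <= 1000]
    return dict(nav, controls=nearby)

def to_animation_with_controls(nav: dict) -> AnimationDirective:
    # Step 1002: mention the upcoming control and reassure the driver.
    for c in nav.get("controls", []):
        if c.kind == "construction" and c.temporary:
            return AnimationDirective(
                voice=("Go straight ahead; temporary traffic control due to "
                       f"road construction {c.distance_m} meters ahead, "
                       "please slow down, don't worry"),
                expression="attentive",
                gesture="wave_hand")
    return to_animation(nav)   # fall back to plain navigation animation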
FIG. 18 is a block diagram of a driving assistance interaction apparatus according to an exemplary embodiment of the present disclosure. The apparatus includes: the first obtaining module 1110, configured to obtain vehicle exterior environment perception information; and the interaction module 1120, configured to generate and display, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
In some optional embodiments, the interaction module comprises: the first determining submodule, used for determining action information for driving assistance matched with the vehicle exterior environment perception information; and the first interaction submodule, used for generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the vehicle cabin.
In some optional embodiments, the first interaction submodule comprises: the first determining unit is used for determining voice information matched with the vehicle exterior environment perception information; the acquisition unit is used for acquiring corresponding voice according to the voice information, wherein the voice comprises a timestamp; and the first interaction unit is used for generating and displaying the animation of the action executed by the digital person at the moment corresponding to the timestamp according to the action information while playing the voice.
In some optional embodiments, the action comprises a plurality of sub-actions, each sub-action matching a phoneme in the speech, and the timestamp comprises a timestamp of each phoneme; the first interaction submodule comprises: the second determining unit, used for determining, according to the timestamp of each phoneme, the execution time of the sub-action matching each phoneme; and the second interaction unit, used for generating and displaying, according to the action information, an animation of the digital person performing the sub-action matched with each phoneme at the timestamp of that phoneme.
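To make the phoneme-level synchronization concrete, here is a small hedged sketch of how sub-actions could be scheduled against the timestamps carried in the synthesized voice; Phoneme, SubAction, and the phoneme-to-action mapping are illustrative assumptions:

from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str            # phoneme label from the TTS engine
    start_ms: int          # timestamp at which this phoneme is voiced

@dataclass
class SubAction:
    name: str              # e.g. a mouth shape or gesture keyframe

def schedule_sub_actions(phonemes, match):
    # For each phoneme timestamp, look up the matching sub-action and
    # schedule it at that instant, so the digital person's motion stays
    # in sync with voice playback.
    timeline = []
    for ph in phonemes:
        action = match.get(ph.symbol)
        if action is not None:
            timeline.append((ph.start_ms, action))
    return sorted(timeline, key=lambda item: item[0])

At playback time, a renderer would consume this timeline as the audio clock passes each start_ms, which is what keeps mouth shapes and gestures aligned with the spoken phonemes.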
In some optional embodiments, the first interaction submodule comprises: the calling unit, used for calling, from an action model library, at least one frame of action slice of the digital person corresponding to the action information; and the display unit, used for sequentially displaying each frame of action slice among the at least one frame of action slice of the digital person on the display device.
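A minimal sketch of such frame-by-frame playback, assuming a dictionary-backed action model library and a display object with a show() method (both hypothetical, as is the 30 fps rate):

import time

FRAME_INTERVAL_S = 1 / 30      # assumed 30 fps display rate

def play_action(action_library, action_id, display):
    # Fetch the action's frame slices and show them in order.
    slices = action_library.get(action_id, [])
    for frame in slices:
        display.show(frame)           # render one digital-person frame
        time.sleep(FRAME_INTERVAL_S)  # hold until the next frame is due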
In some optional embodiments, the interaction module comprises: the first processing submodule, used for performing first predetermined task processing according to the vehicle exterior environment perception information to obtain a first predetermined task processing result; and the second interaction submodule, used for, in response to the first predetermined task processing result meeting a preset first safe driving early warning condition, generating and displaying an animation of interactive information of a first safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
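One plausible instance of such a first predetermined task is forward-collision estimation from exterior perception alone; the field names and the 2.5-second threshold below are assumptions for illustration only, not values from the patent:

def first_task(perception: dict) -> dict:
    # Estimate time-to-collision (TTC) with the vehicle ahead.
    gap_m = perception["gap_to_lead_vehicle_m"]
    closing_mps = max(perception["closing_speed_mps"], 1e-3)  # avoid div by 0
    return {"ttc_s": gap_m / closing_mps}

def first_warning_condition(result: dict, ttc_threshold_s: float = 2.5) -> bool:
    # The early warning condition is met when TTC drops below the threshold.
    return result["ttc_s"] < ttc_threshold_s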
In some optional embodiments, the apparatus further comprises: the second acquisition module, used for acquiring vehicle control information; the interaction module comprises: the third interaction submodule, used for generating and displaying, according to the vehicle exterior environment perception information and the vehicle control information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
In some optional embodiments, the third interaction submodule comprises: the first processing unit, used for performing second predetermined task processing according to the vehicle exterior environment perception information and the vehicle control information to obtain a second predetermined task processing result; and the third interaction unit, used for, in response to the second predetermined task processing result meeting a preset second safe driving early warning condition, generating and displaying an animation of interactive information of a second safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
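By analogy with the first task, a second predetermined task could, for instance, compare vehicle control information (current speed) with the perceived speed limit; again a hypothetical sketch with assumed field names and tolerance:

def second_task(perception: dict, vehicle_control: dict) -> dict:
    # How far the current speed exceeds the perceived speed limit.
    return {"over_limit_kph":
            vehicle_control["speed_kph"] - perception["speed_limit_kph"]}

def second_warning_condition(result: dict, tolerance_kph: float = 5.0) -> bool:
    return result["over_limit_kph"] > tolerance_kph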
In some optional embodiments, the apparatus further comprises: the third acquisition module, used for acquiring state analysis information of the driver in the vehicle; the interaction module comprises: the fourth interaction submodule, used for generating and displaying, according to the vehicle exterior environment perception information and the in-vehicle driver state analysis information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
In some optional embodiments, the fourth interaction submodule comprises: the second processing unit, used for performing third predetermined task processing according to the vehicle exterior environment perception information and the in-vehicle driver state analysis information to obtain a third predetermined task processing result; and the fourth interaction unit, used for, in response to the third predetermined task processing result meeting a preset third safe driving early warning condition, generating and displaying an animation of interactive information of a third safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
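A third predetermined task could combine exterior perception with driver-state analysis, for example warning when the road ahead demands attention while the driver appears fatigued or distracted; all field names and the 0.7 fatigue threshold are assumptions:

def third_task(perception: dict, driver_state: dict) -> dict:
    demanding = perception["obstacle_ahead"] or perception["sharp_curve_ahead"]
    impaired = driver_state["fatigue_level"] > 0.7 or driver_state["distracted"]
    return {"risk": demanding and impaired}

def third_warning_condition(result: dict) -> bool:
    return result["risk"]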
In some optional embodiments, the interaction module comprises: the fifth interaction submodule, used for generating and displaying, according to the vehicle exterior environment perception information, the acquired vehicle control information, and the acquired in-vehicle driver state analysis information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
In some optional embodiments, the apparatus further comprises: the fourth acquisition module, used for acquiring map information of the environment outside the vehicle; the interaction module comprises: the first generation submodule, used for generating first navigation information according to the vehicle exterior environment perception information and the map information; and the sixth interaction submodule, used for generating and displaying, according to the first navigation information, an animation of navigation interactive information made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
In some optional embodiments, the apparatus further comprises: the fifth acquisition module, used for acquiring map information and traffic control information of the environment outside the vehicle; the interaction module comprises: the second generation submodule, used for generating second navigation information according to the vehicle exterior environment perception information, the map information, and the traffic control information; and the seventh interaction submodule, used for generating and displaying, according to the second navigation information, an animation of navigation interactive information made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant description of the method embodiments. The apparatus embodiments described above are merely illustrative; the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program, where the computer program is used for executing any one of the above vehicle-mounted digital person-based driving assistance interaction methods.
In some optional embodiments, the present disclosure provides a computer program product comprising computer-readable code; when the code runs on a device, a processor in the device executes instructions for implementing the vehicle-mounted digital person-based driving assistance interaction method provided in any one of the above embodiments.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
The embodiment of the present disclosure also provides a driving assistance interaction device, including: a processor; and a memory for storing processor-executable instructions, where the processor is configured to call the executable instructions stored in the memory to implement any one of the above vehicle-mounted digital person-based driving assistance interaction methods.
FIG. 19 is a schematic hardware structure diagram of a driving assistance interaction device according to an embodiment of the present application. The driving assistance interaction device 1210 includes a processor 1211 and may further include an input device 1212, an output device 1213, and a memory 1214. The input device 1212, the output device 1213, the memory 1214, and the processor 1211 are connected to one another via a bus. The memory includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), and is used for storing related instructions and data.
The input device is used for inputting data and/or signals, and the output device is used for outputting data and/or signals. The output device and the input device may be separate devices or an integrated device. In the embodiment of the present disclosure, the input device 1212 includes a camera 01 and other input devices 02, where the other input devices 02 include, but are not limited to, an audio capture device. The output device 1213 includes a display device 03 and other output devices 04, where the other output devices 04 may include, but are not limited to, an audio output device; the digital person is displayed via the display device 03, and the animation of the digital person performing the corresponding action is also displayed via the display device 03.
The processor may include one or more processors, for example, one or more central processing units (CPUs); in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU. The memory is used to store the program code and data of the driving assistance interaction device. The processor is used for calling the program code and data in the memory to execute the steps in the above method embodiments. For details, reference may be made to the description of the method embodiments, which is not repeated here.
It will be appreciated that FIG. 19 shows only a simplified design of the driving assistance interaction device. In practical applications, the driving assistance interaction device may also include other necessary elements, including but not limited to any number of input/output devices, processors, controllers, memories, etc., and all driving assistance interaction devices that can implement the embodiments of the present application fall within the scope of the present application.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The embodiment of the present disclosure also provides a vehicle, including: a camera for capturing images inside and/or outside the vehicle; a vehicle-mounted display device for displaying a digital person and animations of interactive information made by the digital person for driving assistance; and any one of the driving assistance interaction devices described above.
For example, as shown in fig. 20, embodiments of the present disclosure also provide a vehicle 1300, including: an in-vehicle camera 1310, an in-vehicle display device 1320, and a driving assistance interaction means 1330.
The vehicle-mounted camera 1310 may be configured to acquire an image inside the vehicle and/or an image outside the vehicle, the vehicle-mounted display device 1320 may be configured to display a digital person and an animation of interactive information for assisting driving made by the digital person, and the driving assistance interaction device 1330 may be configured to generate and display the animation of the interactive information for assisting driving made by the digital person displayed on the vehicle-mounted display device disposed in the vehicle cabin according to the acquired environment sensing information outside the vehicle.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (17)

1. A vehicle-mounted digital person-based driving assistance interaction method, characterized by comprising the following steps:
acquiring vehicle exterior environment perception information;
and generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
2. The method according to claim 1, wherein the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
determining action information for driving assistance matched with the vehicle exterior environment perception information;
and generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the vehicle cabin.
3. The method according to claim 2, wherein the generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the vehicle cabin comprises:
determining voice information matched with the vehicle external environment perception information;
acquiring corresponding voice according to the voice information, wherein the voice comprises a timestamp;
and generating and displaying the animation of the action executed by the digital person at the moment corresponding to the timestamp according to the action information while playing the voice.
4. The method of claim 3, wherein the action comprises a plurality of sub-actions, each sub-action matching a phoneme in the speech, and wherein the timestamp comprises a timestamp for each phoneme;
the generating and displaying the animation of the action executed by the digital person at the moment corresponding to the timestamp according to the action information comprises the following steps:
determining the execution time of the sub-action matched with each phoneme according to the time stamp of each phoneme;
and generating and displaying an animation of the digital person performing a sub-action matched with each phoneme at the time stamp of each phoneme according to the action information.
5. The method according to any one of claims 2 to 4, wherein the generating and displaying, according to the action information, an animation of the digital person performing the corresponding action on the vehicle-mounted display device in the vehicle cabin comprises:
calling at least one frame of action slice of the digital person corresponding to the action information from an action model library;
sequentially displaying each frame of action slice among the at least one frame of action slice of the digital person on the display device.
6. The method according to any one of claims 1-5, wherein the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
performing first predetermined task processing according to the vehicle exterior environment perception information to obtain a first predetermined task processing result;
and in response to the first predetermined task processing result meeting a preset first safe driving early warning condition, generating and displaying an animation of interactive information of a first safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
7. The method according to any one of claims 1-6, further comprising:
acquiring vehicle control information;
the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
generating and displaying, according to the vehicle exterior environment perception information and the vehicle control information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
8. The method according to claim 7, wherein the generating and displaying, according to the vehicle exterior environment perception information and the vehicle control information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
performing second predetermined task processing according to the vehicle exterior environment perception information and the vehicle control information to obtain a second predetermined task processing result;
and in response to the second predetermined task processing result meeting a preset second safe driving early warning condition, generating and displaying an animation of interactive information of a second safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
9. The method according to any one of claims 1-8, further comprising: acquiring state analysis information of a driver in the vehicle;
the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
generating and displaying, according to the vehicle exterior environment perception information and the in-vehicle driver state analysis information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
10. The method according to claim 9, wherein the generating and displaying, according to the vehicle exterior environment perception information and the in-vehicle driver state analysis information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
performing third predetermined task processing according to the vehicle exterior environment perception information and the in-vehicle driver state analysis information to obtain a third predetermined task processing result;
and in response to the third predetermined task processing result meeting a preset third safe driving early warning condition, generating and displaying an animation of interactive information of a third safe driving early warning made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
11. The method according to any one of claims 1-10, wherein the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
generating and displaying, according to the vehicle exterior environment perception information, acquired vehicle control information, and acquired in-vehicle driver state analysis information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
12. The method according to any one of claims 1-11, further comprising: acquiring map information of the environment outside the vehicle;
the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
generating first navigation information according to the vehicle exterior environment perception information and the map information;
and generating and displaying, according to the first navigation information, an animation of navigation interactive information made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
13. The method according to any one of claims 1-12, further comprising: acquiring map information and traffic control information of the environment outside the vehicle;
the generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin comprises:
generating second navigation information according to the vehicle exterior environment perception information, the map information, and the traffic control information;
and generating and displaying, according to the second navigation information, an animation of navigation interactive information made by the digital person displayed on the vehicle-mounted display device arranged in the vehicle cabin.
14. A driving assistance interaction apparatus, characterized in that the apparatus comprises:
the first acquisition module, used for acquiring vehicle exterior environment perception information;
and the interaction module, used for generating and displaying, according to the vehicle exterior environment perception information, an animation of interactive information for driving assistance made by a digital person displayed on a vehicle-mounted display device arranged in a vehicle cabin.
15. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the vehicle-mounted digital person-based driving assistance interaction method according to any one of claims 1-13.
16. A driving assistance interaction apparatus, characterized by comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to call the executable instructions stored in the memory to implement the vehicle-mounted digital person-based driving assistance interaction method according to any one of claims 1-13.
17. A vehicle, characterized by comprising:
the camera is used for acquiring images inside and/or outside the vehicle;
the vehicle-mounted display equipment is used for displaying a digital person and animation of interactive information made by the digital person for driving assistance; and
the driving assistance interaction device according to claim 14 or 16.
CN202010592118.3A 2020-06-24 2020-06-24 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium Pending CN111736701A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010592118.3A CN111736701A (en) 2020-06-24 2020-06-24 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium
PCT/CN2020/136255 WO2021258671A1 (en) 2020-06-24 2020-12-14 Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium
KR1020217043117A KR20220015462A (en) 2020-06-24 2020-12-14 Assistive driving interaction method and device based on in-vehicle digital human, storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010592118.3A CN111736701A (en) 2020-06-24 2020-06-24 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium

Publications (1)

Publication Number Publication Date
CN111736701A 2020-10-02

Family

ID=72651098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010592118.3A Pending CN111736701A (en) 2020-06-24 2020-06-24 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium

Country Status (3)

Country Link
KR (1) KR20220015462A (en)
CN (1) CN111736701A (en)
WO (1) WO2021258671A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470394A (en) * 2021-07-05 2021-10-01 浙江商汤科技开发有限公司 Augmented reality display method and related device, vehicle and storage medium
WO2021258671A1 (en) * 2020-06-24 2021-12-30 上海商汤临港智能科技有限公司 Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105679209A (en) * 2015-12-31 2016-06-15 戴姆勒股份公司 In-car 3D holographic projection device
CN108859963A (en) * 2018-06-28 2018-11-23 深圳奥尼电子股份有限公司 Multifunctional driving assistance method, multifunctional driving assistance device, and storage medium
WO2019114030A1 (en) * 2017-12-11 2019-06-20 惠州市德赛西威汽车电子股份有限公司 Driving assistance system and method fusing navigation and intelligent vision
CN110608751A (en) * 2019-08-14 2019-12-24 广汽蔚来新能源汽车科技有限公司 Driving navigation method and device, vehicle-mounted computer equipment and storage medium
CN110728256A (en) * 2019-10-22 2020-01-24 上海商汤智能科技有限公司 Interaction method and device based on vehicle-mounted digital person and storage medium
CN110926487A (en) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 Driving assistance method, driving assistance system, computing device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242486B2 (en) * 2017-04-17 2019-03-26 Intel Corporation Augmented reality and virtual reality feedback enhancement system, apparatus and method
CN115620545A (en) * 2017-08-24 2023-01-17 北京三星通信技术研究有限公司 Augmented reality method and device for driving assistance
CN111124198A (en) * 2018-11-01 2020-05-08 广州汽车集团股份有限公司 Animation playing and interaction method, device, system and computer equipment
CN111105691A (en) * 2020-01-07 2020-05-05 重庆渝微电子技术研究院有限公司 Driving assistance equipment quality detection system
CN111736701A (en) * 2020-06-24 2020-10-02 上海商汤临港智能科技有限公司 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium


Also Published As

Publication number Publication date
WO2021258671A1 (en) 2021-12-30
KR20220015462A (en) 2022-02-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination