CN113147771A - Active interaction method and device based on vehicle-mounted virtual robot - Google Patents

Active interaction method and device based on vehicle-mounted virtual robot

Info

Publication number
CN113147771A
CN113147771A (application CN202110508159.4A)
Authority
CN
China
Prior art keywords
vehicle
virtual robot
question
mounted virtual
target face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110508159.4A
Other languages
Chinese (zh)
Inventor
钟煌辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qianhai Qijian Technology Shenzhen Co ltd
Original Assignee
Qianhai Qijian Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qianhai Qijian Technology Shenzhen Co ltd
Priority to CN202110508159.4A
Publication of CN113147771A
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0881Seat occupation; Driver or passenger presence
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/22Psychological state; Stress level or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an active interaction method and device based on a vehicle-mounted virtual robot. The method comprises the following steps: detecting whether a person has entered the driver's seat and/or front passenger seat of the current vehicle; if so, controlling the vehicle-mounted virtual robot to perform a corresponding action and acquiring a corresponding target face image; and outputting corresponding greeting information according to the target face image and a predetermined greeting rule. With the active interaction method and device, the vehicle-mounted virtual robot can actively initiate interaction with the vehicle occupants, improving the occupants' experience of using the vehicle-mounted virtual robot.

Description

Active interaction method and device based on vehicle-mounted virtual robot
Technical Field
The present application relates to the technical field of vehicle-mounted robots, and in particular to an active interaction method and device based on a vehicle-mounted virtual robot.
Background
In current vehicles, a vehicle-mounted virtual robot is a virtual robot displayed on the center-console display device. At present, interaction between the vehicle-mounted virtual robot and vehicle occupants is essentially a passive question-and-answer mode: the robot cannot actively initiate interaction, which gives occupants a poor experience during use.
Disclosure of Invention
An object of the embodiments of the present application is to provide an active interaction method and device based on a vehicle-mounted virtual robot, which enable the vehicle-mounted virtual robot to actively initiate interaction with vehicle occupants and thereby improve the occupants' experience of using it.
In a first aspect, an embodiment of the present application provides an active interaction method based on a vehicle-mounted virtual robot, comprising:
detecting whether a person has entered the driver's seat and/or front passenger seat of the current vehicle;
if so, controlling the vehicle-mounted virtual robot to perform a corresponding action and acquiring a corresponding target face image;
and outputting corresponding greeting information according to the target face image and a predetermined greeting rule.
In the implementation process, upon detecting that a person has entered the driver's seat and/or front passenger seat of the current vehicle, the method controls the vehicle-mounted virtual robot to perform a corresponding action and acquires a corresponding target face image, then outputs corresponding greeting information according to the target face image and a predetermined greeting rule. Greeting a person who has just boarded is a form of active interaction between the vehicle-mounted virtual robot and the vehicle occupants; the robot can therefore initiate interaction on its own, which improves the occupants' experience of using it.
Further, outputting corresponding greeting information according to the target face image and a predetermined greeting rule comprises:
performing face recognition on the target face image and determining whether matching person information exists;
if so, acquiring the matching person information and outputting corresponding greeting information according to it;
if not, recognizing the gender and age of the person from the target face image and outputting corresponding greeting information according to the recognized gender and age.
In the implementation process, when face recognition finds matching person information, the method outputs greeting information based on that information, so the occupant can be greeted by name, which further improves the occupant's experience of using the vehicle-mounted virtual robot; when no matching person information exists, greeting information is output based on the recognized gender and age, so occupants are still greeted appropriately and a good experience is preserved.
Further, after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
acquiring basic vehicle data of the current vehicle;
and outputting corresponding vehicle data information according to the basic vehicle data.
In the implementation process, announcing vehicle data is likewise a form of active interaction between the vehicle-mounted virtual robot and the vehicle occupants, and further improves the occupants' experience.
Further, after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting whether a person has left the driver's seat and/or front passenger seat of the current vehicle;
and if so, controlling the vehicle-mounted virtual robot to perform a corresponding action and outputting corresponding farewell information according to a predetermined farewell rule.
In the implementation process, the method can see occupants off as they leave the vehicle; this farewell is likewise a form of active interaction between the vehicle-mounted virtual robot and the occupants, and further improves their experience.
Further, after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting the driver's emotion to obtain a corresponding emotion detection result;
and, according to the emotion detection result and a predetermined emotion adjustment rule, controlling the vehicle-mounted virtual robot to perform a corresponding emotion adjustment action and outputting corresponding emotion adjustment information.
In the implementation process, the method can help regulate the driver's emotion; this, too, is a form of active interaction between the vehicle-mounted virtual robot and the occupants, and further improves their experience.
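The emotion adjustment rule described above can be sketched as a lookup from a detected emotion label to a robot action and message. All names, labels, and messages below are hypothetical illustrations, not part of the application:

```python
# Minimal sketch (all names hypothetical): map a detected driver emotion to a
# robot emotion-adjustment action and a spoken message, per a rule table.

EMOTION_RULES = {
    # emotion label -> (robot action, message the robot speaks)
    "angry":   ("soothing_gesture", "Take a deep breath; I'll play some calm music."),
    "sad":     ("cheering_gesture", "Cheer up! Shall I tell you a joke?"),
    "neutral": (None, None),  # no adjustment needed
}

def adjust_emotion(emotion_result: str):
    """Return the (action, message) pair for a detected emotion, if any."""
    return EMOTION_RULES.get(emotion_result, (None, None))
```

In practice the emotion label would come from a facial-expression classifier; unknown labels fall through to "no adjustment".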
Further, after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting whether the driver is in a fatigued driving state;
and if so, controlling the vehicle-mounted virtual robot to perform a refreshing action and outputting a fatigued-driving warning.
In the implementation process, the method can alert a driver who is driving while fatigued; this alert is likewise a form of active interaction between the vehicle-mounted virtual robot and the occupants, and further improves their experience.
Further, after controlling the vehicle-mounted virtual robot to perform the refreshing action and outputting the fatigued-driving warning, the method further comprises:
lowering the air-conditioner temperature in the current vehicle.
In the implementation process, the method lowers the in-vehicle air-conditioner temperature when the driver is in a fatigued driving state; the cooler air has a refreshing effect on a fatigued driver and helps the driver recover an alert state.
In a second aspect, an embodiment of the present application provides an active interaction device based on a vehicle-mounted virtual robot, comprising:
a detection module, configured to detect whether a person has entered the driver's seat and/or front passenger seat of the current vehicle;
a control processing module, configured to control the vehicle-mounted virtual robot to perform a corresponding action and acquire a corresponding target face image when a person is detected entering the driver's seat and/or front passenger seat of the current vehicle;
and an output module, configured to output corresponding greeting information according to the target face image and a predetermined greeting rule.
In the implementation process, upon detecting that a person has entered the driver's seat and/or front passenger seat of the current vehicle, the device controls the vehicle-mounted virtual robot to perform a corresponding action and acquires a corresponding target face image, then outputs corresponding greeting information according to the target face image and a predetermined greeting rule. Greeting a boarding person is a form of active interaction, so the vehicle-mounted virtual robot can actively initiate interaction with vehicle occupants and improve their experience of using it.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor runs the computer program to cause the electronic device to execute the above active interaction method based on a vehicle-mounted virtual robot.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above active interaction method based on a vehicle-mounted virtual robot.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a first flowchart of the active interaction method based on a vehicle-mounted virtual robot according to Embodiment 1 of the present application;
Fig. 2 is a schematic flowchart of step S130 according to Embodiment 1 of the present application;
Fig. 3 is a second flowchart of the active interaction method based on a vehicle-mounted virtual robot according to Embodiment 1 of the present application;
Fig. 4 is a structural block diagram of the active interaction device based on a vehicle-mounted virtual robot according to Embodiment 2 of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
At present, interaction between the vehicle-mounted virtual robot and vehicle occupants is essentially a passive question-and-answer mode: the robot cannot actively initiate interaction, which gives occupants a poor experience during use.
To solve this problem in the prior art, the present application provides an active interaction method and device based on a vehicle-mounted virtual robot, so that the vehicle-mounted virtual robot can actively initiate interaction with vehicle occupants and improve their experience of using it.
Embodiment 1
Referring to fig. 1, fig. 1 is a first flowchart of the active interaction method based on a vehicle-mounted virtual robot according to an embodiment of the present application. The method of this embodiment may be applied to a vehicle-mounted controller.
The active interaction method based on the vehicle-mounted virtual robot comprises the following steps:
and step S110, detecting whether a person gets on the vehicle at the main driving position and/or the auxiliary driving position of the current vehicle.
In this embodiment, if it is detected that someone gets on the main driving seat and/or the assistant driving seat of the current vehicle, step S120 is executed; if the fact that no person gets on the vehicle at the main driving position and/or the auxiliary driving position of the current vehicle is detected, the process is ended.
When detecting whether a person gets on the main driving position and/or the assistant driving position of the current vehicle, the person getting on the vehicle is detected only if the main driving position or the assistant driving position of the current vehicle is detected, or the main driving position and the assistant driving position of the current vehicle are detected to simultaneously get on the vehicle, namely the person getting on the vehicle is detected; correspondingly, when it is detected that no person gets on the vehicle at the main driving position and the auxiliary driving position of the current vehicle, that is, no person gets on the vehicle at the main driving position and/or the auxiliary driving position of the current vehicle is detected.
Optionally, when detecting whether a person gets on the main driving seat and/or the assistant driving seat of the current vehicle, whether the person gets on the main driving seat and/or the assistant driving seat of the current vehicle may be detected through an image or a video acquired by the camera device.
It will be appreciated that the primary driver's seat of the current vehicle, i.e. the primary driver's seat or position of the current vehicle, and the secondary driver's seat of the current vehicle, i.e. the secondary driver's seat or position of the current vehicle.
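The boarding decision of step S110 can be sketched as a transition check between two samples of seat occupancy. The seat names and the dictionary shape are illustrative assumptions; in a real system the booleans would come from camera-based detection or seat sensors:

```python
# Sketch of step S110 (hypothetical helper): a boarding event is any front
# seat going from empty to occupied between two consecutive samples.

def boarding_event(prev: dict, curr: dict) -> bool:
    """True if the driver's seat or front passenger seat was newly occupied."""
    return any(not prev[seat] and curr[seat] for seat in ("driver", "passenger"))
```

A person who was already seated in the previous sample does not trigger a new event, so the greeting fires only once per boarding.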
Step S120: controlling the vehicle-mounted virtual robot to perform a corresponding action and acquiring a corresponding target face image.
In this embodiment, the vehicle-mounted virtual robot may be a virtual robot displayed on the center-console display device, and may take the form of, for example, a cute girl, a lively boy, or a mature, steady professional.
Controlling the vehicle-mounted virtual robot to perform a corresponding action may mean controlling the direction the robot faces.
Optionally, when a person enters the driver's seat of the current vehicle, the robot is controlled to face the driver's seat; when a person enters the front passenger seat, the robot is controlled to face the front passenger seat; and when persons enter both seats, the robot is controlled to face the driver's seat.
Optionally, when acquiring the corresponding target face image: if a person has entered the driver's seat, the target face image of that person is acquired; if a person has entered the front passenger seat, the target face image of that person is acquired; and if persons have entered both seats simultaneously, the target face image of the person in the driver's seat is acquired.
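Both orientation and target-face selection in step S120 follow the same priority rule, which can be sketched in a few lines (names are illustrative):

```python
# Sketch of step S120's priority rule: the robot faces, and captures the face
# image from, the boarded seat; the driver's seat wins when both are boarded.

def choose_target_seat(driver_boarded: bool, passenger_boarded: bool):
    """Return the seat the robot should face and acquire a face image from."""
    if driver_boarded:        # alone or together with the passenger
        return "driver"
    if passenger_boarded:
        return "passenger"
    return None
```

Returning `None` corresponds to the "no boarding detected" branch, in which the flow simply ends.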
Step S130: outputting corresponding greeting information according to the target face image and a predetermined greeting rule.
In this embodiment, the predetermined greeting rule may comprise a predetermined greeting pattern.
The greeting information output may be voice information, or voice information together with text displayed on the center-console display device of the current vehicle.
Optionally, when a person enters the driver's seat of the current vehicle, greeting information for that person is output; when a person enters the front passenger seat, greeting information for the person in that seat is output; and when persons enter both seats, greeting information for the person in the driver's seat is output.
For example, the output greeting information may be of the form "XXX (person's name or honorific), good morning/good afternoon/good evening".
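A minimal sketch of such a greeting pattern, choosing the salutation from the hour of day; the exact wording and the hour boundaries are assumptions based on the example above:

```python
# Hypothetical greeting rule: pick good morning/afternoon/evening from a
# 24-hour clock and prepend the person's name or honorific.

def make_greeting(name: str, hour: int) -> str:
    """Compose 'XXX, good morning/afternoon/evening'."""
    if 5 <= hour < 12:
        salutation = "good morning"
    elif 12 <= hour < 18:
        salutation = "good afternoon"
    else:
        salutation = "good evening"
    return f"{name}, {salutation}"
```

The `name` argument would come from face recognition (step S132) or from a gender/age-based honorific (step S133), described below.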
In this embodiment, controlling the vehicle-mounted virtual robot to perform a corresponding action and outputting corresponding greeting information according to the target face image and a predetermined greeting rule together constitute the vehicle-mounted virtual robot actively interacting with the vehicle occupants.
According to the active interaction method based on the vehicle-mounted virtual robot of this embodiment, when a person is detected entering the driver's seat and/or front passenger seat of the current vehicle, the vehicle-mounted virtual robot is controlled to perform a corresponding action and a corresponding target face image is acquired; corresponding greeting information is then output according to the target face image and a predetermined greeting rule. Greeting a boarding person is a form of active interaction, so the vehicle-mounted virtual robot can actively initiate interaction with vehicle occupants and improve their experience of using it.
Referring to fig. 2, fig. 2 is a schematic flowchart of step S130 provided in the embodiment of the present application.
In some embodiments of the present application, step S130 of outputting corresponding greeting information according to the target face image and a predetermined greeting rule may comprise the following steps:
step S131: performing face recognition on the target face image and determining whether matching person information exists;
step S132: acquiring the matching person information and outputting corresponding greeting information according to it;
step S133: recognizing the gender and age of the person from the target face image and outputting corresponding greeting information according to the recognized gender and age.
If matching person information is found, step S132 is executed; otherwise, step S133 is executed.
When performing face recognition on the target face image, the image may be matched against pre-stored person avatars; in general, each pre-stored avatar has corresponding person information.
It can be understood that, when greeting according to matching person information, the output may be of the form "XXX (person's name), good morning/good afternoon/good evening"; when greeting according to recognized gender and age, the output may be of the form "XXX (honorific, e.g. Mr.), good morning/good afternoon/good evening".
In this process, when face recognition finds matching person information, the method greets the occupant by name, which further improves the occupant's experience of using the vehicle-mounted virtual robot; when no match exists, greeting information is output based on the recognized gender and age, so occupants are still greeted appropriately and a good experience is preserved.
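Steps S131-S133 amount to a match-or-fallback rule, sketched below. The honorifics and the use of gender alone (age could refine the choice further) are illustrative assumptions:

```python
# Hedged sketch of steps S131-S133: greet by matched name if face recognition
# found stored person info, else fall back to an honorific inferred from the
# recognized gender. All names are hypothetical.

def greet_from_recognition(match_name, gender=None):
    """Return the salutation target: matched name, or a gender-based honorific."""
    if match_name is not None:   # S132: matching person info exists
        return match_name
    # S133: no match -- use the recognized gender (and, in practice, age)
    return "Sir" if gender == "male" else "Madam"
```

The returned string would then be passed into the greeting pattern of step S130.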
Referring to fig. 3, fig. 3 is a second flowchart of the active interaction method based on the vehicle-mounted virtual robot according to the embodiment of the present application.
In some embodiments of the present application, after step S130 of outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method may further comprise the following steps:
step S140: acquiring basic vehicle data of the current vehicle;
step S150: outputting corresponding vehicle data information according to the basic vehicle data.
The basic vehicle data of the current vehicle may include data such as the remaining fuel of the current vehicle.
The vehicle data information output may be voice information, or voice information together with text displayed on the center-console display device of the current vehicle.
For example, if the basic vehicle data includes the remaining fuel of the current vehicle, the vehicle data information output according to the basic vehicle data may be of the form "the remaining fuel of the vehicle is XXXX".
In this process, announcing vehicle data is likewise a form of active interaction between the vehicle-mounted virtual robot and the occupants, and further improves their experience of using the vehicle-mounted virtual robot.
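Steps S140-S150 can be sketched as formatting the acquired basic data into the announced message. The field name `fuel_remaining_pct` and the wording are assumptions, covering only the fuel example given above:

```python
# Illustrative sketch of steps S140-S150: turn basic vehicle data into the
# spoken/displayed vehicle data information. Field names are hypothetical.

def vehicle_data_message(basic_data: dict) -> str:
    """Format the current vehicle's basic data into an announcement string."""
    parts = []
    if "fuel_remaining_pct" in basic_data:
        parts.append(f"the remaining fuel is {basic_data['fuel_remaining_pct']}%")
    if not parts:
        return ""   # nothing to announce
    return "Currently, " + " and ".join(parts) + "."
```

Further fields (battery charge, tire pressure, and so on) would be appended to `parts` in the same way.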
In some embodiments of the present application, after outputting the corresponding greeting information according to the target face image and the predetermined greeting rule in step S130, the method may further include the following steps:
detecting whether a person gets out of the driver's seat and/or the front passenger seat of the current vehicle;
and if so, controlling the vehicle-mounted virtual robot to perform a corresponding action, and outputting corresponding farewell information according to a predetermined farewell rule.
If no one is detected getting out of the driver's seat and/or the front passenger seat of the current vehicle, the process ends.
Detecting whether a person gets out of the driver's seat and/or the front passenger seat of the current vehicle may be done in the same way as detecting whether a person gets into those seats, described above, and is not repeated here.
Optionally, whether a person gets out of the driver's seat and/or the front passenger seat of the current vehicle may be detected from images or video captured by the camera device.
Optionally, whether the driver has gotten out of the current vehicle may also be determined by detecting whether the vehicle's engine has been turned off.
Here, controlling the vehicle-mounted virtual robot to perform the corresponding action may be controlling the orientation of the vehicle-mounted virtual robot.
Optionally, when a person gets out of the driver's seat of the current vehicle, the vehicle-mounted virtual robot is controlled to face the driver's seat; when a person gets out of the front passenger seat, the vehicle-mounted virtual robot is controlled to face the front passenger seat; and when people get out of both the driver's seat and the front passenger seat, the vehicle-mounted virtual robot is controlled to face the driver's seat.
The predetermined farewell rule may be a predetermined farewell manner.
The output farewell information may be voice information, or voice information together with text information displayed on the display device of the current vehicle's center console.
For example, the farewell information output according to the predetermined farewell rule may be 'Goodbye'.
In this process, the method can see occupants off as they leave the vehicle; this farewell is likewise a form of active interaction between the vehicle-mounted virtual robot and the vehicle occupants, which further improves the occupants' experience of using the vehicle-mounted virtual robot.
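The orientation rule and farewell (send-off) output described above can be condensed into one small decision function; this is a hypothetical sketch, and the orientation labels and farewell phrase are illustrative, not mandated by the application:

```python
from typing import Optional


def respond_to_exit(driver_exits: bool, passenger_exits: bool) -> Optional[dict]:
    """Apply the orientation rule: face the driver's seat whenever the driver
    gets out (including when both front occupants get out), face the passenger
    seat when only the passenger gets out, and do nothing when nobody gets out."""
    if not driver_exits and not passenger_exits:
        return None  # no one got out; the process ends
    face_toward = "driver_seat" if driver_exits else "passenger_seat"
    return {"face_toward": face_toward, "farewell_voice": "Goodbye!"}
```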
In some embodiments of the present application, after outputting the corresponding greeting information according to the target face image and the predetermined greeting rule in step S130, the method may further include the following steps:
detecting the emotion of a driver to obtain a corresponding emotion detection result;
and controlling the vehicle-mounted virtual robot to execute corresponding emotion adjusting actions according to the emotion detection result and a preset emotion adjusting rule, and outputting corresponding emotion adjusting information.
The driver's emotion may be detected from video captured by the camera device.
The emotion detection result may be, for example, that the driver is happy, that the driver is calm, or that the driver is sad.
For example, if the emotion detection result is that the driver is happy, the facial expression of the vehicle-mounted virtual robot can be actively changed to a smiling face and a flower-scattering animation performed, and at the same time the robot's outfit can be changed to preset brightly colored clothing; if the emotion detection result is that the driver is sad, the facial expression of the vehicle-mounted virtual robot can be actively changed to a crying face.
The output emotion-adjusting information may be voice information, or voice information together with text information displayed on the display device of the current vehicle's center console.
For example, if the emotion detection result is that the driver is happy, the output emotion-adjusting information may be 'XXX (a name or nickname), you seem to be in a good mood today'; if the emotion detection result is that the driver is sad, the output emotion-adjusting information may be a preset comforting phrase, for example, 'Seeing you, I suddenly realized how handsome/beautiful you are'.
In this process, the method can adjust the driver's emotion; this emotion adjustment is likewise a form of active interaction between the vehicle-mounted virtual robot and the driver, which further improves the driver's experience of using the vehicle-mounted virtual robot.
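The emotion-to-response examples above can be organized as a rule table keyed by the detection result; the labels, actions, and phrases below merely mirror the examples in the text and are an illustrative assumption, not an exhaustive or authoritative rule set:

```python
# Hypothetical emotion-adjustment rule table; entries mirror the examples above.
EMOTION_RULES = {
    "happy": {
        "expression": "smiling_face",
        "action": "scatter_flowers",
        "outfit": "bright_colored",
        "voice": "You seem to be in a good mood today!",
    },
    "sad": {
        "expression": "crying_face",
        "action": None,
        "outfit": None,
        "voice": "Seeing you, I suddenly realized how good you look.",
    },
    "calm": {
        "expression": "neutral_face",
        "action": None,
        "outfit": None,
        "voice": None,
    },
}


def adjust_for_emotion(emotion_result: str) -> dict:
    """Look up the adjustment rule; unrecognized results fall back to 'calm'."""
    return EMOTION_RULES.get(emotion_result, EMOTION_RULES["calm"])
```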
In some embodiments of the present application, after outputting the corresponding greeting information according to the target face image and the predetermined greeting rule in step S130, the method may further include the following steps:
detecting whether a driver is in a fatigue driving state;
and if so, controlling the vehicle-mounted virtual robot to perform a refreshing action and outputting fatigue driving warning information.
If the driver is not detected to be in a fatigued driving state, the process ends.
Whether the driver is in a fatigued driving state may be detected from video captured by the camera device.
Optionally, controlling the vehicle-mounted virtual robot to perform a refreshing action may be controlling the vehicle-mounted virtual robot to perform a preset hand-waving action.
The output fatigue driving warning information may be voice information, or voice information together with text information displayed on the display device of the current vehicle's center console.
In this process, the method can alert a driver who is in a fatigued driving state; this fatigue driving alert is likewise a form of active interaction between the vehicle-mounted virtual robot and the driver, which further improves the driver's experience of using the vehicle-mounted virtual robot.
Optionally, after controlling the vehicle-mounted virtual robot to perform a refreshing action and outputting the fatigue driving warning information, the method may further include:
and reducing the temperature of the air conditioner in the vehicle.
In this process, the method lowers the current in-vehicle air-conditioning temperature when the driver is in a fatigued driving state; the lower air-conditioning temperature has a refreshing effect on a fatigued driver and thus better helps the driver recover an alert state.
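The fatigue response, including the optional air-conditioning adjustment, can be sketched as a single function; the temperature step and floor are assumptions, since the application does not specify how much to lower the temperature:

```python
AC_TEMP_STEP_C = 2.0   # assumed reduction per trigger; not specified in the text
MIN_AC_TEMP_C = 18.0   # assumed floor so the cabin does not get too cold


def respond_to_fatigue(is_fatigued: bool, current_ac_temp_c: float) -> dict:
    """If the driver is fatigued: perform the preset hand-waving action, output
    a warning, and lower the in-vehicle air-conditioning temperature."""
    if not is_fatigued:
        return {"action": None, "warning": None, "ac_temp_c": current_ac_temp_c}
    return {
        "action": "wave_hands",
        "warning": "Fatigue driving detected. Please take a rest.",
        "ac_temp_c": max(MIN_AC_TEMP_C, current_ac_temp_c - AC_TEMP_STEP_C),
    }
```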
Example two
In order to execute the corresponding method of the above embodiment and achieve the corresponding functions and technical effects, an active interaction device based on a vehicle-mounted virtual robot is provided below.
Referring to fig. 4, fig. 4 is a block diagram of a structure of an active interaction device based on a vehicle-mounted virtual robot according to an embodiment of the present application.
The active interaction device based on the vehicle-mounted virtual robot in the embodiment of the application comprises:
the detection module 210 is configured to detect whether a person gets on a main driving seat and/or a secondary driving seat of a current vehicle;
the control processing module 220 is used for controlling the vehicle-mounted virtual robot to execute corresponding actions and acquiring corresponding target face images when detecting that a person gets on the vehicle at the main driving position and/or the auxiliary driving position of the current vehicle;
and the output module 230 is configured to output corresponding question and question information according to the target face image and a predetermined question and question rule.
According to the active interaction device based on the vehicle-mounted virtual robot, when detecting that a person is at a main driving position and/or a secondary driving position of a current vehicle, the vehicle-mounted virtual robot is controlled to execute corresponding actions, and corresponding target face images are obtained; and outputting corresponding asking information according to the target face image and a preset asking rule, wherein the asking mode of the person on the vehicle is an active interaction mode of the vehicle-mounted virtual robot and the person in the vehicle, so that the vehicle-mounted virtual robot can actively initiate interaction with the person in the vehicle, and the use experience of the person in the vehicle on the vehicle-mounted virtual robot is improved.
As an optional implementation manner, the output module 230 may specifically be configured to:
performing face recognition on the target face image, and determining whether corresponding matched person information exists;
when corresponding matched person information exists, acquiring the matched person information and outputting corresponding greeting information according to the matched person information;
and when no corresponding matched person information exists, estimating the gender and age of the person from the target face image, and outputting corresponding greeting information according to the estimated gender and age.
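The two greeting branches of the output module (matched person information versus estimated gender and age) can be sketched as follows; the honorifics and phrasing are illustrative assumptions, not taken from the application:

```python
from typing import Optional


def build_greeting(match: Optional[dict], estimated_gender: str,
                   estimated_age: int) -> str:
    """Greet by name when face recognition found matched person information;
    otherwise fall back to a generic greeting from estimated gender and age."""
    if match is not None:
        return f"Hello, {match['name']}! Welcome aboard."
    honorific = "sir" if estimated_gender == "male" else "madam"
    if estimated_age < 18:
        honorific = "young friend"
    return f"Hello, {honorific}! Welcome aboard."
```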
As an optional implementation manner, the active interaction device based on a vehicle-mounted virtual robot according to the embodiment of the present application further includes:
the acquisition module is used for acquiring vehicle basic data of a current vehicle;
the output module 230 may be further configured to output corresponding vehicle data information according to the vehicle basic data.
As an optional implementation manner, the detection module 210 may be further configured to detect whether a person gets out of the driver's seat and/or the front passenger seat of the current vehicle;
the control processing module 220 is further configured to control the vehicle-mounted virtual robot to perform a corresponding action when a person is detected getting out of the driver's seat and/or the front passenger seat of the current vehicle;
the output module 230 may be further configured to output corresponding farewell information according to a predetermined farewell rule when a person is detected getting out of the driver's seat and/or the front passenger seat of the current vehicle.
As an optional implementation manner, the detection module 210 may further be configured to detect an emotion of the driver, and obtain a corresponding emotion detection result;
the control processing module 220 is further configured to control the vehicle-mounted virtual robot to execute a corresponding emotion adjustment action according to the emotion detection result and a predetermined emotion adjustment rule;
the output module 230 may further be configured to output corresponding emotion adjusting information according to the emotion detection result and a predetermined emotion adjusting rule.
As an optional implementation manner, the detection module 210 may be further configured to detect whether the driver is in a fatigue driving state;
the control processing module 220 is further configured to control the vehicle-mounted virtual robot to perform a refreshing action when it is detected that the driver is in a fatigued driving state;
the output module 230 may further be configured to output fatigue driving warning information when detecting that the driver is in a fatigue driving state.
Optionally, the control processing module 220 may be further configured to reduce the current vehicle interior air conditioning temperature when the driver is detected to be in the fatigue driving state.
The active interaction device based on the vehicle-mounted virtual robot can implement the active interaction method based on the vehicle-mounted virtual robot in the first embodiment. The alternatives in the first embodiment are also applicable to the present embodiment, and are not described in detail here.
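The three-module structure of the device (detection module 210, control processing module 220, output module 230) can be sketched as a thin pipeline with injected callables, so the control flow can be exercised without real camera hardware; this wiring is a hypothetical illustration, not the device's actual implementation:

```python
class ActiveInteractionDevice:
    """Sketch of the device: a detection module, a control processing module,
    and an output module, here represented by injected callables."""

    def __init__(self, detect_boarding, capture_face, greet):
        self.detect_boarding = detect_boarding  # detection module (210)
        self.capture_face = capture_face        # control processing module (220)
        self.greet = greet                      # output module (230)

    def run_once(self, frame):
        """One pass: if boarding is detected, capture the target face image and
        output the greeting; otherwise do nothing and return None."""
        if not self.detect_boarding(frame):
            return None
        face_image = self.capture_face(frame)
        return self.greet(face_image)
```

Wiring in stub callables, as in the test below, lets the flow be checked end to end before the camera, robot rendering, and speech output are attached.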
For the remaining content of this embodiment, reference may be made to the first embodiment, and details are not repeated here.
EXAMPLE III
An embodiment of the present application provides an electronic device, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to enable the electronic device to execute the above active interaction method based on the vehicle-mounted virtual robot.
Alternatively, the electronic device may be an onboard controller.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for active interaction based on a vehicle-mounted virtual robot is implemented.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. An active interaction method based on a vehicle-mounted virtual robot is characterized by comprising the following steps:
detecting whether a person gets into the driver's seat and/or the front passenger seat of a current vehicle;
if so, controlling the vehicle-mounted virtual robot to perform a corresponding action and acquiring a corresponding target face image;
and outputting corresponding greeting information according to the target face image and a predetermined greeting rule.
2. The active interaction method based on the vehicle-mounted virtual robot according to claim 1, wherein the outputting of corresponding greeting information according to the target face image and a predetermined greeting rule comprises:
performing face recognition on the target face image, and determining whether corresponding matched person information exists;
if so, acquiring the matched person information, and outputting corresponding greeting information according to the matched person information;
and if not, estimating the gender and age of the corresponding person from the target face image, and outputting corresponding greeting information according to the estimated gender and age.
3. The active interaction method based on the vehicle-mounted virtual robot according to claim 1, wherein after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
acquiring vehicle basic data of the current vehicle;
and outputting corresponding vehicle data information according to the vehicle basic data.
4. The active interaction method based on the vehicle-mounted virtual robot according to claim 1, wherein after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting whether a person gets out of the driver's seat and/or the front passenger seat of the current vehicle;
and if so, controlling the vehicle-mounted virtual robot to perform a corresponding action, and outputting corresponding farewell information according to a predetermined farewell rule.
5. The active interaction method based on the vehicle-mounted virtual robot according to claim 1, wherein after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting the emotion of a driver to obtain a corresponding emotion detection result;
and controlling the vehicle-mounted virtual robot to execute corresponding emotion adjusting actions according to the emotion detection result and a preset emotion adjusting rule, and outputting corresponding emotion adjusting information.
6. The active interaction method based on the vehicle-mounted virtual robot according to claim 1, wherein after outputting corresponding greeting information according to the target face image and a predetermined greeting rule, the method further comprises:
detecting whether a driver is in a fatigue driving state;
and if so, controlling the vehicle-mounted virtual robot to perform a refreshing action and outputting fatigue driving warning information.
7. The active interaction method based on the vehicle-mounted virtual robot according to claim 6, wherein after the controlling of the vehicle-mounted virtual robot to perform a refreshing action and the outputting of fatigue driving warning information, the method further comprises:
and reducing the temperature of the air conditioner in the current vehicle.
8. An active interaction device based on a vehicle-mounted virtual robot is characterized by comprising:
the detection module is used for detecting whether a person gets into the driver's seat and/or the front passenger seat of a current vehicle;
the control processing module is used for controlling the vehicle-mounted virtual robot to perform a corresponding action and acquiring a corresponding target face image when a person is detected getting into the driver's seat and/or the front passenger seat of the current vehicle;
and the output module is used for outputting corresponding greeting information according to the target face image and a predetermined greeting rule.
9. An electronic device, comprising a memory for storing a computer program and a processor for executing the computer program to make the electronic device execute the active interaction method based on the in-vehicle virtual robot according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the in-vehicle virtual robot-based active interaction method according to any one of claims 1 to 7.
CN202110508159.4A 2021-05-10 2021-05-10 Active interaction method and device based on vehicle-mounted virtual robot Pending CN113147771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110508159.4A CN113147771A (en) 2021-05-10 2021-05-10 Active interaction method and device based on vehicle-mounted virtual robot

Publications (1)

Publication Number Publication Date
CN113147771A true CN113147771A (en) 2021-07-23

Family

ID=76874315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110508159.4A Pending CN113147771A (en) 2021-05-10 2021-05-10 Active interaction method and device based on vehicle-mounted virtual robot

Country Status (1)

Country Link
CN (1) CN113147771A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502381A (en) * 2016-09-21 2017-03-15 北京光年无限科技有限公司 A kind of multi-modal output intent of the robot for visual capacity
CN108363492A (en) * 2018-03-09 2018-08-03 南京阿凡达机器人科技有限公司 A kind of man-machine interaction method and interactive robot
CN108664123A (en) * 2017-12-15 2018-10-16 蔚来汽车有限公司 People's car mutual method, apparatus, vehicle intelligent controller and system
CN110427472A (en) * 2019-08-02 2019-11-08 深圳追一科技有限公司 The matched method, apparatus of intelligent customer service, terminal device and storage medium
CN110688973A (en) * 2019-09-30 2020-01-14 Oppo广东移动通信有限公司 Equipment control method and related product
CN110728256A (en) * 2019-10-22 2020-01-24 上海商汤智能科技有限公司 Interaction method and device based on vehicle-mounted digital person and storage medium

Similar Documents

Publication Publication Date Title
CN108725357B (en) Parameter control method and system based on face recognition and cloud server
US9956963B2 (en) Apparatus for assessing, predicting, and responding to driver fatigue and drowsiness levels
US20170349027A1 (en) System for controlling vehicle climate of an autonomous vehicle socially
US20170330044A1 (en) Thermal monitoring in autonomous-driving vehicles
US10053113B2 (en) Dynamic output notification management for vehicle occupant
JP2018537332A (en) Vehicle control system based on human face recognition
WO2013153781A1 (en) Affect-monitoring system
JP6713490B2 (en) Information providing apparatus and information providing method
WO2018061354A1 (en) Information provision device, and moving body
WO2017104793A1 (en) System for enhancing sensitivity of vehicle occupant
CN112035034A (en) Vehicle-mounted robot interaction method
CN113696844A (en) Vehicle cabin viewing method, device and computer readable storage medium
CN114771442A (en) Vehicle personalized setting method and vehicle
CN113147771A (en) Active interaction method and device based on vehicle-mounted virtual robot
WO2020039994A1 (en) Car sharing system, driving control adjustment device, and vehicle preference matching method
CN116580661A (en) Terminal color adjustment method, system, electronic device and storage medium in vehicle
DE102018203944B4 (en) Method and motor vehicle for outputting information depending on a property characterizing an occupant of the motor vehicle
WO2022124164A1 (en) Attention object sharing device, and attention object sharing method
CN112356670B (en) Control parameter adjusting method and device for vehicle-mounted equipment
CN114734912A (en) Method and device for reminding in cabin through atmosphere lamp, electronic equipment and storage medium
Wu et al. Designing for driver's emotional transitions and rituals
CN112506353A (en) Vehicle interaction system, method, storage medium and vehicle
CN207059776U (en) A kind of motor vehicle driving approval apparatus
JP2023172303A (en) Vehicle control method and vehicle control device
CN110751011A (en) Driving safety detection method, driving safety detection device and vehicle-mounted terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210723