CN112735440A - Vehicle-mounted intelligent robot interaction method, robot and vehicle - Google Patents

Vehicle-mounted intelligent robot interaction method, robot and vehicle Download PDF

Info

Publication number
CN112735440A
CN112735440A (application CN202011629125.2A)
Authority
CN
China
Prior art keywords
user
vehicle
robot
information
voice information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011629125.2A
Other languages
Chinese (zh)
Inventor
陈万慧
胡泽钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yanyan Zhiyu Technology Co ltd
Original Assignee
Beijing Overlooking Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Overlooking Technology Co ltd
Priority to CN202011629125.2A
Publication of CN112735440A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification techniques
    • G10L17/22 - Interactive procedures; Man-machine interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/30 - Semantic analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Child & Adolescent Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a vehicle-mounted intelligent robot interaction method, which comprises the following steps: acquiring user voice information and calculating a sound source position; controlling the robot to rotate towards the sound source position to acquire user image information; judging the identity of the user according to the sound source position; when the user identity is a driver, recognizing the emotion and driving state of the user according to the image information and the voice information of the user, and entering a first interaction mode; and when the user identity is a passenger, recognizing the emotion of the user according to the image information and the voice information of the user, and entering a second interaction mode. In addition, the invention also provides a robot and a vehicle.

Description

Vehicle-mounted intelligent robot interaction method, robot and vehicle
Technical Field
The invention relates to the technical field of vehicle-mounted robots, in particular to a vehicle-mounted intelligent robot interaction method, a robot and a vehicle.
Background
As the number of automobiles worldwide grows, the automobile has become people's most important means of transportation, road vehicle usage continues to rise, and traffic congestion has become a common problem of urban traffic, so drivers often need to drive for long periods. Long-term driving can leave a driver in a poor mood and lead to fatigue driving, while passengers can grow bored on long rides.
With the development of robotics, more and more vehicle-mounted robot devices are emerging. However, existing vehicle-mounted robots have limited functions, poor interaction capabilities, and complex operation, and are mostly confined to in-vehicle entertainment, information, navigation, location-based electronic maps, and the like. A robot that can soothe the driver and passengers in the vehicle and assist the driver therefore has significant social and practical value.
Disclosure of Invention
The invention provides a vehicle-mounted intelligent robot interaction method, a robot, and a vehicle that can recognize a user's emotion and interact well with the user.
In a first aspect, an embodiment of the present invention provides a vehicle-mounted intelligent robot interaction method, where the vehicle-mounted intelligent robot interaction method includes:
acquiring user voice information and calculating a sound source position;
controlling the robot to rotate towards the sound source position to acquire user image information;
judging the identity of the user according to the sound source position;
when the user identity is a driver, recognizing the emotion and driving state of the user according to the image information and the voice information of the user, and entering a first interaction mode;
and when the user identity is a passenger, recognizing the emotion of the user according to the image information and the voice information of the user, and entering a second interaction mode.
In a second aspect, embodiments of the present invention provide a robot, comprising:
the voice information module is used for acquiring the voice information of the user and calculating the position of a sound source;
the action control module is used for controlling the robot to rotate towards the sound source position;
the recognition module is used for recognizing the emotion and the driving state of the user according to the image information and the voice information of the user when the identity of the user is a driver, and for recognizing the emotion of the user according to the image information and the voice information of the user when the identity of the user is a passenger;
the processing module is used for judging the identity of the user and for entering the first interaction mode or the second interaction mode; and
the image collection module is used for acquiring the image information of the user.
In a third aspect, an embodiment of the present invention provides a robot including a processor and a memory, where the memory stores program instructions for the vehicle-mounted intelligent robot interaction method and the processor executes those instructions to implement the method.
In a fourth aspect, an embodiment of the present invention provides a vehicle including a vehicle body and the above robot provided on the vehicle body.
According to the vehicle-mounted intelligent robot interaction method, robot, and vehicle, user voice information is obtained, the sound source position is identified, user image information is acquired, and semantic recognition and face recognition are performed according to the different user roles to determine the user's emotion and provide targeted soothing. The robot can keep the user company on long drives so that long-distance driving is no longer dull, improving the driving experience. Meanwhile, it can monitor the driver's driving safety and identify road conditions and objects around the vehicle to remind the driver to drive carefully, thereby improving driving safety.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an interaction method of a vehicle-mounted intelligent robot according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a first embodiment of a first interaction mode according to the present invention.
Fig. 3 is a flowchart illustrating a second embodiment of the first interaction mode according to the present invention.
Fig. 4 is a flowchart illustrating a third embodiment of the first interaction mode according to the present invention.
FIG. 5 is a flow chart illustrating a second interaction mode provided by the present invention.
FIG. 6 is a schematic flow chart illustrating a first embodiment of the present invention.
FIG. 7 is a flow chart illustrating a second embodiment of the present invention.
Fig. 8 is a schematic diagram of a robot module according to the present invention.
Fig. 9 is a schematic diagram of the internal structure of the robot provided by the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. The drawings illustrate exemplary embodiments of the invention, in which like numerals denote like elements; they are for illustration only and are not drawn to scale. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like (if any) in the description, claims, and drawings of the present application are used to distinguish similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances; in other words, the described embodiments can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and any variations thereof are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
It should be noted that the descriptions involving "first," "second," etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features involved. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In addition, the technical solutions of different embodiments may be combined with each other, provided that the combination can be realized by a person skilled in the art; where such a combination is contradictory or cannot be realized, it should be considered not to exist and to fall outside the protection scope of the present invention.
Please refer to fig. 1, which is a flowchart of a vehicle-mounted intelligent robot interaction method according to an embodiment of the present invention.
Step S100, acquiring user voice information and calculating the sound source position. Specifically, the method obtains the user voice information through a sound sensor array disposed on the robot, where the array includes a plurality of sound sensors, and calculates the sound source position through a sound source localization algorithm, such as the MUSIC algorithm or the GCC-PHAT algorithm, which is not limited herein. In some possible embodiments, a wake-up condition may be set so that the robot wakes only when the user speaks a specific phrase; for example, when the user says "Hi, bear", the robot wakes up, stays on, and analyzes the sound source position from the user voice information.
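By way of illustration of the GCC-PHAT technique named above, here is a minimal Python sketch of time-delay estimation for one microphone pair and its conversion to a bearing angle. It assumes NumPy, a far-field source, and a known microphone spacing; the function names are illustrative, not from the patent.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of `sig` relative to `ref` (in seconds)
    using the PHAT-weighted generalized cross-correlation."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    r = SIG * np.conj(REF)
    r /= np.abs(r) + 1e-15          # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(r, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-centre so index 0 corresponds to lag -max_shift.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def bearing_from_tdoa(tau, mic_spacing_m, c=343.0):
    """Far-field conversion of a two-microphone TDOA to an azimuth in degrees."""
    sin_theta = np.clip(tau * c / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

With more than two sensors, the per-pair bearings would typically be fused (or a full MUSIC spectrum computed) for a more robust estimate.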
And step S200, controlling the robot to rotate towards the sound source position and acquiring user image information. Specifically, the robot is rotated by a motorized drive to face the user according to the sound source position, giving the user the feeling of face-to-face communication. With the robot facing the user, user image information can conveniently be acquired through the image sensor.
And step S300, judging the identity of the user according to the sound source position. Specifically, from the sound source position it is possible to determine whether the user is in the driver's seat, the front passenger seat, or a rear seat, and thereby whether the speaking user is the driver or a passenger.
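A minimal sketch of how the estimated bearing could be mapped to a seat zone and hence to a user identity. The dashboard mounting, left-hand-drive layout, and sector boundaries are illustrative assumptions, not values given in the patent.

```python
from enum import Enum

class UserIdentity(Enum):
    DRIVER = "driver"
    FRONT_PASSENGER = "front_passenger"
    REAR_PASSENGER = "rear_passenger"

def identify_user(azimuth_deg: float) -> UserIdentity:
    # Robot assumed on the dashboard of a left-hand-drive car:
    # 0 degrees points straight back into the cabin, negative angles
    # toward the driver's side. Near-centre sound is taken to come
    # from the rear bench.
    if azimuth_deg < -20.0:
        return UserIdentity.DRIVER
    if azimuth_deg > 20.0:
        return UserIdentity.FRONT_PASSENGER
    return UserIdentity.REAR_PASSENGER
```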
And step S400, when the user identity is a driver, recognizing the emotion and driving state of the user according to the user image and the voice information, and entering a first interaction mode. Details will be described later.
And S500, when the user identity is a passenger, recognizing the emotion of the user according to the user image and the voice information, and entering a second interaction mode. Details will be described later.
Please refer to fig. 2, which is a flowchart illustrating a first embodiment of a first interaction mode according to the present invention.
And step S411, acquiring the user's facial micro-expression according to the user image information. Specifically, from the user image acquired by the image sensor, the user's facial micro-expressions are recognized, such as eyebrow direction, the distance between the eyebrows, eye size, pupil size, the angles of the two mouth corners, and the shapes of the upper and lower lips.
And step S412, judging the emotion of the user according to the user's facial micro-expression and voice information. Specifically, the user's emotion is judged comprehensively from changes in the facial micro-expression together with the tone, speech rate, and loudness in the voice information and the semantics computed through deep learning. For example, when the user's eyes are barely open, the brows are knitted with little distance between them, the mouth corners are pulled down, the speech rate is slow, and filler words appear frequently, the user is judged to be unhappy or worried.
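The combined judgment can be pictured as a rule over fused face and voice features. The sketch below encodes the example just given; the feature names and thresholds are illustrative assumptions (the patent gives no numeric values), and a deployed system would more likely use a trained classifier.

```python
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    eye_openness: float         # 0 (closed) .. 1 (wide open)
    brow_distance: float        # normalised distance between the eyebrows
    mouth_corner_angle: float   # degrees; negative = corners pulled down

@dataclass
class VoiceFeatures:
    speech_rate: float          # syllables per second
    filler_word_ratio: float    # share of filler/modal words in the utterance

def judge_emotion(face: FaceFeatures, voice: VoiceFeatures) -> str:
    # Mirrors the worked example: narrowed eyes, knitted brows,
    # downturned mouth, slow speech, frequent filler words.
    unhappy_face = (face.eye_openness < 0.4
                    and face.brow_distance < 0.3
                    and face.mouth_corner_angle < -5.0)
    unhappy_voice = (voice.speech_rate < 3.0
                     and voice.filler_word_ratio > 0.15)
    return "unhappy" if (unhappy_face and unhappy_voice) else "neutral"
```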
Step S413, if the user is in a bad mood, playing a corresponding soothing voice. Specifically, the soothing voice is played through the loudspeaker according to the user's bad mood. In some possible embodiments, the robot stores the user's profile and is provided with a face recognition device; it reads the profile through face recognition and derives an optimal soothing scheme from it, such as telling a joke, playing cheerful songs, or offering words of encouragement.
In this embodiment, once the user is identified as the driver, the driver's emotion is analyzed from the facial micro-expression and voice information, and an optimal soothing scheme is derived from the driver's emotion and the preferences in the profile. Soothing the driver's mood makes the driver less likely to become irritable, a state that is detrimental to driving and can lead to dangerous driving.
Please refer to fig. 3, which is a flowchart illustrating a second embodiment of the first interaction mode according to the present invention.
Step S421, identifying user portrait information according to the user image information, where the user portrait information includes eye gaze, eyelid opening degree, and head-lowering frequency. Specifically, the portrait information is recognized from the user image information acquired by the image sensor, for example the size, position, shape, mutual distances, and proportions of the facial features, as well as the eye gaze, eyelid opening degree, and head-lowering frequency.
And step S422, judging the driving state according to the user portrait information. Specifically, the eye-closure frequency, the duration of each closure, and the head-lowering frequency are analyzed to determine whether the driver is fatigued. The image sensor scans the driver's face at high frequency: for example, if the driver's eye closures reach 100 within 1 minute, fatigue driving is determined; if the eyes close for more than two seconds five times within 1 minute, fatigue driving is determined; and if the driver lowers the head 3 times within 30 seconds, fatigue driving is determined.
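The three numeric criteria lend themselves to simple sliding-window bookkeeping. A sketch, assuming the image-analysis pipeline reports each eye closure (with its duration) and each head lowering as events; the class and method names are ours.

```python
import time
from collections import deque

class FatigueMonitor:
    def __init__(self):
        self.eye_closures = deque()    # (timestamp, duration in seconds)
        self.head_lowerings = deque()  # timestamps

    def record_eye_closure(self, duration_s, now=None):
        self.eye_closures.append((time.time() if now is None else now, duration_s))

    def record_head_lowering(self, now=None):
        self.head_lowerings.append(time.time() if now is None else now)

    def is_fatigued(self, now=None):
        if now is None:
            now = time.time()
        # Drop events that have left their observation windows.
        while self.eye_closures and now - self.eye_closures[0][0] > 60:
            self.eye_closures.popleft()
        while self.head_lowerings and now - self.head_lowerings[0] > 30:
            self.head_lowerings.popleft()
        long_closures = sum(1 for _, d in self.eye_closures if d > 2.0)
        return (len(self.eye_closures) >= 100      # 100 closures in 1 minute
                or long_closures >= 5              # five closures > 2 s in 1 minute
                or len(self.head_lowerings) >= 3)  # 3 head lowerings in 30 s
```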
Step S423, if the user is driving while fatigued, playing a corresponding warning voice. Specifically, a voice can be played through the loudspeaker to remind the user of the fatigue. In some possible embodiments, the robot is connected to the on-board computer and can make the driver's seat vibrate or sound the horn as a warning. In particular situations, for example when fatigue driving is detected repeatedly within a short time or the degree of fatigue is high (e.g., each eye closure lasting more than 5 seconds), the robot can control the vehicle to decelerate and stop at the roadside.
In this embodiment, once the user is identified as the driver, the driving state is analyzed from the driver's facial features, and when the driver is fatigued, effective reminders assist the driver in driving safely.
Please refer to fig. 4, which is a flowchart illustrating a third embodiment of the first interaction mode according to the present invention.
In step S431, road information is acquired and objects around the vehicle are identified. Specifically, the image sensor may comprise a plurality of cameras; when the user is recognized as the driver, the camera that captures road information is turned on to acquire the road information.
And step S432, playing a corresponding reminding voice. Specifically, when road conditions are complicated, the driver may be unable to attend to all obstacles around the vehicle; the robot then assists the driver by identifying the road conditions and issuing reminders.
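A sketch of how detections from the road-facing camera might be voiced to the driver. The detection tuple format, the `tts_speak` callback, and the 10-metre threshold are hypothetical; the patent does not specify a detector interface.

```python
def remind_driver(detections, tts_speak, near_m=10.0):
    # `detections`: assumed list of (label, distance in metres, bearing text)
    # tuples from some object detector; `tts_speak`: assumed text-to-speech callback.
    for label, distance_m, bearing in detections:
        if distance_m < near_m:
            tts_speak(f"Attention: {label} about {distance_m:.0f} metres to the {bearing}.")

# e.g. remind_driver([("pedestrian", 6.0, "right")], print)
```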
In this embodiment, once the user is identified as the driver, obstacles around the vehicle are derived from the road information and the driver is reminded, assisting the driver's driving.
Please refer to fig. 5, which is a flowchart illustrating a second interaction mode according to the present invention.
And step S511, acquiring the user's facial micro-expression according to the user image information. Specifically, when the user is identified as a passenger, the user's facial micro-expressions are recognized from the user image acquired by the image sensor, such as eyebrow direction, the distance between the eyebrows, eye size, pupil size, the angles of the two mouth corners, and the shapes of the upper and lower lips.
And step S512, judging the emotion of the user according to the facial micro-expression and voice information. Specifically, the user's emotion is judged comprehensively from changes in the facial micro-expression together with the tone, speech rate, and loudness in the voice information and the semantics obtained through deep learning. For example, when the user's head is tilted slightly upwards, the eyebrows are raised, the eyes are almost closed, the mouth is open with the upper and lower teeth apart and the lower teeth visible, and the tone is light, the user's emotion is judged to be happy.
Step S513, if the user is in a bad mood, playing a corresponding soothing voice. Specifically, a soothing voice is played through the robot's speaker according to the user's bad mood. In some possible embodiments, the robot stores the user's profile and is provided with a face recognition module; it reads the profile through face recognition and derives an optimal soothing scheme from it, such as telling a joke, playing cheerful songs, or offering words of encouragement.
In some embodiments, when the user is a passenger who is judged to be sad, the passenger's mood can be adjusted in several ways, for example by playing a video suited to the user's preferences or by finding a topic of interest and chatting with the user. When the user is the driver, by contrast, too much interaction would distract the driver and could lead to dangerous driving.
Please refer to fig. 6, which is a flow chart illustrating a first embodiment of the present invention.
Step S601, receiving the user voice in real time, and acquiring the user voice information. Specifically, the voice information module of the robot receives the voice of the user in real time to acquire the voice information of the user.
Step S602, performing semantic analysis according to the user voice information to obtain the user intention. Specifically, the recognition module analyzes the user's speech through deep learning to obtain the user's intention.
Step S603, generating a response according to the user intention. Specifically, for example, when the user is heard to say "so bored, I want to listen to a song", semantic analysis determines that the user is bored and wants music to relax, and music is played immediately; when the user is heard to say "what a tiring day, my back and waist are sore", semantic analysis determines that the user is uncomfortable and needs a massage, so the seat massage function is turned on and the user is soothed by voice.
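A keyword-lookup sketch of this intent-and-response step. The patent obtains intent through deep-learning semantic analysis, for which this table is only a stand-in, and the `car.media`, `car.seat`, and `car.tts` handles on the on-board computer are hypothetical.

```python
from typing import Optional

INTENT_KEYWORDS = {
    "play_music":   ("bored", "listen to a song", "play some music"),
    "seat_massage": ("back and waist are sore", "back is sore", "massage"),
}

def detect_intent(utterance: str) -> Optional[str]:
    # Return the first intent whose keywords appear in the utterance.
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return None

def respond(intent: str, car) -> None:
    # `car` is a hypothetical handle to the on-board computer.
    if intent == "play_music":
        car.media.play_relaxing_playlist()
    elif intent == "seat_massage":
        car.seat.start_massage()
        car.tts.speak("Turning on the seat massage. Take it easy.")
```

For example, detect_intent("so bored, I want to listen to a song") returns "play_music".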
In this embodiment, the user's intention is obtained by analyzing the voice information and is responded to automatically, sparing the user tedious manual operations and making the robot more user-friendly.
Please refer to fig. 7, which is a flow chart illustrating a second embodiment of the present invention.
And step S701, acquiring the temperature in the vehicle and the body temperature of the user. Specifically, the robot acquires the temperature inside the vehicle and the body temperature of the user through the temperature sensor.
And step S702, regulating the in-vehicle temperature according to the in-vehicle temperature and the user's body temperature. Specifically, the robot is in communication connection with the on-board computer and automatically turns on the air conditioner to adjust the in-vehicle temperature to an optimal value based on the user's historical and real-time body temperature data and the current in-vehicle temperature.
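One possible regulation rule, sketched below, blends the user's historical baseline with the real-time reading to pick an air-conditioning set point. The comfort formula, clamp range, and `hvac` interface are illustrative assumptions; the patent only states that the temperature is adjusted to an optimum.

```python
def regulate_cabin_temperature(cabin_temp_c, body_temp_c, body_temp_history_c, hvac):
    # Baseline from historical readings; fall back to a normal body temperature.
    if body_temp_history_c:
        baseline = sum(body_temp_history_c) / len(body_temp_history_c)
    else:
        baseline = 36.6
    # The warmer the user runs relative to their own baseline,
    # the cooler the requested cabin set point, within safe limits.
    target = 24.0 - 2.0 * (body_temp_c - baseline)
    target = min(max(target, 18.0), 28.0)
    if abs(cabin_temp_c - target) > 0.5:   # deadband avoids constant readjustment
        hvac.set_target_temperature(target)
    return target
```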
Please refer to fig. 8, which is a schematic diagram of a robot module according to the present invention. The robot 800 is installed in a vehicle, and includes a voice information module 501, a motion control module 502, a recognition module 503, a processing module 504, and an image collection module 505.
A voice information module 501, configured to obtain user voice information and calculate a sound source position;
a motion control module 502 for controlling the robot to rotate towards the sound source position;
the recognition module 503 is configured to recognize the emotion and the driving state of the user according to the image information and the voice information of the user when the identity of the user is a driver, and recognize the emotion of the user according to the image information and the voice information of the user when the identity of the user is a passenger, and enter a second interaction mode;
the processing module 504 is configured to determine the identity of the user, enter a first interaction mode, and enter a second interaction mode.
And an image collecting module 505 for acquiring user image information.
Please refer to fig. 9, which is a schematic diagram of the internal structure of the robot according to the present invention. The robot 800 includes at least one processor 802, a memory 801, and a communication component 804. The memory 801 is used for storing program instructions of the vehicle-mounted intelligent robot interaction method. The processor 802 is configured to execute those program instructions to implement the interaction between the robot and the user. In some embodiments, the communication component 804 includes, but is not limited to, wireless communication components such as a mobile network, satellite communication, and Bluetooth.
The processor 802 may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip for executing the vehicle-mounted intelligent robot interaction method stored in the memory 801. The memory 801 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments the memory 801 may be an internal storage unit of the computer device, such as its hard disk; in other embodiments it may be an external storage device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device. Further, the memory 801 may include both internal and external storage units of the computer device. The memory 801 may be used to store application software and various data installed in the computer device, such as the code of the vehicle-mounted intelligent robot interaction method, and may also be used to temporarily store data that has been or is to be output.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the invention are produced in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the unit is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that the above numbering of the embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. The vehicle-mounted intelligent robot interaction method is characterized by comprising the following steps:
acquiring user voice information and calculating a sound source position;
controlling the robot to rotate towards the sound source position to acquire user image information;
judging the identity of the user according to the sound source position;
when the user identity is a driver, recognizing the emotion and driving state of the user according to the image information and the voice information of the user, and entering a first interaction mode;
and when the user identity is a passenger, recognizing the emotion of the user according to the image information and the voice information of the user, and entering a second interaction mode.
2. The vehicle-mounted intelligent robot interaction method according to claim 1, wherein when the user identity is a driver, the user emotion is recognized according to the user image and the voice information, and entering the first interaction mode comprises:
acquiring a user facial micro-expression according to the user image information;
judging the emotion of the user according to the facial micro-expression of the user and the voice information;
and if the user is in bad mood, playing corresponding soothing voice.
3. The vehicle-mounted intelligent robot interaction method according to claim 1, wherein when the user identity is a driver, the driving state is recognized according to the user image and the voice information, and entering the first interaction mode comprises:
identifying user portrait information according to the user image information, wherein the user portrait information comprises eye gaze, eyelid opening degree and head-lowering frequency;
judging the driving state according to the user portrait information;
And if the user is in fatigue driving, playing corresponding warning voice.
4. The in-vehicle intelligent robot interaction method according to claim 1, wherein when the user identity is a driver, the in-vehicle intelligent robot interaction method further comprises:
acquiring road information and identifying objects around the vehicle;
and playing corresponding reminding voice.
5. The vehicle-mounted intelligent robot interaction method according to claim 1, wherein when the user identity is a passenger, the user emotion is recognized according to the user image and the voice information, and the entering into the second interaction mode comprises:
acquiring a user facial micro-expression according to the user image information;
judging the emotion of the user according to the facial micro-expression of the user and the voice information;
and if the user is in bad mood, playing corresponding soothing voice.
6. The in-vehicle smart robot interaction method of claim 1, further comprising:
receiving user voice in real time and acquiring user voice information;
performing semantic analysis according to the user voice information to obtain the user intention;
responding to the user intent.
7. The in-vehicle smart robot interaction method of claim 1, further comprising:
acquiring the temperature in the vehicle and the body temperature of a user;
and regulating and controlling the temperature in the vehicle according to the temperature in the vehicle and the body temperature of the user.
8. A robot, characterized in that the robot comprises:
the voice information module is used for acquiring the voice information of the user and calculating the position of a sound source;
the action control module is used for controlling the robot to rotate towards the sound source position;
the recognition module is used for recognizing the emotion and the driving state of the user according to the image information and the voice information of the user when the identity of the user is a driver, and for recognizing the emotion of the user according to the image information and the voice information of the user when the identity of the user is a passenger;
the processing module is used for judging the identity of the user and for entering the first interaction mode or the second interaction mode; and
the image collection module is used for acquiring the image information of the user.
9. A robot, characterized in that the robot comprises a processor and a memory, the memory is used for storing program instructions of the vehicle-mounted intelligent robot interaction method, and the processor is used for executing the program instructions of the vehicle-mounted intelligent robot interaction method so as to realize the vehicle-mounted intelligent robot interaction method according to any one of claims 1-7.
10. A vehicle, characterized by comprising a vehicle body and the robot of claim 9 provided on the vehicle body.
CN202011629125.2A 2020-12-30 2020-12-30 Vehicle-mounted intelligent robot interaction method, robot and vehicle Pending CN112735440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011629125.2A CN112735440A (en) 2020-12-30 2020-12-30 Vehicle-mounted intelligent robot interaction method, robot and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011629125.2A CN112735440A (en) 2020-12-30 2020-12-30 Vehicle-mounted intelligent robot interaction method, robot and vehicle

Publications (1)

Publication Number Publication Date
CN112735440A true CN112735440A (en) 2021-04-30

Family

ID=75608206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011629125.2A Pending CN112735440A (en) 2020-12-30 2020-12-30 Vehicle-mounted intelligent robot interaction method, robot and vehicle

Country Status (1)

Country Link
CN (1) CN112735440A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130096771A1 (en) * 2011-10-12 2013-04-18 Continental Automotive Systems, Inc. Apparatus and method for control of presentation of media to users of a vehicle
CN108882202A (en) * 2018-06-27 2018-11-23 肇庆高新区徒瓦科技有限公司 A kind of vehicle-mounted exchange method and device based on smart phone
CN109131167A (en) * 2018-08-03 2019-01-04 百度在线网络技术(北京)有限公司 Method for controlling a vehicle and device
CN111368609A (en) * 2018-12-26 2020-07-03 深圳Tcl新技术有限公司 Voice interaction method based on emotion engine technology, intelligent terminal and storage medium
CN109960407A (en) * 2019-03-06 2019-07-02 中山安信通机器人制造有限公司 A kind of method, computer installation and the computer readable storage medium of on-vehicle machines people active interaction

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113276861A (en) * 2021-06-21 2021-08-20 上汽通用五菱汽车股份有限公司 Vehicle control method, vehicle control system, and storage medium
CN113780062A (en) * 2021-07-26 2021-12-10 岚图汽车科技有限公司 Vehicle-mounted intelligent interaction method based on emotion recognition, storage medium and chip
CN113855542A (en) * 2021-09-08 2021-12-31 广州灵聚信息科技有限公司 Method and device for realizing intelligent massage based on intelligent voice interaction
CN113792663A (en) * 2021-09-15 2021-12-14 东北大学 Detection method and device for drunk driving and fatigue driving of driver and storage medium
CN113792663B (en) * 2021-09-15 2024-05-14 东北大学 Method, device and storage medium for detecting drunk driving and fatigue driving of driver
CN114454811A (en) * 2021-12-31 2022-05-10 北京瞰瞰智能科技有限公司 Intelligent auxiliary method and device for automobile driving safety and vehicle
WO2023168895A1 (en) * 2022-03-07 2023-09-14 上汽海外出行科技有限公司 Vehicle-mounted robot and operation method therefor, and medium and computer program product
CN116038723A (en) * 2022-12-16 2023-05-02 中汽创智科技有限公司 Interaction method and system of vehicle-mounted robot
CN116061959A (en) * 2023-04-03 2023-05-05 北京永泰万德信息工程技术有限公司 Human-computer interaction method for vehicle, vehicle and storage medium
CN117389416A (en) * 2023-10-18 2024-01-12 广州易云信息技术有限公司 Interactive control method and device of intelligent robot and robot

Similar Documents

Publication Publication Date Title
CN112735440A (en) Vehicle-mounted intelligent robot interaction method, robot and vehicle
CN106803423B (en) Man-machine interaction voice control method and device based on user emotion state and vehicle
US20200057487A1 (en) Methods and systems for using artificial intelligence to evaluate, correct, and monitor user attentiveness
US10467488B2 (en) Method to analyze attention margin and to prevent inattentive and unsafe driving
US20200310528A1 (en) Vehicle system for providing driver feedback in response to an occupant's emotion
EP3675121B1 (en) Computer-implemented interaction with a user
JP7092116B2 (en) Information processing equipment, information processing methods, and programs
US20200247422A1 (en) Inattentive driving suppression system
JP7329755B2 (en) Support method and support system and support device using the same
JP5045302B2 (en) Automotive information provision device
CN111547063A (en) Intelligent vehicle-mounted emotion interaction device for fatigue detection
JP2017007652A (en) Method for recognizing a speech context for speech control, method for determining a speech control signal for speech control, and apparatus for executing the method
CN110395260A (en) Vehicle, safe driving method and device
US20220357912A1 (en) Apparatus and method for caring emotion based on vehicle sound
CN109472253B (en) Driving safety intelligent reminding method and device, intelligent steering wheel and intelligent bracelet
EP4042322A1 (en) Methods and systems for using artificial intelligence to evaluate, correct, and monitor user attentiveness
JP6677126B2 (en) Interactive control device for vehicles
JP2000181500A (en) Speech recognition apparatus and agent apparatus
KR20210116309A (en) Techniques for separating driving emotion from media induced emotion in a driver monitoring system
KR20200020313A (en) Vehicle and control method for the same
CN117842022A (en) Driving safety control method and device for artificial intelligent cabin, vehicle and medium
CN114253392A (en) Virtual conversation agent for controlling multiple in-vehicle intelligent virtual assistants
JP2017207997A (en) Driving support device
CN113771859A (en) Intelligent driving intervention method, device and equipment and computer readable storage medium
CN112562267A (en) Vehicle-mounted safety robot and safe driving assistance method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211029

Address after: 1706, floor 7, Section A, No. 203, zone 2, Lize Zhongyuan, Wangjing, Chaoyang District, Beijing 100102

Applicant after: Beijing Yankan Intelligent Technology Co.,Ltd.

Address before: 1813, 8th floor, Section A, No.203, zone 2, Lize Zhongyuan, Wangjing, Chaoyang District, Beijing

Applicant before: Beijing overlooking Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220428

Address after: B210, 2f, speed skating hall, winter training center, No. 68, Shijingshan Road, Shijingshan District, Beijing 100041

Applicant after: Beijing Yanyan Zhiyu Technology Co.,Ltd.

Address before: 1706, floor 7, Section A, No. 203, zone 2, Lize Zhongyuan, Wangjing, Chaoyang District, Beijing 100102

Applicant before: Beijing Yankan Intelligent Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210430