CN113299286A - Interaction method, device and system based on vehicle-mounted robot and readable medium - Google Patents
- Publication number
- CN113299286A (application CN202110522063.3A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- preset
- mounted robot
- robot
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides an interaction method, device, and system based on a vehicle-mounted robot, and a computer-readable medium. The method comprises the following steps: sending a first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a first preset expression; receiving first voice information input by a user; determining whether the first voice information contains a preset wake-up word and, if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression for a first preset duration; receiving second voice information input by the user; and recognizing the second voice information and, after recognition is complete, sending a third control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a third preset expression for a second preset duration. The method can improve the efficiency of interaction between the user and the vehicle-mounted robot and the overall driving experience.
Description
Technical Field
The present application relates to the field of automobiles, and in particular to a vehicle-mounted robot-based interaction method, device, and system, and a computer-readable medium.
Background
With the rapid development of automotive intelligence, more and more automobiles are equipped with vehicle-mounted intelligent robots. In existing vehicle-mounted intelligent robots, interaction between the robot and the user is often limited to voice or a small number of simple, rigid expressions, so the user's experience of the robot feels isolated from the other on-board systems. Stiff and sluggish robot expressions seriously degrade both the efficiency of user-robot interaction and the overall driving experience.
Therefore, how to improve the efficiency of interaction between the user and the vehicle-mounted robot, and the overall driving experience, is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The technical problem to be solved by the application is to provide an interaction method, device, and system based on a vehicle-mounted robot, and a computer-readable medium, which can improve the efficiency of user-robot interaction and the overall driving experience.
To solve this technical problem, the application provides a vehicle-mounted robot-based interaction method comprising the following steps: sending a first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a first preset expression; receiving first voice information input by a user; determining whether the first voice information contains a preset wake-up word and, if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression for a first preset duration; receiving second voice information input by the user; and recognizing the second voice information and, after recognition is complete, sending a third control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a third preset expression for a second preset duration.
In an embodiment of the application, the first control instruction is further configured to rotate the vehicle-mounted robot to a preset direction.
In an embodiment of the application, the preset direction includes one or more of the following: a primary driver direction, a secondary driver direction, and an initial direction; and the method further comprises: determining the preset direction according to a current wake-up mode of the vehicle-mounted robot, wherein the wake-up mode includes one or more of the following: a primary driver mode, a secondary driver mode, and a whole-vehicle mode, which correspond one-to-one to the primary driver direction, the secondary driver direction, and the initial direction, respectively.
In an embodiment of the application, before the step of sending the first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays the first preset expression, the method further includes: acquiring face information of the user; identifying, based on the face information, whether the user is in a preset list; when the user is in the preset list, sending a first voice instruction so that the vehicle-mounted robot plays a first preset voice; and when the user is not in the preset list, sending a second voice instruction so that the vehicle-mounted robot plays a second preset voice.
In an embodiment of the application, the step of sending the first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays the first preset expression is performed when the vehicle power state is switched to the ACC ON state, the IGN ON state, or the ready state.
In an embodiment of the present application, the method further includes: monitoring the user's seat belt state and/or door open state; determining, according to the seat belt state and/or door open state, whether the user intends to leave the vehicle; and, when the user intends to leave the vehicle, sending a fourth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a fourth preset expression and/or plays a third preset voice.
In an embodiment of the present application, the method further includes: determining, according to the seat belt state and/or door open state, whether the user has left the vehicle; and, when the user has left the vehicle, sending a fifth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot stops displaying the fourth preset expression and/or stops playing the third preset voice.
in order to solve the above technical problem, the present application further provides an interaction device based on a vehicle-mounted robot, including: the first control module is used for sending a first control instruction to the vehicle-mounted robot so as to enable the vehicle-mounted robot to display a first preset expression; the first receiving module is used for receiving first voice information input by a user; the second control module is used for judging whether the first voice message contains a preset awakening word or not, and if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression of a first preset time length; the second receiving module is used for receiving second voice information input by a user; and the third control module is used for identifying the second voice information and sending a third control instruction to the vehicle-mounted robot after the identification is finished so that the vehicle-mounted robot displays a third preset expression in a second preset time length.
To solve the above technical problem, the application further provides a vehicle-mounted robot-based interaction system, comprising: a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the vehicle-mounted robot-based interaction method described above.
To solve the above technical problem, the application further provides a computer-readable medium storing computer program code which, when executed by a processor, implements the vehicle-mounted robot-based interaction method described above.
Compared with the prior art, the vehicle-mounted robot-based interaction method, device, system, and computer-readable medium control the vehicle-mounted robot to display different expressions in different scenarios. This enriches the robot's expressions and its coordination with the other vehicle systems, and gives the driver different expression feedback in different scenarios, thereby improving the interaction experience between the vehicle-mounted robot and the driver and, more broadly, between the whole on-board system and the driver.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the principle of the application. In the drawings:
FIG. 1 is a schematic flowchart of a vehicle-mounted robot-based interaction method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of the farewell mode in a vehicle-mounted robot-based interaction method according to an embodiment of the present application;
FIG. 3 is a schematic block diagram of a vehicle-mounted robot-based interaction device according to an embodiment of the present application;
FIG. 4 is a system block diagram of a vehicle-mounted robot-based interaction system according to an embodiment of the present application.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are merely examples or embodiments of the application, from which a person skilled in the art can apply the application to other similar scenarios without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
The relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that, for convenience of description, the sizes of the respective portions shown in the drawings are not drawn to actual scale. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, are intended to be part of the specification. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, not limiting; other examples of the exemplary embodiments may therefore have different values. Note that like reference numbers and letters refer to like items in the following figures; once an item is defined in one figure, it need not be discussed further in subsequent figures.
Flowcharts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that these operations need not be performed in the exact order shown; rather, various steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The application provides a vehicle-mounted robot-based interaction method. FIG. 1 is a flowchart of a vehicle-mounted robot-based interaction method according to an embodiment of the present application, whose steps may be performed by a vehicle-mounted robot-based interaction system. As shown in FIG. 1, the vehicle-mounted robot-based interaction method of this embodiment includes the following steps:
Step 101, sending a first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a first preset expression;
Step 102, receiving first voice information input by a user;
Step 103, determining whether the first voice information contains a preset wake-up word and, if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression for a first preset duration;
Step 104, receiving second voice information input by the user; and
Step 105, recognizing the second voice information and, after recognition is complete, sending a third control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a third preset expression for a second preset duration.
In an embodiment of the present application, before step 101, the vehicle-mounted robot-based interaction method may further include the following steps:
Step 106, acquiring face information of the user;
Step 107, identifying, based on the face information, whether the user is in a preset list;
Step 108, when the user is in the preset list, sending a first voice instruction so that the vehicle-mounted robot plays a first preset voice; and
Step 109, when the user is not in the preset list, sending a second voice instruction so that the vehicle-mounted robot plays a second preset voice.
In steps 106 to 109, the system determines from the face information whether the user is a preset user, so that it can greet preset users with a personalized voice, improving human-machine interaction efficiency and user experience.
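As a minimal, non-authoritative sketch, the greeting branch of steps 106 to 109 might look as follows in Python; the preset list, the `recognize_face` stub, and the `RobotClient` class are hypothetical names introduced here for illustration and are not defined by the application:

```python
# Illustrative sketch of steps 106-109. All names here (PRESET_USERS,
# recognize_face, RobotClient) are hypothetical, not from the application.

PRESET_USERS = {"alice", "bob"}  # stand-in for the preset user list

def recognize_face(face_image) -> str:
    """Stub for a face-recognition service; returns a user id."""
    return "alice"  # placeholder result

class RobotClient:
    """Stand-in for the interface that drives the vehicle-mounted robot."""
    def play_voice(self, text: str) -> None:
        print(f"[robot voice] {text}")

def greet_user(face_image, robot: RobotClient) -> None:
    user_id = recognize_face(face_image)  # steps 106-107
    if user_id in PRESET_USERS:
        # Step 108: first voice instruction, personalized greeting.
        robot.play_voice(f"Welcome back, {user_id}!")
    else:
        # Step 109: second voice instruction, generic greeting.
        robot.play_voice("Hello, welcome aboard!")

greet_user(face_image=None, robot=RobotClient())
```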
In step 101, the system sends the first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays the first preset expression. In one example, the first preset expression may be a "waiting expression" that shows the user that the vehicle-mounted robot is in a waiting state.
In an embodiment of the application, the first control instruction may further be used to rotate the vehicle-mounted robot to a preset direction, which may include one or more of the following: a primary driver direction, a secondary driver direction, and an initial direction. The interaction method then further comprises determining the preset direction according to the current wake-up mode of the vehicle-mounted robot. The wake-up mode may include one or more of the following: a primary driver mode, a secondary driver mode, and a whole-vehicle mode, which correspond to the primary driver direction, the secondary driver direction, and the initial direction, respectively. For example, when the vehicle-mounted robot is currently in the primary driver mode, the first control instruction may rotate the robot toward the primary driver so that it faces the primary driver, allowing it to respond to and interact with the primary driver's voice commands more effectively.
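A small sketch of this one-to-one mapping, assuming hypothetical mode and direction identifiers (the application does not prescribe concrete values):

```python
# Wake-up mode to preset direction mapping from the embodiment above;
# the enum member names are illustrative placeholders.
from enum import Enum

class WakeMode(Enum):
    PRIMARY_DRIVER = "primary_driver"
    SECONDARY_DRIVER = "secondary_driver"
    WHOLE_VEHICLE = "whole_vehicle"

class Direction(Enum):
    PRIMARY_DRIVER = "primary_driver_direction"
    SECONDARY_DRIVER = "secondary_driver_direction"
    INITIAL = "initial_direction"

# One-to-one correspondence described in the embodiment.
MODE_TO_DIRECTION = {
    WakeMode.PRIMARY_DRIVER: Direction.PRIMARY_DRIVER,
    WakeMode.SECONDARY_DRIVER: Direction.SECONDARY_DRIVER,
    WakeMode.WHOLE_VEHICLE: Direction.INITIAL,
}

def preset_direction(mode: WakeMode) -> Direction:
    """Resolve the direction the robot should rotate to for a wake-up mode."""
    return MODE_TO_DIRECTION[mode]
```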
In an embodiment of the application, the step of sending the first control instruction to the vehicle-mounted robot so that it displays the first preset expression may be performed when the vehicle power state switches to the ACC ON state, the IGN ON state, or the ready state. The vehicle power states may be: the ACC ON (Accessory ON) state, the IGN ON (Ignition ON) state, the Ready state, and the power-off (OFF) state. In the ACC ON state, in-vehicle accessories can be powered without starting the vehicle. In the IGN ON state, all electrical devices in the vehicle are powered. The ready state is the vehicle-started state, and the power-off state is the state in which the vehicle is not started.
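The power-state trigger could be sketched as below; the state names follow the text, while the event-handler signature and the `robot` interface are assumptions made for illustration:

```python
# Illustrative power-state gate for step 101; only ACC ON, IGN ON, and
# ready trigger the first preset expression. The handler API is hypothetical.
from enum import Enum

class PowerState(Enum):
    OFF = "off"        # vehicle not started
    ACC_ON = "acc_on"  # accessories powered, vehicle not started
    IGN_ON = "ign_on"  # all in-vehicle electrical devices powered
    READY = "ready"    # vehicle started

WAKE_STATES = {PowerState.ACC_ON, PowerState.IGN_ON, PowerState.READY}

def on_power_state_changed(new_state: PowerState, robot) -> None:
    if new_state in WAKE_STATES:
        # Step 101: first control instruction -> "waiting" expression.
        robot.show_expression("waiting")
```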
In step 102, the user utters the first voice information and the system receives it.
In step 103, the system determines whether the user's first voice information contains the preset wake-up word. If it does, the system sends the second control instruction to the vehicle-mounted robot so that the robot displays the second preset expression for the first preset duration. In one example, the second preset expression may be a "listening expression" that shows the user that the vehicle-mounted robot is in a listening state and can accept voice commands. The first preset duration can be set as needed; in one example, the vehicle-mounted robot may keep the "listening expression" until the user utters the second voice information, i.e., the first preset duration may be the time spent waiting for the user to speak again.
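A sketch of the wake-up check of step 103, assuming a simple substring match over recognized text and a hypothetical robot API (a production wake-word detector would typically run on the audio itself rather than transcribed text):

```python
# Illustrative wake-word check for step 103; WAKE_WORD and the robot API
# are placeholders, not defined by the application.
WAKE_WORD = "hello robot"  # hypothetical preset wake-up word

def handle_first_voice(text: str, robot) -> bool:
    """Return True if the wake-up word was heard and the robot is listening."""
    if WAKE_WORD in text.lower():
        # Second control instruction: show the "listening" expression and
        # hold it until the user speaks again (the first preset duration).
        robot.show_expression("listening", hold_until="next_utterance")
        return True
    return False
```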
In step 104, the user utters the second voice information and the system receives it.
In step 105, the system recognizes the second voice information and, after recognition is complete, sends the third control instruction to the vehicle-mounted robot so that the robot displays the third preset expression for the second preset duration. In one example, the third preset expression may be a "response expression" that shows the user that the vehicle-mounted robot has received and processed the second voice information. The second preset duration can be set as needed.
In summary, through steps 101 to 105, the vehicle-mounted robot-based interaction method of this embodiment controls the vehicle-mounted robot to display different expressions in different scenarios. This enriches the robot's expressions and its coordination with the other vehicle systems and gives the driver different expression feedback in different scenarios, thereby improving the interaction experience between the vehicle-mounted robot and the driver and between the whole on-board system and the driver.
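Tying the steps together, one possible orchestration reads as follows; every method on `robot` and `asr`, and the `dispatch` helper, are hypothetical stand-ins for the vehicle's actual services (the sketch reuses `WAKE_WORD` from above):

```python
# End-to-end sketch of steps 101-105; all APIs here are illustrative
# assumptions, not the application's actual interfaces.

def dispatch(command: str) -> None:
    """Placeholder for handing the recognized command to vehicle systems."""
    print(f"[dispatch] {command}")

def interaction_round(robot, asr) -> None:
    robot.show_expression("waiting")            # step 101: first expression

    first_voice = asr.listen()                  # step 102: first voice info
    if WAKE_WORD not in first_voice.lower():    # step 103: wake-word check
        return
    robot.show_expression("listening")          # second preset expression

    second_voice = asr.listen()                 # step 104: second voice info
    command = asr.recognize(second_voice)       # step 105: recognition
    robot.show_expression("response", duration_s=2.0)  # third expression
    dispatch(command)
```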
The vehicle-mounted robot-based interaction method may further include a farewell mode that controls the vehicle-mounted robot according to the user's departure state. FIG. 2 is a flowchart of the farewell mode in a vehicle-mounted robot-based interaction method according to an embodiment of the present application. In an embodiment of the present application, the vehicle-mounted robot-based interaction method may further include the following steps:
Step 201, monitoring the user's seat belt state and/or door open state;
Step 202, determining, according to the seat belt state and/or door open state, whether the user intends to leave the vehicle; and
Step 203, when the user intends to leave the vehicle, sending a fourth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a fourth preset expression and/or plays a third preset voice.
In steps 201 to 203, the system can detect the moment at which the user intends to leave the vehicle by monitoring the user's seat belt state and/or door open state, and at that moment have the vehicle-mounted robot display the fourth preset expression and/or play the third preset voice. In one example, the fourth preset expression and the third preset voice may be the preset farewell expression and farewell voice of the robot's farewell mode, used to see the user off as they leave the vehicle and further improve the user experience.
In an embodiment of the present application, the vehicle-mounted robot-based interaction method may further include the following steps:
Step 204, determining, according to the seat belt state and/or door open state, whether the user has left the vehicle; and
Step 205, when the user has left the vehicle, sending a fifth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot stops displaying the fourth preset expression and/or stops playing the third preset voice.
In steps 204 and 205, the system can determine whether the user has left the vehicle by monitoring the user's seat belt state and/or door open state, and once the user has left, have the vehicle-mounted robot stop displaying the fourth preset expression and/or stop playing the third preset voice. In one example, the fifth control instruction may also rotate the vehicle-mounted robot back to the initial direction and restore its initial state.
Steps 201 to 205 together implement the farewell mode of the vehicle-mounted robot described above.
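A final sketch covers the farewell mode of steps 201 to 205; the cabin-signal names and the robot methods are again illustrative assumptions rather than definitions from the application:

```python
# Illustrative farewell mode (steps 201-205); signal and method names
# are hypothetical.

def on_cabin_signal(seat_belt_fastened: bool, door_open: bool, robot) -> None:
    # Steps 201-202: infer the intent to leave from belt and door state.
    if (not seat_belt_fastened) or door_open:
        # Step 203: fourth control instruction -> farewell expression/voice.
        robot.show_expression("farewell")
        robot.play_voice("Goodbye, see you next time!")

def on_user_left(robot) -> None:
    # Steps 204-205: fifth control instruction ends the farewell and
    # restores the robot's initial state.
    robot.stop_expression()
    robot.stop_voice()
    robot.rotate_to("initial_direction")
```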
The application also provides a vehicle-mounted robot-based interaction device. FIG. 3 is a schematic block diagram of a vehicle-mounted robot-based interaction device according to an embodiment of the present application. As shown in FIG. 3, the vehicle-mounted robot-based interaction device 300 includes a first control module 301, a first receiving module 302, a second control module 303, a second receiving module 304, and a third control module 305.
The first control module 301 is configured to send the first control instruction to the vehicle-mounted robot so that the robot displays the first preset expression. In one example, the first preset expression may be a "waiting expression" that shows the user that the vehicle-mounted robot is in a waiting state.
In an embodiment of the application, the first control instruction may further be used to rotate the vehicle-mounted robot to a preset direction, which may include one or more of the following: a primary driver direction, a secondary driver direction, and an initial direction.
The first control module 301 may include a wake-up mode unit 3011 for determining the preset direction according to the current wake-up mode of the vehicle-mounted robot. The wake-up mode may include one or more of the following: a primary driver mode, a secondary driver mode, and a whole-vehicle mode, which correspond to the primary driver direction, the secondary driver direction, and the initial direction, respectively. For example, when the vehicle-mounted robot is currently in the primary driver mode, the first control instruction may rotate the robot toward the primary driver so that it faces the primary driver, allowing it to respond to and interact with the primary driver's voice commands more effectively.
In an embodiment of the application, the first control module 301 may send the first control instruction to the vehicle-mounted robot so that it displays the first preset expression when the vehicle power state switches to the ACC ON state, the IGN ON state, or the ready state. The vehicle power states may be: the ACC ON (Accessory ON) state, the IGN ON (Ignition ON) state, the Ready state, and the power-off (OFF) state. In the ACC ON state, in-vehicle accessories can be powered without starting the vehicle. In the IGN ON state, all electrical devices in the vehicle are powered. The ready state is the vehicle-started state, and the power-off state is the state in which the vehicle is not started.
The first receiving module 302 is configured to receive the first voice information input by the user.
The second control module 303 is configured to determine whether the first voice information contains the preset wake-up word and, if so, send the second control instruction to the vehicle-mounted robot so that the robot displays the second preset expression for the first preset duration. In one example, the second preset expression may be a "listening expression" that shows the user that the vehicle-mounted robot is in a listening state and can accept voice commands. The first preset duration can be set as needed; in one example, the vehicle-mounted robot may keep the "listening expression" until the user utters the second voice information.
The second receiving module 304 is configured to receive the second voice information input by the user.
The third control module 305 is configured to recognize the second voice information and, after recognition is complete, send the third control instruction to the vehicle-mounted robot so that the robot displays the third preset expression for the second preset duration. In one example, the third preset expression may be a "response expression" that shows the user that the vehicle-mounted robot has received and processed the second voice information. The second preset duration can be set as needed.
The vehicle-mounted robot-based interaction device controls the vehicle-mounted robot to display different expressions in different scenarios. This enriches the robot's expressions and its coordination with the other vehicle systems and gives the driver different expression feedback in different scenarios, thereby improving the interaction experience between the vehicle-mounted robot and the driver and between the whole on-board system and the driver.
The application also provides a vehicle-mounted robot-based interaction system, comprising: a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the vehicle-mounted robot-based interaction method described above.
FIG. 4 is a system block diagram of a vehicle-mounted robot-based interaction system according to an embodiment of the present application. The vehicle-mounted robot-based interaction system 400 may include an internal communication bus 401, a processor 402, a read-only memory (ROM) 403, a random-access memory (RAM) 404, and a communication port 405. When deployed on a personal computer, the system 400 may further include a hard disk 407. The internal communication bus 401 enables data communication among the components of the system 400. The processor 402 performs the determinations and issues the control instructions described above; in some embodiments, the processor 402 may consist of one or more processors. The communication port 405 enables data communication between the system 400 and the outside; in some embodiments, the system 400 may send and receive information and data over a network through the communication port 405. The system 400 may also include different forms of program storage units and data storage units, such as the hard disk 407, the read-only memory (ROM) 403, and the random-access memory (RAM) 404, capable of storing various data files used for computer processing and/or communication, as well as program instructions executed by the processor 402. The processor executes these instructions to implement the main parts of the method; the results are communicated to the user device through the communication port and displayed on the user interface.
The vehicle-mounted robot-based interaction method described above may be implemented as a computer program, stored on the hard disk 407, and loaded into the processor 402 for execution, so as to implement any of the vehicle-mounted robot-based interaction methods of the present application.
The present application also provides a computer-readable medium storing computer program code which, when executed by a processor, implements the vehicle-mounted robot-based interaction method described above.
When implemented as a computer program, the vehicle-mounted robot-based interaction method may also be stored in a computer-readable storage medium as an article of manufacture. For example, computer-readable storage media can include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., electrically erasable programmable read-only memory (EEPROM), cards, sticks, key drives). In addition, the various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code, instructions, and/or data.
It should be understood that the above-described embodiments are illustrative only. The embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and/or other electronic units designed to perform the functions described herein, or a combination thereof.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software, referred to herein as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, comprising computer-readable program code, on one or more computer-readable media. For example, computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
Similarly, it should be noted that, in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the embodiments. This method of disclosure, however, does not imply that the claimed subject matter requires more features than are expressly recited in the claims. Indeed, claimed embodiments may have fewer than all the features of a single embodiment disclosed above.
Although the present application has been described with reference to specific embodiments, those skilled in the art will recognize that the foregoing embodiments are merely illustrative, and that various changes and equivalent substitutions may be made without departing from the spirit of the application. All changes and modifications to the above-described embodiments that come within the spirit of the application are therefore intended to fall within the scope of its claims.
Claims (10)
1. A vehicle-mounted robot-based interaction method, comprising the following steps:
sending a first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a first preset expression;
receiving first voice information input by a user;
determining whether the first voice information contains a preset wake-up word and, if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression for a first preset duration;
receiving second voice information input by the user; and
recognizing the second voice information and, after recognition is complete, sending a third control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a third preset expression for a second preset duration.
2. The method of claim 1, wherein the first control instruction is further used to rotate the vehicle-mounted robot to a preset direction.
3. The method of claim 2, wherein the preset direction comprises one or more of: a primary driver direction, a secondary driver direction, and an initial direction; and the method further comprises:
determining the preset direction according to a current wake-up mode of the vehicle-mounted robot, wherein the wake-up mode comprises one or more of: a primary driver mode, a secondary driver mode, and a whole-vehicle mode, which correspond one-to-one to the primary driver direction, the secondary driver direction, and the initial direction, respectively.
4. The method of claim 1, further comprising, before the step of sending the first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays the first preset expression:
acquiring face information of the user;
identifying, based on the face information, whether the user is in a preset list;
when the user is in the preset list, sending a first voice instruction so that the vehicle-mounted robot plays a first preset voice; and
when the user is not in the preset list, sending a second voice instruction so that the vehicle-mounted robot plays a second preset voice.
5. The method of claim 1, wherein the step of sending the first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays the first preset expression is performed when the vehicle power state is switched to the ACC ON state, the IGN ON state, or the ready state.
6. The method of claim 1, further comprising:
monitoring the user's seat belt state and/or door open state;
determining, according to the seat belt state and/or door open state, whether the user intends to leave the vehicle; and
when the user intends to leave the vehicle, sending a fourth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a fourth preset expression and/or plays a third preset voice.
7. The method of claim 6, further comprising:
determining, according to the seat belt state and/or door open state, whether the user has left the vehicle; and
when the user has left the vehicle, sending a fifth control instruction to the vehicle-mounted robot so that the vehicle-mounted robot stops displaying the fourth preset expression and/or stops playing the third preset voice.
8. A vehicle-mounted robot-based interaction device, comprising:
a first control module for sending a first control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a first preset expression;
a first receiving module for receiving first voice information input by a user;
a second control module for determining whether the first voice information contains a preset wake-up word and, if so, sending a second control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a second preset expression for a first preset duration;
a second receiving module for receiving second voice information input by the user; and
a third control module for recognizing the second voice information and, after recognition is complete, sending a third control instruction to the vehicle-mounted robot so that the vehicle-mounted robot displays a third preset expression for a second preset duration.
9. An interaction system based on a vehicle-mounted robot, comprising:
a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method of any one of claims 1-7.
10. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110522063.3A CN113299286A (en) | 2021-05-13 | 2021-05-13 | Interaction method, device and system based on vehicle-mounted robot and readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110522063.3A CN113299286A (en) | 2021-05-13 | 2021-05-13 | Interaction method, device and system based on vehicle-mounted robot and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113299286A (en) | 2021-08-24
Family
ID=77321874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110522063.3A Pending CN113299286A (en) | 2021-05-13 | 2021-05-13 | Interaction method, device and system based on vehicle-mounted robot and readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113299286A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023168895A1 (en) * | 2022-03-07 | 2023-09-14 | 上汽海外出行科技有限公司 | Vehicle-mounted robot and operation method therefor, and medium and computer program product |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110834338A (en) * | 2019-11-04 | 2020-02-25 | 深圳勇艺达机器人有限公司 | Vehicle-mounted robot and control method thereof |
2021
- 2021-05-13: Application CN202110522063.3A filed in CN; publication CN113299286A (en); status: active, Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110834338A (en) * | 2019-11-04 | 2020-02-25 | 深圳勇艺达机器人有限公司 | Vehicle-mounted robot and control method thereof |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023168895A1 (en) * | 2022-03-07 | 2023-09-14 | 上汽海外出行科技有限公司 | Vehicle-mounted robot and operation method therefor, and medium and computer program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107804321B (en) | Advanced autonomous vehicle tutorial | |
CN112802468B (en) | Interaction method and device of automobile intelligent terminal, computer equipment and storage medium | |
US10351009B2 (en) | Electric vehicle display systems | |
CN110341709B (en) | Intelligent piloting driving switch control method and system based on L2 level | |
US11302124B2 (en) | Method and apparatus for evaluating vehicle, device and computer readable storage medium | |
CN112614491B (en) | Vehicle-mounted voice interaction method and device, vehicle and readable medium | |
JP2003202897A (en) | Speech recognizing device for on-vehicle equipment | |
CN113225433B (en) | Vehicle voice reminding method and device, electronic equipment and storage medium | |
CN111145750A (en) | Control method and device for vehicle-mounted intelligent voice equipment | |
CN112017650A (en) | Voice control method and device of electronic equipment, computer equipment and storage medium | |
CN111354359A (en) | Vehicle voice control method, device, equipment, system and medium | |
CN113299286A (en) | Interaction method, device and system based on vehicle-mounted robot and readable medium | |
CN113879235A (en) | Method, system, equipment and storage medium for multi-screen control of automobile | |
CN112540677A (en) | Control method, device and system of vehicle-mounted intelligent equipment and computer readable medium | |
US20230317072A1 (en) | Method of processing dialogue, user terminal, and dialogue system | |
US6978199B2 (en) | Method and apparatus for assisting vehicle operator | |
US10407051B2 (en) | Apparatus and method for controlling driving of hybrid vehicle | |
CN115830724A (en) | Vehicle-mounted recognition interaction method and system based on multi-mode recognition | |
CN115312046A (en) | Vehicle having voice recognition system and method of controlling the same | |
CN116061951A (en) | Vehicle control method and device, vehicle and storage medium | |
CN116443040A (en) | Vehicle control method and device | |
CN113534780B (en) | Remote control parking parameter and function definition method, automobile and readable storage medium | |
US11754030B1 (en) | Apparatus and method for optimizing engine restarts | |
CN114162045B (en) | Vehicle control method and device and vehicle | |
US20230007094A1 (en) | Method, Device, Computer Program and Computer-Readable Storage Medium for Operating a Vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210824 |