CN110727410A - Man-machine interaction method, terminal and computer readable storage medium - Google Patents

Man-machine interaction method, terminal and computer readable storage medium

Info

Publication number
CN110727410A
Authority
CN
China
Prior art keywords
acquiring, data, user, human, interaction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910831283.7A
Other languages
Chinese (zh)
Inventor
田发景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Original Assignee
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Electronic Equipment Manufacturing Co Ltd filed Critical Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority to CN201910831283.7A priority Critical patent/CN110727410A/en
Publication of CN110727410A publication Critical patent/CN110727410A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3476 Special cost functions, i.e. other than distance or default speed limit of road segments using point of interest [POI] information, e.g. a route passing visible POIs
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3629 Guidance using speech or audio output, e.g. text-to-speech
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3644 Landmark guidance, e.g. using POIs or conspicuous other objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The invention belongs to the technical field of artificial intelligence and relates to a human-computer interaction method, a terminal and a computer-readable storage medium. The human-computer interaction method comprises the following steps: acquiring gesture operation information of a user about a target object; and acquiring voice data of the user, and acquiring a corresponding control instruction according to the voice data so as to perform a corresponding operation on the target object. By combining the user's gesture operation with the voice data, the human-computer interaction method provided by the invention enables the user to complete the corresponding interaction faster and with less speech, makes the user's intention easier to understand from simple voice interaction, and greatly improves the in-vehicle interaction experience.

Description

Man-machine interaction method, terminal and computer readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a man-machine interaction method, a terminal and a computer readable storage medium.
Background
Human-computer interaction (HCI) technology enables effective dialogue between people and computers through computer input and output devices: the machine provides large amounts of relevant information and prompts to the user through output or display devices, and the user supplies relevant information and requests to the machine through input devices. Nowadays, human-computer interaction technology is increasingly applied in fields such as mobile phones, tablet computers and televisions. At present, the dominant mode of human-computer interaction is for the machine to provide the user with an interaction interface through which the user and the machine exchange information, for example by voice interaction, somatosensory (motion-sensing) interaction, touch interaction and the like.
However, existing terminal products offer only a single interaction channel, such as motion sensing, voice or touch. A single-channel interaction mode is monotonous, limited in the operations it supports and unnatural in use, and cannot satisfy the user's need to interact with the terminal freely.
In view of the above problems, those skilled in the art have sought solutions.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above, the present invention provides a human-computer interaction method, a terminal and a computer-readable storage medium, aiming to provide a multi-channel human-computer interaction mode that expands the range of human-computer interaction operations, so that human-computer interaction becomes more natural and better matches people's daily habits.
The invention is realized as follows:
The invention provides a human-computer interaction method, which comprises the following steps: acquiring gesture operation information of a user about a target object; and acquiring voice data of the user, and acquiring a corresponding control instruction according to the voice data so as to perform a corresponding operation on the target object.
Further, the step of acquiring the gesture operation information of the user about the target object is preceded by the following steps: acquiring a wake-up instruction of the user to obtain display data corresponding to the wake-up instruction; and outputting the display data.
Further, after the step of outputting the display data, the method comprises: obtaining a display type of the display data, wherein the display type comprises a three-dimensional class and/or an input/output class; and acquiring corresponding operable information according to the display type, wherein the operable information comprises at least one operation instruction.
Further, the step of acquiring the voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object includes: acquiring target data associated with the display data; switching from displaying the display data to displaying the target data; and performing the corresponding operation on the target data according to the control instruction.
Further, the display data is map data comprising at least one point of interest. The step of acquiring target data associated with the display data comprises: acquiring the gesture operation information of the user for the map data so as to select a point of interest in the map data according to the gesture operation information, wherein the selected point of interest comprises address information. The step of acquiring the voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target data comprises the following steps: acquiring a corresponding navigation control instruction according to the voice data of the user; and planning a navigation path according to the navigation control instruction and the address information.
Further, the step of acquiring the gesture operation information of the user about the target object comprises: acquiring and identifying in-vehicle image data; and acquiring the gesture operation information in the in-vehicle image data and the in-vehicle device corresponding to the gesture direction in the gesture operation information, wherein the target object is the in-vehicle device.
Further, the step of acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object includes: recognizing keywords in the voice data to obtain the sentences in the voice data that include the keywords, wherein the keywords include at least one of the names of the in-vehicle devices and indicative pronouns; and identifying the control instruction in the sentence including the keyword, and performing the corresponding operation according to the control instruction so as to control the in-vehicle device.
The invention also provides a terminal comprising a memory and a processor. The processor is adapted to execute a computer program stored in the memory to implement the steps of the human-computer interaction method as described above.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the human-computer interaction method as described above.
The invention provides a human-computer interaction method, a terminal and a computer-readable storage medium, wherein the human-computer interaction method comprises the following steps: acquiring gesture operation information of a user about a target object; and acquiring voice data of the user, and acquiring a corresponding control instruction according to the voice data so as to perform a corresponding operation on the target object. By combining the user's gesture operation with the voice data, the human-computer interaction method provided by the invention thus enables the user to complete the corresponding interaction faster and with less speech, makes the user's intention easier to understand from simple voice interaction, and greatly improves the in-vehicle interaction experience.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a flowchart illustrating a human-computer interaction method according to a first embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a terminal according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First embodiment:
Fig. 1 is a flowchart illustrating the human-computer interaction method according to the first embodiment of the present invention. For a clear description of the method, please refer to fig. 1.
The man-machine interaction method provided by the first embodiment of the invention comprises the following steps:
and S2, acquiring gesture operation information of the user about the target object.
And S4, acquiring voice data of the user, and acquiring a corresponding control instruction according to the voice data to perform corresponding operation on the target object.
In detail, in one embodiment, step S2 may include, but is not limited to: S22: acquiring and identifying in-vehicle image data; S24: acquiring the gesture operation information in the in-vehicle image data and the in-vehicle device corresponding to the gesture direction in the gesture operation information. Here the target object is the in-vehicle device.
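By way of illustration, the following sketch (in Python) shows one way step S24 could resolve which in-vehicle device a pointing gesture indicates, by comparing the recognized pointing ray against a table of device positions. The gesture-recognizer output, the cabin coordinates and the angular threshold are assumptions made for this example; the embodiment does not prescribe a particular algorithm.

    # Sketch of step S24: map a recognized pointing ray to an in-vehicle device.
    # Device positions and the 15-degree threshold are illustrative assumptions.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PointingGesture:
        origin: np.ndarray     # fingertip position in cabin coordinates
        direction: np.ndarray  # unit vector of the pointing direction

    DEVICE_POSITIONS = {   # hypothetical cabin map: device -> position
        "sunroof": np.array([0.0, 0.0, 1.2]),
        "left_window": np.array([-0.7, 0.2, 0.6]),
        "navigation_screen": np.array([0.0, 0.9, 0.7]),
    }

    def resolve_target_device(gesture, max_angle_deg=15.0):
        """Return the device whose bearing best matches the pointing ray."""
        best_name, best_angle = None, max_angle_deg
        for name, position in DEVICE_POSITIONS.items():
            to_device = position - gesture.origin
            to_device = to_device / np.linalg.norm(to_device)
            cosine = float(np.clip(np.dot(gesture.direction, to_device), -1.0, 1.0))
            angle = float(np.degrees(np.arccos(cosine)))
            if angle < best_angle:
                best_name, best_angle = name, angle
        return best_name  # None if nothing lies within the angular threshold

    # A ray from just below the roof pointing straight up resolves to the sunroof.
    gesture = PointingGesture(np.array([0.0, 0.1, 0.5]), np.array([0.0, 0.0, 1.0]))
    print(resolve_target_device(gesture))  # -> sunroof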
Correspondingly, step S4 may include, but is not limited to: S42: recognizing keywords in the voice data to obtain the sentences in the voice data that include the keywords, wherein the keywords include at least one of the names of the in-vehicle devices and indicative pronouns; S44: identifying the control instruction in the sentence including the keyword, and performing the corresponding operation according to the control instruction so as to control the in-vehicle device.
More specifically, the target object is, for example, a device inside the vehicle or a building outside the vehicle. In this embodiment, the in-vehicle device is the sunroof of an automobile, and a camera, for example an infrared camera, is provided inside the car. In operation, the interaction device combines the intention obtained by language understanding with the recognized gesture operation to judge the user's intention. In this embodiment, if the user points at the sunroof and says "help me open here", the interaction device enables the camera to acquire an image of the in-vehicle environment so as to recognize the user's gesture operation and pointing direction, acquires the user's gesture pointing information and the in-vehicle device the gesture points at, and thereby determines that the target control device, that is, the in-vehicle device pointed at by the gesture, is the sunroof. Meanwhile, the interaction device recognizes the keywords in the voice data: it identifies the indicative pronoun "here" and then the control instruction "open" in the sentence containing "here". The interaction device then instructs the corresponding actuator to open the sunroof.
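A minimal sketch of how steps S42 and S44 might fuse the transcribed sentence with the gesture target in the sunroof scenario above. The keyword lists and the priority rule (an explicit device name overrides the gesture; an indicative pronoun defers to it) are illustrative assumptions, not logic prescribed by the patent.

    # Sketch of S42/S44: keyword spotting plus gesture fusion.
    from typing import Optional

    DEVICE_NAMES = {"sunroof", "left_window", "navigation_screen"}
    INDICATIVE_PRONOUNS = {"here", "this", "that", "there"}
    CONTROL_VERBS = {"open", "close", "raise", "lower"}

    def fuse_voice_and_gesture(utterance: str, gesture_target: Optional[str]):
        """Return (device, instruction), or None if the intent is incomplete."""
        tokens = utterance.lower().split()
        instruction = next((t for t in tokens if t in CONTROL_VERBS), None)
        named_device = next((t for t in tokens if t in DEVICE_NAMES), None)
        has_pronoun = any(t in INDICATIVE_PRONOUNS for t in tokens)
        # An explicit device name wins; an indicative pronoun defers to the gesture.
        device = named_device or (gesture_target if has_pronoun else None)
        if device and instruction:
            return device, instruction
        return None  # incomplete intent: ask the user to clarify

    # "help me open here" plus a gesture at the sunroof -> ('sunroof', 'open')
    print(fuse_voice_and_gesture("help me open here", "sunroof"))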
The above describes one case of this embodiment, in which the target object is controlled directly; the following describes another case, in which the target object is controlled through display data.
In detail, in another embodiment, before step S2 the method further includes: step S12, acquiring a wake-up instruction of the user to obtain display data corresponding to the wake-up instruction; and step S14, outputting the display data.
More specifically, in this embodiment, after the step of outputting the display data the method includes: step S16, obtaining the display type of the display data, wherein the display type comprises a three-dimensional class and/or an input/output class; and step S18, acquiring the corresponding operable information according to the display type, wherein the operable information includes at least one operation instruction. Three-dimensional data may include map data, image data and the like, and the corresponding operable information includes acquiring, selecting, storing, rotating and the like. Input/output data may include voice data and the like, and the corresponding operable information includes inputting, recognizing, storing, outputting and the like.
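The mapping from display type to operable information can be pictured as a small lookup table; the sketch below uses hypothetical type names together with the operation sets mentioned above.

    # Sketch of S16/S18: display type -> supported operation instructions.
    # Type names and operation sets follow the examples in the text; the
    # table structure itself is an illustrative assumption.
    OPERABLE_INFO = {
        "three_dimensional": {"acquire", "select", "store", "rotate"},  # map/image data
        "input_output": {"input", "recognize", "store", "output"},      # voice data
    }

    def operable_instructions(display_types):
        """Union of operation instructions over the (possibly combined) types."""
        ops = set()
        for display_type in display_types:
            ops |= OPERABLE_INFO.get(display_type, set())
        return ops

    # Display data tagged with both classes supports the union of both sets.
    print(operable_instructions(["three_dimensional", "input_output"]))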
Further, in this embodiment, the display data may include voice data. Accordingly, step S4, acquiring the voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object, may include: step S42, acquiring target data associated with the display data; step S44, switching from displaying the display data to displaying the target data; and step S46, performing the corresponding operation on the target data according to the control instruction.
Further, in this embodiment, the display data may also include map data, for example map data comprising at least one point of interest. In that case step S42, the step of acquiring the target data associated with the display data, may include: step S422, acquiring the gesture operation information of the user for the map data so as to select a point of interest in the map data according to the gesture operation information, wherein the selected point of interest includes address information. Accordingly, step S4, the step of acquiring the voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target data, includes: step S43, acquiring a corresponding navigation control instruction according to the voice data of the user; and step S45, planning a navigation path according to the navigation control instruction and the address information.
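As an illustration of steps S422, S43 and S45, the sketch below selects the point of interest nearest to the pointed map coordinates and plans a route to its address once a navigation command is heard. The POI structure, the nearest-point rule and the command cues are assumptions for this example, not the patent's prescribed implementation.

    # Sketch of S422 (select POI by pointing) and S43/S45 (plan navigation).
    from dataclasses import dataclass

    @dataclass
    class PointOfInterest:
        name: str
        address: str
        x: float  # map/screen coordinates of the POI
        y: float

    def select_poi(pois, px, py):
        """S422: pick the POI closest to the coordinates the gesture points at."""
        return min(pois, key=lambda p: (p.x - px) ** 2 + (p.y - py) ** 2)

    def plan_navigation(utterance, poi):
        """S43/S45: on a navigation command, plan a path to the POI's address."""
        if any(cue in utterance.lower() for cue in ("navigate", "take me")):
            # A real system would call a routing engine with the address here.
            return f"route planned to {poi.name}, {poi.address}"
        return None

    pois = [PointOfInterest("gas station", "12 West Road", 0.2, 0.8),
            PointOfInterest("school", "3 Hill Street", 0.7, 0.3)]
    selected = select_poi(pois, 0.68, 0.35)             # user points near the school
    print(plan_navigation("navigate there", selected))  # -> route planned to school, ...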
Specifically, while performing a map operation the user may simultaneously point at the map, and the gesture recognition system of the interaction device recognizes the pointed location by image recognition. For example, the user points at a certain location with a finger and then refers to it by voice, possibly with an unspecific expression such as "the building on the left". The natural language understanding system cooperating with the interaction device then interprets the user's utterance and inputs the recognized domain of the language and the related parameters to the interaction fusion system. In other words, the interaction device classifies the voice data by type, for example searching for and playing music or talk programs on the radio, adjusting the sound of the in-vehicle audio playing device, or navigating; it also recognizes the parameters of each type, for example the volume level, mute or maximum volume, and inputs them into the interaction device. The interaction fusion system in the interaction device then executes the corresponding program. By combining the voice and gesture operations, it can better understand the user's intention from simple voice interaction, execute a clear user intention and control the corresponding vehicle device, so that the user completes the corresponding interaction faster and with less speech, which greatly improves the in-vehicle interaction experience. In this embodiment the acquired data type is map data, and a point of interest is, for example, an office building, a residential compound, a school or a gas station. The indicative pronouns include, but are not limited to, location pronouns such as "here" and "there", and generic building terms such as "the building", "the compound" and "the shop".
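One hypothetical shape for the language-understanding hand-off described above: classify the utterance into a domain and extract its parameters before passing both, together with the gesture information, to the interaction fusion system. The domain patterns and parameter names are illustrative assumptions.

    # Sketch of the NLU hand-off: domain classification plus parameter extraction.
    import re

    DOMAIN_PATTERNS = {
        "media_search": re.compile(r"\b(play|music|radio|program)\b"),
        "volume_control": re.compile(r"\b(volume|mute|louder|quieter)\b"),
        "navigation": re.compile(r"\b(navigate|route|take me)\b"),
    }

    def understand(utterance: str):
        """Classify the utterance into a domain and pull out its parameters."""
        text = utterance.lower()
        for domain, pattern in DOMAIN_PATTERNS.items():
            if pattern.search(text):
                params = {}
                if domain == "volume_control":
                    if "mute" in text:
                        params["level"] = "mute"
                    else:
                        number = re.search(r"\b(\d{1,3})\b", text)
                        if number:
                            params["level"] = int(number.group(1))
                return {"domain": domain, "parameters": params}
        return {"domain": "unknown", "parameters": {}}

    # The result is fed to the interaction fusion system with the gesture data.
    print(understand("set the volume to 40"))
    # -> {'domain': 'volume_control', 'parameters': {'level': 40}}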
Second embodiment:
Fig. 2 is a schematic structural diagram of a terminal according to the second embodiment of the present invention. For a clear description of the terminal, please refer to fig. 2.
A terminal 1 according to the second embodiment of the present invention includes a processor A101 and a memory A201, wherein the processor A101 is configured to execute a computer program A6 stored in the memory A201 to implement the steps of the human-computer interaction method described in the first embodiment.
In an embodiment, the terminal 1 provided in this embodiment may include at least one processor A101 and at least one memory A201, where the at least one processor A101 may be referred to as a processing unit A1 and the at least one memory A201 as a storage unit A2. Specifically, the storage unit A2 stores a computer program A6; when the computer program A6 is executed by the processing unit A1, the terminal 1 provided by this embodiment implements the steps of the human-computer interaction method described above, for example step S2 shown in fig. 1, acquiring gesture operation information of the user about the target object, and step S4, acquiring voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object.
In an embodiment, the terminal 1 may include a plurality of memories A201 (collectively referred to as the storage unit A2), and the storage unit A2 may include, for example, random access memory (RAM), cache memory and/or read-only memory (ROM).
In an embodiment, the terminal 1 further comprises a bus connecting its different components (e.g. the processor A101 and the memory A201, a touch-sensitive display A3, the interaction means, etc.).
In an embodiment, the terminal 1 may further include a communication interface (e.g. an I/O interface A4), which can be used for communication with an external device.
In an embodiment, the terminal 1 may further include a communication device A5.
The terminal 1 provided by the second embodiment of the present invention includes a memory A201 and a processor A101, the processor A101 being configured to execute the computer program A6 stored in the memory A201 to implement the steps of the human-computer interaction method described in the first embodiment. The terminal 1 provided by this embodiment can therefore expand human-computer interaction operations through a multi-channel interaction mode that combines gesture operation information with voice data, making human-computer interaction more natural and better matched to people's daily habits.
The second embodiment of the present invention also provides a computer-readable storage medium that stores a computer program A6 which, when executed by the processor A101, implements the steps of the human-computer interaction method of the first embodiment, for example steps S2 and S4 shown in fig. 1.
In an embodiment, the computer-readable storage medium provided by this embodiment may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM, a RAM, a magnetic disk, an optical disk or a flash memory.
When executed by the processor A101, the computer program A6 stored in the computer-readable storage medium according to the second embodiment of the present invention can likewise expand human-computer interaction operations through a multi-channel interaction mode that combines gesture operation information with voice data, making human-computer interaction more natural and better matched to people's daily habits.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The present invention is not limited to the above preferred embodiments, and any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A human-computer interaction method is characterized by comprising the following steps:
acquiring gesture operation information of a user about a target object;
and acquiring voice data of the user, and acquiring a corresponding control instruction according to the voice data so as to perform corresponding operation on the target object.
2. The human-computer interaction method according to claim 1, wherein the step of acquiring the gesture operation information of the user about the target object is preceded by the following steps:
acquiring a wake-up instruction of the user to output display data corresponding to the wake-up instruction;
and outputting the display data.
3. The human-computer interaction method according to claim 2, wherein after the step of outputting the display data, the method comprises:
obtaining a display type of the display data, wherein the display type comprises a three-dimensional class and/or an input/output class;
and acquiring corresponding operable information according to the display type, wherein the operable information comprises at least one operation instruction.
4. The human-computer interaction method according to claim 2, wherein the step of acquiring voice data of the user and acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object comprises:
acquiring target data associated with the display data;
switching from displaying the display data to displaying the target data;
and performing corresponding operation on the target data according to the control instruction.
5. The human-computer interaction method according to claim 4, wherein the display data is map data comprising at least one point of interest;
the step of acquiring target data associated with the display data comprises:
acquiring the gesture operation information of the user for the map data so as to select a point of interest in the map data according to the gesture operation information, wherein the selected point of interest comprises address information;
the step of acquiring the voice data of the user and acquiring a corresponding control instruction according to the voice data to perform corresponding operation on the target data comprises the following steps:
acquiring a corresponding navigation control instruction according to the voice data of the user;
and planning a navigation path according to the navigation control instruction and the address information.
6. The human-computer interaction method according to claim 1, wherein the step of acquiring gesture operation information of the user about the target object comprises:
acquiring and identifying in-vehicle image data;
acquiring the gesture operation information in the in-vehicle image data and the in-vehicle device corresponding to the gesture direction in the gesture operation information;
wherein the target object is the in-vehicle device.
7. The human-computer interaction method according to claim 6, wherein the step of acquiring a corresponding control instruction according to the voice data to perform a corresponding operation on the target object comprises:
recognizing keywords in the voice data to obtain the sentences in the voice data that include the keywords, wherein the keywords include at least one of the names of the in-vehicle devices and indicative pronouns;
and identifying the control instruction in the sentence including the keyword, and performing the corresponding operation according to the control instruction so as to control the in-vehicle device.
8. A terminal comprising a memory and a processor;
the processor is adapted to execute a computer program stored in the memory to implement the steps of the human-computer interaction method of any one of claims 1-7.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a processor, carries out the steps of the human-computer interaction method according to any one of claims 1-7.
CN201910831283.7A 2019-09-04 2019-09-04 Man-machine interaction method, terminal and computer readable storage medium Pending CN110727410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910831283.7A CN110727410A (en) 2019-09-04 2019-09-04 Man-machine interaction method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910831283.7A CN110727410A (en) 2019-09-04 2019-09-04 Man-machine interaction method, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110727410A (en) 2020-01-24

Family

ID=69218909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910831283.7A Pending CN110727410A (en) 2019-09-04 2019-09-04 Man-machine interaction method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110727410A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105501121A (*) 2016-01-08 2016-04-20 北京乐驾科技有限公司 Intelligent wake-up method and system
CN109643166A (*) 2016-09-21 2019-04-16 苹果公司 Gesture-based control of autonomous vehicles
CN109522835A (*) 2018-11-13 2019-03-26 北京光年无限科技有限公司 Intelligent-robot-based children's book reading and interaction method and system
CN109933272A (*) 2019-01-31 2019-06-25 西南电子技术研究所(中国电子科技集团公司第十研究所) Deeply integrated multi-modal airborne cockpit human-machine interaction method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309153A (en) * 2020-03-25 2020-06-19 北京百度网讯科技有限公司 Control method and device for man-machine interaction, electronic equipment and storage medium
CN111538456A (en) * 2020-07-10 2020-08-14 深圳追一科技有限公司 Human-computer interaction method, device, terminal and storage medium based on virtual image
CN111966320A (en) * 2020-08-05 2020-11-20 湖北亿咖通科技有限公司 Multimodal interaction method for vehicle, storage medium, and electronic device
CN112420043A (en) * 2020-12-03 2021-02-26 深圳市欧瑞博科技股份有限公司 Intelligent awakening method and device based on voice, electronic equipment and storage medium
CN115476366A (en) * 2021-06-15 2022-12-16 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot type robot
CN115476366B (en) * 2021-06-15 2024-01-09 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot robot
CN115933890A (en) * 2023-03-15 2023-04-07 北京点意空间展览展示有限公司 Interactive projection method and system for exhibition hall
CN117316158A (en) * 2023-11-28 2023-12-29 科大讯飞股份有限公司 Interaction method, device, control equipment and storage medium
CN117316158B (en) * 2023-11-28 2024-04-12 科大讯飞股份有限公司 Interaction method, device, control equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110727410A (en) Man-machine interaction method, terminal and computer readable storage medium
US10656909B2 (en) Learning intended user actions
ES2958183T3 (en) Control procedure for electronic devices based on voice and motion recognition, and electronic device that applies the same
US10827067B2 (en) Text-to-speech apparatus and method, browser, and user terminal
US9613618B2 (en) Apparatus and method for recognizing voice and text
EP2717259B1 (en) Method and apparatus for performing preset operation mode using voice recognition
JP2021009701A (en) Interface intelligent interaction control method, apparatus, system, and program
EP3193328A1 (en) Method and device for performing voice recognition using grammar model
US11024300B2 (en) Electronic device and control method therefor
US10586528B2 (en) Domain-specific speech recognizers in a digital medium environment
CN110415679B (en) Voice error correction method, device, equipment and storage medium
CN104485115A (en) Pronunciation evaluation equipment, method and system
US9164579B2 (en) Electronic device for granting authority based on context awareness information
KR20100116462A (en) Input processing device for portable device and method including the same
JP2020004382A (en) Method and device for voice interaction
KR20210032875A (en) Voice information processing method, apparatus, program and storage medium
CN103970451A (en) Method and apparatus for controlling content playback
CN114154459A (en) Speech recognition text processing method and device, electronic equipment and storage medium
US11120219B2 (en) User-customized computer-automated translation
CN111722779A (en) Man-machine interaction method, terminal and computer readable storage medium
CN108682437B (en) Information processing method, device, medium and computing equipment
CN110888706A (en) Interface content display method and device and storage medium
CN112307162A (en) Method and device for information interaction
CN114690992B (en) Prompting method, prompting device and computer storage medium
CN113282472B (en) Performance test method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 208, building 4, 1411 Yecheng Road, Jiading District, Shanghai, 201821

Applicant after: Botai vehicle networking technology (Shanghai) Co.,Ltd.

Address before: Room 208, building 4, 1411 Yecheng Road, Jiading District, Shanghai, 201821

Applicant before: SHANGHAI PATEO ELECTRONIC EQUIPMENT MANUFACTURING Co.,Ltd.