CN110262662A - Intelligent human-machine interaction method - Google Patents

Intelligent human-machine interaction method

Info

Publication number
CN110262662A
CN110262662A (application CN201910537956.8A)
Authority
CN
China
Prior art keywords
audio, machine interaction, interaction method, intelligent human, visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910537956.8A
Other languages
Chinese (zh)
Inventor
He Jianjun (何建军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xueyuan Information Technology Development Co Ltd
Original Assignee
Hebei Xueyuan Information Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xueyuan Information Technology Development Co Ltd
Priority to CN201910537956.8A
Publication of CN110262662A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command

Abstract

The invention discloses an intelligent human-machine interaction method comprising the following steps: a visual apparatus is worn, and its infrared sensor detects in real time whether a person is present within a preset range; when the detection result is positive, the audio interaction unit is started, and three-dimensional image information is acquired to obtain depth maps from different angles; the client generates image information and retrieves 3D virtual scene data and prestored audio instruction files from memory; the audio operation instruction detected by the audio input unit is matched against the prestored audio instruction files, and upon a successful match the matched audio operation instruction is obtained in real time; according to the audio operation instruction, gesture operations are performed, the inertial sensor of the visual apparatus obtains position and pose information, and the visual sensor captures the position of the human hand in real time and recognizes its posture. The present invention creates a realistic virtual environment with stronger authenticity, and comparative learning can be carried out in a highly simulated scene, which is more convenient and efficient.

Description

Intelligent human-machine interaction method
Technical field
The present invention relates to the technical field of human-computer interaction, and more particularly to an intelligent human-machine interaction method.
Background technique
The development of dynamic gesture recognition has had a great influence on intelligent control and video surveillance, and one important application of dynamic gesture recognition is human-computer interaction. At present, dynamic gesture interaction involves many gestures, gesture definitions are often unreasonable, and the overuse of gestures leads to a decline in usability and user experience. This is especially true in teaching applications: the existing approach mostly relies on watching videos and listening to audio, so the learner can only passively watch or listen to prestored files and cannot interact. VR equipment was later used for presentation, but its realism is poor.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to provide an intelligent human-machine interaction method that creates a realistic virtual environment with stronger authenticity, so that comparative learning can be carried out in a highly simulated scene more conveniently and efficiently.
To achieve the above object, the technical solution adopted by the present invention is as follows. An intelligent human-machine interaction method comprises the following steps:
(1) A visual apparatus is worn by the demonstrator; visual apparatuses in corresponding number may further be worn according to the number of learners;
(2) The infrared sensor of the visual apparatus detects in real time whether a person is present within a preset range;
(3) When the detection result of step (2) is negative, step (2) is repeated; when the detection result of step (2) is positive, the audio interaction unit is started, and the visual sensor acquires three-dimensional image information to obtain depth maps from different angles;
(4) The client generates image information through a graphics processor, and retrieves 3D virtual scene data and prestored audio instruction files from memory;
(5) The audio operation instruction detected by the audio input unit in step (3) is matched against the audio instruction files prestored in step (4); upon a successful match, the matched audio operation instruction is obtained in real time;
(6) According to the audio operation instruction, the demonstrator performs gesture operations; the inertial sensor of the visual apparatus obtains position and pose information, and the visual sensor captures the position of the human hand in real time and recognizes its posture.
In a preferred embodiment, the inertial sensor in step (6) comprises an accelerometer and a gyroscope.
In a preferred embodiment, the audio interaction unit in step (3) comprises a voice input module, a speech recognition module, and a voice output module.
In a preferred embodiment, the visual sensor in step (3) comprises an image information camera and a depth information camera.
In a preferred embodiment, the visual apparatus is an MR visor.
In a preferred embodiment, the client is a PC.
Compared with the prior art, the beneficial effects of the present invention are as follows: the present invention creates a realistic virtual environment with stronger authenticity, and comparative learning can be carried out in a highly simulated scene, which is more convenient and efficient.
Detailed description of the invention
Fig. 1 is a schematic block diagram of the present invention.
Specific embodiment
The present invention is further described below with reference to specific embodiments. The following embodiments are only intended to illustrate the technical solution of the present invention clearly and are not intended to limit its protection scope.
Embodiment:
An intelligent human-machine interaction method comprises the following steps:
(1) A visual apparatus is worn by the demonstrator; visual apparatuses in corresponding number may further be worn according to the number of learners;
(2) The infrared sensor of the visual apparatus detects in real time whether a person is present within a preset range;
(3) When the detection result of step (2) is negative, step (2) is repeated; when the detection result of step (2) is positive, the audio interaction unit is started, and the visual sensor acquires three-dimensional image information to obtain depth maps from different angles;
(4) The client generates image information through a graphics processor, and retrieves 3D virtual scene data and prestored audio instruction files from memory;
(5) The audio operation instruction detected by the audio input unit in step (3) is matched against the audio instruction files prestored in step (4); upon a successful match, the matched audio operation instruction is obtained in real time;
(6) According to the audio operation instruction, the demonstrator performs gesture operations; the inertial sensor of the visual apparatus obtains position and pose information, and the visual sensor captures the position of the human hand in real time and recognizes its posture. A control-flow sketch of steps (2) to (6) follows below.
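By way of illustration only, the control flow of steps (2) to (6) can be modelled in software as in the following minimal Python sketch. All class, method, and variable names here are placeholders invented for the sketch; the patent does not specify any software interface.

```python
import time

# Illustrative stand-ins for the patent's components; none of these class or
# method names come from the patent text.
class Headset:
    """Stands in for the MR visor with its infrared and inertial sensors."""
    def person_in_range(self) -> bool:      # step (2): infrared presence check
        return True                         # stubbed as "always detected"
    def inertial_pose(self) -> dict:        # step (6): accelerometer + gyroscope
        return {"position": (0.0, 0.0, 0.0), "orientation": (0.0, 0.0, 0.0)}

class VisionSensor:
    """Stands in for the image-information and depth-information cameras."""
    def capture_depth_maps(self) -> list:   # step (3): multi-angle depth maps
        return []
    def track_hand(self) -> dict:           # step (6): hand position and posture
        return {"position": (0.1, 0.2, 0.5), "gesture": "open"}

def interaction_loop(headset, vision, prestored_commands, heard, poll_s=0.1):
    # Steps (2)/(3): poll the infrared sensor until a person enters the preset range.
    while not headset.person_in_range():
        time.sleep(poll_s)
    depth_maps = vision.capture_depth_maps()  # step (3); unused in this toy example

    for utterance in heard:                   # step (5): detected audio operations
        action = prestored_commands.get(utterance)  # match against prestored files
        if action is None:
            continue                          # no match: keep listening
        # Step (6): gesture phase driven by the matched instruction.
        pose = headset.inertial_pose()
        hand = vision.track_hand()
        print(f"{utterance!r} -> {action}: pose={pose}, hand={hand}")

interaction_loop(Headset(), VisionSensor(),
                 prestored_commands={"rotate model": "rotate", "next step": "advance"},
                 heard=["hello", "rotate model"])
```

The sketch deliberately separates the presence-gated start-up (steps (2) and (3)) from the repeating command/gesture cycle (steps (5) and (6)), which is the structure the numbered steps describe.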
Specifically, the inertial sensor in step (6) comprises an accelerometer and a gyroscope.
Further, obtaining position and pose information with the accelerometer and gyroscope gives high sensitivity and stable output. The text does not fix a particular sensor-fusion scheme; one common choice is sketched below.
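A minimal sketch of such a fusion, assuming a complementary filter over a single pitch angle. The blending factor alpha and the sample values are assumptions made for the example, not parameters from the patent.

```python
import math

def complementary_pitch(pitch, gyro_rate, accel, dt, alpha=0.98):
    """One complementary-filter update of a pitch angle (radians).

    gyro_rate is the angular rate about the pitch axis (rad/s);
    accel is the (ax, ay, az) accelerometer reading (m/s^2).
    """
    ax, ay, az = accel
    # Gyroscope path: integrate angular rate (responsive, but drifts).
    gyro_pitch = pitch + gyro_rate * dt
    # Accelerometer path: pitch from the gravity vector (drift-free, but noisy).
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend: trust the gyroscope in the short term, the accelerometer long term.
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

pitch = 0.0
for gyro_rate, accel in [(0.05, (0.0, 0.1, 9.8)), (0.04, (0.05, 0.1, 9.8))]:
    pitch = complementary_pitch(pitch, gyro_rate, accel, dt=0.01)
print(f"estimated pitch: {pitch:.4f} rad")
```

The complementary structure explains the claimed stability: the accelerometer corrects the slow drift that pure gyroscope integration would accumulate.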
Specifically, the audio interaction unit in step (3) comprises a voice input module, a speech recognition module, and a voice output module.
Further, the voice input module may use a microphone to capture audio, and the voice output module may use a loudspeaker to play audio. One way the recognized speech could then be matched against the prestored instruction files of step (5) is sketched below.
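A minimal sketch, assuming the speech recognition module has already produced text and using fuzzy string matching to tolerate small recognition errors. Fuzzy matching and the cutoff threshold are illustrative choices, not the method specified in the patent.

```python
import difflib

def match_command(recognized_text, prestored_commands, cutoff=0.75):
    """Return the action for the closest prestored phrase, or None if no match."""
    phrases = list(prestored_commands)
    # difflib ranks candidates by similarity ratio; the cutoff value here is
    # an assumed threshold for this sketch.
    hits = difflib.get_close_matches(recognized_text, phrases, n=1, cutoff=cutoff)
    return prestored_commands[hits[0]] if hits else None

commands = {"show the next slide": "advance", "rotate the model": "rotate"}
print(match_command("show the next slides", commands))  # close match -> "advance"
print(match_command("unrelated speech", commands))      # no match -> None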
Specifically, the visual sensor in step (3) comprises an image information camera and a depth information camera.
Further, the image information camera and the depth information camera are used to capture image information; the depth maps they produce can be back-projected into three-dimensional points, as in the sketch below.
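As an illustration of how a depth-information camera yields three-dimensional image information, the following sketch back-projects a depth map into camera-frame 3D points through a pinhole model. The intrinsic parameters (fx, fy, cx, cy) and the toy depth values are assumptions for the example, not values from the patent.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # pinhole camera model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # (h, w, 3) array of points

depth = np.full((4, 4), 1.5)  # toy 4x4 depth map: every pixel 1.5 m away
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(points.shape, points[0, 0])
```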
Specifically, the visual apparatus is an MR visor and the client is a PC.
Further, as shown in Fig. 1, when two or more MR visors are in use they are connected to the PC through a video distributor. Each MR visor is provided with a microprocessor, which is connected to the infrared sensor, the visual sensor, the inertial sensor, and the audio interaction unit; the PC is provided with a main processor, which is connected to the graphics processor. In a specific implementation, the MR visors and the PC are connected through communication units such as Bluetooth modules or WiFi modules; a software model of this topology is sketched below.
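The following sketch models the Fig. 1 topology in software: each MR visor's microprocessor sends sensor packets to the PC's main processor, which returns rendered frames. The in-process queues stand in for the Bluetooth/WiFi links and the video distributor, and all names are invented for the illustration.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class VisorLink:
    """Channel pair between one MR visor's microprocessor and the PC."""
    visor_id: int
    to_pc: Queue = field(default_factory=Queue)     # sensor packets upstream
    to_visor: Queue = field(default_factory=Queue)  # rendered frames downstream

def pc_main_processor(links):
    # Poll every connected visor, render (stubbed), and fan frames back out.
    for link in links:
        while not link.to_pc.empty():
            packet = link.to_pc.get()
            frame = f"frame rendered for {packet}"  # stand-in for the GPU step
            link.to_visor.put(frame)

links = [VisorLink(i) for i in range(2)]            # two or more visors, per Fig. 1
links[0].to_pc.put({"visor": 0, "pose": (0.0, 0.0, 0.0)})
pc_main_processor(links)
print(links[0].to_visor.get())
```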
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. An intelligent human-machine interaction method, characterized by comprising the following steps:
(1) a visual apparatus is worn by the demonstrator; visual apparatuses in corresponding number may further be worn according to the number of learners;
(2) the infrared sensor of the visual apparatus detects in real time whether a person is present within a preset range;
(3) when the detection result of step (2) is negative, step (2) is repeated; when the detection result of step (2) is positive, the audio interaction unit is started, and the visual sensor acquires three-dimensional image information to obtain depth maps from different angles;
(4) the client generates image information through a graphics processor, and retrieves 3D virtual scene data and prestored audio instruction files from memory;
(5) the audio operation instruction detected by the audio input unit in step (3) is matched against the audio instruction files prestored in step (4); upon a successful match, the matched audio operation instruction is obtained in real time;
(6) according to the audio operation instruction, the demonstrator performs gesture operations; the inertial sensor of the visual apparatus obtains position and pose information, and the visual sensor captures the position of the human hand in real time and recognizes its posture.
2. The intelligent human-machine interaction method according to claim 1, characterized in that the inertial sensor in step (6) comprises an accelerometer and a gyroscope.
3. The intelligent human-machine interaction method according to claim 1, characterized in that the audio interaction unit in step (3) comprises a voice input module, a speech recognition module, and a voice output module.
4. The intelligent human-machine interaction method according to claim 1, characterized in that the visual sensor in step (3) comprises an image information camera and a depth information camera.
5. The intelligent human-machine interaction method according to claim 1, characterized in that the visual apparatus is an MR visor.
6. The intelligent human-machine interaction method according to claim 1, characterized in that the client is a PC.
CN201910537956.8A 2019-06-20 2019-06-20 Intelligent human-machine interaction method Pending CN110262662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910537956.8A CN110262662A (en) 2019-06-20 2019-06-20 Intelligent human-machine interaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910537956.8A CN110262662A (en) 2019-06-20 2019-06-20 Intelligent human-machine interaction method

Publications (1)

Publication Number Publication Date
CN110262662A 2019-09-20

Family

ID=67920034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537956.8A Pending Intelligent human-machine interaction method

Country Status (1)

Country Link
CN (1) CN110262662A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099623A (en) * 2020-08-20 2020-12-18 昆山火灵网络科技有限公司 Man-machine interaction system and method
CN113719810A (en) * 2021-06-07 2021-11-30 西安理工大学 Human-computer interaction lighting device based on visual identification and intelligent control

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN106502424A (en) * 2016-11-29 2017-03-15 上海小持智能科技有限公司 Based on the interactive augmented reality system of speech gestures and limb action
CN108304155A (en) * 2018-01-26 2018-07-20 广州源创网络科技有限公司 A kind of man-machine interaction control method
CN109172066A (en) * 2018-08-18 2019-01-11 华中科技大学 Intelligent artificial limb hand and its system and method based on voice control and visual identity

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN106502424A (en) * 2016-11-29 2017-03-15 上海小持智能科技有限公司 Based on the interactive augmented reality system of speech gestures and limb action
CN108304155A (en) * 2018-01-26 2018-07-20 广州源创网络科技有限公司 A kind of man-machine interaction control method
CN109172066A (en) * 2018-08-18 2019-01-11 华中科技大学 Intelligent artificial limb hand and its system and method based on voice control and visual identity

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099623A (en) * 2020-08-20 2020-12-18 昆山火灵网络科技有限公司 Man-machine interaction system and method
CN113719810A (en) * 2021-06-07 2021-11-30 西安理工大学 Human-computer interaction lighting device based on visual identification and intelligent control
CN113719810B (en) * 2021-06-07 2023-08-04 西安理工大学 Man-machine interaction lamp light device based on visual identification and intelligent control

Similar Documents

Publication Publication Date Title
CN107103801B (en) Remote three-dimensional scene interactive teaching system and control method
US20220156986A1 (en) Scene interaction method and apparatus, electronic device, and computer storage medium
CN113011723B (en) Remote equipment maintenance system based on augmented reality
CN104575137B (en) Split type scene interaction multi-medium intelligent terminal
CN106363637A (en) Fast teaching method and device for robot
CN111240490A (en) Equipment insulation test training system based on VR virtual immersion and circular screen interaction
CN106652590A (en) Teaching method, teaching recognizer and teaching system
CN109358754B (en) Mixed reality head-mounted display system
CN104536579A (en) Interactive three-dimensional scenery and digital image high-speed fusing processing system and method
CN107463245A (en) Virtual reality emulation experimental system
CN110969905A (en) Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
CN108805766B (en) AR somatosensory immersive teaching system and method
CN102779000A (en) User interaction system and method
CN206105869U (en) Quick teaching apparatus of robot
CN109426343B (en) Collaborative training method and system based on virtual reality
CN110796005A (en) Method, device, electronic equipment and medium for online teaching monitoring
CN110262662A (en) A kind of intelligent human-machine interaction method
CN110389664B (en) Fire scene simulation analysis device and method based on augmented reality
CN209572101U (en) Teaching share system based on augmented reality
US20190355281A1 (en) Learning support system and recording medium
CN104575139A (en) Scene interactive type intelligent multimedia terminal and application thereof
CN109116987A (en) A kind of holographic display system based on Kinect gesture control
CN112331001A (en) Teaching system based on virtual reality technology
CN204480506U (en) Split type scene interaction multi-medium intelligent terminal
WO2023250267A1 (en) Robotic learning of tasks using augmented reality

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20190920)