CN112732079A - Virtual human-computer interaction method, device and storage medium based on motion sensing - Google Patents

Virtual human-computer interaction method, device and storage medium based on motion sensing

Info

Publication number
CN112732079A
CN112732079A
Authority
CN
China
Prior art keywords
virtual
animation event
animation
user
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011614046.4A
Other languages
Chinese (zh)
Inventor
艾元平 (Ai Yuanping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Desheng Photoelectric Technology Inc
Original Assignee
Guangzhou Desheng Photoelectric Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Desheng Photoelectric Technology Inc
Priority to CN202011614046.4A
Publication of CN112732079A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/16: Sound input; sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a motion-sensing-based virtual human-computer interaction method, device, and storage medium. The method comprises the following steps. Step S1: preset matching relationships between behavior actions and animation events. Step S2: receive identification data generated when a detection device detects the user's current behavior action, call the animation event matched with the identification data, and control a virtual target in a virtual scene to perform the corresponding virtual action according to the animation event. The invention increases the flexibility of the interaction process, enriches the interactive functions, and improves the user experience.

Description

Virtual human-computer interaction method, device and storage medium based on motion sensing
Technical Field
The invention relates to the technical field of virtual human-computer interaction, and in particular to a motion-sensing-based virtual human-computer interaction method, an electronic device, and a storage medium.
Background
Existing virtual human-computer interaction takes place mainly on mobile phones or computers: the user clicks content displayed on the terminal with a mouse or by finger touch. To make interaction more engaging, some existing interaction systems add a motion-sensing device. However, a typical motion-sensing device can only detect the user's gestures, and the virtual target in the virtual scene simply mirrors the user's current gesture in real time. Because the virtual target's action can only follow the user's current gesture, the interaction function is limited and the user experience is poor.
Disclosure of Invention
To overcome the defects of the prior art, a first object of the invention is to provide a motion-sensing-based virtual human-computer interaction method that enriches the interaction functions and improves the user experience.
Another object of the present invention is to provide an electronic device.
It is a further object of the present invention to provide a storage medium.
The first object of the invention is achieved by the following technical solution:
A motion-sensing-based virtual human-computer interaction method comprises the following steps:
Step S1: presetting matching relationships between behavior actions and animation events;
Step S2: receiving identification data generated when a detection device detects the user's current behavior action, calling the animation event matched with the identification data, and controlling a virtual target in a virtual scene to perform the corresponding virtual action according to the animation event.
Further, the detection device comprises a motion-sensing device, an infrared touch device, and/or a microphone device.
Further, the identification data includes gesture data and position data obtained by the motion sensing device, touch signals obtained by the infrared touch device, and/or sound data obtained by the microphone device.
Further, the matching relationship between a behavior action and an animation event is preset in step S1 as follows:
selecting any animation event, and retrieving and displaying the associated prompt information corresponding to that animation event;
and starting the detection device to collect the identification data generated as the user performs the corresponding behavior action according to the associated prompt information, and matching the identification data with the animation event.
Further, after the matching relationship between a behavior action and an animation event is preset in step S1, feature extraction is performed on the identification data collected while the user follows the associated prompt information, yielding sound feature information, motion feature information, and/or touch feature information; all feature information corresponding to the same animation event is stored in a database.
Further, the animation event matched with the identification data is called in step S2 as follows:
performing feature extraction on the identification data to obtain the corresponding feature information, comparing the extracted feature information with the feature information corresponding to each animation event in the database, and, if the comparison results are consistent, calling the animation event whose feature information matches.
Further, step S1 includes receiving a user-defined instruction and, according to that instruction, customizing the associated prompt information corresponding to each animation event and the virtual action the virtual target performs for that animation event.
Further, the associated action that the associated prompt information asks the user to make is different from the virtual action the virtual target performs for the animation event.
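By way of illustration only, the following minimal Python sketch shows one way steps S1 and S2 could be realized. All names (AnimationEvent, InteractionSystem, preset_match, and so on) are assumptions of this sketch, not terminology from the invention.

```python
from dataclasses import dataclass


@dataclass
class AnimationEvent:
    name: str            # identifier of the animation event
    virtual_action: str  # virtual action the virtual target performs
    prompt: str          # associated prompt information shown to the user


class InteractionSystem:
    def __init__(self) -> None:
        # Step S1: preset matching relationships between behavior actions
        # (keyed here by a simple signature string) and animation events.
        self.matches: dict[str, AnimationEvent] = {}

    def preset_match(self, behavior_signature: str, event: AnimationEvent) -> None:
        self.matches[behavior_signature] = event

    def on_identification_data(self, behavior_signature: str) -> str | None:
        # Step S2: call the animation event matched with the identification
        # data; the virtual target then performs the recorded virtual action.
        event = self.matches.get(behavior_signature)
        return event.virtual_action if event else None


system = InteractionSystem()
system.preset_match(
    "wave_right_hand",
    AnimationEvent("greet", virtual_action="bow", prompt="Please wave your hand"),
)
print(system.on_identification_data("wave_right_hand"))  # -> bow
```

Note that the prompted behavior ("wave") and the resulting virtual action ("bow") need not coincide, which is exactly the flexibility the matching relationship provides.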
The second object of the invention is achieved by the following technical solution:
An electronic device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the motion-sensing-based virtual human-computer interaction method described above.
The third object of the invention is achieved by the following technical solution:
A storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the motion-sensing-based virtual human-computer interaction method described above.
Compared with the prior art, the invention has the following beneficial effects:
The user can preset matching relationships between user behavior actions and animation events as needed. When the user's identification data is detected, the matching relationship is used to call the animation event matched with that data, so that the virtual target performs the virtual action corresponding to the animation event. Because the matching relationship between behavior actions and animation events is configurable, the action finally presented by the virtual target can be either the same as or different from the user's behavior action, which makes the interaction more engaging and improves the user experience.
Drawings
FIG. 1 is a flowchart illustrating the virtual human-computer interaction method according to the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments and technical features described below can be combined to form new embodiments.
Example one
This embodiment provides a motion-sensing-based virtual human-computer interaction method which, as shown in FIG. 1, comprises the following steps.
Step S1: presetting matching relationships between behavior actions and animation events.
The interactive system of this embodiment is preconfigured with a plurality of animation events. Each animation event records the virtual action to be performed by the virtual target and associated prompt information for prompting the user. The associated prompt information may prompt the user to make a corresponding associated action, to make a corresponding sound, or to touch a designated location, and it may be presented as an animation, a picture, voice, or text.
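A minimal sketch of how such an animation event record could be represented follows; the dataclass and enum names are illustrative assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from enum import Enum


class PromptKind(Enum):
    ANIMATION = "animation"
    PICTURE = "picture"
    VOICE = "voice"
    TEXT = "text"


@dataclass
class Prompt:
    kind: PromptKind
    content: str  # file path or literal text, depending on kind


@dataclass
class AnimationEvent:
    name: str
    virtual_action: str  # virtual action to be performed by the virtual target
    prompt: Prompt       # associated prompt information shown to the user


jump_event = AnimationEvent(
    name="jump",
    virtual_action="virtual_target_jump",
    prompt=Prompt(PromptKind.TEXT, "Please jump in place"),
)
print(jump_event.prompt.kind.value)  # -> text
```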
The user can customize the associated prompt information corresponding to each animation event and the virtual action the virtual target performs for that event. In this embodiment, after entering the settings interface and selecting any animation event, the user may customize the associated prompt information alone, the virtual action of the virtual target alone, or both. The system generates a user-defined instruction from the content the user sets and thereby determines the associated prompt information corresponding to each animation event and the virtual action the virtual target makes for it.
Within the same animation event, the associated action that the prompt information asks the user to make may be the same as or different from the virtual action the virtual target performs for that event; this can be set according to the user's needs.
In addition, the user can set the user behavior action matched with each animation event, as sketched below. After entering the settings interface, the user selects any animation event; the system then retrieves and displays the associated prompt information corresponding to the selected event and simultaneously starts the detection device. When the user makes the corresponding behavior action according to the associated prompt information, the detection device collects identification data during the user's movement and matches it with the corresponding animation event. In this way each animation event is bound to a corresponding behavior action: when the user later performs that behavior action again, the matching relationship directly calls the corresponding animation event, and the virtual target is controlled to perform the virtual action that the event records.
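The binding flow just described could look like the following sketch, assuming a hypothetical DetectionDevice interface; a real system would collect live sensor data rather than the canned sample used here.

```python
from typing import Protocol


class DetectionDevice(Protocol):
    def collect(self) -> bytes:
        """Collect identification data while the user moves."""


class FakeMotionSensor:
    def collect(self) -> bytes:
        return b"gesture+position sample"  # stand-in for real sensor output


def bind_behavior_to_event(event_name: str,
                           prompts: dict[str, str],
                           device: DetectionDevice,
                           matches: dict[bytes, str]) -> None:
    # 1. Retrieve and display the associated prompt information.
    print("Prompt:", prompts[event_name])
    # 2. Start the detection device and collect identification data
    #    while the user performs the corresponding behavior action.
    data = device.collect()
    # 3. Match the identification data with the animation event so later
    #    occurrences of this behavior call the event directly.
    matches[data] = event_name


matches: dict[bytes, str] = {}
bind_behavior_to_event("greet", {"greet": "Please wave your hand"},
                       FakeMotionSensor(), matches)
print(matches)
```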
Step S2: receiving identification data generated when the detection device detects the user's current behavior action, calling the animation event matched with the identification data, and controlling a virtual target in the virtual scene to perform the corresponding virtual action according to the animation event.
In this embodiment, the detection device comprises a motion-sensing device, an infrared touch device, and/or a microphone device. During the presetting process of step S1, when the associated prompt information asks the user to make a corresponding action, the motion-sensing device is started to recognize the user's behavior action; when it asks the user to make a corresponding sound, the microphone device is started to record the user's voice; and when it asks the user to touch a designated location, the infrared touch device is started to recognize the user's touch position. The detection devices can also be combined in different ways.
The motion-sensing device recognizes the user's gesture actions, and the identification data it produces are gesture data and position data; the infrared touch device recognizes the user's touch actions, and the identification data it produces are touch signals; the microphone device records the user's voice, and the identification data it produces are sound data.
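The three kinds of identification data could be modeled as a tagged union, sketched here under assumed names:

```python
from dataclasses import dataclass


@dataclass
class GestureSample:          # from the motion-sensing device
    gesture: str              # e.g. "wave", "jump"
    position: tuple[float, float, float]


@dataclass
class TouchSample:            # from the infrared touch device
    x: float
    y: float


@dataclass
class SoundSample:            # from the microphone device
    pcm: bytes                # raw audio captured from the user


IdentificationData = GestureSample | TouchSample | SoundSample

sample: IdentificationData = GestureSample("wave", (0.1, 1.2, 0.4))
print(type(sample).__name__)  # -> GestureSample
```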
After the matching relationship between a behavior action and an animation event is preset in step S1, feature extraction is performed on the identification data generated while the user follows the associated prompt information, yielding sound feature information, motion feature information, and/or touch feature information, and all feature information corresponding to the same animation event is stored in the database.
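This enrollment step could be sketched as follows. The feature extractor is a deliberate placeholder (a short hash tag) standing in for real motion, sound, or touch feature computation; none of these names come from the patent.

```python
import hashlib
from collections import defaultdict

feature_db: dict[str, list[str]] = defaultdict(list)


def extract_features(raw: bytes, kind: str) -> str:
    # Placeholder: a real system would compute motion, sound, or touch
    # feature vectors; here we just tag a short hash of the raw data.
    return f"{kind}:{hashlib.sha1(raw).hexdigest()[:8]}"


def enroll(event_name: str, samples: list[tuple[bytes, str]]) -> None:
    # Store every piece of feature information for the same animation event.
    for raw, kind in samples:
        feature_db[event_name].append(extract_features(raw, kind))


enroll("greet", [(b"gesture-data", "motion"), (b"audio-data", "sound")])
print(feature_db["greet"])
```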
In step S2, after the detection device produces identification data, feature extraction is performed on that data to obtain the corresponding feature information. The extracted feature information is compared with the feature information stored for each animation event in the database; if it is consistent with the feature information of any animation event, that event is called, and the virtual target performs the preset virtual action according to the event.
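The runtime side pairs with the enrollment sketch above: extract features from the incoming identification data, compare them with the stored feature information, and call the matched event. Again, all names are illustrative assumptions.

```python
import hashlib


def extract_features(raw: bytes, kind: str) -> str:
    # Same placeholder extractor as in the enrollment sketch.
    return f"{kind}:{hashlib.sha1(raw).hexdigest()[:8]}"


# Pretend the enrollment step already stored features for the "greet" event.
feature_db = {"greet": [extract_features(b"gesture-data", "motion")]}
virtual_actions = {"greet": "virtual_target_bows"}  # preset virtual action


def handle_identification_data(raw: bytes, kind: str) -> str | None:
    """Compare extracted features with the database and call the match."""
    feature = extract_features(raw, kind)
    for event_name, stored_features in feature_db.items():
        if feature in stored_features:   # comparison results are consistent
            return virtual_actions[event_name]
    return None                          # no animation event matched


print(handle_identification_data(b"gesture-data", "motion"))  # -> virtual_target_bows
```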
Example two
This embodiment provides an electronic device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the motion-sensing-based virtual human-computer interaction method described above is implemented. This embodiment also provides a storage medium on which a computer program is stored; when the computer program is executed, the same method is implemented.
The device and the storage medium of this embodiment are two aspects of the same inventive concept. Since the implementation of the method has been described in detail above, those skilled in the art can clearly understand the structure and implementation of the system in this embodiment from the foregoing description, and the details are not repeated here for brevity.
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the protection scope of the present invention.

Claims (10)

1. A motion-sensing-based virtual human-computer interaction method, characterized by comprising the following steps:
Step S1: presetting matching relationships between behavior actions and animation events;
Step S2: receiving identification data generated when a detection device detects the user's current behavior action, calling the animation event matched with the identification data, and controlling a virtual target in a virtual scene to perform the corresponding virtual action according to the animation event.
2. The motion-sensing-based virtual human-computer interaction method of claim 1, wherein the detection device comprises a motion-sensing device, an infrared touch device, and/or a microphone device.
3. The motion-sensing-based virtual human-computer interaction method of claim 2, wherein the identification data comprise gesture data and position data recognized by the motion-sensing device, touch signals recognized by the infrared touch device, and/or sound data recognized by the microphone device.
4. The motion-sensing-based virtual human-computer interaction method of claim 1, wherein the matching relationship between a behavior action and an animation event is preset in step S1 as follows:
selecting any animation event, and retrieving and displaying the associated prompt information corresponding to that animation event;
and starting the detection device to collect the identification data generated as the user performs the corresponding behavior action according to the associated prompt information, and matching the identification data with the animation event.
5. The motion-sensing-based virtual human-computer interaction method of claim 4, wherein, after the matching relationship between a behavior action and an animation event is preset in step S1, feature extraction is performed on the identification data collected while the user follows the associated prompt information to obtain sound feature information, motion feature information, and/or touch feature information, and the feature information corresponding to the same animation event is stored in a database.
6. The motion-sensing-based virtual human-computer interaction method of claim 5, wherein the animation event matched with the identification data is called in step S2 as follows:
performing feature extraction on the identification data to obtain the corresponding feature information, comparing the extracted feature information with the feature information corresponding to each animation event in the database, and, if the comparison results are consistent, calling the animation event whose feature information matches.
7. The motion-sensing-based virtual human-computer interaction method of claim 4, wherein step S1 further comprises receiving a user-defined instruction and, according to the user-defined instruction, customizing the associated prompt information corresponding to each animation event and the virtual action of the virtual target for that animation event.
8. The motion-sensing-based virtual human-computer interaction method of claim 7, wherein the associated action that the associated prompt information prompts the user to make is different from the virtual action of the virtual target for the animation event.
9. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the motion-sensing-based virtual human-computer interaction method of any one of claims 1 to 8.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the motion-sensing-based virtual human-computer interaction method of any one of claims 1 to 8.
Application CN202011614046.4A, priority date 2020-12-30, filed 2020-12-30: Virtual human-computer interaction method, device and storage medium based on motion sensing. Status: Pending. Publication: CN112732079A (en).

Priority Applications (1)

Application Number: CN202011614046.4A | Publication: CN112732079A (en) | Priority Date: 2020-12-30 | Filing Date: 2020-12-30 | Title: Virtual human-computer interaction method, device and storage medium based on motion sensing

Applications Claiming Priority (1)

Application Number: CN202011614046.4A | Publication: CN112732079A (en) | Priority Date: 2020-12-30 | Filing Date: 2020-12-30 | Title: Virtual human-computer interaction method, device and storage medium based on motion sensing

Publications (1)

Publication Number: CN112732079A (en) | Publication Date: 2021-04-30

Family

ID=75611222

Family Applications (1)

Application Number: CN202011614046.4A | Title: Virtual human-computer interaction method, device and storage medium based on motion sensing | Priority Date: 2020-12-30 | Filing Date: 2020-12-30 | Status: Pending | Publication: CN112732079A (en)

Country Status (1)

Country: CN | Publication: CN112732079A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113470466A * | 2021-06-15 | 2021-10-01 | 华北科技学院 (North China Institute of Science and Technology; China Coal Mine Safety Technology Training Center) | Mixed reality tunneling machine operation training system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination