CN116978112A - Motion detection and virtual operation method and system thereof, storage medium and terminal equipment - Google Patents


Info

Publication number
CN116978112A
Authority
CN
China
Prior art keywords
motion
real
information
action
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310412463.8A
Other languages
Chinese (zh)
Inventor
李旦
陈千举
汪茗纯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310412463.8A priority Critical patent/CN116978112A/en
Publication of CN116978112A publication Critical patent/CN116978112A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose a motion detection method and system, a virtual operation method and system, a storage medium, and a terminal device, applied in the technical field of information processing. To detect the real action change of a target object, the motion detection system collects real motion information of the parts of the target object, matches it against the motion process information respectively corresponding to each group of reference decomposition actions in a reference motion set preset in the system, and then determines the real action change of the target object according to the matching result. Because a motion in a motion scene is refined into reference decomposition actions, the real action change of the target object can be determined more precisely, improving the accuracy of real action detection and allowing the precise action change to be applied in a wider range of scenarios. Moreover, the real motion information of the target object can be collected without relying on devices worn or held by the target object, making collection convenient and flexible.

Description

Motion detection and virtual operation method and system thereof, storage medium and terminal equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular to a motion detection method, a motion detection system, a virtual operation method, a virtual operation system, a storage medium, and a terminal device.
Background
Virtual Reality (VR) is a technology in which a computer simulates a virtual environment to give the user a sense of immersion. It can be applied in many scenarios, such as motion-sensing games. Typically, the user must wear a device such as a wristband or hold a handle; as the user moves the wristband or handle, the system detects information from that device to determine the user's actual motion, and then converts the actual motion into a virtual operation in the virtual environment, thereby realizing interaction between the user and the virtual reality system.
Therefore, the accuracy with which the virtual reality system detects and identifies the user's actual motion directly affects whether the interaction between the user and the system is smooth, so accuracy in detecting the user's actual motion is important.
Disclosure of Invention
Embodiments of the invention provide a motion detection method, a motion detection system, a virtual operation method, a virtual operation system, a storage medium, and a terminal device, which improve the accuracy of detecting actual actions.
In one aspect, an embodiment of the present invention provides a motion detection method, including:
collecting real motion information of a part of a target object;
invoking a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions;
sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
and determining the real action change of the target object according to the matching result.
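The four steps above can be outlined in Python. This is a hypothetical sketch only: the function names, the `collect_motion_info` and `match` callables, and the group dictionaries are all illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the four steps; every name here is illustrative.
def detect_action_change(collect_motion_info, reference_motion_set, match):
    """Return the names of the reference decomposition action groups that match."""
    real_motion_info = collect_motion_info()          # step 1: collect real motion info
    matched_groups = []
    for group in reference_motion_set:                # step 2: preset reference motion set
        if match(real_motion_info, group):            # step 3: match in sequence
            matched_groups.append(group["name"])
    # step 4: the real action change is derived from the matching result
    return matched_groups
```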
Another aspect of the embodiments of the present invention provides a virtual operation method based on motion detection, including:
collecting real motion information of a part of a real target object;
invoking a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions;
sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
determining the real action change of the real target object according to the matching result;
and determining a virtual operation in the virtual application according to the real action change, so as to execute the virtual operation.
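As a minimal illustration of the final step, a detected real action change could be mapped to a virtual operation with a lookup table. The action names and operations below are invented for the example and are not from the patent.

```python
# Hypothetical mapping from an ordered, matched action sequence to a virtual operation.
ACTION_TO_OPERATION = {
    ("head_left", "head_right"): "shake_avatar_head",
    ("palm_up", "palm_down", "palm_up"): "walk_forward",
}

def to_virtual_operation(real_action_change):
    """Map the ordered matched reference actions to a virtual operation, or None."""
    return ACTION_TO_OPERATION.get(tuple(real_action_change))
```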
Another aspect of an embodiment of the present invention provides a motion detection system, including:
the acquisition unit is used for collecting real motion information of a part of the target object;
the reference calling unit is used for calling a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions;
the matching unit is used for sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
and the action determining unit is used for determining the real action change of the target object according to the matching result.
Another aspect of an embodiment of the present invention provides a virtual operation system based on motion detection, including:
the real motion acquisition unit is used for collecting real motion information of a part of the real target object;
the set calling unit is used for calling a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions;
the information matching unit is used for sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
the real action unit is used for determining the real action change of the real target object according to the matching result;
and the virtual operation unit is used for determining a virtual operation in the virtual application according to the real action change, so as to execute the virtual operation.
Another aspect of the embodiments of the present invention further provides a computer-readable storage medium storing a plurality of computer programs, the computer programs being adapted to be loaded by a processor to perform the motion detection method according to the above aspect of the embodiments of the invention, or the motion detection-based virtual operation method according to the other aspect of the embodiments of the invention.
In another aspect, an embodiment of the invention further provides a terminal device, which comprises a processor and a memory;
the memory is used for storing a plurality of computer programs, the computer programs being loaded by the processor to perform the motion detection method according to one aspect of the embodiments of the invention, or the motion detection-based virtual operation method according to another aspect of the embodiments of the invention; the processor is configured to run each of the plurality of computer programs.
It can be seen that, in the method of this embodiment, in order to detect the real action change of a target object, the motion detection system collects real motion information of the parts of the target object, matches it against the motion process information corresponding to each group of reference decomposition actions in the reference motion set preset in the system, and then determines the real action change of the target object according to the matching result. In this process, because a motion in a motion scene is refined into reference decomposition actions, the real action change of the target object can be determined more precisely, improving the accuracy of real action detection, and the precise action change can be applied in a wider range of scenarios. In addition, collecting the target object's real motion information does not depend on devices worn or held by the target object: after images are captured by an image capturing device, the real action change can be determined based on the reference motion set, making the collection of real motion information more convenient and flexible.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the invention or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a motion detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a motion detection method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method of presetting a reference motion set in one embodiment of the invention;
FIG. 4 is a flow chart of a method of presetting a reference motion set in an application embodiment of the invention;
FIG. 5a is a schematic diagram of a reference decomposition action corresponding to an upward hand-raising action in an application embodiment of the present invention;
FIG. 5b is a schematic diagram of another reference decomposition action corresponding to the upward hand-raising action in an application embodiment of the present invention;
FIG. 5c is a schematic diagram of a reference decomposition action corresponding to a jump action in an application embodiment of the invention;
FIG. 5d is a schematic diagram of reference decomposition actions of other parts in an application embodiment of the invention;
FIG. 6 is a flow chart of a motion detection method in an application embodiment of the invention;
FIG. 7 is a flow chart of a virtual operation method based on motion detection according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a logic structure of a motion detection system according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a logical structure of a virtual operation system based on motion detection according to an embodiment of the present invention;
fig. 10 is a schematic logic structure diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort shall fall within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention provides a motion detection method, mainly applied in a motion detection system. The motion detection system can be used in any system that needs to detect the real action change of a target object, for example a virtual application system such as a virtual reality system. Specifically, as shown in fig. 1, the motion detection system can detect the real action change of a target object according to the following method:
collecting real motion information of a part of the target object; invoking a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions; sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result; and determining the real action change of the target object according to the matching result.
In practical applications, the motion detection system may be deployed in, but is not limited to, the following terminal devices: a mobile phone, a personal computer, a tablet computer, a game terminal, a virtual reality terminal, and the like.
In this process, because a motion in a motion scene is refined into reference decomposition actions, the real action change of the target object can be determined more precisely, improving the accuracy of real action detection, and the precise action change can be applied in a wider range of scenarios. In addition, collecting the target object's real motion information does not depend on devices worn or held by the target object: after images are captured by an image capturing device, the real action change can be determined based on the reference motion set, making the collection of real motion information more convenient and flexible.
An embodiment of the present invention provides a motion detection method, mainly implemented by the above motion detection system. Its flowchart is shown in fig. 2, and the method includes:
step 101, acquiring real motion information of a part contained in a target object.
It can be understood that the motion detection method of this embodiment can be applied in various scenarios, and the motion detection flow of this embodiment can be initiated when a given application scenario is opened, for example, when the application interface of a dance game or a similar game is opened in a motion-sensing game scenario.
Specifically, in one implementation, the motion detection system may acquire, through an image capturing device, a real motion image of the target object at the current moment, and then analyze that image to identify the real motion information of the parts of the target object at the current moment. The real motion information may specifically include position information and angle information, and may also include pointing (orientation) information, and the like.
In other implementations, the motion detection system may also collect the real motion information of the parts of the target object in real time through other hardware devices, including but not limited to: a handle, a wearable device, or a peripheral device with six degrees of freedom (6DoF).
The target object may be a real user or a non-real substitute for a user (such as a hand model or a model of another part). Which parts' real motion information needs to be collected is determined by the actual application scenario: for example, in a dance game scenario, real motion information of the target object's hands, head, and legs needs to be collected, while in other game scenarios only real motion information of parts such as the target object's hands and knuckles may be collected.
Step 102, invoking a preset reference motion set, wherein the reference motion set comprises motion process information respectively corresponding to at least one group of reference decomposition actions.
It may be appreciated that a reference motion set may be stored in the motion detection system in advance. Each group of reference decomposition actions in the set includes at least one reference decomposition sub-action, and at least one group of reference decomposition actions, combined in a certain order, forms a motion scene. The motion process information of each reference decomposition sub-action includes the start gesture information and end gesture information of that sub-action, and may also include the motion direction from the start gesture to the end gesture; the start and end gesture information may include, but is not limited to, motion position or motion angle information. Different groups of reference decomposition actions correspond to different moments, while the reference decomposition sub-actions within the same group all belong to the same moment.
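One possible in-memory representation of such a reference motion set is sketched below. The class names, field names, and angle values are assumptions chosen to mirror the description above; they are not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class SubAction:
    """A reference decomposition sub-action of one part at one moment (hypothetical model)."""
    part: str             # the body part, e.g. "head"
    start_gesture: dict   # start gesture info, e.g. {"angle": 0}
    end_gesture: dict     # end gesture info, e.g. {"angle": -45}

@dataclass
class ReferenceGroup:
    """A group of reference decomposition sub-actions performed at the same moment."""
    name: str
    sub_actions: list

# A motion scene is an ordered list of groups (one group per moment),
# e.g. the head-shaking scene described in the text.
shake_head_scene = [
    ReferenceGroup("head_left", [SubAction("head", {"angle": 0}, {"angle": -45})]),
    ReferenceGroup("head_right", [SubAction("head", {"angle": -45}, {"angle": 45})]),
]
```

Note how the end gesture of one group's sub-action is the start gesture of the corresponding sub-action in the next group, matching the chaining described later in step 103.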
For example, the motion scene of a user shaking the head includes two groups of reference decomposition actions, head-left rotation and head-right rotation, each containing only one reference decomposition sub-action. Their motion process information is, respectively: the head rotates left from a forward-facing position to a certain angle, and the head rotates right from that left-rotated angle to another angle.
For another example, the motion scene of a user walking may include three groups of reference decomposition actions: the palm or knuckle moves upward, then downward, then upward again, with each group containing only one reference decomposition sub-action. Their motion process information is, respectively: the palm or knuckle moves up from the original position a to position b, moves down from position b back to the original position a, and moves up again from the original position a to position b.
For another example, the motion scene of a user jumping may include two groups of reference decomposition actions: the fingertip knuckles of both hands turn upward, and the palms of both hands are raised. The first group includes four reference decomposition sub-actions, whose motion process information is, respectively: fingertip 1 rotates from its original downward angle to an upward angle, palm 1 rises from its original height to a certain height, fingertip 2 rotates from its original downward angle to an upward angle, and palm 2 rises from its original height to a certain height. The second group includes two reference decomposition sub-actions, whose motion process information is, respectively: palm 1 rises from that height to another height, and palm 2 rises from that height to another height.
For another example, the motion scene of a user making a fist may include two groups of reference decomposition actions: the knuckles at the upper end of all fingers rotate simultaneously, and then the knuckles at both the middle and upper end of all fingers rotate simultaneously. Both groups include multiple reference decomposition sub-actions. The motion process information of the first group is: the knuckles at the upper end of all fingers rotate from an original angle to a certain angle. The motion process information of the second group is: the knuckles at the upper end of all fingers rotate from that angle to another angle, while the knuckles at the middle of all fingers also rotate from a certain angle to another angle; that is, the upper-end and middle knuckles rotate by the same angle.
It should be noted that multiple types of reference motion sets may be stored in advance in the motion detection system, each corresponding to one type of motion scene. Which reference motion sets need to be preset is determined by the scenario to which the motion detection method is applied. For example, in a dance game scenario, the motion detection system presets the reference motion set of dance motion scenes and invokes it when the motion detection flow of this embodiment is initiated; when a game is played through hand movement, the motion detection system presets the reference motion set of hand motion scenes.
Step 103, sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result.
Specifically, the reference motion set may include motion process information corresponding to multiple groups of reference decomposition actions arranged in a certain order. The motion process information of each group includes start gesture information and end gesture information for at least one reference decomposition sub-action; the sub-actions within a group are performed simultaneously, and each sub-action is an action of a certain part of the target object. Consequently, the end gesture information of each sub-action in the group arranged at one position (such as a first position) serves as the start gesture information of the corresponding sub-action in the group arranged at the following position (such as a second position behind the first position).
For example, for the motion scene of a user walking, the motion process information of one group of reference decomposition actions includes: the palm or knuckle moves up from the original position a to position b; that of the next group includes: the palm or knuckle moves down from position b to the original position a; and that of the group after that includes: the palm or knuckle moves up from the original position a to position b.
In this case, the matching in step 103 may mainly include, but is not limited to, the following two cases:
(1) In one case, the currently collected real motion information is matched with the start gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position. If they do not match, the flow returns to step 101 to collect real motion information again and match it with the start gesture information of each sub-action in the first-position group. If they match, the flow returns to step 101 to collect real motion information again, and matches the newly collected information with the start gesture information of each sub-action in the group arranged at the second position behind the first position; according to that matching result, it is then determined whether to re-match the motion process information of the second-position group or to continue matching the motion process information of the other groups. Real motion information is collected cyclically and matched against the motion process information of each group of reference decomposition actions in turn, until the motion process information of all groups in the reference motion set has been matched.
In this case, matching for the next group of reference decomposition actions is performed only when the matching result for the current group is a match. In other cases, when the matching result for a certain group is not a match, matching may proceed directly to the next group without returning to re-match that group.
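Case (1) can be sketched as a bounded retry loop. The `collect` and `matches_start_gesture` callables are assumed helpers, and the retry bound is an illustrative safeguard; none of these names come from the patent.

```python
# Sketch of case (1): groups are matched strictly in order; a group is
# re-collected and re-matched until it matches before advancing.
def match_groups_in_order(groups, collect, matches_start_gesture, max_tries=100):
    matched = []
    for group in groups:
        for _ in range(max_tries):          # bounded retries per group
            info = collect()                # return to step 101: collect again
            if matches_start_gesture(info, group):
                matched.append(group)
                break                       # advance to the next group
        else:
            return matched, False           # this group never matched
    return matched, True                    # all groups matched in order
```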
Specifically, when the currently collected real motion information is matched with the start gesture information (such as start position information or start angle information) of each reference decomposition sub-action in a certain group, the collected position information of a certain part of the target object may be matched with the start position information of the corresponding sub-action, for example by calculating the difference between the two positions: if the difference is less than or equal to a preset value, they match; otherwise they do not. Alternatively, the collected angle information of a certain part of the target object may be matched with the start angle information of the corresponding sub-action, for example by calculating the difference between the two angles: if the difference is less than or equal to another preset value, they match; otherwise they do not.
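The difference-against-preset-value comparison can be sketched as follows; the tolerance values and field names are illustrative assumptions, not values from the patent.

```python
# Sketch of the preset-value comparison: a part matches a sub-action's start
# gesture when both the position and angle differences are within tolerances.
POSITION_TOLERANCE = 0.05  # illustrative preset value for position differences
ANGLE_TOLERANCE = 5.0      # illustrative preset value for angle differences

def gesture_matches(observed, start_gesture):
    """Compare observed gesture info with reference start gesture info."""
    pos_ok = abs(observed["position"] - start_gesture["position"]) <= POSITION_TOLERANCE
    ang_ok = abs(observed["angle"] - start_gesture["angle"]) <= ANGLE_TOLERANCE
    return pos_ok and ang_ok
```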
It should be noted that whenever the motion detection system collects real motion information at any moment in real time, the real-time matching of step 103 can be performed on that information; there is no need to wait until the real motion information of every moment over a period of time has been collected before matching. A preset time can therefore be set for the matching of each group of reference decomposition actions. When the matching time, counted from the moment the real motion information is collected to the completion of matching against the motion process information of one group, exceeds the preset time; or when the collection time for the real motion information at a certain moment exceeds the preset time; or when the combined collection-and-matching time, from the start of collecting real motion information at a certain moment to the completion of matching against the motion process information of one group, exceeds the preset time, it is determined that matching of that group is unsuccessful, and the flow returns to collect real motion information again and re-match that group.
Because each group of reference decomposition actions may correspond to one reference decomposition sub-action or to a plurality of reference decomposition sub-actions, the amount of data in the motion process information differs from group to group. A different preset time can therefore be set for each group of reference decomposition actions to accommodate matching against motion process information of different data amounts, so that the matching time for each group of reference decomposition actions is limited by its preset time.
In addition, the preset time corresponding to each group of reference decomposition actions also limits the execution time of the real motion of any part of the target object. For example, when the palm needs to be moved quickly, a smaller preset time such as 0.5 s can be set; when the palm needs to be moved at an ordinary speed, a larger preset time such as 1 s can be set.
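The per-group preset time can be sketched as a simple timeout loop; `collect_fn`, `match_fn`, and the concrete preset-time values are assumptions for illustration only:

```python
import time

# Hypothetical per-group preset times (seconds); the text gives 0.5 s
# for fast movements and 1 s for movements at ordinary speed.
PRESET_TIMES = [0.5, 1.0, 1.0]

def match_group_with_timeout(collect_fn, match_fn, group_index):
    """Keep collecting and matching real motion information against one
    group of reference decomposition actions; give up once the preset
    time for that group is exceeded."""
    deadline = time.monotonic() + PRESET_TIMES[group_index]
    while time.monotonic() < deadline:
        sample = collect_fn()
        if match_fn(group_index, sample):
            return True
    return False  # unsuccessful: caller re-collects and re-matches the group
```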
Further, when a group of reference decomposition actions is re-matched, the target object may already have moved on to a subsequent action during its movement, so if the motion detection system simply re-matches the group, matching may remain unsuccessful indefinitely. The motion detection system may therefore issue a user prompt asking the target object to re-perform the corresponding action, namely the action corresponding to that group of reference decomposition actions. In this way, the target object can immediately adjust the actual actions of the corresponding parts in response to the user prompt.
(2) In another case, when the real motion information corresponding to each of a plurality of moments within a period of time is collected in step 101, the matching in step 103 may match the real motion information of each moment in turn against the initial gesture information of each reference decomposition sub-action in a group of reference decomposition actions, to determine a matching or non-matching result. In this case, when the matching result for one group of reference decomposition actions is non-matching, that result is stored without returning to match the group again, and matching proceeds directly to the next group, finally yielding the matching results corresponding to each group of reference decomposition actions.
In both cases, the real motion information at each moment is cyclically matched with the initial gesture information of each reference decomposition sub-action in the corresponding group of reference decomposition actions; only after the motion process information corresponding to all groups of reference decomposition actions has been matched can it be determined whether the real motion change of the target object over a period of time belongs to the motion scene corresponding to the reference motion set.
And 104, determining the real action change of the target object according to the matching result.
Specifically, through the matching in step 103, if the real motion information acquired in step 101 at a plurality of adjacent moments sequentially matches the motion process information corresponding to the groups of reference decomposition actions in the reference motion set, the real action change of the target object over a period of time includes: the motion change in the motion scene corresponding to the reference motion set.
It can be seen that, in the method of this embodiment, in order to detect the real action change of the target object, the motion detection system may collect the real motion information of the parts included in the target object and match it against the motion process information corresponding to each group of reference decomposition actions in the reference motion set preset in the system, so as to determine the real action change of the target object according to the matching result. In this process, the actions in a motion scene are refined into reference decomposition actions, so that the real action change of the target object can be determined more precisely, the accuracy of real action detection is improved, and the refined real action change can be applied to a wider range of application scenes. In addition, the collection of the real motion information of the target object does not need to depend on equipment worn or held by the target object; the real action change of the target object can be determined based on the reference motion set after collection by an image capturing device, making the collection of real motion information more convenient and flexible.
It should be noted that, in order to implement steps 101 to 104 described above, a reference motion set needs to be set in advance in the motion detection system. As shown in fig. 3, in a specific embodiment, the motion detection system may implement the presetting of the reference motion set by:
in step 201, a sample video of a sample object based on any motion scene is acquired.
Specifically, the user can shoot, through the motion detection system, a video of a real person performing any motion scene as the sample video, or can use a video of a virtual character performing any motion scene as the sample video.
Step 202, a plurality of key change frames of a portion included in a sample object in a sample video are extracted.
The key change frame here is mainly a frame in which the motion state of a part included in the sample object changes; for example, if the relative position between a certain part and other parts of the sample object changes between two frames, those two frames may be used as key change frames. In this embodiment, which parts require key change frames is determined by the actual motion scene, and key change frames should be extracted for as few parts as the actual motion scene allows.
For example, in a motion scene where the sample object jumps, the motion can be decomposed into two actions, namely upward movement of the finger joints of the hands and upward lifting of the palms, so key change frames of the positions of the user's finger joints and palms can be extracted.
In step 203, pose information of the parts included in the sample objects in the plurality of key change frames is detected respectively. The posture information here may include position information, angle information, and the like of the portion included in the sample object.
Step 204, determining motion process information of any action according to the gesture information corresponding to at least two adjacent key change frames, and taking any action as a group of reference decomposition actions.
The motion process information of any action may include the start gesture information and end gesture information of the parts involved in the action, as well as the motion direction from the start gesture to the end gesture, where the start and end gesture information may be information such as a motion position or a motion angle. The motion process information of one action can be determined from at least two adjacent key change frames, so that, following the processing of the plurality of key change frames in step 202, the motion process information corresponding to the at least one group of reference decomposition actions included in one motion scene can be determined.
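The motion process information described above can be sketched as a small data structure. All class and field names here are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubActionProcess:
    part: str                      # body part, e.g. "hand" (assumed name)
    start_pose: Tuple[float, ...]  # start position or start angle
    end_pose: Tuple[float, ...]    # end position or end angle
    direction: Tuple[float, ...]   # motion direction from start to end

@dataclass
class ReferenceDecompositionAction:
    # one group may contain one or more simultaneous sub-actions
    sub_actions: List[SubActionProcess]

# A group derived from two adjacent key change frames, e.g. a "lift up" action:
lift_up = ReferenceDecompositionAction(sub_actions=[
    SubActionProcess(part="hand",
                     start_pose=(0.0, 0.0, 0.0),
                     end_pose=(1.0, 0.0, 0.0),
                     direction=(1.0, 0.0, 0.0)),
])
```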
Step 205, storing the motion process information of the set of reference decomposition actions determined in step 204 into the reference motion set of any motion scene.
It should be noted that, in the foregoing steps 201 to 205, for one motion scene, the reference motion set under the motion scene is preset in the motion detection system, and in practical applications, multiple types of motion scenes may be involved, so that the reference motion sets under the various types of motion scenes need to be preset. Specifically, when the motion detection system executes the above step 201, a sample video of a sample object based on multiple types of motion scenes is obtained, and further through the above steps 202 to 205, reference motion sets corresponding to the multiple types of motion scenes respectively can be obtained respectively, and the reference motion sets corresponding to the multiple types of motion scenes respectively are stored.
Further, reference motion sets corresponding to various types of motion scenes in the multiple types can be respectively associated with specific application events, so that when the specific application events are executed, the reference motion sets of the corresponding types are called.
Wherein any type of motion scene is a series of continuous motions of any part of the target object, such as a dance motion, a fist-making motion, a finger-pointing motion, etc.
The motion detection method in the present invention is mainly applied to a virtual reality system, and may specifically include the following two parts:
(1) As shown in fig. 4, the method can preset the reference motion set under at least one motion scene in the motion detection system by the following steps:
In step 301, a user may capture a video of a real person based on any motion scene through the motion detection system, and determine the captured video as the sample video. Any motion scene here may include, but is not limited to, the following: movements of any part of the real person, such as shaking the head, waving the hands, jumping, and making a fist.
Step 302, extracting a plurality of key change frames of the parts included in the real person in the sample video.
In step 303, pose information of the parts included in the real person in the plurality of key change frames is detected, which may include position information, angle information, and the like.
Step 304, selecting at least two adjacent key change frames from the plurality of key change frames, starting from the earliest according to their time sequence, determining the motion process information of any action according to the gesture information corresponding to each of the selected key change frames, and taking that action as a group of reference decomposition actions.
Step 305, determining whether the above step 304 has been performed on all the key change frames; if yes, performing step 306; if not, selecting the next at least two adjacent key change frames according to their time sequence and performing step 304 again on the selected frames.
Step 306, the motion process information corresponding to at least one group of reference decomposition actions determined for all the key change frames in step 304 is stored in the motion detection system as a reference motion set in the corresponding motion scene.
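Steps 301 to 306 can be sketched as follows, assuming each key change frame is represented as a mapping from part name to pose. The frame format and function name are assumptions for illustration:

```python
# Walk the key change frames in time order and turn each adjacent pair
# into one group of motion process information (steps 304-306).

def build_reference_motion_set(key_change_frames):
    """key_change_frames: list of {part: pose} dicts, ordered by time.
    Returns one group of motion process information per adjacent pair
    of frames in which some part's pose actually changes."""
    reference_set = []
    for earlier, later in zip(key_change_frames, key_change_frames[1:]):
        group = []
        for part, start_pose in earlier.items():
            end_pose = later.get(part)
            if end_pose is not None and end_pose != start_pose:
                group.append({"part": part,
                              "start": start_pose,
                              "end": end_pose})
        if group:  # skip frame pairs with no motion state change
            reference_set.append(group)
    return reference_set
```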
For example, as shown in FIG. 5a, for a dance motion of "lift up", the corresponding motion process information may include information that the user's hand moves up from position A to position B, specifically a movement from position A (0, 0, 0) to position B (1, 0, 0), i.e. a change in the motion position. It may also include information that the user's hand rotates through a certain angle about the elbow, as shown in FIG. 5b, specifically: the user's hand rotates from the original angle (0, 0, 0) to another angle (0, 30, 0), i.e. a change in the motion angle. For the "lift up" motion, the corresponding motion process information may be either the information of the change in motion position or the information of the change in motion angle, and either may be selected as the motion process information of the "lift up" reference decomposition action.
In other specific embodiments, in a motion scene where at least two parts of the user are required to move simultaneously, such as shown in FIG. 5c, for a "jump" action, the corresponding motion process information may include information that the user's left hand moves up from position 1 to position 2, i.e. a left-hand position change, while the user's right hand moves up from position 3 to position 4, i.e. a right-hand position change. Thus, both the information of the left-hand position change and the information of the right-hand position change need to be stored as the motion process information of the "jump" reference decomposition action.
Further, in addition to presetting the reference motion set for the motion of the user's hand, the reference motion set may also be preset for the motion of other parts of the user, such as shown in fig. 5d, in some complex continuous motion scenarios, it is also necessary to detect the angle of the user's knuckle rotation or head rotation.
(2) After the reference motion set is preset in the motion detection system, as shown in fig. 6, the motion detection system may detect continuous motion of the target object in real time according to the following steps:
Step 401, when the motion detection system initiates a motion detection flow in a certain application scene, a preset reference motion set in the system is invoked, where the reference motion set includes motion process information corresponding to multiple groups of reference decomposition actions.
Step 402, the motion detection system records the number of the currently matched group of reference decomposition actions (hereinafter, the current matching group number), marked as CurrentMotionStep = 0, determines the motion process information of the group of reference decomposition actions corresponding to the current matching group number, and determines whether the recorded current matching group number exceeds the total number of groups of reference decomposition actions in the reference motion set; if not, the following step 403 continues to be executed; if so, the following step 407 is performed and the flow ends.
Step 403, real motion information of the part included in the target object is collected in real time, and the collection time of the real motion information at any moment is determined.
Step 404, judging whether the acquisition time exceeds the preset time of a group of reference decomposition actions corresponding to the current matching group number, if not, continuing to execute the following step 405; if so, return to step 403 to re-acquire the real motion information of the region contained in the target object.
In this embodiment, the collection time is limited by the preset time of the group of reference decomposition actions corresponding to the current matching group number. In other embodiments, the matching time in step 405 below may be limited by the preset time, or both the collection time in step 403 and the matching time in step 405 may be limited by the preset time, which is not described again here.
Step 405, matching the currently acquired real motion information at any moment with the motion process information of the group of reference decomposition actions corresponding to the current matching group number to obtain a matching result; if the result is a match, the following step 406 continues to be executed; if it is not a match, the process returns to step 403.
Specifically, when matching against the motion process information of a group of reference decomposition actions, the currently acquired position information of the target object at any moment can be matched with the start and end position information in the motion process information of the corresponding part, or the angle information of a certain part of the target object at any two moments can be matched with the initial angle information in the motion process information of that part, to obtain a matching or non-matching result.
Step 406, updating the current matching group number, specifically by increasing it by 1, that is, adding 1 to CurrentMotionStep, and returning to execute step 402 for the updated current matching group number.
In step 407, the actual motion change of the target object in a period of time (including the collected multiple moments) is determined as the motion change in the motion scene corresponding to the invoked reference motion set.
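The flow of steps 401 to 407 can be sketched as a loop over the current matching group number. Here `collect_real_motion` and `matches` stand in for the system's acquisition and comparison logic and are assumptions:

```python
import time

def detect_motion(reference_set, preset_times, collect_real_motion, matches):
    """Match groups of reference decomposition actions in order, with a
    per-group preset time limiting the collection time (steps 402-407)."""
    current_motion_step = 0                       # step 402: CurrentMotionStep = 0
    while current_motion_step < len(reference_set):
        group = reference_set[current_motion_step]
        start = time.monotonic()
        while True:
            sample = collect_real_motion()        # step 403
            if time.monotonic() - start > preset_times[current_motion_step]:
                start = time.monotonic()          # step 404: timed out, re-collect
                continue
            if matches(sample, group):            # step 405
                current_motion_step += 1          # step 406: advance to next group
                break
    return "motion scene matched"                 # step 407
```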
Further, after determining a real motion change of the target object over a period of time, a virtual operation in the virtual environment may be generated based on the real motion change, and the virtual operation may be further performed.
Therefore, by refining the actions in one motion scene into reference decomposition actions, the real action change of the target object can be determined more precisely, the accuracy of real action detection is improved, and the refined real action change can be applied to a wider range of application scenes.
The embodiment of the invention also provides a virtual operation method based on motion detection, which is mainly applied to a virtual operation system based on motion detection, such as an application system of virtual reality, as shown in fig. 7, and the virtual operation system based on motion detection can realize virtual operation according to the following steps:
in step 501, the real motion information of the part included in the real target object is collected.
Specifically, in the virtual operating system based on motion detection, a real motion image of the real target object at the current moment can be obtained in real time through an image capturing device; the real motion image at the current moment is then recognized, and the real motion information of the parts included in the real target object at the current moment is identified, specifically including position information, angle information, and the like, and possibly also pointing information and the like.
In other implementations, the virtual operating system based on motion detection may also provide, but is not limited to, the following hardware devices: a handle, a wearable device, and a 6DOF peripheral, through which the real motion information of the parts included in the real target object is collected in real time.
Which parts of the real target object require real motion information to be collected is determined by the actual application scene. For example, in a dance game scene, the real motion information of the hands, head, and legs of the real target object needs to be collected, while in other game scenes only the real motion information of the hands may be collected, or the real motion information of the finger joints and other parts may also be collected.
Step 502, calling a preset reference motion set, wherein the reference motion set comprises at least one group of motion process information corresponding to reference decomposition actions respectively.
It may be appreciated that a reference motion set may be stored in advance in the virtual operating system based on motion detection. Each group of reference decomposition actions included in the reference motion set may include at least one reference decomposition sub-action, and the at least one group of reference decomposition actions, combined in a certain order, forms a motion scene. The motion process information of each reference decomposition sub-action may include the start gesture information and end gesture information of the sub-action, and may also include information such as the motion direction from the start gesture to the end gesture, where the start and end gesture information may include, but is not limited to, information such as a motion position or a motion angle. Reference decomposition actions in different groups are reference decomposition actions of different moments, while the reference decomposition sub-actions within the same group are reference decomposition sub-actions of the same moment.
Step 503, matching the real motion information with the motion process information of each reference decomposition action in the reference motion set in sequence to obtain a matching result.
Specifically, the reference motion set may include motion process information corresponding to multiple groups of reference decomposition actions arranged in a certain order, where the motion process information of each group includes the start gesture information and end gesture information corresponding to each of at least one reference decomposition sub-action, the reference decomposition sub-actions within a group are performed simultaneously, and one reference decomposition sub-action is an action of a certain part included in the real target object. Therefore, the end gesture information of each reference decomposition sub-action in the motion process information of the group of reference decomposition actions arranged at one position (such as a first position) is the start gesture information of the corresponding reference decomposition sub-action in the motion process information of the group arranged at the position behind it (such as a second position).
Specifically, in one case, when the matching in this step is performed, the currently collected real motion information may be matched with the initial pose information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position. If not matched, the step of collecting real motion information in the above step 501 is performed again, and the newly collected real motion information is matched once more with the initial pose information of each reference decomposition sub-action in the group arranged at the first position. If matched, step 501 is re-executed to collect real motion information, and the newly collected real motion information is matched with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the second position behind the first position; according to that matching result it is then determined whether to re-match the motion process information corresponding to the group at the second position or to continue matching the motion process information corresponding to the other groups. Real motion information is collected cyclically and matched with the motion process information of each group of reference decomposition actions until the motion process information of all groups of reference decomposition actions in the reference motion set has been matched.
In another case, when the real motion information corresponding to each of a plurality of moments within a period of time is collected in the above step 501, the matching in this step 503 may match the real motion information of each moment in turn against the initial gesture information of each reference decomposition sub-action in a group of reference decomposition actions, to determine a matching or non-matching result. In this case, when the matching result for one group of reference decomposition actions is non-matching, that result is stored without returning to match the group again, and matching proceeds directly to the next group, finally yielding the matching results corresponding to each group of reference decomposition actions.
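This second, batch-style case can be sketched as follows; the representations of the collected moments, the groups, and the `matches` comparison are assumptions:

```python
# Real motion information for several moments has already been collected,
# so each group of reference decomposition actions is matched once against
# the remaining moments in order, and the result stored; a group that
# fails is not re-matched.

def batch_match(collected_moments, reference_set, matches):
    results = []
    moment_index = 0
    for group in reference_set:
        matched = False
        while moment_index < len(collected_moments):
            sample = collected_moments[moment_index]
            moment_index += 1
            if matches(sample, group):
                matched = True
                break
        results.append(matched)  # store the result, move on to the next group
    return results
```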
Step 504, determining the real action change of the real target object according to the matching result.
Specifically, through the matching in step 503, if the real motion information acquired in step 501 at a plurality of adjacent moments sequentially matches the motion process information corresponding to the groups of reference decomposition actions in the reference motion set, the real action change of the real target object over a period of time includes: the motion change in the motion scene corresponding to the reference motion set.
Step 505, determining a virtual operation in the virtual application according to the real action change so as to execute the virtual operation.
Specifically, how to determine the virtual operation is determined by the actual virtual application, for example, in a virtual application, such as a dance game application, etc., the motion change of the virtual character in the virtual application may be determined according to the actual motion change, and then the motion change of the virtual character is scored to obtain a motion score, and the motion score is displayed.
The virtual application is an application that enables a host computer to build and execute one or more virtual environments. A virtual application typically simulates a complete computer system through effective emulation, on which an operating system and applications are then installed, so that from the perspective of the operating system no difference can be perceived between the virtual environment and a real, complete physical computer, and the virtual application runs in an entirely conventional manner, as if controlling a complete real machine. Examples include virtual reality applications, general game applications, somatosensory game applications, and the like.
In another virtual application, such as a somatosensory game application, a virtual operation corresponding to the actual motion change may be determined according to a correspondence between a motion change preset in the system and the virtual operation. For example, when the user changes from a fist-making state to a palm-spreading state, a task button in the virtual environment is displayed, and when the user extends a finger to point forward, the virtual character walks forward, etc. In this case, the actual motion change of the actual target object in a period of time is taken as an input instruction of the system.
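The correspondence between preset motion changes and virtual operations can be sketched as a simple lookup table; the motion-change keys and operation names below are illustrative assumptions:

```python
# Hypothetical correspondence table between real motion changes (used as
# input instructions) and virtual operations, as described above.
MOTION_TO_OPERATION = {
    "fist_to_open_palm": "show_task_button",
    "finger_point_forward": "walk_forward",
}

def to_virtual_operation(real_motion_change):
    """Look up the virtual operation corresponding to a detected real
    motion change; fall back to no operation for unknown changes."""
    return MOTION_TO_OPERATION.get(real_motion_change, "no_operation")
```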
In this process, the actions in a motion scene are refined into reference decomposition actions, so that the real action change of the real target object can be determined more precisely, the accuracy of real action detection is improved, the refined real action change can be further applied to the virtual application, and the accuracy of the virtual operations realized in the virtual application through the real target object can be improved.
The embodiment of the invention also provides a motion detection system, the structural schematic diagram of which is shown in fig. 8, and the motion detection system specifically can comprise:
and the acquisition unit 10 is used for acquiring the real motion information of the part contained in the target object.
And the reference calling unit 11 is used for calling a preset reference motion set, wherein the reference motion set comprises at least one group of motion process information corresponding to the reference decomposition actions respectively.
And the matching unit 12 is used for sequentially matching the real motion information acquired by the acquisition unit 10 with the motion process information of each group of reference decomposition actions in the reference motion set called by the reference calling unit 11 to obtain a matching result.
And the action determining unit 13 is used for determining the real action change of the target object according to the matching result obtained by the matching unit 12.
The motion determining unit 13 is specifically configured to determine that, if the real motion information at a plurality of adjacent moments sequentially matches the motion process information corresponding to the groups of reference decomposition actions in the reference motion set, the real action change of the target object over a period of time includes the motion change of the motion scene corresponding to the reference motion set.
Specifically, in this embodiment, the matching unit 12 is specifically configured to, if the reference motion set includes motion process information corresponding to multiple groups of reference decomposition actions arranged in a certain order, where the motion process information of each group includes the initial gesture information of at least one reference decomposition sub-action: match the currently acquired real motion information with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position; if matched, return to collect real motion information again, and match the newly collected real motion information with the initial gesture information of each reference decomposition sub-action in the group arranged at the second position behind the first position; and if the real motion information does not match the initial gesture information of each reference decomposition sub-action in the group arranged at the first position, return to the step of collecting real motion information and matching the currently collected real motion information with the initial gesture information of each reference decomposition sub-action in the group arranged at the first position.
Further, the matching unit 12 is further configured to, if a matching time for matching the currently acquired real motion information with the initial pose information of each reference decomposition sub-action in the set of reference decomposition actions arranged at any position exceeds a preset time, re-acquire the real motion information, and match the re-acquired real motion information with the initial pose information of each reference decomposition sub-action in the set of reference decomposition actions arranged at any position.
Further, the motion detection system of the present embodiment may further include:
and a prompting unit 14, configured to, if the matching unit 12 determines to return to executing the step of collecting the real motion information and matching the currently collected real motion information with the initial gesture information of each reference decomposition sub-action in the set of reference decomposition actions arranged at the first position, perform a user prompt to prompt the target object to re-execute a corresponding action, where the corresponding action is an action corresponding to the set of reference decomposition actions at the first position.
A set preset unit 15, configured to acquire a sample video of a sample object based on any motion scene; extracting a plurality of key change frames of a part contained in the sample object in the sample video; respectively detecting the gesture information of the parts contained in the sample objects in the key change frames; determining motion process information of any action according to gesture information corresponding to at least two adjacent key change frames respectively, and taking any action as a group of reference decomposition actions; and storing the motion process information of the group of reference decomposition actions into the reference motion set of any motion scene. Thus, the reference calling unit 11 directly calls the reference motion set preset by the set preset unit 15.
The set presetting unit 15 is further configured to, if sample videos of the sample object based on multiple types of motion scenes are acquired, store the reference motion sets respectively corresponding to the multiple types of motion scenes.
And the set presetting unit 15 is further configured to associate the reference motion set corresponding to each of the multiple types of motion scenes with a specific application event, so that when the specific application event is executed, the reference motion set of the corresponding type is invoked.
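The event-to-set association just described can be sketched as a small registry. All names here (scene labels, event strings, the registry API) are hypothetical illustrations, not part of the patent:

```python
# Hypothetical registry associating each motion scene's reference motion set
# with a specific application event, so that executing the event invokes the
# corresponding set (cf. the set presetting unit 15).
class ReferenceSetRegistry:
    def __init__(self):
        self._by_scene = {}  # scene type -> preset reference motion set
        self._by_event = {}  # application event -> scene type

    def store(self, scene, reference_set):
        self._by_scene[scene] = reference_set

    def associate(self, event, scene):
        self._by_event[event] = scene

    def invoke_for_event(self, event):
        """Called when the specific application event is executed."""
        return self._by_scene[self._by_event[event]]

registry = ReferenceSetRegistry()
registry.store("dance", ["dance-ref-set"])
registry.associate("start_dance_mode", "dance")
print(registry.invoke_for_event("start_dance_mode"))  # ['dance-ref-set']
```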
Therefore, in the motion detection system of this embodiment, an action in a motion scene is refined into reference decomposition actions, so that the real action change of the target object can be determined more precisely, improving the accuracy of real action detection and allowing the precise real action change to be applied in a wider range of application scenarios. In addition, acquiring the real motion information of the target object does not depend on any device worn or held by the target object; the real action change can be determined from images captured by an image capturing device together with the reference motion set, making acquisition of real motion information more convenient and flexible.
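The sequential matching behaviour described for the matching unit 12 — advance to the next group on a match, restart from the first group on a mismatch — can be sketched as a small state machine. This is a minimal sketch under assumed representations: poses are flat dicts, the distance metric and threshold are illustrative, and the preset-time re-acquisition branch is noted but omitted for brevity.

```python
# Sketch of sequentially matching acquired real motion samples against the
# ordered start poses of each group of reference decomposition actions.
# Threshold and pose representation are assumptions, not from the patent.
def pose_distance(a, b):
    """Largest per-joint difference between two pose dicts (assumed metric)."""
    return max(abs(a[k] - b[k]) for k in a)

def match_sequence(samples, groups, threshold=0.1):
    """Return True once every group has been matched in order.

    A mismatch resets matching to the first group (at which point the
    prompting unit would ask the user to re-perform the action). The
    preset-time timeout, which re-acquires a sample for the *same* group,
    is omitted here for brevity.
    """
    i = 0
    for pose in samples:  # each sample is one acquisition of real motion info
        if pose_distance(pose, groups[i]) <= threshold:
            i += 1  # matched: move on to the next group of decomposition actions
            if i == len(groups):
                return True  # full action change of the motion scene detected
        else:
            i = 0  # not matched: restart from the group at the first position
    return False

groups = [{"a": 0.0}, {"a": 1.0}]
# second sample mismatches, forcing a restart from the first group
samples = [{"a": 0.0}, {"a": 0.5}, {"a": 0.0}, {"a": 1.0}]
print(match_sequence(samples, groups))  # True
```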
The embodiment of the present invention further provides a virtual operation system based on motion detection, such as a virtual reality system, whose schematic structure is shown in fig. 9. The system may specifically include:
the real motion acquisition unit 20 is configured to acquire real motion information of a portion included in the real target object.
And the set calling unit 21 is used for calling a preset reference motion set, wherein the reference motion set comprises at least one group of motion process information corresponding to the reference decomposition actions respectively.
The information matching unit 22 is configured to match the real motion information acquired by the real motion acquisition unit 20 with the motion process information of each group of reference decomposition actions in the reference motion set called by the set calling unit 21 in sequence, so as to obtain a matching result.
And a real action unit 23, configured to determine a real action change of the real target object according to the matching result obtained by the information matching unit 22.
A virtual operation unit 24, configured to determine a virtual operation in the virtual application according to the real action change determined by the real action unit 23, so as to execute the virtual operation.
In the system of this embodiment, fine-grained real action changes are applied to the virtual application, so that the accuracy of realizing virtual operations in the virtual application through the real target object can be improved.
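The virtual operation unit's lookup from a detected real action change to a virtual operation (the correspondence relation of claim 11) can be sketched as a simple table. The action names and operations below are hypothetical examples, not from the patent:

```python
# Illustrative correspondence between real action changes and virtual
# operations (cf. virtual operation unit 24). All names are hypothetical.
ACTION_TO_VIRTUAL_OP = {
    "raise_right_hand": "select_menu_item",
    "jump": "avatar_jump",
    "squat": "avatar_crouch",
}

def to_virtual_operation(real_action_change):
    """Look up the virtual operation corresponding to a detected real action
    change; an unknown change triggers no operation."""
    return ACTION_TO_VIRTUAL_OP.get(real_action_change)

print(to_virtual_operation("jump"))  # avatar_jump
print(to_virtual_operation("wave"))  # None (no corresponding operation)
```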
The embodiment of the present invention further provides a terminal device, whose schematic structure is shown in fig. 10. The terminal device may vary considerably with configuration or performance, and may include one or more central processing units (CPU) 30 (e.g., one or more processors), a memory 31, and one or more storage media 32 (e.g., one or more mass storage devices) storing application programs 321 or data 322. The memory 31 and the storage medium 32 may be transitory or persistent storage. The program stored in the storage medium 32 may include one or more modules (not shown), each of which may include a series of instruction operations for the terminal device. Still further, the central processing unit 30 may be configured to communicate with the storage medium 32 and execute, on the terminal device, the series of instruction operations in the storage medium 32.
Specifically, the application programs 321 stored in the storage medium 32 include an application program for motion detection, which may include the acquisition unit 10, the reference calling unit 11, the matching unit 12, the action determination unit 13, the prompting unit 14, and the set presetting unit 15 in the motion detection system described above; these are not described again here. Still further, the central processing unit 30 may be configured to communicate with the storage medium 32 and execute, on the terminal device, a series of operations corresponding to the motion detection application program stored in the storage medium 32.
The terminal device may also include one or more power supplies 33, one or more wired or wireless network interfaces 34, one or more input/output interfaces 35, and/or one or more operating systems 323, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the motion detection system described in the above method embodiments may be based on the structure of the terminal device shown in fig. 10.
Further, the embodiment of the present invention also provides another terminal device, where the structure of the terminal device may be as shown in fig. 10, and the difference is that, in the terminal device of the embodiment of the present invention:
the application programs stored in the storage medium include application programs of virtual operations based on motion detection, and the programs may include the real motion acquisition unit 20, the set calling unit 21, the information matching unit 22, the real motion unit 23, and the virtual operation unit 24 in the virtual operation system based on motion detection described above, which will not be described herein. Still further, the central processor may be configured to communicate with the storage medium, and execute a series of operations corresponding to an application program of virtual operations based on motion detection stored in the storage medium on the terminal device. The steps described in the above method embodiments as being performed by the virtual operating system based on motion detection may be based on the structure of the terminal device in this embodiment.
Still further, another aspect of the embodiments of the present application provides a computer-readable storage medium storing a plurality of computer programs, which are adapted to be loaded by a processor to perform the motion detection method performed by the motion detection system described above, or to perform the motion-detection-based virtual operation method performed by the virtual operation system based on motion detection described above.
In another aspect, the embodiment of the application further provides a terminal device, which comprises a processor and a memory;
the memory is configured to store a plurality of computer programs, which are adapted to be loaded by the processor to execute the motion detection method performed by the motion detection system described above, or the motion-detection-based virtual operation method performed by the virtual operation system based on motion detection described above; the processor is configured to execute each of the plurality of computer programs.
Further, according to an aspect of the present application, there is provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the motion detection methods provided in the various alternative implementations described above.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The motion detection and virtual operation methods, systems, storage medium and terminal device provided by the embodiments of the present invention have been described in detail above. Specific examples are applied herein to illustrate the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the idea of the present invention. In view of the above, the contents of this description should not be construed as limiting the present invention.

Claims (15)

1. A method of motion detection, comprising:
collecting real motion information of a part contained in a target object;
invoking a preset reference motion set, wherein the reference motion set comprises at least one group of motion process information corresponding to reference decomposition actions respectively;
sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
and determining the real action change of the target object according to the matching result.
2. The method of claim 1, wherein the reference motion set includes motion process information corresponding to a plurality of groups of reference decomposition actions arranged in a certain order, and the motion process information of each group of reference decomposition actions comprises: initial gesture information of at least one reference decomposition sub-action;
the matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set in sequence specifically comprises:
matching the currently acquired real motion information with the initial gesture information of each reference decomposition sub-action in a group of reference decomposition actions arranged at the first position;
if the real motion information is matched, returning to re-acquire real motion information, and matching the re-acquired real motion information with the initial gesture information of each reference decomposition sub-action in a group of reference decomposition actions arranged at a second position following the first position;
and if the real motion information is not matched with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position, returning to the step of acquiring real motion information and matching the currently acquired real motion information with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position.
3. The method as recited in claim 2, further comprising:
if the time for matching the currently acquired real motion information with the initial gesture information of each reference decomposition sub-action in a group of reference decomposition actions arranged at any position exceeds a preset time, re-acquiring real motion information, and matching the re-acquired real motion information with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at that position.
4. The method as recited in claim 2, further comprising:
and if the method returns to the step of acquiring real motion information and matching the currently acquired real motion information with the initial gesture information of each reference decomposition sub-action in the group of reference decomposition actions arranged at the first position, prompting the user so as to prompt the target object to re-perform the corresponding action, the corresponding action being the action corresponding to the group of reference decomposition actions at the first position.
5. The method according to claim 2, wherein the determining the real action change of the target object according to the matching result specifically comprises:
if the real motion information at a plurality of adjacent moments sequentially matches the motion process information corresponding to a group of reference decomposition actions in the reference motion set, determining that the real action change of the target object over the period of time comprises the action change of the motion scene corresponding to the reference motion set.
6. The method of any one of claims 1 to 5, further comprising:
acquiring a sample video of a sample object based on any motion scene;
extracting a plurality of key change frames of a part contained in the sample object in the sample video;
respectively detecting the gesture information of the parts contained in the sample objects in the key change frames;
determining motion process information of any action according to gesture information corresponding to at least two adjacent key change frames respectively, and taking any action as a group of reference decomposition actions;
and storing the motion process information of the group of reference decomposition actions into the reference motion set of any motion scene.
7. The method of claim 6, wherein the acquiring a sample video of a sample object based on any motion scene specifically comprises: acquiring sample videos of sample objects based on multiple types of motion scenes;
the method further comprises the steps of:
and storing the reference motion sets respectively corresponding to the multiple types of motion scenes.
8. The method as recited in claim 7, further comprising:
and associating the reference motion set corresponding to each of the multiple types of motion scenes with a specific application event, so that when the specific application event is executed, the reference motion set of the corresponding type is invoked.
9. A virtual operation method based on motion detection, comprising:
collecting real motion information of a part contained in a real target object;
invoking a preset reference motion set, wherein the reference motion set comprises at least one group of motion process information corresponding to reference decomposition actions respectively;
sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
determining the real action change of the real target object according to the matching result;
And determining virtual operation in the virtual application according to the real action change so as to execute the virtual operation.
10. The method according to claim 9, wherein said determining virtual operations in a virtual application from said real action changes, in particular comprises:
determining the action change of the virtual character in the virtual application according to the real action change;
and scoring the action change of the virtual character to obtain an action score, and displaying the action score.
11. The method according to claim 9, wherein said determining virtual operations in a virtual application from said real action changes, in particular comprises:
and determining the virtual operation corresponding to the real action change according to the corresponding relation between the action change and the virtual operation.
12. A motion detection system, comprising:
the acquisition unit is used for acquiring the real motion information of the part contained in the target object;
the reference calling unit is used for calling a preset reference motion set, and the reference motion set comprises at least one group of motion process information corresponding to reference decomposition actions respectively;
the matching unit is used for sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
And the action determining unit is used for determining the real action change of the target object according to the matching result.
13. A virtual operating system based on motion detection, comprising:
the real motion acquisition unit is used for acquiring real motion information of a part contained in a real target object;
the set calling unit is used for calling a preset reference motion set, and the reference motion set comprises at least one group of motion process information corresponding to reference decomposition actions respectively;
the information matching unit is used for sequentially matching the real motion information with the motion process information of each group of reference decomposition actions in the reference motion set to obtain a matching result;
the real action unit is used for determining the real action change of the real target object according to the matching result;
and the virtual operation unit is used for determining virtual operation in the virtual application according to the real action change so as to execute the virtual operation.
14. A computer readable storage medium, characterized in that it stores a plurality of computer programs adapted to be loaded by a processor and to perform the motion detection method according to any of claims 1 to 8 or to perform the virtual operation method based on motion detection according to any of claims 9 to 11.
15. A terminal device comprising a processor and a memory;
the memory is used for storing a plurality of computer programs for loading and executing the motion detection method according to any one of claims 1 to 8 or executing the virtual operation method based on motion detection according to any one of claims 9 to 11 by a processor; the processor is configured to implement each of the plurality of computer programs.
CN202310412463.8A 2023-04-10 2023-04-10 Motion detection and virtual operation method and system thereof, storage medium and terminal equipment Pending CN116978112A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310412463.8A CN116978112A (en) 2023-04-10 2023-04-10 Motion detection and virtual operation method and system thereof, storage medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN116978112A true CN116978112A (en) 2023-10-31

Family

ID=88482101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310412463.8A Pending CN116978112A (en) 2023-04-10 2023-04-10 Motion detection and virtual operation method and system thereof, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN116978112A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015710A (en) * 2024-04-09 2024-05-10 浙江深象智能科技有限公司 Intelligent sports identification method and device

Similar Documents

Publication Publication Date Title
CN102184020B (en) Gestures and gesture modifiers for manipulating a user-interface
US9342230B2 (en) Natural user interface scrolling and targeting
US8824802B2 (en) Method and system for gesture recognition
CN111488824A (en) Motion prompting method and device, electronic equipment and storage medium
CN102262438A (en) Gestures and gesture recognition for manipulating a user-interface
CN110113523A (en) Intelligent photographing method, device, computer equipment and storage medium
KR20100001408A (en) Robot game system and robot game method relating virtual space to real space
CN110102044B (en) Game control method based on smart band, smart band and storage medium
CN110298309A (en) Motion characteristic processing method, device, terminal and storage medium based on image
CN116978112A (en) Motion detection and virtual operation method and system thereof, storage medium and terminal equipment
CN115331314A (en) Exercise effect evaluation method and system based on APP screening function
US20130229348A1 (en) Driving method of virtual mouse
CN108543308B (en) Method and device for selecting virtual object in virtual scene
CN114513694A (en) Scoring determination method and device, electronic equipment and storage medium
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
CN112837339B (en) Track drawing method and device based on motion capture technology
Bernardes Jr et al. Design and implementation of a flexible hand gesture command interface for games based on computer vision
Choondal et al. Design and implementation of a natural user interface using hand gesture recognition method
Siam et al. Human computer interaction using marker based hand gesture recognition
CN116069157A (en) Virtual object display method, device, electronic equipment and readable medium
CN110321008B (en) Interaction method, device, equipment and storage medium based on AR model
CN115798054B (en) Gesture recognition method based on AR/MR technology and electronic equipment
CN112034975B (en) Gesture filtering method, system, device and readable storage medium
KR102531789B1 (en) Cloud-based metaverse content collaboration system
CN112231220B (en) Game testing method and device

Legal Events

Date Code Title Description
PB01 Publication