CN109213322A - Method and system for gesture recognition in virtual reality - Google Patents

Method and system for gesture recognition in virtual reality

Info

Publication number
CN109213322A
CN109213322A (application CN201810965488.XA)
Authority
CN
China
Prior art keywords
gesture
template
particle
user
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810965488.XA
Other languages
Chinese (zh)
Other versions
CN109213322B (en)
Inventor
朱映映
谭代强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201810965488.XA
Publication of CN109213322A
Application granted
Publication of CN109213322B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method of gesture recognition in virtual reality. The method comprises: initializing a particle set; obtaining feature information of a user input gesture; performing a state transition on the particles of the set to obtain predicted particles; attaching the predicted particles to preset template gestures; calculating the Euclidean distance between the user input gesture and the alignment points of a template gesture; updating the likelihood probability and weight of the predicted particles according to the Euclidean distance; and then resampling and normalizing the updated predicted particles according to the weights. When the gesture probability of a template gesture is greater than a first preset threshold and the alignment value between the template gesture and the user input gesture is greater than a second preset threshold, the user input gesture is correctly recognized. Because the normalized predicted particles represent similarity, the method can recognize the gesture accurately; moreover, the time complexity of the whole technical solution is low, the processing time is short, and the accuracy is improved, so that gestures are recognized accurately and in real time.

Description

Method and system for gesture recognition in virtual reality
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a method and system for gesture recognition in virtual reality.
Background art
Gesture recognition is widely used in human-computer interaction, entertainment, in-vehicle systems, and other fields. The gesture recognition commonly used at present is based on image information, but existing image-based gesture recognition has the following problems: real-time performance is poor, and it is difficult to meet the frame-rate requirements of virtual reality; for dynamic gestures, recognition accuracy is low; gesture acquisition is cumbersome; and it is difficult to recognize the user's gestures accurately in real time.
Summary of the invention
The main purpose of the present invention is to provide a method and system for gesture recognition in virtual reality, so as to solve the technical problem that existing gesture recognition technologies cannot recognize gestures accurately and in real time.
To achieve the above object, a first aspect of the present invention provides a method of gesture recognition in virtual reality, the method comprising:
Step 1: initialize a particle set;
Step 2: obtain feature information of a user input gesture, the feature information including gesture trajectory points;
Step 3: perform a state transition on the particles of the particle set to obtain predicted particles, and attach the predicted particles to preset template gestures;
Step 4: calculate the Euclidean distance between the user input gesture and the alignment points of the template gesture, and update the likelihood probability and weight of the predicted particles according to the Euclidean distance;
Step 5: resample and normalize the predicted particles according to the weights;
Step 6: judge whether the gesture probability of the template gesture is greater than a first preset threshold, and judge whether the alignment value between the template gesture and the user input gesture is greater than a second preset threshold, the gesture probability of the template gesture being the weight proportion of the predicted particles after normalization;
Step 7: if the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, determine that the user input gesture is recognized successfully, and output the successfully recognized gesture trajectory;
Step 8: if the gesture probability of the template gesture is not greater than the first preset threshold or the alignment value between the template gesture and the user input gesture is not greater than the second preset threshold, return to step 3.
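Steps 1 to 8 can be sketched as a compact particle filter over 2-D trajectory points. This is an illustrative reconstruction under stated assumptions, not the patented implementation: the Gaussian likelihood, the per-template progress index, the reading of the alignment value as the fraction of a template's particles that reach the template's end, and the values of `p_thresh`, `align_thresh`, and `sigma` are all choices made for the sketch.

```python
import math
import random

def recognize(user_traj, templates, n_particles=300,
              p_thresh=0.6, align_thresh=0.8, sigma=0.1):
    """Particle-filter sketch of steps 1-8.

    user_traj : list of (x, y) trajectory points (step 2).
    templates : dict mapping template-gesture name -> list of (x, y) points.
    Returns the recognized template name, or None (step 8 would then loop back).
    """
    rng = random.Random(0)
    names = list(templates)
    # Step 1: initialize the particle set; each particle hypothesizes one
    # template and tracks its progress (index) along that template.
    particles = [{"tpl": names[i % len(names)], "idx": -1}
                 for i in range(n_particles)]
    for pt in user_traj:
        weights = []
        for p in particles:
            # Step 3: state transition -> predicted particle (advance progress).
            p["idx"] = min(p["idx"] + 1, len(templates[p["tpl"]]) - 1)
            tx, ty = templates[p["tpl"]][p["idx"]]
            # Step 4: Euclidean distance to the template's alignment point,
            # turned into a Gaussian likelihood weight (assumed form).
            d = math.hypot(pt[0] - tx, pt[1] - ty)
            weights.append(math.exp(-0.5 * (d / sigma) ** 2))
        # Step 5: normalize the weights, then resample in proportion to them.
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        particles = [dict(p) for p in rng.choices(particles, weights, k=n_particles)]
    # Step 6: gesture probability = share of particles attached to a template;
    # alignment value = share of that template's particles that reached its end.
    for name in names:
        mine = [p for p in particles if p["tpl"] == name]
        prob = len(mine) / n_particles
        align = (sum(p["idx"] >= len(templates[name]) - 1 for p in mine)
                 / max(1, len(mine)))
        if prob > p_thresh and align > align_thresh:
            return name          # Step 7: recognition succeeds
    return None                  # Step 8: caller would transition again
```

Driving the filter with a trajectory equal to one of two straight-line templates concentrates the particles on that template through resampling, so its gesture probability and alignment value exceed the thresholds.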
A second aspect of the present invention provides a system of gesture recognition in virtual reality, the system comprising:
An initialization module for initializing a particle set;
An obtaining module for obtaining feature information of a user input gesture, the feature information including gesture trajectory points;
A prediction module for performing a state transition on the particles of the particle set to obtain predicted particles, the predicted particles being attached to preset template gestures;
A calculation and update module for calculating the Euclidean distance between the input gesture and the alignment points of the template gesture, and updating the likelihood probability and weight of the predicted particles according to the Euclidean distance;
A processing module for resampling and normalizing the predicted particles according to the weights;
A judgment module for judging whether the gesture probability of the template gesture is greater than the first preset threshold and whether the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, the gesture probability of the template gesture being the weight proportion of the predicted particles after normalization;
An output module for determining, when the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, that the user input gesture is recognized successfully, and outputting the successfully recognized gesture trajectory;
A return module for returning to the prediction module when the gesture probability of the template gesture is not greater than the first preset threshold or the alignment value between the template gesture and the user input gesture is not greater than the second preset threshold.
From the technical solution provided above, it can be seen that the technique obtains the user input gesture trajectory after initializing a particle set, performs a state update on the particles to find predicted particles, attaches the predicted particles to preset template gestures, updates the likelihood probability and weight of the predicted particles according to the Euclidean distance between the template gesture and the user input gesture, and normalizes the updated predicted particles, until the gesture probability of a template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, at which point the user input gesture is recognized successfully and its gesture trajectory is output. Because the normalized predicted particles represent similarity, the method can recognize the gesture accurately; moreover, the time complexity of the whole technical solution is low, the processing time is short, and the accuracy is improved, so that gestures are recognized accurately and in real time.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a method of gesture recognition in virtual reality according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method of gesture recognition in virtual reality according to another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a system of gesture recognition in virtual reality according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an additional module of the system of gesture recognition in virtual reality according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the refinement modules of the return module;
Fig. 6 is a schematic structural diagram of the refinement modules of the data detection module;
Fig. 7 is a schematic structural diagram of a system of gesture recognition in virtual reality according to another embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a computing device according to another embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, features, and advantages of the present invention more obvious and easy to understand, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
In the prior art, gestures cannot be recognized accurately and in real time. To solve this technical problem, the present invention proposes a method of gesture recognition in virtual reality.
Referring to Fig. 1, which is a schematic flowchart of a method of gesture recognition in virtual reality according to an embodiment of the present invention, the method of gesture recognition in virtual reality includes:
Step 101: initialize a particle set. Initializing the particle set means initializing the number of particles, the state-space model, and the weights, and determining the search space of the particles. The state-space model of a particle includes the particle's feature values, namely its progress, speed, and the like.
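A minimal sketch of what step 101 initializes, assuming each particle's state holds a template hypothesis plus the progress and speed feature values named above; the field names, the value ranges, and the uniform initial weights are assumptions of the sketch, not specifics fixed by the text:

```python
import random

def init_particles(n, templates, seed=0):
    """Step 101 sketch: set the particle count, state-space model, and weights.

    The list of template gestures defines the search space; each particle
    hypothesizes one template and starts with zero progress along it.
    """
    rng = random.Random(seed)
    names = list(templates)
    return [{"template": rng.choice(names),   # which template this particle tracks
             "progress": 0.0,                 # fraction of the template completed
             "speed": rng.uniform(0.5, 1.5),  # assumed advance rate per observation
             "weight": 1.0 / n}               # uniform initial weights summing to 1
            for _ in range(n)]
```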
Step 102: obtain feature information of a user input gesture, the feature information including gesture trajectory points.
Here, the obtained feature information is the feature information of a gesture input from the outside; it includes the gesture trajectory points, which are used in subsequent operations.
Step 103: perform a state transition on the particles of the particle set to obtain predicted particles, and attach the predicted particles to preset template gestures.
Here, a template gesture is a template established from externally input gestures; the preset template gestures are used to verify the correctness of the user input gesture. Performing a state transition on the particles of the particle set and obtaining the predicted particles attached to the preset template gestures can indicate, to a certain degree, the similarity between the user input gesture and the template gestures.
Step 104: calculate the Euclidean distance between the user input gesture and the alignment points of the template gesture, and update the likelihood probability and weight of the predicted particles according to the Euclidean distance.
Here, updating the likelihood probability and weight of the predicted particles according to the Euclidean distance, i.e. updating the set of predicted particles, makes it easier to find the optimal particle set.
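The weight update of step 104 can be sketched as follows. The text fixes only that the likelihood is derived from the Euclidean distance to the template's alignment point; the Gaussian form, the `sigma` value, and the particle field names (`template`, `index`, `weight`) are assumptions:

```python
import math

def update_weights(particles, user_point, templates, sigma=0.1):
    """Step 104 sketch: reweight each predicted particle by a likelihood
    derived from the Euclidean distance to its template alignment point."""
    for p in particles:
        tx, ty = templates[p["template"]][p["index"]]
        d = math.hypot(user_point[0] - tx, user_point[1] - ty)
        # Assumed Gaussian likelihood: near alignment points -> weight ~1,
        # far points -> weight ~0.
        p["weight"] *= math.exp(-0.5 * (d / sigma) ** 2)
    return particles
```

A particle sitting exactly on the observed point keeps its weight, while one a full unit away is suppressed to near zero, which is what lets the subsequent resampling step discard it.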
Step 105: resample and normalize the predicted particles according to the weights.
Here, resampling replaces low-weight particles with high-weight particles, eliminating the low-weight particles; the resampled particles are then normalized. After normalization, the proportion of predicted particles attached to a template gesture can indicate the similarity between the user input gesture and that template gesture: the more predicted particles are attached to a template gesture after normalization, the greater the probability that the user input gesture is that template gesture.
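Step 105 can be sketched with systematic resampling, one common way to replace low-weight particles with copies of high-weight ones; the systematic scheme itself is an assumption, since the text does not name a particular resampling algorithm:

```python
import random

def resample_and_normalize(particles, seed=0):
    """Step 105 sketch: low-weight particles are replaced by copies of
    high-weight ones, then weights are reset so they sum to 1."""
    n = len(particles)
    total = sum(p["weight"] for p in particles)
    # Cumulative normalized weights.
    cum, acc = [], 0.0
    for p in particles:
        acc += p["weight"] / total
        cum.append(acc)
    # Systematic resampling: one random offset, then n evenly spaced pointers.
    u0 = random.Random(seed).random() / n
    out, i = [], 0
    for k in range(n):
        u = u0 + k / n
        while i < n - 1 and cum[i] < u:
            i += 1
        out.append(dict(particles[i]))
    for p in out:                 # normalization: uniform weights summing to 1
        p["weight"] = 1.0 / n
    return out
```

After this step the surviving particles are duplicates of the heavy ones, so counting how many are attached to each template gesture directly gives the normalized proportion the decision step compares against the first preset threshold.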
Step 106: judge whether the gesture probability of the template gesture is greater than the first preset threshold, and judge whether the alignment value between the template gesture and the user input gesture is greater than the second preset threshold; the gesture probability of the template gesture is the weight proportion of the predicted particles after normalization.
Here, the first preset threshold is a preset threshold on the template gesture probability, and the second preset threshold is a preset threshold on the alignment value between the template gesture and the user input gesture. Judging whether the gesture probability of the template gesture is greater than the first preset threshold means judging whether the weight proportion of the predicted particles after normalization is greater than the first preset threshold. The more predicted particles are attached to a preset template gesture after normalization, the greater the weight proportion of the predicted particles and the greater the probability of that template gesture. The alignment value between the template gesture and the user input gesture also characterizes their similarity.
Step 107: if the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, determine that the user input gesture is recognized successfully, and output the successfully recognized gesture trajectory.
Here, when the template gesture probability reaches the first preset threshold and the alignment value between the template gesture and the user input gesture reaches the second preset threshold, it can be determined that the user input gesture is the template gesture; the user input gesture is recognized successfully, and the successfully recognized gesture trajectory, i.e. the trajectory of the user input gesture, is output.
Step 108: if the gesture probability of the template gesture is not greater than the first preset threshold, or the alignment value between the template gesture and the user input gesture is not greater than the second preset threshold, return to step 103.
Here, the gesture probability of the template gesture reaching the first preset threshold and the alignment value between the template gesture and the user input gesture reaching the second preset threshold are two separate conditions. When either one is not satisfied, it cannot be determined whether the template gesture and the user input gesture are the same gesture; therefore the method returns to step 103, performs a state transition on the particle states to obtain new predicted particles, and then updates the predicted particles, until the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold.
From the method of gesture recognition in virtual reality provided by this embodiment of the present invention, it can be seen that the method obtains the user input gesture trajectory after initializing particles, performs a state update on the particles of the set to find predicted particles, attaches the predicted particles to preset template gestures, updates the likelihood probability and weight of the predicted particles according to the Euclidean distance between the template gesture and the user input gesture, and normalizes the updated predicted particles, until the gesture probability of a template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, at which point the user input gesture is recognized successfully and its trajectory is output. Because the normalized predicted particles represent similarity, the method can recognize the gesture accurately; the time complexity of the whole technical solution is low, the processing time is short, and the accuracy is improved, so that gestures are recognized accurately and in real time.
Further, the specific step of obtaining the feature information of the user input gesture includes: obtaining three-dimensional data of each feature point of the gesture by using a sensor. The trajectory points of the gesture can be obtained from the three-dimensional data, and subsequent operations are performed using these gesture trajectory points.
Further, before obtaining the feature information of the user input gesture, the method further includes: constructing the template gestures. The step of constructing a template gesture may specifically be: collecting the feature information of a large number of gestures, processing the feature information, and establishing a model of the gesture; this model is the template gesture.
Further, when the gesture probability of the template gesture is not greater than the first preset threshold or the alignment value between the template gesture and the user input gesture is not greater than the second preset threshold, the method further includes: judging whether the number of state transitions performed on the particles of the set is greater than a third preset threshold; if the number of state transitions is greater than the third preset threshold, returning to step 102; if it is not greater than the third preset threshold, returning to step 103.
Here, the third preset threshold is a preset threshold on the number of state transitions performed. If the number of state transitions performed on the particles reaches the third preset threshold, the user input gesture has still not satisfied the success conditions after repeated recognition against the template gestures, and there is no need to keep trying to recognize this user input gesture; therefore the method returns to step 102 and reacquires a new user input gesture. Setting a threshold on the number of particle state transitions used to recognize a user input gesture reduces unnecessary computation and improves the effectiveness of the method.
When the number of state transitions performed on the particles has not reached the third preset threshold, the method returns to step 103 and continues to perform state transitions on the particle set and update the predicted particles; when the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, recognition succeeds.
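The control flow around the third preset threshold can be sketched as a small dispatcher; the limit of 50 transitions and the returned labels are illustrative only:

```python
def next_step(transfer_count, recognized, transfer_limit=50):
    """Sketch of the third-threshold control flow after a decision check.

    Returns "output" on success; "reacquire" (back to step 102, fetch a new
    user gesture) once the state-transition count exceeds the limit; otherwise
    "transfer" (back to step 103, keep refining the same gesture).
    """
    if recognized:
        return "output"
    if transfer_count > transfer_limit:
        return "reacquire"   # stale input: stop wasting computation on it
    return "transfer"        # keep iterating the particle filter
```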
Further, between the step of obtaining the feature information of the user input gesture and the step of updating the particle set, the method further includes: judging whether the obtained feature information of the user input gesture is complete; if the obtained feature information of the user input gesture is complete, executing step 103; if it is incomplete, outputting an error message prompt.
Here, when the data obtained by the sensor is lost during recording, or the collected user input gesture is incomplete, the obtained feature information of the user input gesture is likewise incomplete; at this point an error message prompt is output, to prevent a false judgment from being made by subsequently recognizing this erroneous input.
It should be noted that this method can also predict the future trend of a gesture from the current gesture.
Referring to Fig. 2, which is a schematic flowchart of a method of gesture recognition in virtual reality according to another embodiment of the present invention: this embodiment differs from the previous embodiment in that it adds step 109, which is as follows: a feedback interaction system processes the successfully recognized user input gesture and obtains the operation result of the user input gesture, so that the user interface displays the operation result.
Specifically, according to the previous embodiment, if recognition succeeds, the successfully recognized gesture trajectory, i.e. the trajectory of the user input gesture, is output. The feedback interaction system obtains the gesture trajectory of the user input gesture, processes it according to that trajectory, and obtains the processed result, which it displays on the display surface. The function of human-computer interaction is thus completed, and the user can interact naturally with the machine through gesture trajectories.
From the method of gesture recognition in virtual reality provided by this embodiment of the present invention, it can be seen that the method obtains the user input gesture trajectory after initializing particles, performs a state update on the particles of the set to find predicted particles, attaches the predicted particles to preset template gestures, updates the likelihood probability and weight of the predicted particles according to the Euclidean distance between the template gesture and the user input gesture, and normalizes the updated predicted particles, until the gesture probability of a template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, at which point the user input gesture is recognized successfully and its trajectory is output. On the one hand, because the normalized predicted particles represent similarity, the method can recognize the gesture accurately; the time complexity of the whole technical solution is low, the processing time is short, and the accuracy is improved, so that gestures are recognized accurately and in real time. On the other hand, the user can interact naturally with the machine through gesture trajectories, and the user experience is good.
Referring to Fig. 3, which is a schematic structural diagram of a system of gesture recognition in virtual reality according to an embodiment of the present invention, the system of gesture recognition in virtual reality includes:
An initialization module 201 for initializing the particle set. The initialization module 201 initializes the number of particles, the state-space model, and the weights, and determines the search space of the particles. The state-space model of a particle includes the particle's feature values, namely its progress, speed, and the like.
An obtaining module 202 for obtaining the feature information of a user input gesture, the feature information including gesture trajectory points.
Here, the feature information obtained by the obtaining module 202 is the feature information of a gesture input from the outside; it includes the gesture trajectory points, which are used in subsequent operations.
A prediction module 203 for performing a state transition on the particles of the particle set to obtain predicted particles, the predicted particles being attached to preset template gestures.
Here, a template gesture is a template established from externally input gestures; the preset template gestures are used to verify the correctness of the user input gesture. The prediction module 203 performs a state transition on the particles of the particle set; the predicted particles obtained and attached to the preset template gestures can indicate, to a certain degree, the similarity between the user input gesture and the template gestures.
A calculation and update module 204 for calculating the Euclidean distance between the user input gesture and the alignment points of the template gesture, and updating the likelihood probability and weight of the predicted particles according to the Euclidean distance.
Here, the calculation and update module updates the likelihood probability and weight of the predicted particles according to the Euclidean distance, i.e. updates the set of predicted particles, which makes it easier to find the optimal particle set.
A processing module 205 for resampling and normalizing the predicted particles according to the weights.
Here, the resampling performed by the processing module 205 replaces low-weight particles with high-weight particles, eliminating the low-weight particles; the resampled particles are then normalized. After normalization, the proportion of predicted particles attached to a template gesture can indicate the similarity between the user input gesture and that template gesture: the more predicted particles are attached to a template gesture after normalization, the greater the probability that the user input gesture is that template gesture.
A judgment module 206 for judging whether the gesture probability of the template gesture is greater than the first preset threshold and whether the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, the gesture probability of the template gesture being the weight proportion of the predicted particles after normalization.
Here, the first preset threshold is a preset threshold on the template gesture probability, and the second preset threshold is a preset threshold on the alignment value between the template gesture and the user input gesture. The judgment module 206 judging whether the gesture probability of the template gesture is greater than the first preset threshold means judging whether the weight proportion of the predicted particles after normalization is greater than the first preset threshold. The more predicted particles are attached to a preset template gesture after normalization, the greater the weight proportion of the predicted particles and the greater the probability of that template gesture. The alignment value between the template gesture and the user input gesture also characterizes their similarity.
An output module 207 for determining, when the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, that the user input gesture is recognized successfully, and outputting the successfully recognized gesture trajectory.
Here, when the template gesture probability reaches the first preset threshold and the alignment value between the template gesture and the user input gesture reaches the second preset threshold, the output module 207 can determine that the user input gesture is the template gesture; the user input gesture is recognized successfully, and the successfully recognized gesture trajectory, i.e. the trajectory of the user input gesture, is output.
A return module 208 for returning to the prediction module when the gesture probability of the template gesture is not greater than the first preset threshold or the alignment value between the template gesture and the user input gesture is not greater than the second preset threshold.
Here, the gesture probability of the template gesture reaching the first preset threshold and the alignment value between the template gesture and the user input gesture reaching the second preset threshold are two separate conditions. When either one is not satisfied, the return module 208 cannot determine whether the template gesture and the user input gesture are the same gesture, and therefore returns to the prediction module 203. The prediction module 203 performs a state transition on the particle states to obtain new predicted particles, and the calculation and update module 204 then updates the predicted particles, until the return module 208 determines that the gesture probability of the template gesture is greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold.
From the system of gesture recognition in virtual reality provided by this embodiment of the present invention, it can be seen that the system initializes the particles through the initialization module, obtains the user input gesture trajectory through the obtaining module, performs a state update on the particles of the set through the prediction module to find the predicted particles, attaches the predicted particles to the preset template gestures, updates the likelihood probability and weight of the predicted particles through the calculation and update module according to the Euclidean distance between the template gesture and the user input gesture, and normalizes the updated predicted particles through the processing module, until the gesture probability of a template gesture is determined to be greater than the first preset threshold and the alignment value between the template gesture and the user input gesture is greater than the second preset threshold, at which point the output module outputs the successfully recognized gesture trajectory. Because the normalized predicted particles represent similarity, the system can recognize the gesture accurately; the time complexity of the whole technical solution is low, the processing time is short, and the accuracy is improved, so that gestures are recognized accurately and in real time.
Further, the acquisition module 202 is specifically configured to acquire, using a sensor, the three-dimensional data of each characteristic point of the user's input gesture. From this three-dimensional data the acquisition module 202 can obtain the trajectory points of the gesture, and subsequent operations are performed on these gesture trajectory points.
Further, as shown in Fig. 4, which is a structural schematic diagram of the additional module of the gesture recognition system for virtual reality provided in this embodiment of the present invention, the system further includes a construction module 209 for constructing template gestures. The construction module 209 builds a template gesture as follows: collect the characteristic information of a large number of gestures, process the characteristic information, and establish a gesture model; that model is the template gesture.
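One simple way to process many recorded gestures into a single model is to resample every trajectory to a fixed number of points and average them. The patent does not specify the modelling step, so resample-and-average is an assumption chosen for illustration:

```python
import numpy as np

def build_template(sample_trajectories, n_points=32):
    """Build a template gesture from several recorded 3-D trajectories by
    resampling each one to n_points and averaging (an assumed method)."""
    resampled = []
    for traj in sample_trajectories:
        traj = np.asarray(traj, dtype=float)
        idx = np.linspace(0.0, len(traj) - 1, n_points)
        # linearly interpolate each coordinate onto the common index grid
        res = np.stack([np.interp(idx, np.arange(len(traj)), traj[:, d])
                        for d in range(traj.shape[1])], axis=1)
        resampled.append(res)
    return np.mean(resampled, axis=0)  # the averaged model is the template
```

Resampling also gives every template the same length, which simplifies the point-to-point alignment used later when computing Euclidean distances.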
Further, as shown in Fig. 5, which is a structural schematic diagram of the refined sub-modules of the return module, the return module 208 further includes:
a threshold judgment module 2081, configured to judge whether the number of state transfers performed by the particles of the particle swarm exceeds a third preset threshold;
a first return module 2082, configured to return to the acquisition module 202 when the number of state transfers performed by the particles of the particle swarm exceeds the third preset threshold;
a second return module 2083, configured to return to the prediction module 203 when the number of state transfers performed by the particles of the particle swarm does not exceed the third preset threshold.
Here, the third preset threshold is a preset limit on the number of state transfers. If the number of state transfers performed by the particles of the particle swarm reaches the third preset threshold, the user's input gesture still has not satisfied the success conditions against any template gesture after repeated recognition attempts, and there is no point in continuing to try to recognize it. Control therefore returns to the acquisition module 202, which reacquires a new user input gesture for the system to recognize. Capping the number of state transfers with a threshold avoids unnecessary computation and improves the method's efficiency.
When the number of state transfers performed by the particles of the particle swarm has not reached the third preset threshold, control returns to the prediction module 203, which continues applying state transfers to the particle swarm and updating the predicted particles; when the gesture probability of a template gesture exceeds the first preset threshold and the alignment value between the template gesture and the user's input gesture exceeds the second preset threshold, recognition succeeds.
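The capped loop above can be sketched as follows. `step_fn` is a hypothetical callback that performs one state transfer plus weight update and reports the current gesture probability and alignment value; the threshold values are assumptions, since the patent leaves all three thresholds as preset parameters.

```python
def recognize(step_fn, max_transfers=50, p_min=0.8, align_min=0.7):
    """Run the predict/update loop at most max_transfers times (the 'third
    preset threshold'); succeed as soon as both decision thresholds are
    exceeded, otherwise give up so the caller can reacquire a new gesture."""
    for _ in range(max_transfers):
        gesture_prob, align_val = step_fn()
        if gesture_prob > p_min and align_val > align_min:
            return True   # recognition succeeded: output the trajectory
    return False          # limit reached: return to the acquisition step
```

Returning `False` here corresponds to the first return module 2082 (reacquire input), while each loop iteration corresponds to the second return module 2083 (another state transfer).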
Further, as shown in Fig. 4, the system further includes a data detection module 210. As shown in Fig. 6, which is a structural schematic diagram of its refined sub-modules, the data detection module 210 specifically includes:
a characteristic information judgment module 2101, configured to judge whether the acquired characteristic information of the user's input gesture is complete;
a first execution module 2102, configured to proceed to the prediction module when the acquired characteristic information of the user's input gesture is complete;
a second execution module 2103, configured to output an error message prompt when the acquired characteristic information of the user's input gesture is incomplete.
Here, if sensor data is lost during recording, or the collected user input gesture is itself incomplete, the characteristic information of the input gesture acquired by the acquisition module 202 is likewise incomplete, so the data detection module 210 must judge whether the acquired gesture's characteristic information is complete. This prevents erroneous data from being passed to recognition and causing a misjudgment.
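A minimal completeness check along these lines might look like the sketch below. The patent does not define the completeness criteria, so requiring a non-empty trajectory in which every point carries full 3-D data is an assumption:

```python
def is_complete(feature_points, expected_dims=3):
    """Return True when the acquired characteristic information is usable:
    at least one trajectory point, and every point has full 3-D data with
    no missing coordinates (assumed criteria for illustration)."""
    if not feature_points:
        return False
    return all(p is not None
               and len(p) == expected_dims
               and all(v is not None for v in p)
               for p in feature_points)
```

The first execution module would forward trajectories passing this check to the prediction module, and the second would raise the error prompt otherwise.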
It should be noted that this method can also predict the future trend of a gesture from the current gesture.
Referring to Fig. 7, which is a structural schematic diagram of a gesture recognition system for virtual reality provided in another embodiment of the present invention: this embodiment differs from the previous one in that it adds a feedback interaction system 211, which processes the successfully recognized user input gesture to obtain the operation result of the user's input gesture, so that the user interface can display that operation result.
Specifically, following the previous embodiment, upon successful recognition the output module outputs the recognized gesture trajectory, i.e. the trajectory of the user's input gesture. The feedback interaction system receives this trajectory, processes it, and displays the processed result on the display surface, completing the human-computer interaction function: the user can interact naturally with the machine through gesture trajectories.
As can be seen from the gesture recognition system for virtual reality provided in this embodiment of the present invention, the system initializes the particles, acquires the user's input gesture trajectory via the acquisition module, and performs a state update on the particles of the particle swarm in the prediction module to obtain predicted particles attached to preset template gestures; the calculation-and-update module computes the Euclidean distance between the template gesture and the user's input gesture and updates the likelihood probability and weight of each predicted particle; and the processing module normalizes the updated predicted particles, until the gesture probability of a template gesture is determined to exceed the first preset threshold and the alignment value between the template gesture and the user's input gesture to exceed the second preset threshold, whereupon the output module outputs the successfully recognized gesture trajectory. Because the normalized predicted particles express similarity, the gesture can be recognized accurately; the overall technical solution has low time complexity and short processing time, and accuracy is improved, so gestures can be recognized accurately and in real time. Moreover, the user can interact naturally with the machine through gesture trajectories, giving a better user experience.
Fig. 8 is a structural schematic diagram of the computing device provided in another embodiment of the present invention. As shown in Fig. 8, the computing device 3 of this embodiment includes a processor 301, a memory 302, and a computer program 303 stored in the memory 302 and runnable on the processor 301, for example a program implementing the method of gesture recognition in virtual reality. When the processor 301 executes the computer program 303, it implements the steps of the above method embodiments of gesture recognition in virtual reality, such as steps 101 to 108 shown in Fig. 1; alternatively, it implements the functions of the modules/units in the above apparatus embodiments, such as the initialization module 201, acquisition module 202, prediction module 203, calculation-and-update module 204, processing module 205, judgment module 206, output module 207, and return module 208 shown in Fig. 3.
Illustratively, the computer program 303 implementing the method of gesture recognition in virtual reality mainly comprises: initializing a particle swarm; acquiring the characteristic information of the user's input gesture, where the characteristic information includes gesture trajectory points; applying a state transfer to the particles of the particle swarm to obtain predicted particles, each attached to a preset template gesture; computing the Euclidean distance between the user's input gesture and the aligned points of the template gesture, and updating the likelihood probability and weight of each predicted particle according to that distance; resampling and normalizing the predicted particles according to their weights; judging whether the gesture probability of a template gesture (the weight share of the predicted particles after normalization) exceeds the preset first threshold, and whether the alignment value between the template gesture and the user's input gesture exceeds the preset second threshold; if both conditions hold, determining that the user's input gesture is successfully recognized and outputting the recognized gesture trajectory; otherwise, returning to the state-transfer step to obtain new predicted particles attached to the preset template gestures. The computer program 303 may be divided into one or more modules/units, which are stored in the memory 302 and executed by the processor 301 to carry out the present invention. A module/unit may be a series of computer program instruction segments capable of completing a specific function, the instruction segments describing the execution of the computer program 303 in the computing device 3. For example, the computer program 303 may be divided into the initialization module 201, acquisition module 202, prediction module 203, calculation-and-update module 204, processing module 205, judgment module 206, output module 207, and return module 208 (modules in a virtual apparatus), whose functions are as follows: the initialization module 201 initializes the particle swarm; the acquisition module 202 acquires the characteristic information of the user's input gesture, including gesture trajectory points; the prediction module 203 applies a state transfer to the particles of the particle swarm to obtain predicted particles attached to preset template gestures; the calculation-and-update module 204 computes the Euclidean distance between the user's input gesture and the aligned points of the template gesture and updates the likelihood probability and weight of each predicted particle accordingly; the processing module 205 resamples and normalizes the predicted particles according to their weights; the judgment module 206 judges whether the gesture probability of a template gesture exceeds the first preset threshold and whether the alignment value between the template gesture and the user's input gesture exceeds the second preset threshold; the output module 207, when both thresholds are exceeded, determines that the user's input gesture is successfully recognized and outputs the recognized gesture trajectory; and the return module 208, when either threshold is not exceeded, returns control to the prediction module.
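The whole pipeline described above can be sketched end to end as follows. This is a minimal illustration under stated assumptions: the threshold values, the Gaussian likelihood, the random state-transfer model, and the definition of the alignment value as the mean particle position along the template are all choices made here for the sketch, not details fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_match(user_traj, templates, n_particles=100,
                          p_min=0.8, align_min=0.7, sigma=0.1):
    """Sketch of the claimed pipeline: initialise a particle swarm, attach
    each particle to a (template, position) state, predict by a small random
    state transfer, weight by the Euclidean distance to the user's current
    trajectory point, resample when weights degenerate, and stop as soon as
    one template's normalised weight share (the 'gesture probability') and
    its alignment value both exceed the preset thresholds."""
    max_pos = min(len(t) for t in templates) - 1
    tmpl_idx = rng.integers(0, len(templates), n_particles)
    pos_idx = np.zeros(n_particles, dtype=int)
    weights = np.full(n_particles, 1.0 / n_particles)
    for point in np.asarray(user_traj, dtype=float):
        # predict: advance each particle along its template with noise
        pos_idx = np.clip(pos_idx + rng.integers(0, 3, n_particles), 0, max_pos)
        # update: Gaussian likelihood of the aligned template point
        ref = np.array([templates[t][p] for t, p in zip(tmpl_idx, pos_idx)])
        d = np.linalg.norm(ref - point, axis=1)
        weights = weights * np.exp(-0.5 * (d / sigma) ** 2)
        total = weights.sum()
        if total == 0.0:              # all particles far away: reset uniform
            weights = np.full(n_particles, 1.0 / n_particles)
        else:
            weights = weights / total  # normalise
        # resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            keep = rng.choice(n_particles, n_particles, p=weights)
            tmpl_idx, pos_idx = tmpl_idx[keep], pos_idx[keep]
            weights = np.full(n_particles, 1.0 / n_particles)
        # decide: per-template gesture probability and alignment value
        for t in range(len(templates)):
            mask = tmpl_idx == t
            prob = weights[mask].sum()
            if prob > p_min:
                align = pos_idx[mask].mean() / max(len(templates[t]) - 1, 1)
                if align > align_min:
                    return t          # recognised template index
    return None                       # not recognised from this input
```

The returned template index stands in for the "successfully recognized gesture trajectory" that the output module would emit; a `None` result corresponds to the return path back to the prediction or acquisition module.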
The computing device 3 may include, but is not limited to, the processor 301 and the memory 302. Those skilled in the art will understand that Fig. 8 is merely an example of the computing device 3 and does not limit it: the device may include more or fewer components than shown, combine certain components, or use different components; for example, the computing device may also include input/output devices, a network access device, a bus, and so on.
The processor 301 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 302 may be an internal storage unit of the computing device 3, such as its hard disk or main memory; it may also be an external storage device of the computing device 3, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card. Further, the memory 302 may include both an internal storage unit of the computing device 3 and an external storage device. The memory 302 stores the computer program and the other programs and data the computing device needs, and may also be used to temporarily store data that has been or will be output.
It will be clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit; an integrated unit may be implemented in hardware or as a software functional unit. The specific names of the functional units and modules are only for distinguishing them from each other and do not limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/computing device and method may be implemented in other ways. For example, the apparatus/computing device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the above method embodiments by instructing the relevant hardware through a computer program: the computer program implementing the method of gesture recognition in virtual reality may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of each of the above method embodiments, namely: initializing a particle swarm; acquiring the characteristic information of the user's input gesture, where the characteristic information includes gesture trajectory points; applying a state transfer to the particles of the particle swarm to obtain predicted particles attached to preset template gestures; computing the Euclidean distance between the user's input gesture and the aligned points of the template gesture, and updating the likelihood probability and weight of each predicted particle according to that distance; resampling and normalizing the predicted particles according to their weights; judging whether the gesture probability of a template gesture (the weight share of the predicted particles after normalization) exceeds the preset first threshold and whether the alignment value between the template gesture and the user's input gesture exceeds the preset second threshold; if both conditions hold, determining that the user's input gesture is successfully recognized and outputting the recognized gesture trajectory; if the gesture probability of the template gesture does not exceed the first preset threshold, or the alignment value between the template gesture and the user's input gesture does not exceed the second preset threshold, returning to the state-transfer step to obtain new predicted particles attached to the preset template gestures.
The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content a computer-readable medium may include can be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunication signals, in accordance with legislation and patent practice. The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention; they should all be included within the protection scope of the present invention.
The above is a description of the method, system, and computer storage medium for gesture recognition in virtual reality provided by the present invention. For those skilled in the art, there will be changes in the specific implementation and application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of gesture recognition in virtual reality, characterized in that the method comprises:
step 1: initializing a particle swarm;
step 2: acquiring characteristic information of a user's input gesture, the characteristic information comprising gesture trajectory points;
step 3: applying a state transfer to the particles of the particle swarm to obtain predicted particles, the predicted particles being attached to preset template gestures;
step 4: computing the Euclidean distance between the user's input gesture and the aligned points of the template gesture, and updating the likelihood probability and weight of the predicted particles according to the Euclidean distance;
step 5: resampling and normalizing the predicted particles according to the weights;
step 6: judging whether the gesture probability of the template gesture exceeds a first preset threshold and whether the alignment value between the template gesture and the user's input gesture exceeds a second preset threshold, the gesture probability of the template gesture being the weight share of the predicted particles after normalization;
step 7: if the gesture probability of the template gesture exceeds the first preset threshold and the alignment value between the template gesture and the user's input gesture exceeds the second preset threshold, determining that the user's input gesture is successfully recognized, and outputting the recognized gesture trajectory;
step 8: if the gesture probability of the template gesture does not exceed the first preset threshold, or the alignment value between the template gesture and the user's input gesture does not exceed the second preset threshold, returning to step 3.
2. The method according to claim 1, characterized in that acquiring the characteristic information of the user's input gesture comprises: acquiring, using a sensor, three-dimensional data of each characteristic point of the user's input gesture.
3. The method according to claim 1, characterized in that, before acquiring the characteristic information of the user's input gesture, the method further comprises: constructing a template gesture.
4. The method according to claim 1, characterized in that the method further comprises:
processing, by a feedback interaction system, the successfully recognized user input gesture to obtain an operation result of the user's input gesture, so that a user interface displays the operation result.
5. The method according to claim 1, characterized in that, after acquiring the characteristic information of the user's input gesture, the method further comprises:
judging whether the acquired characteristic information of the user's input gesture is complete;
if the acquired characteristic information of the user's input gesture is complete, executing step 3;
if the acquired characteristic information of the user's input gesture is incomplete, outputting an error message prompt.
6. The method according to claim 1, characterized in that, when the gesture probability of the template gesture does not exceed the first preset threshold, or the alignment value between the template gesture and the user's input gesture does not exceed the second preset threshold, returning to step 3 comprises: judging whether the number of state transfers performed by the particles of the particle swarm exceeds a third preset threshold; if the number of state transfers performed by the particles of the particle swarm exceeds the third preset threshold, returning to step 2; if the number of state transfers performed by the particles of the particle swarm does not exceed the third preset threshold, returning to step 3.
7. A system of gesture recognition in virtual reality, characterized in that the system comprises:
an initialization module, configured to initialize a particle swarm;
an acquisition module, configured to acquire characteristic information of a user's input gesture, the characteristic information comprising gesture trajectory points;
a prediction module, configured to apply a state transfer to the particles of the particle swarm to obtain predicted particles, the predicted particles being attached to preset template gestures;
a calculation-and-update module, configured to compute the Euclidean distance between the input gesture and the aligned points of the template gesture, and to update the likelihood probability and weight of the predicted particles according to the Euclidean distance;
a processing module, configured to resample and normalize the predicted particles according to the weights;
a judgment module, configured to judge whether the gesture probability of the template gesture exceeds a first preset threshold and whether the alignment value between the template gesture and the user's input gesture exceeds a second preset threshold, the gesture probability of the template gesture being the weight share of the predicted particles after normalization;
an output module, configured to determine, when the gesture probability of the template gesture exceeds the first preset threshold and the alignment value between the template gesture and the user's input gesture exceeds the second preset threshold, that the user's input gesture is successfully recognized, and to output the recognized gesture trajectory;
a return module, configured to return to the prediction module when the gesture probability of the template gesture does not exceed the first preset threshold, or the alignment value between the template gesture and the user's input gesture does not exceed the second preset threshold.
8. The system according to claim 7, characterized in that the system further comprises a construction module, configured to construct template gestures.
9. A computing device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 6 are implemented.
CN201810965488.XA 2018-08-23 2018-08-23 Method and system for gesture recognition in virtual reality Active CN109213322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810965488.XA CN109213322B (en) 2018-08-23 2018-08-23 Method and system for gesture recognition in virtual reality

Publications (2)

Publication Number Publication Date
CN109213322A true CN109213322A (en) 2019-01-15
CN109213322B CN109213322B (en) 2021-05-04

Family

ID=64989060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810965488.XA Active CN109213322B (en) 2018-08-23 2018-08-23 Method and system for gesture recognition in virtual reality

Country Status (1)

Country Link
CN (1) CN109213322B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631430A (en) * 2020-12-30 2021-04-09 安徽鸿程光电有限公司 Gesture motion trajectory processing method, device, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404086A (en) * 2008-04-30 2009-04-08 浙江大学 Target tracking method and device based on video
CN102722706A (en) * 2012-05-24 2012-10-10 哈尔滨工程大学 Particle filter-based infrared small dim target detecting and tracking method and device
CN104589356A (en) * 2014-11-27 2015-05-06 北京工业大学 Dexterous hand teleoperation control method based on Kinect human hand motion capturing
US20160093273A1 (en) * 2014-09-30 2016-03-31 Samsung Electronics Co., Ltd. Dynamic vision sensor with shared pixels and time division multiplexing for higher spatial resolution and better linear separable data
CN105550559A (en) * 2015-12-03 2016-05-04 深圳市汇顶科技股份有限公司 Gesture unlocking method and apparatus and mobile terminal
CN107037878A (en) * 2016-12-14 2017-08-11 中国科学院沈阳自动化研究所 A kind of man-machine interaction method based on gesture
CN107392163A (en) * 2017-07-28 2017-11-24 深圳市唯特视科技有限公司 A kind of human hand and its object interaction tracking based on the imaging of short Baseline Stereo

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Zhongfu: "Research and Application of Gesture Recognition Based on Depth Information", China Masters' Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN109213322B (en) 2021-05-04

CN112906554A (en) Model training optimization method and device based on visual image and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant