CN112667088B - Gesture application identification method and system based on VR walking platform - Google Patents

Gesture application identification method and system based on VR walking platform

Info

Publication number
CN112667088B
CN112667088B · CN202110015663.0A
Authority
CN
China
Prior art keywords
gesture
detected
binocular camera
sequence
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110015663.0A
Other languages
Chinese (zh)
Other versions
CN112667088A (en)
Inventor
康望才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Hankun Industrial Co Ltd
Original Assignee
Hunan Hankun Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Hankun Industrial Co Ltd filed Critical Hunan Hankun Industrial Co Ltd
Priority to CN202110015663.0A priority Critical patent/CN112667088B/en
Publication of CN112667088A publication Critical patent/CN112667088A/en
Application granted granted Critical
Publication of CN112667088B publication Critical patent/CN112667088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture application identification method and system based on a VR walking platform. The method receives video image signals collected by a binocular camera; performs gesture segmentation on the received video image signals; captures and recognizes the segmented gestures with a dynamic gesture recognition algorithm, analyzes and tracks the recognized gestures, and constructs a gesture model; and applies the constructed gesture model to a virtual scene generated by VR glasses. By introducing gestures into the VR walking platform, the method and system enrich the platform's interaction modes and improve the user experience; they enrich the VR experience modes, achieve high gesture-interaction precision and accuracy, and give the user a strong sense of immersion.

Description

Gesture application identification method and system based on VR walking platform
Technical Field
The invention relates to the technical field of virtual reality, and particularly discloses a gesture application recognition method and system based on a VR walking platform.
Background
Driven by the global push toward intelligent systems, VR has begun to come into its own. A VR walking platform is a VR experience platform built from VR, electronic and multimedia technology combined with mechanical devices and electronic instruments; it is equipped with devices such as blowers, lifting platforms and head-impact simulators to give the experiencer a more realistic sensation. After putting on VR glasses, the experiencer feels present on the scene: the whole construction site is vividly displayed before the eyes, seemingly within arm's reach. However, the interaction modes of existing VR walking platforms are imperfect, and the user experience is not immersive enough.
The above defects of existing VR walking platforms are therefore a technical problem that urgently needs to be solved.
Disclosure of Invention
The invention provides a gesture application recognition method and system based on a VR (virtual reality) walking platform, aiming to solve the technical problem posed by the defects of existing VR walking platforms.
The invention relates to a gesture application recognition method based on a VR walking platform, which comprises the following steps:
receiving a video image signal collected by a binocular camera;
performing gesture segmentation on the received video image signals collected by the binocular camera;
capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model;
and applying the constructed gesture model to a virtual scene generated by VR glasses.
Further, the step of receiving the video image signal collected by the binocular camera comprises:
acquiring left and right visual images of gesture actions of an operator by using a binocular camera;
and converting left and right visual images acquired by the binocular camera into depth images through a stereoscopic vision algorithm.
Further, the step of performing gesture segmentation on the received video image signal collected by the binocular camera comprises:
extracting the skin color of the collected video image by utilizing a color space and skin-color modeling based on a Gaussian model, analyzing motion information through image difference operation, and removing the skin-color-like background in the video image;
and mining, through a hand detector, the gesture information in the video image from which the skin-color-like background has been removed, to eliminate background interference.
Further, the steps of capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model comprise:
taking a binocular camera as a reference, and carrying out region division according to a region of an object in front of the binocular camera;
the method comprises the steps of defining a focusing point of a binocular camera as a direct-view demarcation point, determining the moving direction of an object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of a moving object in the front area of the binocular camera.
Further, the steps of capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model comprise:
creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence, and establishing a training template library;
detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture characteristics of the detected gesture sequence to be detected, and performing DTW matching analysis on the gesture characteristics and an established training template library;
and constructing a gesture model according to an identification result obtained after DTW matching analysis.
Another aspect of the present invention relates to a gesture application recognition system based on a VR walking platform, comprising:
the receiving module is used for receiving video image signals collected by the binocular camera;
the gesture segmentation module is used for performing gesture segmentation on the received video image signals collected by the binocular camera;
the gesture model building module is used for capturing and recognizing the segmented gestures by utilizing a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures and building a gesture model;
and the application module is used for applying the constructed gesture model to a virtual scene generated by the VR glasses.
Further, the receiving module includes:
the acquisition unit is used for acquiring left and right visual images of gesture actions of an operator by adopting a binocular camera;
and the generating unit is used for converting the left and right visual images acquired by the binocular camera into depth images through a stereoscopic vision algorithm.
Further, the gesture segmentation module comprises:
the removal module is used for extracting the skin color of the collected video image by utilizing the color space and the skin-color modeling based on the Gaussian model, analyzing the motion information through image difference operation and removing the skin-color-like background in the video image;
and the mining module is used for mining, through the hand detector, the gesture information in the video image from which the skin-color-like background has been removed, eliminating background interference.
Further, the gesture model building module comprises:
the area division unit is used for carrying out area division according to the area of an object in front of the binocular camera by taking the binocular camera as a reference;
and the identification unit is used for defining the focusing point of the binocular camera as a direct-view demarcation point, determining the moving direction of the object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of the moving object in the front area of the binocular camera.
Further, the gesture model building module comprises:
the training gesture recognition device comprises a creating unit, a training template library and a recognition unit, wherein the creating unit is used for creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence and creating the training template library;
the analysis unit is used for detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture characteristics of the detected gesture sequence to be detected, and performing DTW matching analysis on the extracted gesture characteristics and an established training template library;
and the construction unit is used for constructing the gesture model according to the recognition result obtained after the DTW matching analysis.
The beneficial effects obtained by the invention are as follows:
according to the gesture application identification method and system based on the VR walking platform, video image signals collected by the binocular camera are received; performing gesture segmentation on the received video image signals collected by the binocular camera; capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model; and applying the constructed gesture model to a virtual scene generated by VR glasses. According to the gesture application recognition method and system based on the VR walking platform, gestures are introduced into the VR walking platform, interaction modes of the VR walking platform are enriched, and user experience is improved; the experience mode, the gesture interaction precision and the accuracy of the rich VR are high, and the immersion of the user is strong.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of a gesture application recognition method based on a VR walking platform according to the present invention;
FIG. 2 is a detailed flowchart of an embodiment of the step of receiving video image signals collected by the binocular camera shown in FIG. 1;
FIG. 3 is a schematic view of a detailed flow chart of an embodiment of the step of performing gesture segmentation on the received video image signal acquired by the binocular camera shown in FIG. 1;
FIG. 4 is a flowchart illustrating a detailed process of a first embodiment of the steps of capturing and recognizing a segmented gesture, analyzing and tracking the recognized gesture, and constructing a gesture model, shown in FIG. 1, using a dynamic gesture recognition algorithm;
FIG. 5 is a flowchart illustrating a detailed process of a second embodiment of the steps of capturing and recognizing a segmented gesture, analyzing and tracking the recognized gesture, and constructing a gesture model using a dynamic gesture recognition algorithm shown in FIG. 1;
FIG. 6 is a functional block diagram of an embodiment of a gesture application recognition system based on a VR walking platform in accordance with the present invention;
FIG. 7 is a functional block diagram of an embodiment of the receiving module shown in FIG. 6;
FIG. 8 is a functional block diagram of one embodiment of the gesture segmentation module shown in FIG. 6;
FIG. 9 is a functional block diagram of a first embodiment of the gesture model building block shown in FIG. 6;
FIG. 10 is a functional block diagram of a second embodiment of the gesture model building module shown in FIG. 6.
The reference numbers illustrate:
10. a receiving module; 20. a gesture segmentation module; 30. a gesture model construction module; 40. an application module; 11. an acquisition unit; 12. a generating unit; 21. a removal module; 22. a mining module; 31. a region dividing unit; 32. a recognition unit; 33. a creating unit; 34. an analysis unit; 35. a construction unit.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
As shown in fig. 1, a first embodiment of the present invention provides a gesture application recognition method based on a VR walking platform, including the following steps:
and S100, receiving a video image signal collected by a binocular camera.
In this step, video image signals are collected by gesture recognition equipment based on binocular vision: according to the binocular stereoscopic vision imaging principle, two cameras extract information including three-dimensional position, the gesture is then analyzed and judged comprehensively, and a three-dimensional model of the hand is established. Binocular vision places fewer constraints on the user's gesture input and allows more natural human-computer interaction, but because stereo matching is required and the stereo model is complex, a large amount of data has to be processed and the computation is relatively heavy.
And S200, performing gesture segmentation on the received video image signals collected by the binocular camera.
And performing gesture segmentation on the received video image signal acquired by the binocular camera, wherein the gesture segmentation comprises gesture feature extraction, extracting skin color by utilizing a color space and skin color modeling based on a Gaussian model, and analyzing motion information through image difference operation to remove a skin color-like background in the image.
And S300, capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model.
The task of a dynamic gesture recognition algorithm is that when an operator makes a gesture, the computer can capture and accurately recognize the gesture. In this embodiment, one way is to connect the computer and the operator together based on wired technology. For example, the data glove can transmit hand information of an operator to a computer and perform gesture recognition on the hand information through an algorithm; although the identification effect is good, the naturalness and the usability of the user experience are affected by the complicated wearing mode and the operation environment with strict requirements. The other method is a vision-based method, and the RGB modal data of the gesture can be directly captured by only a common camera without any contact between the hand and the computer.
And S400, applying the constructed gesture model to a virtual scene generated by VR glasses.
The constructed gesture model is applied to a specific virtual scene generated by the VR glasses, for example a virtual simulation of a real workplace scene and emergency situations presented in a three-dimensional, dynamic manner. The walking platform software can freely combine multiple VR safety modules according to customer requirements, and the VR hardware can likewise be freely combined: the VR walking platform can independently configure hardware such as lifting platforms, head-impact devices, fans, multiple helmets and air jets, bringing a multi-dimensional virtual experience. The architectural design of the gesture model keeps the VR walking platform running stably and smoothly. In this embodiment, advanced VR development techniques are employed to add interactive functionality to the three-dimensional simulation. A professional three-dimensional engine simulates the experimental environment and practical training site of a real laboratory. A physics engine simulates the interaction between objects in a real environment, such as gravity and impact. A particle system simulates special effects in a real environment, such as fire, electricity and smoke.
According to the gesture application recognition method based on the VR walking platform, video image signals collected by a binocular camera are received; gesture segmentation is performed on the received video image signals; the segmented gestures are captured and recognized with a dynamic gesture recognition algorithm, the recognized gestures are analyzed and tracked, and a gesture model is constructed; and the constructed gesture model is applied to a virtual scene generated by VR glasses. By introducing gestures into the VR walking platform, the method enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Preferably, please refer to fig. 2, fig. 2 is a schematic detailed flowchart of an embodiment of the step of receiving video image signals captured by a binocular camera shown in fig. 1, in this embodiment, the step S100 includes:
and step S110, acquiring left and right visual images of the gesture actions of the operator by adopting a binocular camera.
A binocular stereoscopic vision imaging principle is utilized, and a binocular camera is used for collecting left and right visual images of gesture actions of an operator.
And step S120, converting left and right visual images collected by the binocular camera into depth images through a stereoscopic vision algorithm.
The left and right visual images acquired by the binocular camera are converted into a depth image through a stereoscopic vision algorithm. The specific process is as follows: after stereo calibration, a rectified stereo image pair is obtained; stereo matching is performed to obtain a disparity map; and triangulation is carried out using the intrinsic and extrinsic parameters of the cameras to obtain the depth image.
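The rectify-match-triangulate pipeline above can be illustrated with a short sketch. This is only an illustration using OpenCV, not the patent's implementation; the calibration parameters (intrinsics, distortion coefficients, and the rotation and translation between the two cameras) are assumed to be available from a prior stereo calibration.

```python
import cv2
import numpy as np

# Assumed inputs: left/right frames from the binocular camera and the stereo
# calibration results (intrinsics K1, K2, distortion d1, d2, rotation R and
# translation T between the two cameras).
def depth_from_stereo(left, right, K1, d1, K2, d2, R, T, image_size):
    # Rectify both views so that matching can be done along scanlines.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
    left_r = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
    right_r = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)

    # Stereo matching yields a disparity map (here with semi-global matching).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(
        cv2.cvtColor(left_r, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(right_r, cv2.COLOR_BGR2GRAY),
    ).astype(np.float32) / 16.0  # SGBM returns fixed-point disparities

    # Triangulation: reproject disparities to 3D; the Z channel is the depth image.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return points_3d[:, :, 2]
```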
According to the gesture application recognition method based on the VR walking platform, left and right visual images of the operator's gesture actions are collected by a binocular camera, and the left and right visual images are converted into a depth image through a stereoscopic vision algorithm. By introducing gestures into the VR walking platform, the method enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Further, referring to fig. 3, fig. 3 is a schematic view of a detailed flow of an embodiment of step S200 shown in fig. 1, in this embodiment, step S200 includes:
step S210, extracting the skin color of the collected video image by utilizing the color space and the skin color modeling based on the Gaussian model, analyzing the motion information through image difference operation, and removing the similar skin color background in the video image.
Gesture segmentation includes gesture feature extraction: the skin color of the collected video image is extracted using the YCbCr color space and Gaussian-model-based skin-color modeling, and motion information analysis is carried out through image difference operations to remove the skin-color-like background in the video image, thereby ensuring accurate gesture segmentation against a complex background.
Gesture segmentation draws on motion information, motion templates, and color information. Based on motion information, the gesture is detected through a difference operation between consecutive video frames: because the background (especially areas similar to human skin color) remains unchanged, the difference of two adjacent frames in the same gesture sequence effectively retains the changing gesture. Based on the motion template, the moving gesture is located by template matching. Based on the color information, the background color is used to detect the gesture.
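As an illustration of the skin-color and frame-difference segmentation described above, the following sketch assumes a Gaussian skin model in the (Cb, Cr) plane whose mean and covariance have been estimated beforehand from labelled skin pixels; the numeric values shown are placeholders, not parameters from the patent.

```python
import cv2
import numpy as np

# Assumed skin-color statistics in the (Cb, Cr) plane; the values are illustrative.
SKIN_MEAN = np.array([117.4, 148.6])
SKIN_COV_INV = np.linalg.inv(np.array([[97.0, 24.5], [24.5, 141.8]]))

def skin_mask(frame_bgr, threshold=2.5):
    """Per-pixel Mahalanobis distance to the Gaussian skin model in YCbCr."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cbcr = ycrcb[:, :, [2, 1]]          # OpenCV orders channels Y, Cr, Cb; take (Cb, Cr)
    diff = cbcr - SKIN_MEAN
    d2 = np.einsum("hwi,ij,hwj->hw", diff, SKIN_COV_INV, diff)
    return (d2 < threshold ** 2).astype(np.uint8) * 255

def moving_skin_mask(prev_bgr, curr_bgr, motion_threshold=20):
    """Keep only skin-colored pixels that also changed between adjacent frames,
    suppressing static skin-like background."""
    skin = skin_mask(curr_bgr)
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(diff, motion_threshold, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(skin, motion)
```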
Step S220, the hand detector is used to mine the gesture information in the video image from which the skin-color-like background has been removed, eliminating background interference.
Hand detection is an important preprocessing step in gesture recognition: it extracts the important gesture information and removes background interference. Early hand detection methods mainly relied on hand-crafted features to locate the hand region in the picture, such as skin-color-based, shape-based and motion-information-based methods. These methods are easily affected by illumination changes, skin-color differences, background interference, posture changes and self-occlusion between fingers; their detection results are neither ideal nor stable, the computation is heavy and slow, and they struggle to meet the detection requirements of real scenes. With the development of deep learning, deep-learning-based detectors learn hand features from pictures automatically and express hand information with much greater power. When the hand detector detects an operator's gesture, the gesture must be modeled so that the user can see what the gesture looks like. In this embodiment, a gesture model is defined to accept the user's gesture data; when gesture data are imported, the gesture model is created. During creation of the gesture model, each gesture is given corresponding nodes and a serial-number graph; the nodes on the gesture model are the points that need to be triggered and responded to. After the gesture data are imported into the gesture model, the gesture model is imported into the gesture component through a hand component mounted on the gesture component that is responsible for finding hand transforms and assignment. The imported content of the gesture model is: the gesture model is acquired, and gesture nodes are then attached to it. The function of the gesture nodes is to determine, once the gesture model has been imported and acquired, whether finger direction, straightening, bending and so on can be realized well.
Whether finger direction, straightening and bending can be realized well is determined by giving each gesture node a node threshold. For example, gesture node 1 at the palm center is defined by B3-B6 in the horizontal direction and by B6-B1 and B3-B2 in the vertical directions on the two sides; the vector formed by connecting gesture node 2 and gesture node 3 is perpendicular to node 1 and parallel to B3-B6. However, human fingers cannot guarantee precise curvature and angles, so when the node threshold is defined, a vertical interpolation value a is added to it. The size of a is user-defined: the smaller a is, the more precise the threshold definition and the more standard the user's gesture must be to be recognized; the larger a is, the looser the threshold definition, so that fuzzy gestures can also be recognized and the user only needs to stay within the threshold range rather than be very precise.
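A minimal, hypothetical sketch of the node-threshold idea: a gesture node is accepted when the measured direction between two hand keypoints stays within the tolerance a of the node's reference direction. The function and parameter names are assumptions for illustration.

```python
import numpy as np

def node_matches(p_from, p_to, reference_dir, a_degrees=15.0):
    """Accept a gesture node if the keypoint direction is within `a_degrees`
    of the node's reference direction (hypothetical tolerance check)."""
    v = np.asarray(p_to, dtype=float) - np.asarray(p_from, dtype=float)
    r = np.asarray(reference_dir, dtype=float)
    cos_angle = np.dot(v, r) / (np.linalg.norm(v) * np.linalg.norm(r) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Smaller `a_degrees` -> stricter gesture; larger -> fuzzier gestures accepted.
    return angle <= a_degrees
```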
Likewise, the gesture model provides the information, characteristics and movement pattern of the bound hand object; during operation, objects can be picked up and rays can be emitted through the gesture model. The gesture model provides as much information about the hand as possible, but it cannot determine every attribute in every frame, and all attributes carry some expected error. For example, when the hand suddenly clenches into a fist, the index finger may suddenly become unavailable and the finger list may suddenly come back empty. The hand detector therefore needs to detect such conditions while the user is operating, throw the abnormal condition in time and stop the operation, to prevent the load from putting pressure on memory.
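A hypothetical sketch of this defensive behaviour: when the detector's finger list comes back empty, an exception is thrown and the current operation stops. The data structure and names are assumptions for illustration.

```python
class HandTrackingLost(Exception):
    """Raised when the detector can no longer see the expected fingers."""

def process_frame(hand_frame, on_gesture):
    # `hand_frame` is a hypothetical dict produced by the hand detector.
    fingers = hand_frame.get("fingers") or []
    if not fingers:
        # Throw the abnormal condition in time and stop the operation,
        # so stale gesture data does not keep consuming memory.
        raise HandTrackingLost("finger list is empty; aborting current operation")
    on_gesture(fingers)
```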
According to the gesture application identification method based on the VR walking platform, the skin color of the collected video image is extracted using the color space and Gaussian-model-based skin-color modeling, motion information analysis is carried out through image difference operations, and the skin-color-like background in the video image is removed; the gesture information is then mined by the hand detector from the video image with the skin-color-like background removed, eliminating background interference. By introducing gestures into the VR walking platform, the method enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Preferably, please refer to fig. 4, fig. 4 is a detailed flowchart of a first embodiment of step S300 shown in fig. 1, in this embodiment, step S300 includes:
and S310, taking the binocular camera as a reference, and dividing the area according to the area of the object in front of the binocular camera.
Region identification: with the binocular camera as the reference, the area in front of the binocular camera is divided into regions according to where objects appear; this division of the recognition region is the preliminary stage of dynamic gesture recognition.
And step S320, defining the focus point of the binocular camera as a direct-view demarcation point, determining the moving direction of the object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of the moving object in the front area of the binocular camera.
After the area in front of the binocular camera has been divided, the focus point of the binocular camera is defined as the direct-view demarcation point; taking this demarcation point as the judgment reference, the moving direction of the object is determined, and the original position and target position of the moving object in the area in front of the binocular camera are identified. The movement of the object is relative to its original position, from the original position to the target position, and the moving object must be the object at the camera's focus point.
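The judgment relative to the direct-view demarcation point could look roughly like the following sketch; the representation of positions and the returned fields are assumptions, since the patent does not specify them.

```python
import numpy as np

def movement_relative_to_demarcation(original_pos, target_pos, demarcation_point):
    """Judge the moving direction of the focused object relative to the
    direct-view demarcation point (illustrative only)."""
    original = np.asarray(original_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    boundary = np.asarray(demarcation_point, dtype=float)

    displacement = target - original                     # overall moving direction
    toward = np.dot(displacement, boundary - original)   # >0: moving toward the demarcation point
    side_before = np.sign(original[0] - boundary[0])     # which side of the demarcation point
    side_after = np.sign(target[0] - boundary[0])
    return {
        "direction": displacement,
        "toward_focus": bool(toward > 0),
        "crossed_boundary": bool(side_before != side_after),
    }
```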
According to the gesture application recognition method based on the VR walking platform, with the binocular camera as the reference, the area in front of the binocular camera is divided into regions; the focus point of the binocular camera is defined as the direct-view demarcation point, the moving direction of the object is determined with this demarcation point as the judgment reference, and the original position and target position of the moving object in the area in front of the binocular camera are identified. By introducing gestures into the VR walking platform, the method enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Further, referring to fig. 5, fig. 5 is a detailed flowchart of a second embodiment of step S300 shown in fig. 1, and on the basis of the first embodiment, step S300 includes:
and S330, creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence, and establishing a training template library.
Creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence, and establishing a training template library of corresponding gestures.
And step S340, detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture features of the detected gesture sequence to be detected, and performing DTW matching analysis on the gesture features and the established training template library.
A gesture sequence to be detected is detected and preprocessed, its gesture features are extracted and compared with the corresponding gestures established in the training template library, and DTW (Dynamic Time Warping) matching analysis is performed according to a preset gesture feature extraction rule to obtain a DTW matching analysis result. The DTW matching analysis matches the gesture sequence against the training template library using the DTW algorithm.
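For reference, a plain implementation of DTW matching between an observed gesture feature sequence and the templates in a training library is sketched below; it illustrates the standard algorithm rather than the patent's exact matching rules.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW between two gesture feature sequences (frames x features)."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def recognize(test_sequence, template_library):
    """Return the name of the training template with the smallest DTW distance."""
    return min(template_library, key=lambda name: dtw_distance(test_sequence, template_library[name]))
```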
And S350, constructing a gesture model according to an identification result obtained after DTW matching analysis.
And constructing a corresponding gesture model according to the matching analysis result of the gesture sequence and the training template library realized by the DTW algorithm.
According to the gesture application recognition method based on the VR walking platform, a training gesture sequence is created, the created training gesture sequence is preprocessed, its gesture features are extracted, and a training template library is established; a gesture sequence to be detected is detected and preprocessed, its gesture features are extracted, and DTW matching analysis is performed against the established training template library; and a gesture model is constructed according to the recognition result obtained from the DTW matching analysis. By introducing gestures into the VR walking platform, the method enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
As shown in fig. 6, fig. 6 is a functional block diagram of an embodiment of a gesture application recognition system based on a VR walking platform provided in the present invention, in this embodiment, the gesture application recognition system based on a VR walking platform includes a receiving module 10, a gesture segmentation module 20, a gesture model construction module 30, and an application module 40, where the receiving module 10 is configured to receive video image signals acquired by a binocular camera; the gesture segmentation module 20 is configured to perform gesture segmentation on the received video image signal acquired by the binocular camera; a gesture model construction module 30, configured to capture and recognize the segmented gesture by using a dynamic gesture recognition algorithm, analyze and track the recognized gesture, and construct a gesture model; and the application module 40 is used for applying the constructed gesture model to a virtual scene generated by the VR glasses.
The receiving module 10 collects video image signals with gesture recognition equipment based on binocular vision: two cameras, following the binocular stereoscopic vision imaging principle, extract information including three-dimensional position; the gesture is then analyzed and judged comprehensively, and a three-dimensional model of the hand is established. Binocular vision places fewer constraints on the user's gesture input and allows more natural human-computer interaction, but because stereo matching is required and the stereo model is complex, a large amount of data has to be processed and the computation is relatively heavy.
The gesture segmentation module 20 performs gesture segmentation on the received video image signal acquired by the binocular camera, wherein the gesture segmentation includes gesture feature extraction, skin color is extracted by using a color space and skin color modeling based on a gaussian model, and motion information analysis is performed through image difference operation to remove a skin color-like background in the image.
And the gesture model building module 30 is configured to capture and recognize the segmented gesture by using a dynamic gesture recognition algorithm, analyze and track the recognized gesture, and build a gesture model. The task of a dynamic gesture recognition algorithm is that when an operator makes a gesture, the computer can capture and accurately recognize the gesture. In this embodiment, one way is to connect the computer and the operator together based on wired technology. For example, the data glove can transmit hand information of an operator to a computer and perform gesture recognition on the hand information through an algorithm; although the identification effect is good, the naturalness and the usability of the user experience are affected by the complicated wearing mode and the operation environment with strict requirements. The other method is a vision-based method, and the RGB modal data of the gesture can be directly captured by only a common camera without any contact between the hand and the computer.
The application module 40 applies the constructed gesture model to a specific virtual scene generated by the VR glasses, for example a virtual simulation of the real construction-site scene and emergency situations presented in a three-dimensional, dynamic manner. The walking platform software can freely combine multiple VR safety modules according to customer requirements, and the VR hardware can likewise be freely combined: the VR walking platform can independently configure hardware such as lifting platforms, head-impact devices, fans, multiple helmets and air jets, bringing a multi-dimensional virtual experience. The architectural design of the gesture model keeps the VR walking platform running stably and smoothly. In this embodiment, advanced VR development techniques are employed to add interactive functionality to the three-dimensional simulation. A professional three-dimensional engine simulates the experimental environment and practical training site of a real laboratory. A physics engine simulates the interaction between objects in a real environment, such as gravity and impact. A particle system simulates special effects in a real environment, such as fire, electricity and smoke.
Compared with the prior art, the gesture application recognition system based on the VR walking platform provided by this embodiment uses the receiving module 10, the gesture segmentation module 20, the gesture model construction module 30 and the application module 40 to receive the video image signals collected by the binocular camera; perform gesture segmentation on the received video image signals; capture and recognize the segmented gestures with a dynamic gesture recognition algorithm, analyze and track the recognized gestures, and construct a gesture model; and apply the constructed gesture model to a virtual scene generated by VR glasses. By introducing gestures into the VR walking platform, the system provided by this embodiment enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Preferably, please refer to fig. 7, fig. 7 is a schematic diagram of functional modules of an embodiment of the receiving module shown in fig. 6, in this embodiment, the receiving module 10 includes an acquisition unit 11 and a generation unit 12, where the acquisition unit 11 is configured to acquire left and right visual images of gesture actions of an operator by using a binocular camera; and the generating unit 12 is used for converting the left and right visual images acquired by the binocular camera into depth images through a stereoscopic vision algorithm.
The acquisition unit 11 acquires left and right visual images of gesture actions of an operator by using a binocular stereo vision imaging principle and a binocular camera.
The generation unit 12 converts the left and right visual images collected by the binocular camera into a depth image through a stereoscopic vision algorithm. The specific process is as follows: after stereo calibration, a rectified stereo image pair is obtained; stereo matching is performed to obtain a disparity map; and triangulation is carried out using the intrinsic and extrinsic parameters of the cameras to obtain the depth image.
Compared with the prior art, in the gesture application recognition system based on the VR walking platform provided by this embodiment, the receiving module 10 uses the acquisition unit 11 and the generation unit 12: left and right visual images of the operator's gesture actions are acquired by a binocular camera, and the left and right visual images are converted into a depth image through a stereoscopic vision algorithm. By introducing gestures into the VR walking platform, the system provided by this embodiment enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Further, referring to fig. 8, fig. 8 is a functional module schematic diagram of an embodiment of the gesture segmentation module shown in fig. 6, in this embodiment, the gesture segmentation module 20 includes a removal module 21 and a mining module 22, where the removal module 21 is configured to extract a skin color of a collected video image by using a color space and a skin color modeling based on a gaussian model, and perform motion information analysis through image difference operation to remove a skin color-like background in the video image; and the mining module 22 is used for mining the gesture information in the video image with the skin-color-like background removed through the hand detector to eliminate background interference.
The removal module 21 performs gesture feature extraction: the skin color of the collected video image is extracted using the YCbCr color space and Gaussian-model-based skin-color modeling, and motion information analysis is carried out through image difference operations to remove the skin-color-like background in the video image, thereby ensuring accurate gesture segmentation against a complex background.
Gesture segmentation draws on motion information, motion templates, and color information. Based on motion information, the gesture is detected through a difference operation between consecutive video frames: because the background (especially areas similar to human skin color) remains unchanged, the difference of two adjacent frames in the same gesture sequence effectively retains the changing gesture. Based on the motion template, the moving gesture is located by template matching. Based on the color information, the background color is used to detect the gesture.
The mining module 22 uses the hand detector to mine the gesture information from the video image with the skin-color-like background removed, eliminating background interference. Hand detection is an important preprocessing step in gesture recognition: it extracts the important gesture information and removes background interference. Early hand detection methods mainly relied on hand-crafted features to locate the hand region in the picture, such as skin-color-based, shape-based and motion-information-based methods. These methods are easily affected by illumination changes, skin-color differences, background interference, posture changes and self-occlusion between fingers; their detection results are neither ideal nor stable, the computation is heavy and slow, and they struggle to meet the detection requirements of real scenes. With the development of deep learning, deep-learning-based detectors learn hand features from pictures automatically and express hand information with much greater power. When the hand detector detects an operator's gesture, the gesture must be modeled so that the user can see what the gesture looks like. In this embodiment, a gesture model is defined to accept the user's gesture data; when gesture data are imported, the gesture model is created. During creation of the gesture model, each gesture is given corresponding nodes and a serial-number graph; the nodes on the gesture model are the points that need to be triggered and responded to. After the gesture data are imported into the gesture model, the gesture model is imported into the gesture component through a hand component mounted on the gesture component that is responsible for finding hand transforms and assignment. The imported content of the gesture model is: the gesture model is acquired, and gesture nodes are then attached to it. The function of the gesture nodes is to determine, once the gesture model has been imported and acquired, whether finger direction, straightening, bending and so on can be realized well.
Whether finger direction, straightening and bending can be realized well is determined by giving each gesture node a node threshold. For example, gesture node 1 at the palm center is defined by B3-B6 in the horizontal direction and by B6-B1 and B3-B2 in the vertical directions on the two sides; the vector formed by connecting gesture node 2 and gesture node 3 is perpendicular to node 1 and parallel to B3-B6. However, human fingers cannot guarantee precise curvature and angles, so when the node threshold is defined, a vertical interpolation value a is added to it. The size of a is user-defined: the smaller a is, the more precise the threshold definition and the more standard the user's gesture must be to be recognized; the larger a is, the looser the threshold definition, so that fuzzy gestures can also be recognized and the user only needs to stay within the threshold range rather than be very precise.
Likewise, the gesture model provides the information, characteristics and movement pattern of the bound hand object; during operation, objects can be picked up and rays can be emitted through the gesture model. The gesture model provides as much information about the hand as possible, but it cannot determine every attribute in every frame, and all attributes carry some expected error. For example, when the hand suddenly clenches into a fist, the index finger may suddenly become unavailable and the finger list may suddenly come back empty. The hand detector therefore needs to detect such conditions while the user is operating, throw the abnormal condition in time and stop the operation, to prevent the load from putting pressure on memory.
Compared with the prior art, in the gesture application recognition system based on the VR walking platform provided by this embodiment, the gesture segmentation module 20 comprises the removal module 21 and the mining module 22: the skin color of the collected video image is extracted using the color space and Gaussian-model-based skin-color modeling, motion information is analyzed through image difference operations, and the skin-color-like background in the video image is removed; the gesture information is then mined by the hand detector from the video image with the skin-color-like background removed, eliminating background interference. By introducing gestures into the VR walking platform, the system provided by this embodiment enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Preferably, please refer to fig. 9, fig. 9 is a functional module schematic diagram of a first embodiment of the gesture model building module shown in fig. 6, in this embodiment, the gesture model building module 30 includes a region dividing unit 31 and a recognition unit 32, where the region dividing unit 31 is configured to perform region division according to a region of an object in front of a binocular camera with the binocular camera as a reference; and the identification unit 32 is used for defining the focusing point of the binocular camera as a direct-view demarcation point, determining the moving direction of the object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of the moving object in the front area of the binocular camera.
The region dividing unit 31 performs region identification: with the binocular camera as the reference, it divides the area in front of the binocular camera into regions according to where objects appear; this division of the recognition region is the preliminary stage of dynamic gesture recognition.
The recognition unit 32, after the area in front of the binocular camera has been divided, defines the focus point of the binocular camera as the direct-view demarcation point, determines the moving direction of the object with this demarcation point as the judgment reference, and identifies the original position and target position of the moving object in the area in front of the binocular camera; the movement of the object is relative to its original position, from the original position to the target position, and the moving object is the object at the camera's focus point.
Compared with the prior art, in the gesture application recognition system based on the VR walking platform provided by this embodiment, the gesture model construction module 30 comprises the region dividing unit 31 and the recognition unit 32: with the binocular camera as the reference, the area in front of the binocular camera is divided into regions; the focus point of the binocular camera is defined as the direct-view demarcation point, the moving direction of the object is determined with this demarcation point as the judgment reference, and the original position and target position of the moving object in the area in front of the binocular camera are identified. By introducing gestures into the VR walking platform, the system provided by this embodiment enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
Further, referring to fig. 10, fig. 10 is a functional module schematic diagram of a second embodiment of the gesture model building module shown in fig. 6, on the basis of the first embodiment, the gesture model building module 30 includes a creating unit 33, an analyzing unit 34, and a building unit 35, where the creating unit 33 is configured to create a training gesture sequence, pre-process the created training gesture sequence, extract gesture features of the created training gesture sequence, and create a training template library; the analysis unit 34 is configured to detect a gesture sequence to be detected, preprocess the detected gesture sequence to be detected, extract gesture features of the detected gesture sequence to be detected, and perform DTW matching analysis with the established training template library; and the construction unit 35 is used for constructing a gesture model according to an identification result obtained after the DTW matching analysis.
The creating unit 33 creates a training gesture sequence, preprocesses the created training gesture sequence, extracts gesture features of the created training gesture sequence, and creates a training template library of corresponding gestures.
The analysis unit 34 detects a gesture sequence to be detected, preprocesses the detected gesture sequence to be detected, extracts gesture features of the detected gesture sequence to be detected, compares the gesture features with corresponding gestures established in the training template library, and performs DTW (Dynamic Time Warping) matching analysis according to a preset gesture feature extraction rule to obtain a DTW matching analysis result. The DTW matching analysis realizes the matching of the gesture sequence and the training template library by using a DTW algorithm.
The construction unit 35 constructs a corresponding gesture model according to the matching analysis result of the gesture sequence and the training template library realized by the DTW algorithm.
Compared with the prior art, the gesture application recognition system based on the VR walking platform provided by this embodiment uses the creating unit 33, the analysis unit 34 and the construction unit 35 to create a training gesture sequence, preprocess the created training gesture sequence, extract its gesture features and establish a training template library; detect a gesture sequence to be detected, preprocess it, extract its gesture features and perform DTW matching analysis against the established training template library; and construct a gesture model according to the recognition result obtained from the DTW matching analysis. By introducing gestures into the VR walking platform, the system provided by this embodiment enriches the platform's interaction modes and improves the user experience; it enriches the VR experience modes, achieves high gesture-interaction precision and accuracy, and gives the user a strong sense of immersion.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A gesture application recognition method based on a VR walking platform is characterized by comprising the following steps:
receiving a video image signal collected by a binocular camera;
performing gesture segmentation on the received video image signals collected by the binocular camera;
capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model;
applying the constructed gesture model to a virtual scene generated by VR glasses;
the steps of capturing and recognizing the segmented gestures by using a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures, and constructing a gesture model comprise:
taking the binocular camera as a reference, and carrying out region division according to a region of an object in front of the binocular camera;
defining the focusing point of the binocular camera as a direct-view demarcation point, determining the moving direction of the object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of the moving object in the front area of the binocular camera;
creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence, and establishing a training template library;
detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture characteristics of the detected gesture sequence to be detected, and performing DTW matching analysis on the gesture characteristics and an established training template library;
constructing a gesture model according to a recognition result obtained after DTW matching analysis;
the method comprises the following steps of detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture characteristics of the detected gesture sequence to be detected, and performing DTW matching analysis with an established training template library:
detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture features of the detected gesture sequence to be detected, comparing the gesture features with corresponding gestures established in a training template library, and performing DTW matching analysis according to a preset gesture feature extraction rule to obtain a DTW matching analysis result; and the DTW matching analysis realizes the matching of the gesture sequence and the training template library by using a DTW algorithm.
2. The VR walking platform based gesture application recognition method of claim 1, wherein the step of receiving a video image signal collected by a binocular camera includes:
acquiring left and right visual images of gesture actions of an operator by using a binocular camera;
and converting left and right visual images acquired by the binocular camera into depth images through a stereoscopic vision algorithm.
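As a purely illustrative sketch of the stereoscopic vision step in claim 2, the snippet below uses OpenCV's semi-global block matching to turn a rectified left/right pair into a depth map. The focal length, baseline, and matcher parameters are assumed values, not calibration data from the claimed system.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    """Estimate a depth map (in metres) from rectified left/right
    grayscale images; focal_px and baseline_m are illustrative."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # must be a multiple of 16
        blockSize=5,
    )
    # OpenCV returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan         # mask invalid matches
    depth = focal_px * baseline_m / disparity  # Z = f * B / d
    return depth
```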
3. The VR walking platform based gesture application recognition method of claim 1, wherein the step of performing gesture segmentation on the received video image signals collected by the binocular camera comprises:
extracting skin color from the collected video image by utilizing a color space and Gaussian-model-based skin color modeling, analyzing motion information through an image difference operation, and removing the skin-color-like background from the video image;
and mining, through a hand detector, the gesture information in the video image from which the skin-color-like background has been removed, so as to eliminate background interference.
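The skin-color extraction and motion analysis of claim 3 can be pictured with the following sketch. It assumes a single Gaussian skin-color model in the CrCb plane and a simple frame difference; the mean, covariance, and thresholds are invented for illustration, and the hand detector itself is not shown.

```python
import cv2
import numpy as np

# Assumed Gaussian skin-color model in the CrCb plane (illustrative values).
SKIN_MEAN = np.array([150.0, 115.0])                  # [Cr, Cb]
SKIN_COV_INV = np.linalg.inv(np.array([[60.0, 10.0],
                                       [10.0, 40.0]]))

def skin_mask(frame_bgr, max_mahalanobis=2.5):
    """Mark pixels whose CrCb value lies close to the Gaussian skin model."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    diff = ycrcb[:, :, 1:3] - SKIN_MEAN
    # Squared Mahalanobis distance of every pixel to the skin-color mean.
    dist2 = np.einsum('...i,ij,...j->...', diff, SKIN_COV_INV, diff)
    return (dist2 < max_mahalanobis ** 2).astype(np.uint8) * 255

def moving_skin_regions(prev_bgr, curr_bgr, motion_thresh=20):
    """Keep only skin-colored pixels that also moved between frames,
    suppressing a static skin-color-like background."""
    motion = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    _, motion_mask = cv2.threshold(motion, motion_thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(skin_mask(curr_bgr), motion_mask)
```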
4. A gesture application recognition system based on a VR walking platform, characterized by comprising:
the receiving module (10) is used for receiving the video image signals collected by the binocular camera;
the gesture segmentation module (20) is used for performing gesture segmentation on the received video image signals collected by the binocular camera;
the gesture model building module (30) is used for capturing and recognizing the segmented gestures by utilizing a dynamic gesture recognition algorithm, analyzing and tracking the recognized gestures and building a gesture model;
an application module (40) for applying the constructed gesture model to a virtual scene generated by VR glasses;
the gesture model building module (30) comprises:
the area division unit (31) is used for carrying out area division according to the area of an object in front of the binocular camera by taking the binocular camera as a reference;
the identification unit (32) is used for defining the focusing point of the binocular camera as a direct-view demarcation point, determining the moving direction of the object by taking the direct-view demarcation point as a judgment reference, and identifying the original position and the target position of the moving object in the front area of the binocular camera;
the creating unit (33) is used for creating a training gesture sequence, preprocessing the created training gesture sequence, extracting gesture features of the created training gesture sequence and establishing a training template library;
the analysis unit (34) is used for detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture features of the detected gesture sequence to be detected, and performing DTW matching analysis on the extracted gesture features and an established training template library;
the construction unit (35) is used for constructing a gesture model according to the recognition result obtained after the DTW matching analysis;
the analysis unit (34) is specifically used for detecting a gesture sequence to be detected, preprocessing the detected gesture sequence to be detected, extracting gesture features of the detected gesture sequence to be detected, comparing the extracted gesture features with the corresponding gestures established in the training template library, and performing DTW matching analysis according to a preset gesture feature extraction rule to obtain a DTW matching analysis result; the DTW matching analysis matches the gesture sequence against the training template library by using a dynamic time warping (DTW) algorithm.
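As an aside from the claim language, the behaviour of the region division unit (31) and the identification unit (32) can be illustrated as follows. The 3x3 grid, the use of the image centre as the direct-view demarcation point, and every function name are assumptions made only for this sketch.

```python
import numpy as np

def region_of(point, frame_shape, grid=(3, 3)):
    """Map an (x, y) image point to a cell of a grid laid over the
    binocular camera's field of view (an assumed 3x3 division)."""
    h, w = frame_shape[:2]
    col = min(int(point[0] / w * grid[1]), grid[1] - 1)
    row = min(int(point[1] / h * grid[0]), grid[0] - 1)
    return row, col

def moving_direction(original_pos, target_pos, demarcation_point):
    """Judge the movement of an object relative to the direct-view
    demarcation point (assumed to be the projected focusing point)."""
    start = np.linalg.norm(np.subtract(original_pos, demarcation_point))
    end = np.linalg.norm(np.subtract(target_pos, demarcation_point))
    if end < start:
        return "approaching the demarcation point"
    if end > start:
        return "moving away from the demarcation point"
    return "moving around the demarcation point"
```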
5. The VR walking platform-based gesture application recognition system of claim 4, wherein
the receiving module (10) comprises:
the acquisition unit (11) is used for acquiring left and right visual images of gesture actions of an operator by adopting a binocular camera;
and the generating unit (12) is used for converting the left and right visual images collected by the binocular camera into depth images through a stereoscopic vision algorithm.
6. The VR walking platform-based gesture application recognition system of claim 4, wherein
the gesture segmentation module (20) comprises:
the removing module (21) is used for extracting the skin color of the collected video image by utilizing the color space and the skin color modeling based on the Gaussian model, analyzing the motion information through image difference operation and removing the similar skin color background in the video image;
and the mining module (22) is used for mining the gesture information in the video image with the skin-color-like background removed through a hand detector, and eliminating background interference.
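To show how the claimed modules might fit together in practice, here is a minimal, hypothetical wiring of the receiving, gesture segmentation, gesture model building, and application modules. It reuses the helper functions from the sketches above; the class names, the `handle_gesture` interface of the VR scene, and the overall flow are assumptions, not the patented implementation.

```python
class ReceivingModule:                     # module (10)
    def receive(self, left_img, right_img):
        """Convert the binocular pair into a depth image (cf. claim 5)."""
        return depth_from_stereo(left_img, right_img)

class GestureSegmentationModule:           # module (20)
    def segment(self, prev_frame, curr_frame):
        """Isolate the moving hand region (cf. claim 6)."""
        return moving_skin_regions(prev_frame, curr_frame)

class GestureModelBuildingModule:          # module (30)
    def __init__(self, template_library):
        self.template_library = template_library

    def build(self, feature_sequence):
        """DTW-match the detected sequence and return the recognized gesture."""
        label, _ = match_gesture(feature_sequence, self.template_library)
        return label

class ApplicationModule:                   # module (40)
    def apply(self, gesture_label, vr_scene):
        """Forward the recognized gesture to the scene generated by the VR glasses."""
        vr_scene.handle_gesture(gesture_label)   # assumed scene interface
```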
CN202110015663.0A (priority date 2021-01-06, filing date 2021-01-06): Gesture application identification method and system based on VR walking platform. Legal status: Active. Granted publication: CN112667088B.

Priority Applications (1)

Application Number: CN202110015663.0A; Priority Date: 2021-01-06; Filing Date: 2021-01-06; Title: Gesture application identification method and system based on VR walking platform

Publications (2)

Publication Number: CN112667088A; Publication Date: 2021-04-16
Publication Number: CN112667088B; Publication Date: 2023-03-24

Family

Family ID: 75413215

Family Applications (1)

Application Number: CN202110015663.0A; Title: Gesture application identification method and system based on VR walking platform; Priority Date: 2021-01-06; Filing Date: 2021-01-06; Status: Active; Granted publication: CN112667088B

Country Status (1)

Country: CN; Publication: CN112667088B

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number; Priority date; Publication date; Assignee; Title
CN107958218A *; 2017-11-22; 2018-04-24; 南京邮电大学; A real-time gesture recognition method
CN108256421A *; 2017-12-05; 2018-07-06; 盈盛资讯科技有限公司; A real-time recognition method, system and device for dynamic gesture sequences
CN111435429B *; 2019-01-15; 2024-03-01; 北京伟景智能科技有限公司; Gesture recognition method and system based on binocular stereo data dynamic cognition
CN111639531A *; 2020-04-24; 2020-09-08; 中国人民解放军总医院; Medical model interaction visualization method and system based on gesture recognition

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant