CN116627262B - VR interactive device control method and system based on data processing - Google Patents


Publication number
CN116627262B
CN116627262B (application CN202310923037.0A)
Authority
CN
China
Prior art keywords
moving image
motion
image
user
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310923037.0A
Other languages
Chinese (zh)
Other versions
CN116627262A (en)
Inventor
范文漪
Current Assignee
Hebei University
Original Assignee
Hebei University
Priority date
Filing date
Publication date
Application filed by Hebei University
Priority to CN202310923037.0A
Publication of CN116627262A
Application granted
Publication of CN116627262B
Legal status: Active

Classifications

    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a VR interactive device control method and system based on data processing, belonging to the technical field of VR device management. The method comprises the following steps: S1, generating a motion gesture of the user from a moving image; S2, generating a three-dimensional gesture of the user from hand pose data; and S3, transmitting the motion gesture and the three-dimensional gesture of the user to the VR interaction device. The method collects hand pose data and corrects the hand Gaussian mixture model using the pose data at adjacent moments, so that the constructed model better fits the current user; gestures are then determined by constructing motion equations, accurately reflecting the user's intent. The generated pose and gesture are transmitted to the VR interaction device, facilitating human-machine interaction and improving the user experience.

Description

VR interactive device control method and system based on data processing
Technical Field
The invention belongs to the technical field of VR equipment management, and particularly relates to a VR interactive equipment control method and system based on data processing.
Background
VR devices are products that enable human-machine interaction with the help of computers and sensors by combining multiple technologies: simulation, computer graphics, human-machine interfaces, multimedia, sensing, and networking. At present, the accuracy of user pose and gesture recognition during interaction control still needs improvement; misoperation occurs easily, which hampers the input of human-machine interaction signals.
Disclosure of Invention
To solve the above problems, the invention provides a VR interactive device control method and system based on data processing.
The technical scheme of the invention is as follows: the VR interactive device control method based on data processing comprises the following steps:
s1, acquiring a moving image of a user through a camera of VR interaction equipment, and generating a moving gesture of the user according to the moving image;
s2, acquiring hand pose data of a user through a handle of the VR interaction device, and generating a three-dimensional gesture of the user according to the hand pose data;
s3, transmitting the motion gesture and the three-dimensional gesture of the user to VR interaction equipment;
s1 comprises the following substeps:
s11, acquiring a moving image of a user through a camera of the VR interaction device, cutting the moving image, determining a moving area of the moving image, and generating a local moving image;
s12, extracting a limb movement area of a user in the local moving image to generate a limb moving image;
s13, extracting edge contours of the limb moving images;
s14, generating a motion gesture of the user according to the edge outline of the limb motion image.
The beneficial effects of the invention are as follows:
(1) The VR interactive device control method crops the user's moving image step by step, accurately determines the limb activity area, and generates the edge contour from it; the motion gesture of the user can then be obtained from the edge contour. The whole process reduces interference from the user's background and improves the accuracy of gesture recognition;
(2) The VR interactive device control method collects hand pose data and corrects the hand Gaussian mixture model using the pose data at adjacent moments, so that the constructed model better fits the current user; gestures are determined by constructing motion equations, accurately reflecting the user's intent. The generated pose and gesture are transmitted to the VR interaction device, facilitating human-machine interaction and improving the user experience.
Further, in S11, the specific method for cropping the moving image is as follows: set a pixel characteristic threshold and slide a window over the moving image to obtain a plurality of image blocks; calculate the pixel characteristic value of each image block, eliminate the image blocks whose pixel characteristic value is smaller than the threshold, and take the remaining image blocks as the motion area of the moving image to generate the local moving image.
further, pixel eigenvalues of the image blockσThe calculation formula of (2) is as follows:the method comprises the steps of carrying out a first treatment on the surface of the In the method, in the process of the invention,V max represents the maximum luminance value of the moving image,V min representing transportThe minimum luminance value of the moving image,V k representing the first of the image blockskThe luminance value of the individual pixel points,Kthe number of pixels representing an image block,Srepresenting the area of the image block.
Further, in S12, the specific method for generating the limb moving image is as follows: cluster the local moving image with the K-means clustering algorithm to obtain the similarity between each pixel point and the initial cluster center; generate the maximum connected region from all pixel points whose similarity is less than 0.5; and take the difference between the motion area of the moving image and the maximum connected region as the limb activity area to generate the limb moving image.
Further, in S13, the specific method for extracting the edge contour of the limb moving image is as follows: and extracting the local outline of each image block in the limb moving image, and connecting the local outlines of all the image blocks to be used as the edge outline of the limb moving image.
Further, S2 comprises the following sub-steps:
s21, acquiring hand pose data of a user at the current moment through a handle of VR interactive equipment, and constructing a hand Gaussian mixture model at the current moment;
s22, carrying out iterative correction on the hand Gaussian mixture model according to the hand pose data at the previous moment and the hand pose data at the next moment;
s23, respectively determining an articulation equation and a palm motion equation at the current moment by using the iteratively corrected hand Gaussian mixture model;
s24, determining the three-dimensional gesture of the user according to the joint motion equation and the palm motion equation at the current moment.
Further, in S22, the expression of the iteratively corrected hand Gaussian mixture model F (the formula itself appears only as an image in the source) is defined in terms of: a_{n+1}, the palm motion acceleration at the next moment; a_{n-1}, the palm motion acceleration at the previous moment; b_{n+1}, the joint motion acceleration at the next moment; b_{n-1}, the joint motion acceleration at the previous moment; f(·), the probability density function; y_t, the hand pose data set at the current moment; μ_n, the mean of the hand Gaussian mixture model at the current moment; and δ_n, the covariance of the hand Gaussian mixture model at the current moment.
The hand pose data comprise pose data of the palm and pose data of the joints; accordingly, the mean of the hand Gaussian mixture model refers to the mean between the palm sub-model and the joint sub-model, and the covariance of the hand Gaussian mixture model refers to the covariance between the two sub-models.
Further, in S23, the palm motion equation A at the current moment (the formula itself appears only as an image in the source) is defined in terms of: ε, a constant; c_{n+1}, the palm movement speed at the next moment; c_n, the palm movement speed at the current moment; c_{n-1}, the palm movement speed at the previous moment; t_1, the duration between the next moment and the current moment; t_2, the duration between the current moment and the previous moment; and F, the iteratively corrected hand Gaussian mixture model.
Further, in S23, the joint motion equation B at the current moment (the formula itself appears only as an image in the source) is defined in terms of: ε, a constant; d_{n+1}, the articulation speed at the next moment; d_n, the articulation speed at the current moment; d_{n-1}, the articulation speed at the previous moment; t_1, the duration between the next moment and the current moment; t_2, the duration between the current moment and the previous moment; and F, the iteratively corrected hand Gaussian mixture model.
Based on the method, the invention also provides a VR interactive device control system based on data processing, which comprises a motion gesture generating unit, a three-dimensional gesture generating unit and a terminal transmission unit;
the motion gesture generating unit is used for acquiring a motion image of the user through a camera of the VR interaction device and generating a motion gesture of the user according to the motion image;
the three-dimensional gesture generating unit is used for collecting hand pose data of a user through a handle of the VR interaction device and generating three-dimensional gestures of the user according to the hand pose data;
the terminal transmission unit is used for transmitting the motion gesture and the three-dimensional gesture of the user to the VR interaction device.
The beneficial effects of the invention are as follows: the VR interactive device control system can generate accurate gestures and gestures for the user, is convenient for realizing man-machine interaction, and improves user experience.
Drawings
FIG. 1 is a flow chart of a VR interactive device control method based on data processing;
fig. 2 is a block diagram of a VR interactive device control system based on data processing.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a VR interactive device control method based on data processing, which includes the following steps:
s1, acquiring a moving image of a user through a camera of VR interaction equipment, and generating a moving gesture of the user according to the moving image;
s2, acquiring hand pose data of a user through a handle of the VR interaction device, and generating a three-dimensional gesture of the user according to the hand pose data;
s3, transmitting the motion gesture and the three-dimensional gesture of the user to VR interaction equipment;
s1 comprises the following substeps:
s11, acquiring a moving image of a user through a camera of the VR interaction device, cutting the moving image, determining a moving area of the moving image, and generating a local moving image;
s12, extracting a limb movement area of a user in the local moving image to generate a limb moving image;
s13, extracting edge contours of the limb moving images;
s14, generating a motion gesture of the user according to the edge outline of the limb motion image.
In the invention, a motion video of the user is not collected and then split into frames for processing; instead, the moving image of the user is captured and processed directly, which simplifies the generation of the three-dimensional motion gesture. One key to ensuring the accuracy of the motion gesture is to preprocess the moving image so that it contains as much gesture data as possible while reducing interference from the background or irrelevant data.
The moving image collected by the camera contains interference only from the user's background, so the moving image is first cropped with a cropping frame to generate the local moving image; this keeps background interference out of the subsequent processing steps and improves the accuracy of the motion gesture. Because the local moving image is cut directly with the cropping frame, some background interference may remain, so the limb activity area must still be extracted from the local moving image. The limb activity area contains only the user's limb contour, making it more accurate for generating the three-dimensional gesture. Since the interference noise of the limb moving image is small, the edge contour can be extracted from it directly.
In the embodiment of the present invention, in S11, the specific method for cropping the moving image is as follows: set a pixel characteristic threshold and slide a window over the moving image to obtain a plurality of image blocks; calculate the pixel characteristic value of each image block, eliminate the image blocks whose pixel characteristic value is smaller than the threshold, and take the remaining image blocks as the motion area of the moving image to generate the local moving image.
in an embodiment of the present invention, pixel characteristic values of an image blockσIs of the meter(s)The calculation formula is as follows:the method comprises the steps of carrying out a first treatment on the surface of the In the method, in the process of the invention,V max represents the maximum luminance value of the moving image,V min represents the minimum luminance value of the moving image,V k representing the first of the image blockskThe luminance value of the individual pixel points,Kthe number of pixels representing an image block,Srepresenting the area of the image block.
The average of the luminance values of all pixel points in the moving image is taken as the pixel characteristic threshold. Sliding a fixed-size window over the moving image divides it into several image blocks of equal size; each block serves as a sample, with pixel luminance as the sample feature, and its characteristic value is compared against the threshold. Image blocks with abnormal pixel characteristic values belong to regions of background interference and are therefore removed.
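As a concrete illustration of this thresholding step, the following Python sketch tiles a frame into fixed-size blocks and keeps only the blocks whose mean luminance reaches the frame-wide mean. It is a simplified stand-in: the patent's pixel characteristic value σ also involves the block's luminance extremes and area, and its exact formula appears only as an image in the source, so mean luminance is used here as a hypothetical characteristic; the function name is illustrative.

```python
def crop_motion_region(frame, block=2):
    """Return (row, col) offsets of retained image blocks.

    frame: 2-D list of luminance values; block: side length of the
    square sliding window (non-overlapping stride, for simplicity).
    """
    h, w = len(frame), len(frame[0])
    all_pixels = [v for row in frame for v in row]
    # Frame-wide mean luminance plays the role of the pixel
    # characteristic threshold described in the patent.
    threshold = sum(all_pixels) / len(all_pixels)

    kept = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            vals = [frame[r + i][c + j]
                    for i in range(block) for j in range(block)]
            # Blocks below the threshold are treated as background
            # interference and eliminated; the rest form the motion area.
            if sum(vals) / len(vals) >= threshold:
                kept.append((r, c))
    return kept
```

The retained offsets together describe the local moving image; in a real pipeline the window would typically overlap and the characteristic would follow the patent's formula.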
In the embodiment of the present invention, in S12, the specific method for generating the limb moving image is as follows: cluster the local moving image with the K-means clustering algorithm to obtain the similarity between each pixel point and the initial cluster center; generate the maximum connected region from all pixel points whose similarity is less than 0.5; and take the difference between the motion area of the moving image and the maximum connected region as the limb activity area to generate the limb moving image.
The K-means clustering algorithm divides the local moving image into a high-pixel class and a low-pixel class; the pixel points whose similarity is less than 0.5 form the low-pixel class and are taken as the maximum connected region.
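A minimal sketch of this segmentation step, under the assumption that the two-class K-means reduces to clustering scalar pixel values and that the "maximum connected region" is the largest 4-connected component of the low-pixel class. Both function names and the boolean-mask representation are illustrative, not from the patent.

```python
from collections import deque

def low_class_mask(image, iters=10):
    """Two-means clustering of pixel values into a low and a high class
    (a 1-D stand-in for the K-means step in S12); returns a boolean
    mask of low-class pixels."""
    values = [v for row in image for v in row]
    lo, hi = min(values), max(values)
    for _ in range(iters):
        lows = [v for v in values if abs(v - lo) <= abs(v - hi)]
        highs = [v for v in values if abs(v - lo) > abs(v - hi)]
        if lows:
            lo = sum(lows) / len(lows)
        if highs:
            hi = sum(highs) / len(highs)
    return [[abs(v - lo) <= abs(v - hi) for v in row] for row in image]

def largest_connected_region(mask):
    """Largest 4-connected region of True pixels, as a set of (row, col)."""
    h, w = len(mask), len(mask[0])
    seen, best = set(), set()
    for sr in range(h):
        for sc in range(w):
            if mask[sr][sc] and (sr, sc) not in seen:
                region = {(sr, sc)}
                seen.add((sr, sc))
                queue = deque([(sr, sc)])
                while queue:  # breadth-first flood fill
                    r, c = queue.popleft()
                    for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr][nc] and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            region.add((nr, nc))
                            queue.append((nr, nc))
                if len(region) > len(best):
                    best = region
    return best
```

Subtracting the returned region from the motion area would then leave the limb activity area described in S12.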
In the embodiment of the present invention, in S13, a specific method for extracting an edge contour of a limb moving image is as follows: and extracting the local outline of each image block in the limb moving image, and connecting the local outlines of all the image blocks to be used as the edge outline of the limb moving image.
Cropping the moving image into the local moving image removes only redundant background and does not destroy the integrity of the user's contour in the local image. Refining the local moving image into the limb moving image determines the limb activity area, so the limb moving image contains all image blocks that can form the edge contour; extracting and connecting the local contours of all image blocks therefore yields the complete edge contour of the limb moving image. The contours of the image blocks can be extracted with the hollowed-out interior point method.
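The hollowed-out interior point method mentioned above can be sketched as follows: a foreground pixel is an interior point when all four of its 4-neighbours are also foreground, and hollowing out those interior points leaves only the contour. The function name and the binary-mask representation are assumptions for illustration.

```python
def hollow_interior(mask):
    """Edge contour by the hollowed-out interior point method:
    the contour is the foreground minus its interior points."""
    h, w = len(mask), len(mask[0])

    def fg(r, c):
        # Out-of-bounds neighbours count as background, so image
        # borders are never interior.
        return 0 <= r < h and 0 <= c < w and bool(mask[r][c])

    contour = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                interior = all(fg(r + dr, c + dc)
                               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if not interior:
                    contour[r][c] = 1
    return contour
```

Running this per image block and joining the results corresponds to connecting the local contours into the overall edge contour.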
In an embodiment of the present invention, S2 comprises the following sub-steps:
s21, acquiring hand pose data of a user at the current moment through a handle of VR interactive equipment, and constructing a hand Gaussian mixture model at the current moment; gaussian mixture model: the object is precisely quantized by using a Gaussian probability density function (normal distribution curve), which is a model formed by decomposing the object into a plurality of Gaussian probability density functions (normal distribution curve).
S22, carrying out iterative correction on the hand Gaussian mixture model according to the hand pose data at the previous moment and the hand pose data at the next moment;
s23, respectively determining an articulation equation and a palm motion equation at the current moment by using the iteratively corrected hand Gaussian mixture model;
s24, determining the three-dimensional gesture of the user according to the joint motion equation and the palm motion equation at the current moment.
The hand Gaussian mixture model is then iteratively corrected using the hand pose data at the previous moment and at the next moment, so that the hand pose data can be quantified accurately and the model stays as close to the real gesture as possible. A three-dimensional gesture mainly comprises the palm, the fingers, and the finger joints; the finger joints often determine the orientation of the hand in the three-dimensional gesture and the specific operation to be completed. A palm motion equation and a joint motion equation are therefore obtained, and the specific gesture is then derived from these motion equations.
In the embodiment of the present invention, in S22, the expression of the iteratively corrected hand Gaussian mixture model F (the formula itself appears only as an image in the source) is defined in terms of: a_{n+1}, the palm motion acceleration at the next moment; a_{n-1}, the palm motion acceleration at the previous moment; b_{n+1}, the joint motion acceleration at the next moment; b_{n-1}, the joint motion acceleration at the previous moment; f(·), the probability density function; y_t, the hand pose data set at the current moment; μ_n, the mean of the hand Gaussian mixture model at the current moment; and δ_n, the covariance of the hand Gaussian mixture model at the current moment.
The hand pose data comprise pose data of the palm and pose data of the joints; accordingly, the mean of the hand Gaussian mixture model refers to the mean between the palm sub-model and the joint sub-model, and the covariance of the hand Gaussian mixture model refers to the covariance between the two sub-models.
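As background for the hand model, a Gaussian mixture density is a weighted sum of several Gaussian probability density functions, e.g. one sub-model for the palm and one for the joints. The patent's corrected model F is given only as an image, so the sketch below shows just the generic univariate mixture density; the function names and the univariate simplification are assumptions.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian probability density f(x; mu, sigma)."""
    return (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def mixture_pdf(x, components):
    """Mixture density: weighted sum of Gaussians, weights summing to 1.

    components: list of (weight, mu, sigma) tuples, e.g. one component
    for the palm sub-model and one for the joint sub-model.
    """
    return sum(w * gaussian_pdf(x, mu, s) for w, mu, s in components)
```

In the patent's setting, the parameters (means and covariances) would be re-estimated from the pose samples at adjacent moments rather than fixed.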
In the embodiment of the present invention, in S23, the palm motion equation A at the current moment (the formula itself appears only as an image in the source) is defined in terms of: ε, a constant; c_{n+1}, the palm movement speed at the next moment; c_n, the palm movement speed at the current moment; c_{n-1}, the palm movement speed at the previous moment; t_1, the duration between the next moment and the current moment; t_2, the duration between the current moment and the previous moment; and F, the iteratively corrected hand Gaussian mixture model.
In the embodiment of the present invention, in S23, the joint motion equation B at the current moment (the formula itself appears only as an image in the source) is defined in terms of: ε, a constant; d_{n+1}, the articulation speed at the next moment; d_n, the articulation speed at the current moment; d_{n-1}, the articulation speed at the previous moment; t_1, the duration between the next moment and the current moment; t_2, the duration between the current moment and the previous moment; and F, the iteratively corrected hand Gaussian mixture model.
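The patent's motion equations A and B are likewise given only as images. Since both are expressed through speeds at three adjacent moments and the durations t_1 and t_2, one plausible ingredient is a finite-difference acceleration estimate, sketched here purely as an illustration; the function name and the averaging scheme are assumptions, not the patent's formula.

```python
def finite_diff_acceleration(c_prev, c_now, c_next, t1, t2):
    """Acceleration estimate from three speed samples:
    forward difference over t1 (next minus current) and backward
    difference over t2 (current minus previous), averaged."""
    forward = (c_next - c_now) / t1
    backward = (c_now - c_prev) / t2
    return (forward + backward) / 2
```

Applied to the palm speeds c_{n-1}, c_n, c_{n+1} this yields a palm acceleration term, and to the articulation speeds d_{n-1}, d_n, d_{n+1} a joint acceleration term, of the kind the corrected model F is built from.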
Based on the method, the invention also provides a VR interactive device control system based on data processing, as shown in figure 2, comprising a motion gesture generating unit, a three-dimensional gesture generating unit and a terminal transmission unit;
the motion gesture generating unit is used for acquiring a motion image of the user through a camera of the VR interaction device and generating a motion gesture of the user according to the motion image;
the three-dimensional gesture generating unit is used for collecting hand pose data of a user through a handle of the VR interaction device and generating three-dimensional gestures of the user according to the hand pose data;
the terminal transmission unit is used for transmitting the motion gesture and the three-dimensional gesture of the user to the VR interaction device.
Those of ordinary skill in the art will recognize that the embodiments described herein serve to aid the reader in understanding the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (7)

1. The VR interactive device control method based on data processing is characterized by comprising the following steps:
s1, acquiring a moving image of a user through a camera of VR interaction equipment, and generating a moving gesture of the user according to the moving image;
s2, acquiring hand pose data of a user through a handle of the VR interaction device, and generating a three-dimensional gesture of the user according to the hand pose data;
s3, transmitting the motion gesture and the three-dimensional gesture of the user to VR interaction equipment;
the step S1 comprises the following substeps:
s11, acquiring a moving image of a user through a camera of the VR interaction device, cutting the moving image, determining a moving area of the moving image, and generating a local moving image;
s12, extracting a limb movement area of a user in the local moving image to generate a limb moving image;
s13, extracting edge contours of the limb moving images;
s14, generating a motion gesture of a user according to the edge contour of the limb motion image;
the step S2 comprises the following substeps:
s21, acquiring hand pose data of a user at the current moment through a handle of VR interactive equipment, and constructing a hand Gaussian mixture model at the current moment;
s22, carrying out iterative correction on the hand Gaussian mixture model according to the hand pose data at the previous moment and the hand pose data at the next moment;
s23, respectively determining an articulation equation and a palm motion equation at the current moment by using the iteratively corrected hand Gaussian mixture model;
s24, determining a three-dimensional gesture of a user according to an articulation equation and a palm motion equation at the current moment;
in the step S22, the expression of the iteratively corrected hand Gaussian mixture model F (the formula itself appears only as an image in the source) is defined in terms of:
a_{n+1}, the palm motion acceleration at the next moment; a_{n-1}, the palm motion acceleration at the previous moment; b_{n+1}, the joint motion acceleration at the next moment; b_{n-1}, the joint motion acceleration at the previous moment; f(·), the probability density function; y_t, the hand pose data set at the current moment; μ_n, the mean of the hand Gaussian mixture model at the current moment; and δ_n, the covariance of the hand Gaussian mixture model at the current moment;
the hand pose data comprises pose data of the palm portion and pose data of the joint portion, so that the mean value of the hand Gaussian mixture model refers to the mean value between the sub-Gaussian mixture model of the palm portion and the sub-Gaussian mixture model of the joint portion, and the covariance of the hand Gaussian mixture model refers to the covariance between the sub-Gaussian mixture model of the palm portion and the sub-Gaussian mixture model of the joint portion.
2. The VR interactive device control method based on data processing of claim 1, wherein in S11, the specific method for cropping the moving image is as follows: set a pixel characteristic threshold and slide a window over the moving image to obtain a plurality of image blocks; calculate the pixel characteristic value of each image block, eliminate the image blocks whose pixel characteristic value is smaller than the threshold, and take the remaining image blocks as the motion area of the moving image to generate the local moving image;
wherein the pixel characteristic value σ of the image block is given by a calculation formula (the formula itself appears only as an image in the source) in terms of: V_max, the maximum luminance value of the moving image; V_min, the minimum luminance value of the moving image; V_k, the luminance value of the k-th pixel point in the image block; K, the number of pixel points in the image block; and S, the area of the image block.
3. The VR interactive device control method based on data processing of claim 1, wherein in S12 the limb moving image is generated as follows: the local moving image is clustered with a K-means clustering algorithm to obtain the similarity between each pixel and the initial cluster center; a maximum connected region is generated from all pixels whose similarity is less than 0.5; and the difference between the motion region of the moving image and the maximum connected region is taken as the limb motion region, from which the limb moving image is generated.
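A minimal sketch of claim 3's pipeline: similarity to a cluster center, a maximum connected region over the dissimilar pixels, and the set difference with the motion region. The similarity measure 1/(1+|v−center|) is an assumption; the patent does not specify one here:

```python
import numpy as np
from collections import deque

def largest_connected_region(mask):
    """Largest 4-connected component of a boolean mask, as a boolean mask."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best.sum():
                    best = np.zeros_like(mask, dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best

def limb_region(img, center, motion_mask):
    """Pixels with similarity < 0.5 form candidates for the largest connected
    (non-limb) region; the limb region is the motion region minus it."""
    similarity = 1.0 / (1.0 + np.abs(img - center))  # assumed measure
    non_limb = largest_connected_region(similarity < 0.5)
    return motion_mask & ~non_limb
```

Example: with a 4×4 image whose left half differs strongly from the cluster center, the left half becomes the maximum connected region and the right half survives as the limb region.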
4. The VR interactive device control method based on data processing of claim 1, wherein in S13 the edge contour of the limb moving image is extracted as follows: the local contour of each image block in the limb moving image is extracted, and the local contours of all image blocks are connected to form the edge contour of the limb moving image.
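The per-block contour extraction and connection of claim 4 can be sketched on binary masks. Marking foreground pixels that have a background (or out-of-block) 4-neighbour is a common stand-in for the unspecified local-contour operator:

```python
import numpy as np

def block_contour(mask):
    """Boundary pixels of a binary block: foreground pixels with at least
    one 4-neighbour that is background or outside the block."""
    h, w = mask.shape
    contour = np.zeros_like(mask, dtype=bool)
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if not (0 <= ny < h and 0 <= nx < w) or not mask[ny, nx]:
                        contour[y, x] = True
                        break
    return contour

def edge_contour(mask, block=4):
    """Connect per-block contours by OR-ing each block's local contour into a
    full-image contour map (a simplification of the patent's connection step)."""
    h, w = mask.shape
    out = np.zeros_like(mask, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = block_contour(mask[y:y+block, x:x+block])
    return out
```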
5. The VR interactive device control method based on data processing of claim 1, wherein in S23 the palm motion equation A at the current time is expressed as:
where ε denotes a constant, c_{n+1} the palm movement speed at the next moment, c_n the palm movement speed at the current moment, c_{n-1} the palm movement speed at the previous moment, t_1 the duration between the next moment and the current moment, t_2 the duration between the current moment and the previous moment, and F the iteratively modified hand Gaussian mixture model.
6. The VR interactive device control method based on data processing of claim 1, wherein in S23 the joint motion equation B at the current time is expressed as:
where ε denotes a constant, d_{n+1} the joint movement speed at the next moment, d_n the joint movement speed at the current moment, d_{n-1} the joint movement speed at the previous moment, t_1 the duration between the next moment and the current moment, t_2 the duration between the current moment and the previous moment, and F the iteratively modified hand Gaussian mixture model.
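The expressions for the motion equations A and B are not reproduced in this text (they also involve the constant ε and the corrected mixture model F), so only a generic kinematic stand-in can be sketched: estimate an acceleration from the two most recent speeds over t_2, then extrapolate the next-moment speed over t_1. This is not the claimed formula, just the finite-difference idea underlying it:

```python
def finite_difference_accel(speed_now, speed_prev, t2):
    """Backward-difference acceleration over the previous interval t2."""
    return (speed_now - speed_prev) / t2

def predict_next_speed(speed_now, t1, accel):
    """Extrapolate the next-moment speed: c_{n+1} ~ c_n + accel * t1.
    A stand-in for equations A/B; the patent's ε and F terms are omitted."""
    return speed_now + accel * t1

# Palm speeds 1.0 -> 2.0 over 0.5 s; predict 0.1 s ahead.
accel = finite_difference_accel(2.0, 1.0, 0.5)
next_speed = predict_next_speed(2.0, 0.1, accel)
```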
7. A VR interactive device control system based on data processing, applied to the VR interactive device control method based on data processing of any one of claims 1-6, characterized by comprising a motion gesture generating unit, a three-dimensional gesture generating unit, and a terminal transmitting unit;
the motion gesture generating unit is used for acquiring a motion image of a user through a camera of the VR interactive device and generating a motion gesture of the user according to the motion image;
the three-dimensional gesture generation unit is used for collecting hand pose data of a user through a handle of the VR interaction device and generating a three-dimensional gesture of the user according to the hand pose data;
the terminal transmitting unit is used for transmitting the motion gesture and the three-dimensional gesture of the user to the VR interaction device.
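Claim 7's three-unit decomposition can be sketched as a class skeleton. All class and method names below are hypothetical; the patent names only the units and their data flows:

```python
class MotionGestureUnit:
    """Generates the user's motion gesture from camera moving images."""
    def generate(self, moving_image):
        return {"gesture": "motion", "source": moving_image}

class ThreeDGestureUnit:
    """Generates the user's three-dimensional gesture from handle pose data."""
    def generate(self, hand_pose_data):
        return {"gesture": "3d", "source": hand_pose_data}

class TerminalTransmitUnit:
    """Transmits both gestures to the VR interaction device (stubbed)."""
    def transmit(self, motion_gesture, three_d_gesture):
        return (motion_gesture, three_d_gesture)

motion = MotionGestureUnit().generate("frame-0")
pose = ThreeDGestureUnit().generate("handle-sample-0")
sent = TerminalTransmitUnit().transmit(motion, pose)
```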
CN202310923037.0A 2023-07-26 2023-07-26 VR interactive device control method and system based on data processing Active CN116627262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310923037.0A CN116627262B (en) 2023-07-26 2023-07-26 VR interactive device control method and system based on data processing

Publications (2)

Publication Number Publication Date
CN116627262A (en) 2023-08-22
CN116627262B (en) 2023-10-13


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106346485A (en) * 2016-09-21 2017-01-25 大连理工大学 Non-contact control method of bionic manipulator based on learning of hand motion gestures
CN108318038A (en) * 2018-01-26 2018-07-24 南京航空航天大学 A kind of quaternary number Gaussian particle filtering pose of mobile robot calculation method
CN108628455A (en) * 2018-05-14 2018-10-09 中北大学 A kind of virtual husky picture method for drafting based on touch-screen gesture identification
CN115050095A (en) * 2022-06-06 2022-09-13 浙江工业大学 Human body posture prediction method based on Gaussian process regression and progressive filtering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9213890B2 (en) * 2010-09-17 2015-12-15 Sony Corporation Gesture recognition system for TV control



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant