CN117891339A - Naked hand interaction method and system of desktop VR interaction integrated machine

Naked hand interaction method and system of desktop VR interaction integrated machine

Info

Publication number
CN117891339A
CN117891339A · CN202410008326.2A · CN202410008326A
Authority
CN
China
Prior art keywords
data set
interaction
bare hand
recognition model
gesture recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410008326.2A
Other languages
Chinese (zh)
Inventor
王凯 (Wang Kai)
马朝威 (Ma Chaowei)
刘飞 (Liu Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Future 3d Tech Co ltd
Original Assignee
Shenzhen Future 3d Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Future 3d Tech Co ltd
Priority to CN202410008326.2A
Publication of CN117891339A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a bare hand interaction method and system of a desktop VR interaction integrated machine. The method comprises: acquiring a data set of bare hand gesture images, constructing a bare hand gesture recognition model using a convolutional neural network algorithm, and training the model on samples drawn by a batch gradient descent method; verifying the bare hand gesture recognition model during training with a validation data set by calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal model; and recognizing input gesture images with the trained model and mapping the recognition results to corresponding interaction instructions that control objects in the VR scene. The method improves the accuracy of bare hand gesture recognition and addresses the high cost, cumbersome wearing, and poor user experience of existing gesture interaction schemes for desktop VR interaction integrated machines.

Description

Naked hand interaction method and system of desktop VR interaction integrated machine
Technical Field
The invention relates to the technical field of desktop interaction integrated machines, and in particular to a bare hand interaction method and system of a desktop VR interaction integrated machine.
Background
The desktop VR interaction integrated machine combines a display, a computer and a VR head-mounted display, and can provide immersive virtual reality experiences. Such machines typically require an external positioning system, such as an infrared camera, to capture the position and posture of the user's head and hands and thereby enable interaction and control. However, conventional desktop VR interaction integrated machines mainly rely on an infrared stylus for interaction; this requires the user to hold a pen-shaped device, which is neither intuitive nor convenient to operate and does not match users' habits.
To address this issue, some desktop VR interaction integrated machines attempt to control objects in a three-dimensional scene through gesture interaction, tracking the user's hand with a camera and recognizing gesture actions. This approach enables more intuitive and convenient interaction, but current gesture interaction schemes mainly rely on gloves, handles and various sensors, which require expensive hardware, are cumbersome to wear, and are less convenient to operate than bare hands.
Therefore, the gloves, handles and various sensors used in the gesture interaction schemes of conventional desktop VR interaction integrated machines are cumbersome to wear, costly, and degrade the user experience.
Disclosure of Invention
The gesture interaction schemes of conventional desktop VR interaction integrated machines are costly, cumbersome to wear, and provide a poor user experience.
To address these problems, a bare hand interaction method and system of a desktop VR interaction integrated machine are provided.
In a first aspect, a bare hand interaction method of a desktop VR interaction integrated machine includes:
step 100, acquiring a data set of bare hand gesture images, constructing a bare hand gesture recognition model using a convolutional neural network algorithm, and drawing samples by a batch gradient descent method to train the bare hand gesture recognition model;
step 200, verifying the bare hand gesture recognition model during training with a validation data set: calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal bare hand gesture recognition model;
step 300, recognizing an input gesture image with the trained bare hand gesture recognition model, and mapping the recognition result to a corresponding interaction instruction to control an object in the VR scene;
wherein the data set comprises a first data set, a second data set and a third data set; the first data set is a gesture control data set, the second data set is a gesture position data set, and the third data set is a gesture control direction data set; each of the three data sets is divided into a training data set, a validation data set and a test data set according to a division ratio.
In combination with the bare hand interaction method of the desktop VR interaction integrated machine of the present invention, in a first possible implementation manner, the step 100 includes:
step 110: sampling in batches by adopting a batch gradient descent method, and inputting the bare hand gesture recognition model;
step 120, calculating loss information of the bare hand gesture recognition model by using the loss function;
step 130, updating gradient parameters according to the loss information;
step 140, updating the bare hand gesture recognition model according to the gradient parameters;
step 150, repeating steps 110-140 until the model loss converges or reaches a preset number of iterations.
In combination with the first possible embodiment of the present invention, in a second possible embodiment, the step 100 further includes:
step 160, adopting the TensorFlow framework for the convolutional neural network algorithm;
step 170, configuring, in the convolutional neural network model, a convolution layer for extracting image features, a pooling layer for reducing the image dimensionality, a residual block for enhancing the depth and expressive capacity of the model, and a fully connected layer for converting feature vectors into category probabilities.
In combination with the first possible embodiment of the present invention, in a third possible embodiment, the step 170 includes:
step 171, configuring the convolution layer of the convolutional neural network model as: a 3×3 convolution kernel with 'same' padding and a ReLU activation function;
step 172, configuring the pooling layer of the convolutional neural network model as: 2×2 max pooling with a stride of 2 and 'valid' padding;
step 173, configuring the residual block of the convolutional neural network model as: a bottleneck structure comprising three convolution layers, where the first and third convolution layers output 64 channels and the second outputs 256 channels; the first and third convolution layers are followed by batch normalization and a ReLU activation function, the second by batch normalization only; finally, the output of the third convolution layer and the block input are added and passed through a ReLU activation function;
step 174, configuring the fully connected layer of the convolutional neural network model as: a softmax activation function that outputs a probability distribution over the specified number of categories.
In combination with the bare hand interaction method of the desktop VR interaction integrated machine of the present invention, in a fourth possible implementation manner, the step 100 further includes:
step 180, initializing the weight of the bare hand gesture recognition model;
step 190, adopting a cross entropy loss function as a loss function of the bare hand gesture recognition model, adopting an Adam optimizer as an optimizer of the bare hand gesture recognition model, and adopting accuracy as an evaluation index of the bare hand gesture recognition model.
In combination with the fourth possible embodiment of the present invention, in a fifth possible embodiment, the accuracy C is:

C = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.
In combination with the bare hand interaction method of the desktop VR interaction integrated machine of the present invention, in a sixth possible implementation manner, the division ratio is 8:1:1.
In a second aspect, a bare hand interaction system of a desktop VR interaction integrated machine adopts the interaction method of the first aspect and includes an acquisition module, a construction module, a training module, a verification module and a recognition module;
the acquisition module is used for acquiring a data set of bare hand gesture images;
the construction module is used for constructing a bare hand gesture recognition model with a convolutional neural network algorithm;
the training module is used for drawing samples from the data set by a batch gradient descent method to train the constructed bare hand gesture recognition model;
the verification module is used for verifying the bare hand gesture recognition model during training with a validation data set: calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal bare hand gesture recognition model;
the recognition module is used for recognizing input gesture images with the trained bare hand gesture recognition model and mapping the recognition results to corresponding interaction instructions to control objects in the VR scene;
wherein the data set comprises a first data set, a second data set and a third data set; the first data set is a gesture control data set, the second data set is a gesture position data set, and the third data set is a gesture control direction data set; each of the three data sets is divided into a training data set, a validation data set and a test data set according to a division ratio.
In combination with the bare hand interaction system of the desktop VR interaction integrated machine according to the second aspect of the present invention, in a first possible implementation manner, the training module is further configured to:
and inputting samples extracted in batches by a batch gradient descent method into the bare hand gesture recognition model, calculating loss information of the bare hand gesture recognition model by using the loss function, updating gradient parameters according to the loss information, and updating the bare hand gesture recognition model according to the gradient parameters.
With reference to the first possible implementation manner of the second aspect of the present invention, in a second possible implementation manner, the division ratio is 8:1:1.
According to the bare hand interaction method and system of the desktop VR interaction integrated machine, a bare hand gesture recognition model is constructed, the model is trained by a batch gradient descent method, and the recognition model is verified during training, so that the accuracy of bare hand gesture recognition is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a first schematic diagram of a bare hand interaction method of a desktop VR interaction integrated machine in the present invention;
fig. 2 is a second schematic diagram of a bare hand interaction method of the desktop VR interaction integrated machine of the present invention;
fig. 3 is a third schematic diagram of a bare hand interaction method of the desktop VR interaction integrated machine in the present invention;
fig. 4 is a fourth schematic diagram of a bare hand interaction method of the desktop VR interaction integrated machine of the present invention;
fig. 5 is a fifth schematic diagram of a bare hand interaction method of the desktop VR interaction integrated machine in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings, which show some, but not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The gesture interaction schemes of conventional desktop VR interaction integrated machines are costly, cumbersome to wear, and provide a poor user experience.
To address these problems, a bare hand interaction method and system of a desktop VR interaction integrated machine are provided.
Example 1
In a first aspect, as shown in fig. 1 (a first schematic diagram of the bare hand interaction method of the desktop VR interaction integrated machine), a bare hand interaction method of a desktop VR interaction integrated machine comprises the following steps:
Step 100: acquire a data set of bare hand gesture images, construct a bare hand gesture recognition model using a convolutional neural network algorithm, and draw samples by a batch gradient descent method to train the model. Step 200: verify the bare hand gesture recognition model during training with a validation data set by calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal model. Step 300: recognize input gesture images with the trained model and map the recognition results to corresponding interaction instructions to control objects in the VR scene. The data set comprises a first data set (gesture control), a second data set (gesture position) and a third data set (gesture control direction), each divided into a training, validation and test data set according to a division ratio.
The first data set is a gesture control data set comprising control gestures such as click, drag, zoom, rotate and idle, where idle denotes a gesture image containing no operation gesture; the second data set is a gesture position data set comprising positions such as upper left, lower right and center; the third data set is a gesture control direction data set comprising directions such as direction one (up), direction two (down) and direction three (left). The training set, validation set and test set are used respectively to train the model and to validate and test its performance, and are divided at a ratio of 8:1:1.
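As a non-limiting illustration, the following is a minimal sketch of the 8:1:1 split, assuming the images and labels of one data set are held in NumPy arrays (the patent does not specify the storage format):

```python
import numpy as np

def split_dataset(images, labels, ratios=(0.8, 0.1, 0.1), seed=42):
    # Shuffle the indices once, then cut them into train/val/test at 8:1:1.
    n = len(images)
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(ratios[0] * n), int(ratios[1] * n)
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return ((images[train], labels[train]),
            (images[val], labels[val]),
            (images[test], labels[test]))

# Each of the three data sets (control, position, direction) would be
# split this way before training.
```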
Preferably, as shown in fig. 2 (a second schematic diagram of the bare hand interaction method of the desktop VR interaction integrated machine), step 100 comprises: step 110, drawing sample batches using a batch gradient descent method and inputting them into the bare hand gesture recognition model; step 120, calculating the loss information of the model using a loss function; step 130, computing gradient parameters from the loss information; step 140, updating the model according to the gradient parameters; step 150, repeating steps 110-140 until the model loss converges or the preset number of iterations is reached.
During model training, samples are extracted in batches using the batch gradient descent method as follows:
a batch of samples is randomly drawn from the training set each time; the output and loss of the model are calculated; the gradient is computed from the loss; and the model parameters are updated according to the gradient. This process repeats until the model loss converges or the preset number of iterations is reached. In this embodiment the batch size is 32 and the number of iterations is 100.
Preferably, as shown in fig. 3 (a third schematic diagram of the bare hand interaction method of the desktop VR interaction integrated machine), step 100 further comprises:
step 160, adopting the TensorFlow framework for the convolutional neural network algorithm; step 170, configuring, in the convolutional neural network model, a convolution layer for extracting image features, a pooling layer for reducing the image dimensionality, a residual block for enhancing the depth and expressive capacity of the model, and a fully connected layer for converting feature vectors into category probabilities.
Further, as shown in fig. 4 (a fourth schematic diagram of the bare hand interaction method of the desktop VR interaction integrated machine), step 170 comprises:
step 171, configuring the convolution layer of the convolutional neural network model as: a 3×3 convolution kernel with 'same' padding and a ReLU activation function; step 172, configuring the pooling layer as: 2×2 max pooling with a stride of 2 and 'valid' padding; step 173, configuring the residual block as: a bottleneck structure comprising three convolution layers, where the first and third convolution layers output 64 channels and the second outputs 256 channels; the first and third convolution layers are followed by batch normalization and a ReLU activation function, the second by batch normalization only; finally, the output of the third convolution layer and the block input are added and passed through a ReLU activation function; step 174, configuring the fully connected layer as: a softmax activation function that outputs a probability distribution over the specified number of categories.
During model construction, the TensorFlow framework is used to improve recognition accuracy: a bare hand gesture recognition model is built on the basis of ResNet-50, trained on the data set, and performs gesture classification and localization on input images. The model outputs a probability distribution over 5 categories, representing the likelihood that the image belongs to each category. Its main components are as follows:
the convolution layer is used for extracting the characteristics of the image, a convolution kernel of 3 multiplied by 33 multiplied by 3 is used, the step length is 1, the filling is same, and the activation function is ReLU;
a pooling layer for reducing the dimension of the image, using 2×22×2 maximum pooling, the step length being 2, and filling as valid;
the residual block is used for enhancing the depth and the expression capacity of the model, and a bottleneck structure is used, wherein the bottleneck structure comprises three convolution layers, the number of output channels of a first convolution layer and a third convolution layer is 64, the number of output channels of a second convolution layer is 256, the first convolution layer and the third convolution layer are connected with a batch normalization and a ReLU activation function, the second convolution layer is connected with a batch normalization, and finally, the output and the input of the third convolution layer are added and then connected with the ReLU activation function.
And the full connection layer is used for converting the feature vector into category probabilities and outputting probability distribution of 5 categories by using a softmax activation function.
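A minimal Keras sketch of these components is given below. It shows a single residual block for brevity (the full model is described as ResNet-50-based), and it assumes a 224×224 RGB input, a 1×1 projection on the skip path so the channel counts of the addition match, and global average pooling before the fully connected layer; these details are left implicit in the description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g. click, drag, zoom, rotate, idle

def bottleneck_block(x):
    # Three 3x3 convolutions with 64 -> 256 -> 64 output channels.
    y = layers.Conv2D(64, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)                        # conv1: BN + ReLU
    y = layers.Conv2D(256, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)          # conv2: BN only
    y = layers.Conv2D(64, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)                        # conv3: BN + ReLU
    # Assumed 1x1 projection so the skip connection is shape-compatible.
    shortcut = layers.Conv2D(64, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_model(input_shape=(224, 224, 3)):    # input size is an assumption
    inputs = layers.Input(shape=input_shape)
    # Convolution layer: 3x3 kernel, stride 1, 'same' padding, ReLU.
    x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(inputs)
    # Pooling layer: 2x2 max pooling, stride 2, 'valid' padding.
    x = layers.MaxPooling2D(pool_size=2, strides=2, padding="valid")(x)
    x = bottleneck_block(x)
    # Fully connected layer: softmax over the 5 gesture categories.
    x = layers.GlobalAveragePooling2D()(x)      # assumed feature flattening
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)
```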
Preferably, as shown in fig. 5 (a fifth schematic diagram of the bare hand interaction method of the desktop VR interaction integrated machine), step 100 further comprises:
step 180, initializing the weights of the bare hand gesture recognition model;
step 190, adopting a cross entropy loss function as a loss function of the bare hand gesture recognition model, adopting an Adam optimizer as an optimizer of the bare hand gesture recognition model, and adopting accuracy as an evaluation index of the bare hand gesture recognition model.
The parameters of the optimizer are: learning rate α = 0.001, exponential decay rate of the first-moment estimates β₁ = 0.9, exponential decay rate of the second-moment estimates β₂ = 0.999, and numerical stability constant ε = 10⁻⁸.
The accuracy C is:

C = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.
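Under the same assumptions as the sketches above, this training configuration could look as follows in Keras (sparse cross-entropy assumes integer class labels, which the patent does not specify):

```python
model = build_model()  # from the sketch above
model.compile(
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8),
    loss="sparse_categorical_crossentropy",  # cross-entropy loss
    metrics=["accuracy"],  # C = (TP + TN) / (TP + TN + FP + FN)
)
```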
Example 2
In a second aspect, a bare hand interaction system of a desktop VR interaction integrated machine adopts the interaction method of the first aspect and includes an acquisition module, a construction module, a training module, a verification module and a recognition module. The acquisition module acquires a data set of bare hand gesture images. The construction module constructs a bare hand gesture recognition model with a convolutional neural network algorithm. The training module draws samples from the data set by a batch gradient descent method to train the constructed model. The verification module verifies the model during training with a validation data set: it calculates the loss and output, stops training if the loss has not decreased after a specified number of iterations, and saves the optimal model. The recognition module recognizes input gesture images with the trained model and maps the recognition results to corresponding interaction instructions to control objects in the VR scene (see the sketch after this paragraph). The data set comprises a first data set (gesture control), a second data set (gesture position) and a third data set (gesture control direction), each divided into a training, validation and test data set according to a division ratio.
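As an illustration of the recognition module, a minimal sketch that maps the model's predicted class to a VR interaction instruction; the class ordering and instruction names are hypothetical, since the patent does not fix them:

```python
import numpy as np

# Hypothetical class-index-to-instruction table for the five gesture classes.
GESTURE_TO_INSTRUCTION = {0: "click", 1: "drag", 2: "zoom", 3: "rotate", 4: "idle"}

def recognize_gesture(model, image):
    """Classify one preprocessed gesture image and return an instruction."""
    probs = model.predict(image[np.newaxis, ...], verbose=0)[0]  # add batch dim
    return GESTURE_TO_INSTRUCTION[int(np.argmax(probs))]
```

The returned instruction string would then be dispatched to the VR scene to click, drag, zoom, or rotate the targeted object.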
Further, the training module is further configured to:
and inputting samples extracted in batches by a batch gradient descent method into a bare hand gesture recognition model, calculating loss information of the bare hand gesture recognition model by using a loss function, updating gradient parameters according to the loss information, and updating the bare hand gesture recognition model according to the gradient parameters.
Preferably, the division ratio is 8:1:1.
According to the bare hand interaction method and system of the desktop VR interaction integrated machine, a bare hand gesture recognition model is constructed, the model is trained by a batch gradient descent method, and the recognition model is verified during training, so that the accuracy of bare hand gesture recognition is improved.
The foregoing is merely illustrative of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A bare hand interaction method of a desktop VR interaction integrated machine, characterized by comprising:
step 100, acquiring a data set of bare hand gesture images, constructing a bare hand gesture recognition model using a convolutional neural network algorithm, and drawing samples by a batch gradient descent method to train the bare hand gesture recognition model;
step 200, verifying the bare hand gesture recognition model during training with a validation data set: calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal bare hand gesture recognition model;
step 300, recognizing an input gesture image with the trained bare hand gesture recognition model, and mapping the recognition result to a corresponding interaction instruction to control an object in the VR scene;
wherein the data set comprises a first data set, a second data set and a third data set; the first data set is a gesture control data set, the second data set is a gesture position data set, and the third data set is a gesture control direction data set; each of the three data sets is divided into a training data set, a validation data set and a test data set according to a division ratio.
2. The bare hand interaction method of the desktop VR interaction integrated machine of claim 1, wherein the step 100 includes:
step 110: sampling in batches by adopting a batch gradient descent method, and inputting the bare hand gesture recognition model;
step 120, calculating loss information of the bare hand gesture recognition model by using the loss function;
step 130, updating gradient parameters according to the loss information;
step 140, updating the bare hand gesture recognition model according to the gradient parameters;
step 150, repeating steps 110-140 until the model loss converges or reaches a preset number of iterations.
3. The bare hand interaction method of the desktop VR interaction all-in-one machine of claim 2, wherein the step 100 further comprises:
step 160, adopting the TensorFlow framework for the convolutional neural network algorithm;
step 170, configuring, in the convolutional neural network model, a convolution layer for extracting image features, a pooling layer for reducing the image dimensionality, a residual block for enhancing the depth and expressive capacity of the model, and a fully connected layer for converting feature vectors into category probabilities.
4. The bare hand interaction method of the desktop VR interaction integrated machine of claim 3, wherein the step 170 comprises:
step 171, configuring the convolution layer of the convolutional neural network model as: a 3×3 convolution kernel with 'same' padding and a ReLU activation function;
step 172, configuring the pooling layer of the convolutional neural network model as: 2×2 max pooling with a stride of 2 and 'valid' padding;
step 173, configuring the residual block of the convolutional neural network model as: a bottleneck structure comprising three convolution layers, where the first and third convolution layers output 64 channels and the second outputs 256 channels; the first and third convolution layers are followed by batch normalization and a ReLU activation function, the second by batch normalization only; finally, the output of the third convolution layer and the block input are added and passed through a ReLU activation function;
step 174, configuring the fully connected layer of the convolutional neural network model as: a softmax activation function that outputs a probability distribution over the specified number of categories.
5. The method of bare hand interaction for a desktop VR interaction integrated machine of claim 4, wherein step 100 further comprises:
step 180, initializing the weight of the bare hand gesture recognition model;
step 190, adopting a cross entropy loss function as a loss function of the bare hand gesture recognition model, adopting an Adam optimizer as an optimizer of the bare hand gesture recognition model, and adopting accuracy as an evaluation index of the bare hand gesture recognition model.
6. The bare hand interaction method of the desktop VR interaction integrated machine of claim 5, wherein the accuracy C is: C = (TP + TN) / (TP + TN + FP + FN),
where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.
7. The bare hand interaction method of the desktop VR interaction integrated machine of claim 1, wherein the division ratio is 8:1:1.
8. A bare hand interaction system of a desktop VR interaction integrated machine, adopting the interaction method of any one of claims 1-7, characterized by comprising an acquisition module, a construction module, a training module, a verification module and a recognition module;
the acquisition module is used for acquiring a data set of bare hand gesture images;
the construction module is used for constructing a bare hand gesture recognition model with a convolutional neural network algorithm;
the training module is used for drawing samples from the data set by a batch gradient descent method to train the constructed bare hand gesture recognition model;
the verification module is used for verifying the bare hand gesture recognition model during training with a validation data set: calculating the loss and output, stopping training if the loss has not decreased after a specified number of iterations, and saving the optimal bare hand gesture recognition model;
the recognition module is used for recognizing input gesture images with the trained bare hand gesture recognition model and mapping the recognition results to corresponding interaction instructions to control objects in the VR scene;
wherein the data set comprises a first data set, a second data set and a third data set; the first data set is a gesture control data set, the second data set is a gesture position data set, and the third data set is a gesture control direction data set; each of the three data sets is divided into a training data set, a validation data set and a test data set according to a division ratio.
9. The bare hand interaction system of the desktop VR interaction integrated machine of claim 8, wherein the training module is further to:
and inputting samples extracted in batches by a batch gradient descent method into the bare hand gesture recognition model, calculating loss information of the bare hand gesture recognition model by using the loss function, updating gradient parameters according to the loss information, and updating the bare hand gesture recognition model according to the gradient parameters.
10. The bare hand interaction system of the desktop VR interaction integrated machine of claim 9, wherein the division ratio is 8:1:1.
CN202410008326.2A 2024-01-03 2024-01-03 Naked hand interaction method and system of desktop VR interaction integrated machine Pending CN117891339A (en)

Priority Applications (1)

Application Number: CN202410008326.2A (published as CN117891339A) · Priority Date: 2024-01-03 · Filing Date: 2024-01-03 · Title: Naked hand interaction method and system of desktop VR interaction integrated machine

Applications Claiming Priority (1)

Application Number: CN202410008326.2A (published as CN117891339A) · Priority Date: 2024-01-03 · Filing Date: 2024-01-03 · Title: Naked hand interaction method and system of desktop VR interaction integrated machine

Publications (1)

Publication Number Publication Date
CN117891339A · 2024-04-16

Family

ID=90650247

Family Applications (1)

Application Number: CN202410008326.2A · Title: Naked hand interaction method and system of desktop VR interaction integrated machine

Country Status (1)

Country: CN · Publication: CN117891339A (en)


Legal Events

Date Code Title Description
PB01 Publication