CN112949587B - Hand holding gesture correction method, system and computer readable medium based on key points - Google Patents

Hand holding gesture correction method, system and computer readable medium based on key points

Info

Publication number
CN112949587B
CN112949587B (application number CN202110345365.8A)
Authority
CN
China
Prior art keywords
gesture
key point
hand
key points
standard
Prior art date
Legal status
Active
Application number
CN202110345365.8A
Other languages
Chinese (zh)
Other versions
CN112949587A (en)
Inventor
张鑫
刘子枭
朱枭
黎明
Current Assignee
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN202110345365.8A
Publication of CN112949587A
Application granted
Publication of CN112949587B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/107 — Static hand or arm
    • G06V 40/11 — Hand-related biometrics; hand pose recognition
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a keypoint-based hand holding gesture correction method and system and a computer-readable medium. The correction method comprises the following steps. Step 1: collect a standard hand gesture image. Step 2: detect the key points of the hand gesture in the image and store the key point data. Step 3: preprocess the key point data obtained in step 2 and build a standard gesture model from the preprocessed data. Step 4: collect a new hand gesture image and detect its key points. Step 5: preprocess the key point data of the new image, input them into the standard gesture model, and judge whether the current gesture is standard; if so, output that the current gesture needs no correction, otherwise execute step 6. Step 6: judge that the current gesture needs correction, output the key point data requiring correction, and prompt the user to correct the gesture at the corresponding key point positions. Compared with the prior art, the invention has the advantages of high accuracy and low application cost.

Description

Hand holding gesture correction method, system and computer readable medium based on key points
Technical Field
The invention relates to the technical field of hand detection, and in particular to a keypoint-based hand holding gesture correction method, a hand holding gesture correction system, and a computer-readable medium.
Background
Current computer-vision-based hand applications focus primarily on gesture recognition. Traditional approaches such as static gesture recognition (also known as static two-dimensional gesture recognition) handle only relatively simple static gestures, such as making a fist or opening all five fingers, and have difficulty accurately identifying complex finger movements.
Chinese patent CN111626135A discloses a depth-map-based three-dimensional gesture recognition system. A sensor acquires a hand depth information map, which is preprocessed and fed into a CNN module to obtain the three-dimensional positions of the key joints; a three-dimensional joint model is then built from the acquired joint points and matched against three-dimensional templates in a database to complete gesture recognition. The system requires both a depth sensor and neural network training to reconstruct the three-dimensional model, so its cost is high.
Chinese patent CN111665937A discloses an integrated self-driven all-textile gesture recognition data glove, manufactured from a glove matrix, self-powered strain sensors, flexible weakly sensitive conductive yarns, and similar materials. This approach has high cost, high manufacturing difficulty, and cumbersome up-front data acquisition.
Chinese patent CN107443957A discloses a pen shaft assembly and a pen for correcting holding posture. By modifying the mechanical structure of the pen, a small ball on the pen guides the user toward a correct grip. The method is limited and subjective: it cannot correct a specific finger, and without the guidance of another person the user cannot accurately judge whether the holding gesture is standard.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a keypoint-based hand holding gesture correction method, a hand holding gesture correction system, and a computer-readable medium with high accuracy and low application cost.
The aim of the invention can be achieved by the following technical scheme:
a hand holding gesture correcting method based on key points comprises the following steps:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 4: collecting a new hand gesture image and detecting key points of the new hand gesture image;
step 5: preprocessing the key point data of the new image, inputting it into the standard gesture model, and judging whether the current gesture is standard; if so, outputting that the current gesture needs no correction, otherwise executing step 6;
step 6: judging that the current gesture needs correction, outputting the key point data requiring correction, and prompting the user to correct the gesture at the corresponding key point positions.
Preferably, the step 2 specifically includes:
detecting the key points of the hand gestures in the acquired images using the OpenPose library, and storing the key point data in JSON format.
Preferably, the step 3 specifically includes:
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
step 3-4: and establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter.
More preferably, the step 3-2 specifically comprises the following steps:
Assuming the two key points selected in step 3-1 are A and B, all key point data are transformed as follows: the points are translated so that A coincides with its initialized coordinate, rotated about A so that B lies along the direction of its initialized coordinate, and scaled so that the distance between A and B equals the distance between the two initialized coordinates (the transformation formulas appear as equation images in the original document);

wherein (x_i, y_i) are the pixel coordinates of the i-th hand key point; I_A and I_B are the initialized key point coordinates of A and B, respectively; and (X_i, Y_i) are the pixel coordinates of the i-th key point after transformation;

after the transformation, the data set is constructed.
More preferably, the step 3-3 specifically comprises:
For the n samples of a single key point, compute:

k_i = x_i + y_i

μ = (1/n) · Σ_{i=1}^{n} k_i

σ² = (1/n) · Σ_{i=1}^{n} (k_i − μ)²

wherein (x_i, y_i) is the i-th sample coordinate value of the key point, μ is the sample mean, and σ² is the sample variance.

To test whether a key point coordinate (x, y) of the holding gesture under examination is standard, calculate the probability density:

s = x + y

f(s) = (1 / (σ√(2π))) · exp(−(s − μ)² / (2σ²))

The probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction.
More preferably, the steps 3-4 specifically include:
μ = (μ_0, μ_1, …, μ_{n−1})^T

Σ = diag(σ_0², σ_1², …, σ_{n−1}²)

wherein n is the number of key points, μ is the mean vector composed of the mean of each key point, and Σ is the covariance matrix composed of the variance of each key point;

for a holding gesture S = (s_0, s_1, …, s_{n−1}), the probability density is calculated:

f(S) = (2π)^(−n/2) · |Σ|^(−1/2) · exp(−(1/2) · (S − μ)^T Σ^{−1} (S − μ))
preferably, the step 5 specifically includes:
preprocessing the key point data of the new image and inputting it into the standard gesture model; calculating, with the Gaussian model obtained in step 3, the probability density of the gesture to be detected relative to the standard holding gesture; judging whether the current gesture is standard; if so, outputting that the current gesture needs no correction, otherwise executing step 6.
Preferably, the step 6 specifically includes:
step 6-1: calculating probability density of each key point as a standard holding gesture;
step 6-2: selecting key points with probability density lower than a preset threshold value;
step 6-3: outputting that the holding gesture is not standard, and feeding back the finger that needs to be corrected.
The keypoint-based hand holding gesture correction system comprises an image acquisition device and a data processing terminal; the image acquisition device is connected to the data processing terminal, and the data processing terminal executes the keypoint-based hand holding gesture correction method described above.
A computer-readable medium has any one of the above keypoint-based hand holding gesture correction methods stored therein.
Compared with the prior art, the invention has the following beneficial effects:
1. High accuracy: the hand holding gesture correction method adopts a keypoint-based hand detection technique, using multiple key points to record the information of each finger joint and thereby capture complex finger actions. The algorithm applied to the key point data records finger gestures more accurately across application scenarios with a fixed holding gesture, enabling detection and comparison of complicated hand holding gestures; subsequent model building, detection, and correction are therefore more accurate, improving recognition accuracy.
2. Low application cost: the method and system require only computer vision technology, an image acquisition device, and a data processing terminal, whereas traditional methods rely on contact sensors such as data acquisition gloves to reach the same detection target; the application cost is therefore effectively reduced.
Drawings
FIG. 1 is a flow chart of a method for correcting hand holding gesture according to the present invention;
FIG. 2 is a schematic diagram of a hand key point in an embodiment of the present invention;
FIG. 3 is a schematic diagram of key point detection of a mobile phone holding gesture based on OpenPose in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the key point holding gesture data before processing in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the key point holding gesture data after processing in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an example in an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the scope of the present invention.
A hand holding gesture correcting method based on key points is shown in figure 1, and comprises the following steps:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
Key point detection is performed on the hand gestures in the acquired image using the OpenPose library, and the key point data are stored in JSON format; the key points selected in this embodiment are shown in fig. 2, and the detection process is shown in fig. 3;
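As a concrete illustration, the stored JSON can be parsed back into coordinate arrays. This is a minimal sketch assuming OpenPose's usual output layout — a "people" list carrying flat [x, y, confidence, ...] arrays under keys such as "hand_right_keypoints_2d"; the exact field names can vary by OpenPose version, and the function name is illustrative.

```python
import json
import numpy as np

def load_hand_keypoints(json_path):
    """Parse one OpenPose output file into a (21, 2) array of pixel
    coordinates plus a (21,) confidence vector.

    Assumes the common OpenPose JSON layout: data["people"][k] holds a
    flat [x0, y0, c0, x1, y1, c1, ...] list for each detected hand.
    """
    with open(json_path) as f:
        data = json.load(f)
    person = data["people"][0]                 # first detected person
    flat = np.asarray(person["hand_right_keypoints_2d"], dtype=float)
    pts = flat.reshape(-1, 3)                  # rows of (x, y, confidence)
    return pts[:, :2], pts[:, 2]
```

A file produced per hand per frame can then be reduced to the (x, y) pairs that feed the preprocessing of step 3.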
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
Assuming the two key points selected in step 3-1 are A and B, all key point data are transformed as follows: the points are translated so that A coincides with its initialized coordinate, rotated about A so that B lies along the direction of its initialized coordinate, and scaled so that the distance between A and B equals the distance between the two initialized coordinates (the transformation formulas appear as equation images in the original document);

wherein (x_i, y_i) are the pixel coordinates of the i-th hand key point; I_A and I_B are the initialized key point coordinates of A and B, respectively; and (X_i, Y_i) are the pixel coordinates of the i-th key point after transformation;

the data set is then established;
the key point holding gesture data is shown in fig. 4 before processing, and is shown in fig. 5 after processing;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
the step 3-3 is specifically as follows:
For the n samples of a single key point, compute:

k_i = x_i + y_i

μ = (1/n) · Σ_{i=1}^{n} k_i

σ² = (1/n) · Σ_{i=1}^{n} (k_i − μ)²

wherein (x_i, y_i) is the i-th sample coordinate value of the key point;

to test whether a key point coordinate (x, y) of the holding gesture under examination is standard, calculate the probability density:

s = x + y

f(s) = (1 / (σ√(2π))) · exp(−(s − μ)² / (2σ²))

the probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction;
step 3-4: establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter;
μ = (μ_0, μ_1, …, μ_{n−1})^T

Σ = diag(σ_0², σ_1², …, σ_{n−1}²)

wherein n is the number of key points, μ is the mean vector composed of the mean of each key point, and Σ is the covariance matrix composed of the variance of each key point;

for a holding gesture S = (s_0, s_1, …, s_{n−1}), the probability density is calculated:

f(S) = (2π)^(−n/2) · |Σ|^(−1/2) · exp(−(1/2) · (S − μ)^T Σ^{−1} (S − μ))
step 4: collecting a new hand gesture image and detecting key points of the new hand gesture image;
step 5: preprocessing the key point data of the new image and inputting it into the standard gesture model; calculating, with the Gaussian model obtained in step 3, the probability density of the gesture to be detected relative to the standard holding gesture; judging whether the current gesture is standard; if so, outputting that the current gesture needs no correction, otherwise executing step 6;
step 6: judging that the current gesture needs to be corrected, and simultaneously outputting key point data needing to be corrected, and prompting a user to correct the gesture at the position of the corresponding key point;
step 6-1: calculating probability density of each key point as a standard holding gesture;
step 6-2: selecting key points with probability density lower than a preset threshold value;
step 6-3: outputting that the holding gesture is not standard, and feeding back the finger that needs to be corrected.
The basic idea of building the standard gesture model in this embodiment is as follows: the acquisition device collects a large number of samples of the standard holding gesture and extracts their key point information, and a Gaussian model of the standard holding gesture is built from the pixel coordinate data of the key points of those samples.
Because the key point information is stored as the pixel coordinates of the 21 hand joint points on the image, while the image region, angle, and scale of each standard holding gesture differ from sample to sample, a model cannot be built directly from the raw data; a corresponding algorithm is therefore designed to solve these problems.
Algorithm principle: as the key point diagram (FIG. 2) shows, the relative positions of the hand key points are fixed, so the positions of two key points can be fixed in advance, and translating, rotating, and scaling all key points as a whole according to the positional relation of those two points resolves the differences in position and scale between different holding-gesture image regions.
From the key point schematic, the relative distance between hand key points 0 and 9 remains essentially unchanged, so fixing the positions of these two key points determines the position of the whole hand.
The specific transformation proceeds as follows (the formulas are supplied as equation images in the original document): every key point is translated so that key point 0 coincides with I_0; all points are then rotated about I_0 so that key point 9 lies along the direction from I_0 to I_9; finally, all points are scaled so that the distance between key points 0 and 9 equals |I_9 − I_0|.

In the above, (x_i, y_i) are the pixel coordinates of the i-th hand key point; I_0 = (0, 0) and I_9 = (20, 20) are the initialized key point coordinates; and (X_i, Y_i) are the pixel coordinates of the i-th key point after transformation.
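The translate–rotate–scale normalization just described can be sketched in Python. This is a hedged illustration rather than the patent's exact formulas (which appear only as images): it assumes key points 0 and 9 as the anchors A and B, with initialized coordinates I_0 = (0, 0) and I_9 = (20, 20) as in the embodiment.

```python
import numpy as np

def normalize_keypoints(pts, a=0, b=9, ia=(0.0, 0.0), ib=(20.0, 20.0)):
    """Translate, rotate, and scale a (21, 2) key point array so that
    anchor point `a` lands on `ia` and anchor point `b` lands on `ib`.

    A sketch of the preprocessing step; anchor indices and target
    coordinates follow the embodiment (key points 0 and 9).
    """
    pts = np.asarray(pts, dtype=float)
    ia, ib = np.asarray(ia, dtype=float), np.asarray(ib, dtype=float)

    # 1. Translate so that anchor `a` sits at the origin.
    shifted = pts - pts[a]

    # 2. Rotate so that the a->b direction matches the ia->ib direction.
    src = shifted[b]
    dst = ib - ia
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    rotated = shifted @ rot.T

    # 3. Scale so that |ab| equals |ia - ib|, then shift onto ia.
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    return rotated * scale + ia
```

After this step, every sample of the standard grip lives in the same coordinate frame, so the coordinates of the same key point across samples can be pooled for model fitting.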
Transforming the large volume of collected standard key point data with this algorithm yields the data set K_t. Because the standard holding gesture corresponds to a fixed hand pose, the coordinates of the same key point across different standard samples in K_t should follow a Gaussian distribution. A Gaussian model is therefore fitted to each key point independently to obtain its mean and variance, and a multivariate Gaussian model for the whole holding gesture is built from the model parameters of all key points.
The n samples of a single key point are processed as follows:

k_i = x_i + y_i

μ = (1/n) · Σ_{i=1}^{n} k_i

σ² = (1/n) · Σ_{i=1}^{n} (k_i − μ)²

wherein (x_i, y_i) is the i-th sample coordinate value of the key point.

To test whether a key point coordinate (x, y) of the holding gesture under examination is standard, calculate the probability density:

s = x + y

f(s) = (1 / (σ√(2π))) · exp(−(s − μ)² / (2σ²))

The probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction.
For the whole holding gesture, the parameters of the multidimensional Gaussian probability density function are assembled from the parameters of each key point:
μ = (μ_0, μ_1, …, μ_{n−1})^T

Σ = diag(σ_0², σ_1², …, σ_{n−1}²)

wherein n is the number of key points, μ is the mean vector composed of the mean of each key point, and Σ is the covariance matrix composed of the variance of each key point;

for a holding gesture S = (s_0, s_1, …, s_20), the probability density is calculated:

f(S) = (2π)^(−n/2) · |Σ|^(−1/2) · exp(−(1/2) · (S − μ)^T Σ^{−1} (S − μ))
the probability density may be used to determine whether the overall hand grip is standard.
An example follows: the model parameters are used to evaluate a test gesture, and the key points of the test gesture are compared visually with the template key points. After the data are processed, as shown in fig. 6, key points 17, 18, and 20 have the lowest probabilities, so the little finger is judged to deviate most and its gesture needs to be corrected.
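The thresholding and feedback of steps 6-1 to 6-3 can be sketched as below. The probability threshold and the mapping from key point indices to finger names are illustrative assumptions: the embodiment uses the 21-point hand model, but the grouping and threshold value are not fixed in the text.

```python
def fingers_to_correct(densities, threshold=1e-4):
    """Return the fingers whose key points fall below the density
    threshold. `densities` maps key point index -> probability density.

    Index ranges follow the common 21-point hand model (assumed):
    0 wrist, thumb 1-4, index 5-8, middle 9-12, ring 13-16, little 17-20.
    """
    names = ["thumb", "index", "middle", "ring", "little"]
    finger_of = {}
    for f, name in enumerate(names):
        for idx in range(1 + 4 * f, 5 + 4 * f):
            finger_of[idx] = name
    flagged = {finger_of[i] for i, d in densities.items()
               if i in finger_of and d < threshold}
    return sorted(flagged)
```

For the example above, low densities at key points 17, 18, and 20 would flag only the little finger for correction.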
The embodiment also relates to a hand holding gesture correcting system based on the key points, which comprises an image acquisition device and a data processing terminal, wherein the image acquisition device is connected with the data processing terminal, and the data processing terminal is used for executing the hand holding gesture correcting method based on the key points.
The embodiment also relates to a computer readable medium, wherein the hand holding gesture correcting method based on the key points is stored in the medium.
This scheme can analyze a user's holding gesture in different application scenarios: by comparison with the standard holding gesture, a beginner can correct holding errors during training and thereby improve the training outcome. It can be used to compare and correct hand gestures such as the writing-brush grip, the badminton racket grip, and the golf club grip.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. The protection scope of the invention is therefore defined by the claims.

Claims (3)

1. The hand holding gesture correcting method based on the key points is characterized by comprising the following steps of:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 4: acquiring a real-time hand gesture image and detecting key points of the hand gesture image;
step 5: preprocessing the key point data of the new image, inputting it into the standard gesture model, and judging whether the current gesture is standard; if so, outputting that the current gesture needs no correction, otherwise executing step 6;
step 6: judging that the current gesture needs to be corrected, and simultaneously outputting key point data needing to be corrected, and prompting a user to correct the gesture at the position of the corresponding key point;
the step 3 specifically comprises the following steps:
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
step 3-4: establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter;
the step 3-3 specifically comprises the following steps:
the following is done for n samples of a single keypoint:
k_i = x_i + y_i

μ = (1/n) · Σ_{i=1}^{n} k_i

σ² = (1/n) · Σ_{i=1}^{n} (k_i − μ)²

wherein (x_i, y_i) is the i-th sample coordinate value of the key point;

to test whether a key point coordinate (x, y) of the holding gesture under examination is standard, calculate the probability density:

s = x + y

f(s) = (1 / (σ√(2π))) · exp(−(s − μ)² / (2σ²))
the probability density is used to determine whether a single keypoint meets a standard grip gesture keypoint, thereby indicating a specific single finger that requires correction.
2. The hand holding posture correcting method based on the key points according to claim 1, wherein the step 2 is specifically:
detecting the key points of the hand gestures in the acquired images using the OpenPose library, and storing the key point data in JSON format.
3. The hand holding gesture correcting system based on the key points is characterized by comprising an image acquisition device and a data processing terminal; the image acquisition equipment is connected with the data processing terminal; the data processing terminal is used for executing the hand holding gesture correcting method based on the key points according to any one of claims 1-2.
CN202110345365.8A 2021-03-31 2021-03-31 Hand holding gesture correction method, system and computer readable medium based on key points Active CN112949587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345365.8A CN112949587B (en) 2021-03-31 2021-03-31 Hand holding gesture correction method, system and computer readable medium based on key points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110345365.8A CN112949587B (en) 2021-03-31 2021-03-31 Hand holding gesture correction method, system and computer readable medium based on key points

Publications (2)

Publication Number Publication Date
CN112949587A (en) 2021-06-11
CN112949587B (en) 2023-05-02

Family

ID=76231213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345365.8A Active CN112949587B (en) 2021-03-31 2021-03-31 Hand holding gesture correction method, system and computer readable medium based on key points

Country Status (1)

Country Link
CN (1) CN112949587B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147767A (en) * 2019-05-22 2019-08-20 深圳市凌云视迅科技有限责任公司 Three-dimension gesture attitude prediction method based on two dimensional image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956523B * 2016-04-22 2019-09-17 广东小天才科技有限公司 A pen-holding posture correction method and device
CN106372294A (en) * 2016-08-30 2017-02-01 苏州品诺维新医疗科技有限公司 Method and device for correcting posture
CN106344030A (en) * 2016-08-30 2017-01-25 苏州品诺维新医疗科技有限公司 Posture correction method and device
CN106781327B (en) * 2017-03-09 2020-02-07 广东小天才科技有限公司 Sitting posture correction method and mobile terminal
CN109858524B (en) * 2019-01-04 2020-10-16 北京达佳互联信息技术有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN110349096A (en) * 2019-06-14 2019-10-18 平安科技(深圳)有限公司 Bearing calibration, device, equipment and the storage medium of palm image
CN111347438A (en) * 2020-02-24 2020-06-30 五邑大学 Learning type robot and learning correction method based on same
CN112541382A (en) * 2020-04-13 2021-03-23 深圳优地科技有限公司 Method and system for assisting movement and identification terminal equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147767A (en) * 2019-05-22 2019-08-20 深圳市凌云视迅科技有限责任公司 Three-dimension gesture attitude prediction method based on two dimensional image

Also Published As

Publication number Publication date
CN112949587A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN108399367B (en) Hand motion recognition method and device, computer equipment and readable storage medium
CN111780764B (en) Visual positioning method and device based on visual map
CN103718175B (en) Detect equipment, method and the medium of subject poses
CN101482920B (en) Hand-written character recognition method and system
CN111104816A (en) Target object posture recognition method and device and camera
JP5262705B2 (en) Motion estimation apparatus and program
CN101477426A (en) Method and system for recognizing hand-written character input
CN110717385A (en) Dynamic gesture recognition method
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
CN112883922B (en) Sign language identification method based on CNN-BiGRU neural network fusion
CN101452357A (en) Hand-written character input method and system
CN113011344B (en) Pull-up quantity calculation method based on machine vision
CN107194916A (en) A kind of vision measurement system of feature based Point matching
CN110007764A (en) A kind of gesture skeleton recognition methods, device, system and storage medium
CN112949587B (en) Hand holding gesture correction method, system and computer readable medium based on key points
CN116310976A (en) Learning habit development method, learning habit development device, electronic equipment and storage medium
Liu et al. Ultrasonic positioning and IMU data fusion for pen-based 3D hand gesture recognition
JP5032415B2 (en) Motion estimation apparatus and program
CN115050095A (en) Human body posture prediction method based on Gaussian process regression and progressive filtering
CN111126294B (en) Method and server for identifying gait of terminal user based on mobile terminal data
Liu et al. Real-Time marker localization learning for GelStereo tactile sensing
CN114913541A (en) Human body key point detection method, device and medium based on orthogonal matching pursuit
CN113470073A (en) Animal center tracking method based on deep learning
CN110263702A (en) A kind of real-time three-dimensional gesture method for tracing based on method of geometry
CN117707746B (en) Method and system for scheduling interactive holographic data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant