CN112949587B - Hand holding gesture correction method, system and computer readable medium based on key points - Google Patents
- Publication number: CN112949587B (application CN202110345365.8A)
- Authority
- CN
- China
- Legal status: Active (an assumption by Google Patents, not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/11—Hand-related biometrics; Hand pose recognition
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention relates to a hand holding gesture correction method and system based on key points, and a computer readable medium, wherein the correction method comprises the following steps. Step 1: collect a standard hand gesture image. Step 2: detect the key points of the hand gesture in the image and store the key point data. Step 3: preprocess the key point data obtained in step 2, and build a standard gesture model from the preprocessed data. Step 4: collect a new hand gesture image and detect its key points. Step 5: preprocess the key point data of the new image and input it into the standard gesture model to judge whether the current gesture is standard; if so, output the current gesture without correction, otherwise execute step 6. Step 6: judge that the current gesture needs correction, output the key point data to be corrected, and prompt the user to correct the gesture at the corresponding key point positions. Compared with the prior art, the invention has the advantages of high accuracy and low application cost.
Description
Technical Field
The invention relates to the technical field of hand detection, in particular to a hand holding gesture correction method, a hand holding gesture correction system and a computer readable medium based on key points.
Background
Current computer vision-based hand applications focus primarily on gesture recognition. Traditional methods, such as static gesture recognition (also known as static two-dimensional gesture recognition), can only recognize relatively simple static gestures, such as making a fist or spreading five fingers, and find it difficult to accurately identify complex finger movements.
Chinese patent CN111626135A discloses a depth-map-based three-dimensional gesture recognition system. A sensor acquires a hand depth map, which is preprocessed and fed into a CNN module to obtain the three-dimensional positions of the key joints; a three-dimensional joint model is then reconstructed from the acquired joint points and matched against three-dimensional templates in a database to complete gesture recognition. The system requires both a depth sensor and neural network training to reconstruct the three-dimensional model, so its cost is high.
Chinese patent CN111665937A discloses an integrated, self-powered, all-textile gesture recognition data glove, manufactured from a glove substrate, self-powered strain sensors, flexible weakly sensitive conductive yarns and similar materials. The approach is costly, difficult to manufacture, and requires cumbersome data acquisition beforehand.
Chinese patent CN107443957A discloses a pen barrel assembly and a pen for correcting pen-holding posture: the mechanical structure of the pen is modified so that a small ball on the pen guides the user into the correct grip. The method is limited and subjective: it cannot correct specific fingers, and without guidance from another person the user cannot accurately judge whether the grip is standard.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a hand holding gesture correction method, system and computer readable medium based on key points, with high accuracy and low application cost.
The aim of the invention can be achieved by the following technical scheme:
a hand holding gesture correcting method based on key points comprises the following steps:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 4: collecting a new hand gesture image and detecting key points of the new hand gesture image;
step 5: preprocessing key point data of a new image, inputting a standard hand gesture model, judging whether the current gesture is a standard gesture, if yes, outputting the current gesture without correction, otherwise, executing the step 6;
step 6: judging that the current gesture needs to be corrected, and simultaneously outputting key point data needing to be corrected, and prompting a user to correct the gesture at the corresponding key point position.
Preferably, the step 2 specifically includes:
and detecting key points of hand gestures in the acquired images by using an OpenPose library, and storing the key point data in json format.
Preferably, the step 3 specifically includes:
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
step 3-4: and establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter.
More preferably, the step 3-2 specifically comprises the following steps:
assuming that the two key points selected in step 3-1 are A and B, all key point data are transformed by the similarity transformation (translation, rotation and scaling) that maps A and B onto their initialized coordinates:

$(X_i, Y_i)^T = s\,R\,\big((x_i, y_i)^T - (x_A, y_A)^T\big) + I_A^0, \qquad s = \frac{\lVert I_B^0 - I_A^0 \rVert}{\lVert B - A \rVert},$

wherein $(x_i, y_i)$ are the pixel coordinates of the $i$-th hand key point; $I_A^0$ and $I_B^0$ are the initialized key point coordinates of A and B respectively; $R$ is the rotation aligning the direction of $B-A$ with that of $I_B^0 - I_A^0$; and $(X_i, Y_i)$ are the pixel coordinates of the $i$-th key point after transformation;

after the transformation, the data set is constructed.
More preferably, the step 3-3 specifically comprises:
the following is done for the n samples of a single key point:

$k_i = x_i + y_i, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} k_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(k_i - \mu)^2,$

wherein $(x_i, y_i)$ is the $i$-th sample coordinate value of the key point;

for a key point coordinate $(x, y)$ of the holding gesture to be checked, let $s = x + y$ and calculate the probability density:

$f(s) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(s-\mu)^2}{2\sigma^2}\right).$

The probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction.
More preferably, the steps 3-4 specifically include:
$\mu = (\mu_0, \mu_1, \ldots, \mu_{n-1})^T, \qquad \Sigma = \mathrm{diag}(\sigma_0^2, \sigma_1^2, \ldots, \sigma_{n-1}^2),$

wherein $n$ is the number of key points; $\mu$ is the mean vector composed of the mean of each key point; $\Sigma$ is the covariance matrix composed of the variance of each key point;

for the holding gesture $S = (s_0, s_1, \ldots, s_{n-1})^T$, the probability density is calculated:

$f(S) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(S-\mu)^T \Sigma^{-1} (S-\mu)\right).$
preferably, the step 5 specifically includes:
preprocessing the key point data of the new image and inputting it into the standard gesture model; the Gaussian mixture model obtained in step 3 is used to calculate the probability density of the gesture to be detected relative to the standard gesture, and whether the current gesture is standard is judged; if so, the current gesture is output without correction; otherwise, step 6 is executed.
Preferably, the step 6 specifically includes:
step 6-1: calculating probability density of each key point as a standard holding gesture;
step 6-2: selecting key points with probability density lower than a preset threshold value;
step 6-3: the output holding gesture is not standard, and the finger needing to be corrected is fed back.
A hand holding gesture correction system based on key points comprises an image acquisition device and a data processing terminal; the image acquisition device is connected with the data processing terminal; and the data processing terminal is configured to execute the above hand holding gesture correction method based on key points.
A computer readable medium in which a program for executing any one of the above hand holding gesture correction methods based on key points is stored.
Compared with the prior art, the invention has the following beneficial effects:
1. High accuracy: the method adopts a keypoint-based hand detection technique and records the information of each finger joint with multiple key points, so complex finger actions can be captured. The algorithm applied to the key point data records finger poses accurately across application scenarios with a fixed holding gesture, enabling detection and comparison of complex hand holding gestures; the subsequent model building, detection and correction are therefore more accurate, improving recognition accuracy.
2. Low application cost: the method and system need only computer vision techniques, an image acquisition device and a data processing terminal, whereas traditional methods rely on contact sensors such as data acquisition gloves to reach the same detection target; application cost is thus effectively reduced.
Drawings
FIG. 1 is a flow chart of a method for correcting hand holding gesture according to the present invention;
FIG. 2 is a schematic diagram of a hand key point in an embodiment of the present invention;
fig. 3 is a schematic diagram of OpenPose-based key point detection of a mobile phone holding gesture in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the holding gesture key point data before processing in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the holding gesture key point data after processing in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an example in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
A hand holding gesture correcting method based on key points is shown in figure 1, and comprises the following steps:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
performing key point detection on hand gestures in the acquired image by using an OpenPose library, storing key point data in json format, wherein the key points selected in the embodiment are shown in fig. 2, and the detection process is shown in fig. 3;
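The patent fixes only that the key point data are stored "in json format". As a minimal sketch assuming OpenPose's standard output layout (a `people` array whose `hand_right_keypoints_2d` field flattens 21 (x, y, confidence) triples), the stored data could be parsed as follows; the function name and the choice of the right hand are illustrative:

```python
import json

def load_hand_keypoints(json_path, hand="hand_right_keypoints_2d"):
    """Parse 21 hand key points from an OpenPose-style JSON file.

    OpenPose flattens each detected hand into 21 (x, y, confidence)
    triples; only the (x, y) pixel coordinates are kept here.
    """
    with open(json_path) as f:
        data = json.load(f)
    flat = data["people"][0][hand]  # 63 numbers: x0, y0, c0, x1, y1, c1, ...
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 3)]
```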
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
assuming that the two key points selected in step 3-1 are A and B, all key point data are transformed by the similarity transformation (translation, rotation and scaling) that maps A and B onto their initialized coordinates:

$(X_i, Y_i)^T = s\,R\,\big((x_i, y_i)^T - (x_A, y_A)^T\big) + I_A^0, \qquad s = \frac{\lVert I_B^0 - I_A^0 \rVert}{\lVert B - A \rVert},$

wherein $(x_i, y_i)$ are the pixel coordinates of the $i$-th hand key point; $I_A^0$ and $I_B^0$ are the initialized key point coordinates of A and B respectively; $R$ is the rotation aligning the direction of $B-A$ with that of $I_B^0 - I_A^0$; and $(X_i, Y_i)$ are the pixel coordinates of the $i$-th key point after transformation;
then establishing a data set;
the key point holding gesture data is shown in fig. 4 before processing, and is shown in fig. 5 after processing;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
the step 3-3 is specifically as follows:
the following is done for the n samples of a single key point:

$k_i = x_i + y_i, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} k_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(k_i - \mu)^2,$

wherein $(x_i, y_i)$ is the $i$-th sample coordinate value of the key point;

for a key point coordinate $(x, y)$ of the holding gesture to be checked, let $s = x + y$ and calculate the probability density:

$f(s) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(s-\mu)^2}{2\sigma^2}\right);$

the probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction;
step 3-4: establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter;
$\mu = (\mu_0, \mu_1, \ldots, \mu_{n-1})^T, \qquad \Sigma = \mathrm{diag}(\sigma_0^2, \sigma_1^2, \ldots, \sigma_{n-1}^2),$

wherein $n$ is the number of key points; $\mu$ is the mean vector composed of the mean of each key point; $\Sigma$ is the covariance matrix composed of the variance of each key point;

for the holding gesture $S = (s_0, s_1, \ldots, s_{n-1})^T$, the probability density is calculated:

$f(S) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(S-\mu)^T \Sigma^{-1} (S-\mu)\right);$
step 4: collecting a new hand gesture image and detecting key points of the new hand gesture image;
step 5: preprocessing the key point data of the new image and inputting it into the standard gesture model; the Gaussian mixture model obtained in step 3 is used to calculate the probability density of the gesture to be detected relative to the standard gesture, and whether the current gesture is standard is judged; if so, the current gesture is output without correction; otherwise, step 6 is executed;
step 6: judging that the current gesture needs to be corrected, and simultaneously outputting key point data needing to be corrected, and prompting a user to correct the gesture at the position of the corresponding key point;
step 6-1: calculating probability density of each key point as a standard holding gesture;
step 6-2: selecting key points with probability density lower than a preset threshold value;
step 6-3: the output holding gesture is not standard, and the finger needing to be corrected is fed back.
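Steps 6-1 to 6-3 can be sketched as follows, assuming the 21-point OpenPose hand layout (0 = wrist, 1-4 thumb, 5-8 index, 9-12 middle, 13-16 ring, 17-20 little finger); the threshold value and all names here are illustrative, not fixed by the patent:

```python
# Assumed OpenPose 21-point hand layout: 0 = wrist, 1-4 thumb,
# 5-8 index, 9-12 middle, 13-16 ring, 17-20 little finger.
FINGER_OF_KEYPOINT = {0: "wrist"}
for base, name in zip((1, 5, 9, 13, 17),
                      ("thumb", "index", "middle", "ring", "little")):
    for k in range(base, base + 4):
        FINGER_OF_KEYPOINT[k] = name

def fingers_to_correct(densities, threshold):
    """Steps 6-1 to 6-3: given the per-key-point probability densities
    under the standard-grip model, select the key points below the
    threshold and report the fingers they belong to.

    densities: {keypoint_index: probability_density}
    """
    low = [k for k, p in densities.items() if p < threshold]
    return sorted({FINGER_OF_KEYPOINT[k] for k in low})
```

With the fig. 6 example, where key points 17, 18 and 20 score lowest, this would report the little finger.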
The basic idea of establishing the standard gesture model in this embodiment is as follows: the acquisition device collects a large number of samples of the standard holding gesture and extracts their key point information, and a Gaussian mixture model of the standard holding gesture is built from the pixel coordinate data of the key points of these samples.
Because the key point information is stored as the pixel coordinates of the 21 hand joints on the image, while the image region, angle and scale differ from sample to sample during acquisition, a model cannot be built directly from the raw data; a corresponding algorithm is therefore designed to solve these problems.
Algorithm principle: as shown in the key point diagram of fig. 2, the relative positions of the hand key points are fixed, so the positions of two key points can be determined in advance, and the problem of differing positions and scales across holding images can be solved by translating, rotating and scaling all key points as a whole according to the positional relation of these two key points.
From the key point schematic, the relative distance between hand key points 0 and 9 is essentially unchanged, so determining the positions of these two key points determines the position of the whole hand.
The specific transformation maps key points 0 and 9 onto their initialized coordinates by translation, rotation and scaling:

$(X_i, Y_i)^T = s\,R\,\big((x_i, y_i)^T - (x_0, y_0)^T\big) + I_0, \qquad s = \frac{\lVert I_9 - I_0 \rVert}{\lVert (x_9, y_9) - (x_0, y_0) \rVert},$

wherein $(x_i, y_i)$ are the pixel coordinates of the $i$-th hand key point; $I_0 = (0, 0)$ and $I_9 = (20, 20)$ are the initialized coordinates of key points 0 and 9 respectively; $R$ is the rotation aligning the direction from key point 0 to key point 9 with the direction from $I_0$ to $I_9$; and $(X_i, Y_i)$ are the pixel coordinates of the $i$-th key point after transformation.
The collected standard key point data are transformed by this algorithm to obtain a data set K_t. Since the standard holding gesture corresponds to a fixed hand pose, the coordinates of the same key point across the different standard samples in K_t should follow a Gaussian distribution. A Gaussian model is therefore built independently for each key point to estimate its mean and variance, and a multivariate Gaussian model of the whole holding gesture is then built from the model parameters of all key points.
The n samples of a single key point are processed as follows:

$k_i = x_i + y_i, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} k_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(k_i - \mu)^2,$

wherein $(x_i, y_i)$ is the $i$-th sample coordinate value of the key point;

for a key point coordinate $(x, y)$ of the holding gesture to be checked, let $s = x + y$ and calculate the probability density:

$f(s) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(s-\mu)^2}{2\sigma^2}\right).$

The probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction.
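The per-key-point model above can be sketched as follows, using the stated reduction k_i = x_i + y_i and a one-dimensional Gaussian fitted by its sample mean and variance (function names are illustrative):

```python
import math

def fit_keypoint_gaussian(samples):
    """Fit a 1-D Gaussian to one key point: each sample (x, y) is reduced
    to k = x + y, and the mean and variance of k are estimated."""
    ks = [x + y for x, y in samples]
    n = len(ks)
    mu = sum(ks) / n
    var = sum((k - mu) ** 2 for k in ks) / n
    return mu, var

def keypoint_density(mu, var, point):
    """Gaussian probability density of a test coordinate (x, y), reduced
    to s = x + y, under the fitted per-key-point model."""
    s = point[0] + point[1]
    return math.exp(-(s - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```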
The parameters of the multidimensional Gaussian distribution probability density function for the whole holding gesture are assembled from the per-key-point parameters as follows:

$\mu = (\mu_0, \mu_1, \ldots, \mu_{n-1})^T, \qquad \Sigma = \mathrm{diag}(\sigma_0^2, \sigma_1^2, \ldots, \sigma_{n-1}^2),$

wherein $n$ is the number of key points (here $n = 21$); $\mu$ is the mean vector composed of the mean of each key point; $\Sigma$ is the covariance matrix composed of the variance of each key point;

for the holding gesture $S = (s_0, s_1, \ldots, s_{20})^T$, the probability density is calculated:

$f(S) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(S-\mu)^T \Sigma^{-1} (S-\mu)\right).$
the probability density may be used to determine whether the overall hand grip is standard.
An example is provided below. The model parameters are used to evaluate a test gesture, and the key points of the test gesture are visually compared with the template key points. After the data are processed, as shown in fig. 6, key points 17, 18 and 20 have the lowest probabilities, so the little finger is judged to deviate to a larger extent and its pose needs to be corrected.
The embodiment also relates to a hand holding gesture correcting system based on the key points, which comprises an image acquisition device and a data processing terminal, wherein the image acquisition device is connected with the data processing terminal, and the data processing terminal is used for executing the hand holding gesture correcting method based on the key points.
The embodiment also relates to a computer readable medium in which a program for executing the above hand holding gesture correction method based on key points is stored.
This scheme can analyze a user's holding gesture in different application scenarios; by comparison with the standard holding gesture, a beginner can correct holding errors during practice and thus improve. The method can be used for comparing and correcting hand gestures such as the grips for a writing brush, a badminton racket or a golf club.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.
Claims (3)
1. The hand holding gesture correcting method based on the key points is characterized by comprising the following steps of:
step 1: acquiring a standard hand gesture image by using an image acquisition device;
step 2: detecting key points of hand gestures in the image and storing key point data;
step 3: preprocessing the key point data obtained in the step 2, and establishing a standard gesture model by utilizing the preprocessed data;
step 4: acquiring a real-time hand gesture image and detecting key points of the hand gesture image;
step 5: preprocessing key point data of a new image, inputting a standard gesture model, judging whether the current gesture is a standard gesture, if yes, outputting the current gesture without correction, otherwise, executing the step 6;
step 6: judging that the current gesture needs to be corrected, and simultaneously outputting key point data needing to be corrected, and prompting a user to correct the gesture at the position of the corresponding key point;
the step 3 specifically comprises the following steps:
step 3-1: selecting two key points with the relative distances kept unchanged in various holding postures from all the key points;
step 3-2: performing transformation processing on all the key point data based on the key points selected in the step 3-1 to obtain a data set;
step 3-3: establishing a Gaussian model by using all sample data of the same key point;
step 3-4: establishing a multidimensional Gaussian mixture model, namely a standard gesture model, for the whole holding gesture by utilizing each key point parameter;
the step 3-3 specifically comprises the following steps:
the following is done for the n samples of a single key point:

$k_i = x_i + y_i, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} k_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(k_i - \mu)^2,$

wherein $(x_i, y_i)$ is the $i$-th sample coordinate value of the key point;

for a key point coordinate $(x, y)$ of the holding gesture to be checked, let $s = x + y$ and calculate the probability density:

$f(s) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(s-\mu)^2}{2\sigma^2}\right);$

the probability density is used to determine whether a single key point conforms to the standard holding gesture key point, thereby indicating the specific single finger that requires correction.
2. The hand holding posture correcting method based on the key points according to claim 1, wherein the step 2 is specifically:
and detecting key points of hand gestures in the acquired images by using an OpenPose library, and storing the key point data in json format.
3. The hand holding gesture correcting system based on the key points is characterized by comprising an image acquisition device and a data processing terminal; the image acquisition equipment is connected with the data processing terminal; the data processing terminal is used for executing the hand holding gesture correcting method based on the key points according to any one of claims 1-2.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110345365.8A (CN112949587B) | 2021-03-31 | 2021-03-31 | Hand holding gesture correction method, system and computer readable medium based on key points
Publications (2)

Publication Number | Publication Date
---|---
CN112949587A (en) | 2021-06-11
CN112949587B (en) | 2023-05-02
Family
- Family ID: 76231213

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110345365.8A (CN112949587B, Active) | Hand holding gesture correction method, system and computer readable medium based on key points | 2021-03-31 | 2021-03-31

Country Status (1)

Country | Link
---|---
CN | CN112949587B (en)
Families Citing this family (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN115082957A | 2022-05-27 | 2022-09-20 | Institute of Semiconductors, Chinese Academy of Sciences | Writing gesture recognition method and device, electronic equipment and storage medium
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110147767A (en) * | 2019-05-22 | 2019-08-20 | 深圳市凌云视迅科技有限责任公司 | Three-dimension gesture attitude prediction method based on two dimensional image |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956523B (en) * | 2016-04-22 | 2019-09-17 | 广东小天才科技有限公司 | Pen holding posture correction method and device |
CN106372294A (en) * | 2016-08-30 | 2017-02-01 | 苏州品诺维新医疗科技有限公司 | Method and device for correcting posture |
CN106344030A (en) * | 2016-08-30 | 2017-01-25 | 苏州品诺维新医疗科技有限公司 | Posture correction method and device |
CN106781327B (en) * | 2017-03-09 | 2020-02-07 | 广东小天才科技有限公司 | Sitting posture correction method and mobile terminal |
CN109858524B (en) * | 2019-01-04 | 2020-10-16 | 北京达佳互联信息技术有限公司 | Gesture recognition method and device, electronic equipment and storage medium |
CN110349096B (en) * | 2019-06-14 | 2024-08-02 | 平安科技(深圳)有限公司 | Palm image correction method, device, equipment and storage medium |
CN111347438A (en) * | 2020-02-24 | 2020-06-30 | 五邑大学 | Learning type robot and learning correction method based on same |
CN112541382B (en) * | 2020-04-13 | 2024-06-21 | 深圳优地科技有限公司 | Auxiliary movement method, system and identification terminal equipment |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |