CN109829442A - Method and system for camera-based human action scoring - Google Patents

Method and system for camera-based human action scoring

Info

Publication number
CN109829442A
Authority
CN
China
Prior art keywords
key point
movement
picture
human
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910132276.8A
Other languages
Chinese (zh)
Inventor
房鹏展
吕晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Focus Technology Co Ltd
Original Assignee
Focus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Focus Technology Co Ltd
Priority to CN201910132276.8A
Publication of CN109829442A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for camera-based human action scoring. Based on the annotations of human regions and key points in a data set, a human key point information extraction model is trained with a deep neural network. With this model, the key point position information of an action picture to be scored and of a standard action picture is extracted, and the similarity between the two sets of key point positions is computed, thereby realizing action scoring. The method and system score actions efficiently and accurately, do not depend on experts, and only require pictures or video shot by a camera.

Description

Method and system for camera-based human action scoring
Technical field
The present invention relates to the field of computer deep learning, and more particularly to a method and system for camera-based human action scoring.
Background art
In real life there are many activities with standard tutorial actions, such as dancing, fitness, weight lifting, long jump, and high jump. Scoring and correcting these actions usually requires an expert, yet many amateurs can only practice against the standard action videos of a tutorial; lacking expert guidance, they find it hard to assess whether their own movements are up to standard.
The present invention designs a method and system for this action scoring problem. Based on the human key point positions in a data set, a human key point extraction model is trained with a deep neural network; with this model, the key point positions of the action to be scored and of the tutorial standard action are extracted, and the similarity between the two sets of key points is computed, thereby realizing action scoring. The method and system score actions efficiently and accurately, do not depend on experts, and only require pictures or video shot by a camera.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a method and system for camera-based human action scoring.
To solve the above technical problems, the present invention provides a camera-based human action scoring method, characterized by comprising the following steps:
Step 1: prepare a human action picture data set and annotate human regions and key points to obtain annotation information; a human region is a specified region of a human body, and key points are key points within that region;
Step 2: using the human region and key point annotations in the human action picture data set, train a deep learning model with a deep neural network to extract key point position information. The human key point position information extraction model takes a picture Image as input and outputs {KeyPoint_ki}; train a deep learning model F such that F(Image) = {KeyPoint_ki}.
Step 3: capture a picture containing the action to be scored with a camera; for the picture of the action to be scored and the picture of the standard action, call the deep learning model to extract the human key point position information in the two pictures respectively;
Step 4: compare the human key point position information of the two actions, compute the similarity, and score the action.
In step 1, the human action picture data set contains at least 100,000 action pictures. In each picture the human region and the positions of the key points are annotated; the correspondence between numbers and names in the position information is: 1 left eye, 2 right eye, 3 mouth, 4 neck, 5 left shoulder, 6 left elbow, 7 left wrist, 8 left hand, 9 right shoulder, 10 right elbow, 11 right wrist, 12 right hand, 13 left hip, 14 left knee, 15 left ankle, 16 left toe, 17 right hip, 18 right knee, 19 right ankle, 20 right toe. The final annotation for each picture is {Body_k, KeyPoint_ki}, where k denotes the k-th person in the picture, Body_k denotes the region position information of the k-th person, i denotes the i-th key point, and KeyPoint_ki denotes the position coordinates of the i-th key point of the k-th person.
In step 2, training the deep learning model further includes the following steps:
Step 1: train a human region detection model with a deep neural network such that F_1(Image) = {Body_k}, used to detect human region positions in a picture, where each human region is a rectangle whose position information is given by the coordinates of its upper-left and lower-right corners.
Step 2: train a human-region key point extraction model with a deep neural network such that F_2(Body_k) = {KeyPoint_ki}, used to extract the key point positions in a human region picture.
Step 3: combine the above two models to obtain the human key point position information extraction model F(Image) = F_2(F_1(Image)) = {KeyPoint_ki}, used to extract the human key point position information in a picture;
In step 3, the action picture to be scored is Image_1 with key point position information F(Image_1) = {KeyPoint_1i}, and the standard action picture is Image_2 with key point position information F(Image_2) = {KeyPoint_2i}.
In step 4, the final score of the action is:
S(Image_1, Image_2) = Similarity({KeyPoint_1i}, {KeyPoint_2i})
where Similarity computes the similarity between the key points of the two actions; the chosen method is to compute the average of the cosine similarities between the vectors formed by adjacent key points.
In step 4, 19 vectors Vector_t describing the action are chosen, where t ranges from 1 to 19 and the numbered vectors are: 1 left eye -> right eye, 2 mouth -> left eye, 3 mouth -> neck, 4 neck -> left shoulder, 5 left shoulder -> left elbow, 6 left elbow -> left wrist, 7 left wrist -> left hand, 8 neck -> right shoulder, 9 right shoulder -> right elbow, 10 right elbow -> right wrist, 11 right wrist -> right hand, 12 neck -> left hip, 13 left hip -> left knee, 14 left knee -> left ankle, 15 left ankle -> left toe, 16 neck -> right hip, 17 right hip -> right knee, 18 right knee -> right ankle, 19 right ankle -> right toe. The final score of the action picture is then:
S(Image_1, Image_2) = (1/19) * Σ_{t=1..19} cos(Vector_1t, Vector_2t)
where cos(u, v) = u·v / (|u||v|) and Vector_1t, Vector_2t denote the t-th vectors of the two actions.
In step 3, when the action to be scored comes from an action video, the action picture of every frame in the video is intercepted, each action picture is scored to obtain a scoring result, and the average of all the picture scoring results is computed, specifically:
The video to be scored is Video_1; the picture set intercepted by frame is {Image_1s}; the key point sets obtained by calling the human key point extraction model are {KeyPoint_1si}; the resulting vector sets are {Vector_1st}. The corresponding standard video is Video_2, with picture set {Image_2s}, key point sets {KeyPoint_2si}, and vector sets {Vector_2st}, where s denotes the video frame index, i the key point number, and t the vector number. The final score of the video is then:
S(Video_1, Video_2) = (1/N) * Σ_{s=1..N} S(Image_1s, Image_2s)
where N is the number of intercepted frames.
A camera-based human action scoring system, characterized by comprising sequentially connected modules: a data source module, a model training module, and an action scoring module.
The data source module holds the data set prepared for training the human key point extraction model, mainly comprising the human action picture data set and the annotations of human regions and key points.
The model training module trains the deep learning model using the data set and a deep neural network.
The action scoring module provides the action scoring interface: it receives the pictures of the action to be scored and of the standard action, calls the model, and finally returns the scoring result. It comprises four sequentially connected submodules: a picture receiving submodule for the action to be scored and the standard action, a human key point position information extraction submodule, a human key point position similarity computation submodule, and a scoring result return submodule. The picture receiving submodule receives the picture information of the action to be scored and of the standard action; the human key point position information extraction submodule calls the deep learning model to extract the key point positions of the two actions respectively; the similarity computation submodule computes the similarity between the two sets of key point position information and scores the action; the scoring result return submodule returns the final scoring result.
Advantageous effects of the invention: based on the human key point positions in a data set, a human key point extraction model is trained with a deep neural network; with this model the key point positions of the action to be scored and of the standard action are extracted, and the similarity between the two sets of key points is computed, thereby realizing action scoring. The method and system score actions efficiently and accurately, do not depend on experts, and only require pictures or video shot by a camera.
Brief description of the drawings
Fig. 1 is a flow diagram of the camera-based human action scoring method in an exemplary embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the camera-based human action scoring system in an exemplary embodiment of the present invention.
Specific embodiment
The present invention is further illustrated by exemplary embodiments with reference to the accompanying drawings:
As shown in Fig. 1, the present invention discloses a camera-based human action scoring method, comprising:
Step 11: prepare a human action picture data set and annotate human regions and key points to obtain annotation information. This embodiment prepares the data set as follows, taking dance actions as an example.
Step 111: collect 100 dance action videos, extract pictures frame by frame, and obtain 100,000 dance action pictures by manual screening, covering as many different actions as possible.
Step 112: manually annotate the action pictures, marking the position information of the human region and the key points in each picture. The correspondence between numbers and names in the position information is: 1 left eye, 2 right eye, 3 mouth, 4 neck, 5 left shoulder, 6 left elbow, 7 left wrist, 8 left hand, 9 right shoulder, 10 right elbow, 11 right wrist, 12 right hand, 13 left hip, 14 left knee, 15 left ankle, 16 left toe, 17 right hip, 18 right knee, 19 right ankle, 20 right toe. More key points yield more detailed action features and help score actions more accurately; the key points can be chosen flexibly according to the action parts to be scored, and the disclosed method and system remain applicable. The final annotation for each picture is {Body_k, KeyPoint_ki}, where k denotes the k-th person in the picture, Body_k denotes the rectangular region position information of the k-th person (the coordinates of the upper-left and lower-right corners of the rectangle), i denotes the i-th key point, and KeyPoint_ki denotes the position coordinates of the i-th key point of the k-th person.
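For illustration only, the {Body_k, KeyPoint_ki} annotation of one picture might be stored as the following Python structure; this is a minimal sketch under the assumption of a dictionary layout, since the patent does not prescribe a storage format, and all field names are hypothetical:

```python
# Hypothetical annotation record for one picture following the
# {Body_k, KeyPoint_ki} scheme of step 112. Coordinates are pixels.
annotation = {
    "image": "dance_000001.jpg",
    "people": [  # one entry per person k in the picture
        {
            # Body_k: rectangle given by upper-left and lower-right corners
            "body": {"top_left": (412, 57), "bottom_right": (688, 913)},
            # KeyPoint_ki: key point number i -> (x, y) position
            "keypoints": {
                1: (502, 101),  # left eye
                2: (541, 99),   # right eye
                3: (522, 140),  # mouth
                4: (520, 197),  # neck
                # ... numbers 5 to 20 follow the same convention
            },
        },
    ],
}
```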
Step 12: using the annotations of human regions and key points, train the human key point position information extraction model with a deep neural network. The model takes a picture Image as input and outputs {KeyPoint_ki}; following the steps below, train a deep learning model F such that F(Image) = {KeyPoint_ki}.
Step 121: train a human region detection model with a deep neural network such that F_1(Image) = {Body_k}, used to detect human region positions in a picture.
Step 122: train a human-region key point extraction model with a deep neural network such that F_2(Body_k) = {KeyPoint_ki}, used to extract the key point positions in a human region picture.
Step 123: combine the above two models to obtain the human key point position information extraction model F(Image) = F_2(F_1(Image)) = {KeyPoint_ki}, finally used to extract the human key point position information in a picture.
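A minimal sketch of the step 123 composition F(Image) = F_2(F_1(Image)), assuming `detect_bodies` and `extract_keypoints` stand in for the trained networks F_1 and F_2 (the patent does not fix their architectures) and that the picture is a NumPy-style array indexed as image[y, x]:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]
Box = Tuple[Point, Point]  # Body_k: upper-left and lower-right corners

def extract_all_keypoints(
    image,
    detect_bodies: Callable[..., List[Box]],             # F_1: picture -> {Body_k}
    extract_keypoints: Callable[..., Dict[int, Point]],  # F_2: body crop -> {KeyPoint_ki}
) -> List[Dict[int, Point]]:
    """F(Image) = F_2(F_1(Image)): key points for every person in the picture."""
    results = []
    for (x1, y1), (x2, y2) in detect_bodies(image):
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        local = extract_keypoints(crop)
        # Shift region-local coordinates back into full-picture coordinates.
        results.append({i: (px + x1, py + y1) for i, (px, py) in local.items()})
    return results
```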
This method locates human key point position information quickly and accurately, which facilitates accurate action scoring later.
Step 13: for the action picture to be scored and the standard action picture, call the human key point extraction model to extract the human key points of the two actions respectively. The action picture to be scored is Image_1 with key points F(Image_1) = {KeyPoint_1i}; the tutorial standard action picture is Image_2 with key points F(Image_2) = {KeyPoint_2i}.
Step 14: compare the key points of the two actions, compute the similarity, and score the action. The final score is:
S(Image_1, Image_2) = Similarity({KeyPoint_1i}, {KeyPoint_2i})
where Similarity computes the similarity between the key points of the two actions; the chosen method is to compute the average of the cosine similarities between the vectors formed by adjacent key points. Taking dance actions as an example, this embodiment chooses the following 19 vectors Vector_t describing the dance action, where t ranges from 1 to 19 and the numbered vectors are: 1 left eye -> right eye, 2 mouth -> left eye, 3 mouth -> neck, 4 neck -> left shoulder, 5 left shoulder -> left elbow, 6 left elbow -> left wrist, 7 left wrist -> left hand, 8 neck -> right shoulder, 9 right shoulder -> right elbow, 10 right elbow -> right wrist, 11 right wrist -> right hand, 12 neck -> left hip, 13 left hip -> left knee, 14 left knee -> left ankle, 15 left ankle -> left toe, 16 neck -> right hip, 17 right hip -> right knee, 18 right knee -> right ankle, 19 right ankle -> right toe. The final score of the action picture is then:
S(Image_1, Image_2) = (1/19) * Σ_{t=1..19} cos(Vector_1t, Vector_2t)
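A sketch of the per-picture score under the definitions above: the adjacency list and the equal-weight average come from the description, while the function and variable names are illustrative assumptions:

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

# (start, end) key point pairs defining Vector_t, using the
# key point numbering 1-20 introduced in step 112.
EDGES: List[Tuple[int, int]] = [
    (1, 2), (3, 1), (3, 4),                 # head: eyes, mouth, neck
    (4, 5), (5, 6), (6, 7), (7, 8),         # left arm
    (4, 9), (9, 10), (10, 11), (11, 12),    # right arm
    (4, 13), (13, 14), (14, 15), (15, 16),  # left leg
    (4, 17), (17, 18), (18, 19), (19, 20),  # right leg
]

def cosine(u: Point, v: Point) -> float:
    """Cosine similarity of two 2-D vectors; 0.0 for degenerate vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    return dot / norm if norm else 0.0

def score_picture(kp1: Dict[int, Point], kp2: Dict[int, Point]) -> float:
    """S(Image_1, Image_2): mean cosine similarity over the Vector_t pairs."""
    sims = []
    for a, b in EDGES:
        v1 = (kp1[b][0] - kp1[a][0], kp1[b][1] - kp1[a][1])
        v2 = (kp2[b][0] - kp2[a][0], kp2[b][1] - kp2[a][1])
        sims.append(cosine(v1, v2))
    return sum(sims) / len(sims)
```

A score of 1.0 then means every body-segment vector points in the same direction in both pictures.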
This method can score the action as a whole, and can also, through the similarity of individual vectors, point out the low-scoring local actions (such as the head, shoulder, hand, leg, or foot), helping users improve their low-scoring actions in a targeted way. The vectors can be chosen flexibly according to the action parts to be evaluated and the annotated key point information, and the disclosed method and system remain applicable.
When the action to be scored comes from an action video, the action picture of every frame in the video is intercepted, each action picture is scored to obtain a scoring result, and the average of all the picture scoring results is computed. Suppose the video to be scored is Video_1; the picture set intercepted by frame is {Image_1s}; the key point sets obtained by calling the human key point extraction model are {KeyPoint_1si}; the resulting vector sets are {Vector_1st}. The corresponding tutorial standard video is Video_2, with picture set {Image_2s}, key point sets {KeyPoint_2si}, and vector sets {Vector_2st}, where s denotes the video frame index, i the key point number, and t the vector number. The final score of the video is then:
S(Video_1, Video_2) = (1/N) * Σ_{s=1..N} S(Image_1s, Image_2s)
where N is the number of intercepted frames.
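A sketch of the video-level score built on a per-picture scorer such as `score_picture` above; pairing the s-th frame of one video with the s-th frame of the other is an assumption consistent with the shared frame index s, and frame extraction itself is left abstract:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]
Keypoints = Dict[int, Point]

def score_video(
    frames1: List[Keypoints],  # {KeyPoint_1si}: key points per intercepted frame of Video_1
    frames2: List[Keypoints],  # {KeyPoint_2si}: key points per intercepted frame of Video_2
    score_picture: Callable[[Keypoints, Keypoints], float],
) -> float:
    """S(Video_1, Video_2): average of the per-frame picture scores."""
    n = min(len(frames1), len(frames2))  # pair frames by index s
    if n == 0:
        raise ValueError("no frames to score")
    return sum(score_picture(frames1[s], frames2[s]) for s in range(n)) / n
```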
Scoring against a standard action video in this way achieves high scoring accuracy and fast evaluation. A one-frame interception interval suits most scenes, and the interception interval can be adjusted to flexibly accommodate special action rates or video lengths, giving the method wide applicability.
As shown in Fig. 2, the present invention discloses a camera-based human action scoring system, mainly comprising sequentially connected modules: a data source module 21, a model training module 22, and an action scoring module 23.
The data source module 21 holds the data set prepared for training the human key point extraction model, mainly comprising the human action picture data set and the annotations of human regions and key points.
The model training module 22 trains the human key point position information extraction model using the data set and a deep neural network.
The action scoring module 23 provides the action scoring interface: it receives the pictures of the action to be scored and of the standard action, calls the model, and finally returns the scoring result. It comprises four sequentially connected submodules: a picture receiving submodule 231 for the action to be scored and the standard action, a human key point position information extraction submodule 232, a human key point position similarity computation submodule 233, and a scoring result return submodule 234;
The picture receiving submodule 231 receives the picture information of the action to be scored and of the standard action;
The human key point position information extraction submodule 232, for the action to be scored and the standard action, calls the human key point position information extraction model to extract the key point positions of the two actions respectively;
The human key point position similarity computation submodule 233 computes the similarity between the two sets of key point position information and scores the action;
The scoring result return submodule 234 returns the final scoring result.
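Tying the submodules together, a minimal sketch of the action scoring module's flow, reusing the hypothetical `extract_all_keypoints` and `score_picture` helpers sketched above (all names are illustrative, not prescribed by the patent):

```python
def score_action(picture_to_score, standard_picture, detect_bodies, extract_keypoints):
    """Scoring module flow: receive the two pictures (submodule 231), extract
    key points (232), compute similarity as the score (233), return it (234)."""
    # Assumes at least one person is detected in each picture; take the first.
    kp1 = extract_all_keypoints(picture_to_score, detect_bodies, extract_keypoints)[0]
    kp2 = extract_all_keypoints(standard_picture, detect_bodies, extract_keypoints)[0]
    return score_picture(kp1, kp2)
```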
The present invention mainly provides a method and system for camera-based human action scoring: based on the human key point positions in a data set, a human key point extraction model is trained with a deep neural network; with this model the key point positions of the action to be scored and of the standard action are extracted, and the similarity between the two sets of key points is computed, thereby realizing action scoring. The method and system score actions efficiently and accurately, do not depend on experts, and only require pictures or video shot by a camera.
The above embodiments do not limit the present invention in any way; all other improvements and applications made to the above embodiments by way of equivalent transformation fall within the protection scope of the present invention.

Claims (7)

1. A camera-based human action scoring method, characterized by comprising the following steps:
Step 1: prepare a human action picture data set and annotate human regions and key points to obtain annotation information; a human region is a specified region of a human body, and key points are key points within that region;
Step 2: using the human region and key point annotations in the human action picture data set, train a human key point position information extraction model with a deep neural network to extract key point position information; the model takes a picture Image as input and outputs {KeyPoint_ki}; train a deep learning model F such that F(Image) = {KeyPoint_ki};
Step 3: capture a picture containing the action to be scored with a camera; for the picture of the action to be scored and the picture of the standard action, call the human key point position information extraction model to extract the human key point position information in the two pictures respectively;
Step 4: compare the human key point position information of the two actions, compute the similarity, and score the action.
2. The camera-based human action scoring method of claim 1, characterized in that in step 1, the human action picture data set contains at least 100,000 action pictures, with the human region and key point annotations marked in each picture; the correspondence between numbers and names in the annotation information is: 1 left eye, 2 right eye, 3 mouth, 4 neck, 5 left shoulder, 6 left elbow, 7 left wrist, 8 left hand, 9 right shoulder, 10 right elbow, 11 right wrist, 12 right hand, 13 left hip, 14 left knee, 15 left ankle, 16 left toe, 17 right hip, 18 right knee, 19 right ankle, 20 right toe; the final annotation for each picture is {Body_k, KeyPoint_ki}, where k denotes the k-th person in the picture, Body_k denotes the region position information of the k-th person, i denotes the i-th key point, and KeyPoint_ki denotes the position coordinates of the i-th key point of the k-th person.
3. The camera-based human action scoring method of claim 2, characterized in that in step 2, training the deep learning model further includes the following steps:
Step 1: train a human region detection model with a deep neural network such that F_1(Image) = {Body_k}, used to detect human region positions in a picture, where each human region is a rectangle whose position information is given by the coordinates of its upper-left and lower-right corners;
Step 2: train a human-region key point extraction model with a deep neural network such that F_2(Body_k) = {KeyPoint_ki}, used to extract the key point positions in a human region picture;
Step 3: combine the above two models to obtain the human key point position information extraction model F(Image) = F_2(F_1(Image)) = {KeyPoint_ki}, used to extract the human key point position information in a picture;
in step 3, the action picture to be scored is Image_1 with key point position information F(Image_1) = {KeyPoint_1i}, and the standard action picture is Image_2 with key point position information F(Image_2) = {KeyPoint_2i}.
4. The camera-based human action scoring method of claim 3, characterized in that in step 4, the final score of the action is:
S(Image_1, Image_2) = Similarity({KeyPoint_1i}, {KeyPoint_2i})
where Similarity computes the similarity between the key points of the two actions; the chosen method is to compute the average of the cosine similarities between the vectors formed by adjacent key points.
5. The camera-based human action scoring method of claim 4, characterized in that in step 4, 19 vectors Vector_t describing the action are chosen, where t ranges from 1 to 19 and the numbered vectors are: 1 left eye -> right eye, 2 mouth -> left eye, 3 mouth -> neck, 4 neck -> left shoulder, 5 left shoulder -> left elbow, 6 left elbow -> left wrist, 7 left wrist -> left hand, 8 neck -> right shoulder, 9 right shoulder -> right elbow, 10 right elbow -> right wrist, 11 right wrist -> right hand, 12 neck -> left hip, 13 left hip -> left knee, 14 left knee -> left ankle, 15 left ankle -> left toe, 16 neck -> right hip, 17 right hip -> right knee, 18 right knee -> right ankle, 19 right ankle -> right toe; the final score of the action picture is then:
S(Image_1, Image_2) = (1/19) * Σ_{t=1..19} cos(Vector_1t, Vector_2t)
6. The camera-based human action scoring method of claim 5, characterized in that in step 3, when the action to be scored comes from an action video, the action picture of every frame in the video is intercepted, each action picture is scored to obtain a scoring result, and the average of all the picture scoring results is computed, specifically:
the video to be scored is Video_1; the picture set intercepted by frame is {Image_1s}; the key point sets obtained by calling the human key point extraction model are {KeyPoint_1si}; the resulting vector sets are {Vector_1st}; the corresponding standard video is Video_2, with picture set {Image_2s}, key point sets {KeyPoint_2si}, and vector sets {Vector_2st}, where s denotes the video frame index, i the key point number, and t the vector number; the final score of the video is then:
S(Video_1, Video_2) = (1/N) * Σ_{s=1..N} S(Image_1s, Image_2s), where N is the number of intercepted frames.
7. A camera-based human action scoring system operating the method of any one of claims 1-6, characterized by comprising sequentially connected modules: a data source module, a model training module, and an action scoring module.
The data source module holds the data set prepared for training the human key point extraction model, mainly comprising the human action picture data set and the annotations of human regions and key points.
The model training module trains the human key point position information extraction model using the data set and a deep neural network.
The action scoring module provides the action scoring interface: it receives the pictures of the action to be scored and of the standard action, calls the model, and finally returns the scoring result; it comprises four sequentially connected submodules: a picture receiving submodule for the action to be scored and the standard action, a human key point position information extraction submodule, a human key point position similarity computation submodule, and a scoring result return submodule; the picture receiving submodule receives the picture information of the action to be scored and of the standard action; the human key point position information extraction submodule, for the action to be scored and the standard action, calls the human key point position information extraction model to extract the position information of the key points of the two actions respectively; the human key point position similarity computation submodule computes the similarity between the two sets of key point position information and scores the action; the scoring result return submodule returns the final scoring result.
CN201910132276.8A 2019-02-22 2019-02-22 Method and system for camera-based human action scoring, Pending, CN109829442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910132276.8A CN109829442A (en) 2019-02-22 2019-02-22 Method and system for camera-based human action scoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910132276.8A CN109829442A (en) 2019-02-22 2019-02-22 Method and system for camera-based human action scoring

Publications (1)

Publication Number Publication Date
CN109829442A (en) 2019-05-31

Family

ID=66864122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910132276.8A Pending CN109829442A (en) 2019-02-22 2019-02-22 Method and system for camera-based human action scoring

Country Status (1)

Country Link
CN (1) CN109829442A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392086A (en) * 2017-05-26 2017-11-24 深圳奥比中光科技有限公司 Apparatus for evaluating, system and the storage device of human body attitude
CN107219925A (en) * 2017-05-27 2017-09-29 成都通甲优博科技有限责任公司 Pose detection method, device and server
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 A kind of similarity calculating method, device and computer readable storage medium
CN109191588A (en) * 2018-08-27 2019-01-11 百度在线网络技术(北京)有限公司 Move teaching method, device, storage medium and electronic equipment

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222977A (en) * 2019-06-03 2019-09-10 张学志 One kind movement sport methods of marking based on computer vision and device
CN111767768A (en) * 2019-07-31 2020-10-13 北京京东尚科信息技术有限公司 Image processing method, device and equipment
CN110781857B (en) * 2019-11-05 2022-09-06 北京沃东天骏信息技术有限公司 Motion monitoring method, device, system and storage medium
CN110781857A (en) * 2019-11-05 2020-02-11 北京沃东天骏信息技术有限公司 Motion monitoring method, device, system and storage medium
CN113137923A (en) * 2020-01-17 2021-07-20 上海淡竹体育科技有限公司 Standing long jump sport result measuring method
CN111259822A (en) * 2020-01-19 2020-06-09 杭州微洱网络科技有限公司 Method for detecting key point of special neck in E-commerce image
CN111314665A (en) * 2020-03-07 2020-06-19 上海中科教育装备集团有限公司 Key video segment extraction system and method for video post-scoring
CN111797778A (en) * 2020-07-08 2020-10-20 龙岩学院 Automatic scoring method for breaking street dance anchor and wheat dance
CN111797778B (en) * 2020-07-08 2023-06-02 龙岩学院 Automatic scoring method for break-in street dance and wheat-linking dancing
CN111986260A (en) * 2020-09-04 2020-11-24 北京小狗智能机器人技术有限公司 Image processing method and device and terminal equipment
CN112348942A (en) * 2020-09-18 2021-02-09 当趣网络科技(杭州)有限公司 Body-building interaction method and system
CN112348942B (en) * 2020-09-18 2024-03-19 当趣网络科技(杭州)有限公司 Body-building interaction method and system
CN112381035A (en) * 2020-11-25 2021-02-19 山东云缦智能科技有限公司 Motion similarity evaluation method based on motion trail of skeleton key points
CN112990011A (en) * 2021-03-15 2021-06-18 上海工程技术大学 Body-building action recognition and evaluation method based on machine vision and deep learning
CN113095248B (en) * 2021-04-19 2022-10-25 中国石油大学(华东) Technical action correcting method for badminton
CN113095248A (en) * 2021-04-19 2021-07-09 中国石油大学(华东) Technical action correction method for badminton
CN114783046A (en) * 2022-03-01 2022-07-22 北京赛思信安技术股份有限公司 CNN and LSTM-based human body continuous motion similarity scoring method
CN116392800A (en) * 2023-04-23 2023-07-07 电子科技大学 Based on target detection and image processing standing long jump distance measuring method and system

Similar Documents

Publication Publication Date Title
CN109829442A (en) Method and system for camera-based human action scoring
CN104517102B (en) Student classroom attention detection method and system
CN106022213B (en) Human motion recognition method based on three-dimensional skeleton information
WO2018120964A1 (en) Posture correction method based on depth information and skeleton information
WO2021164283A1 (en) Clothing color recognition method, device and system based on semantic segmentation
JP6448223B2 (en) Image recognition system, image recognition apparatus, image recognition method, and computer program
CN109299659A (en) Human posture recognition method and system based on RGB camera and deep learning
CN109919141A (en) Pedestrian re-identification method based on skeleton pose
CN106980852B (en) Medicine identification system and method based on corner detection and matching
US11945125B2 (en) Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis
CN110008913A (en) Pedestrian re-identification method based on fusion of attitude estimation and viewpoint mechanism
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
CN104794449B (en) Gait energy image acquisition and identity recognition method based on human body HOG features
CN111027432A (en) Gait feature-based visual following robot method
CN102567703A (en) Hand motion identification information processing method based on classification characteristic
CN108921881A (en) Cross-camera target tracking method based on homography constraint
CN109766796A (en) Deep pedestrian detection method for dense crowds
CN110472473A (en) Method for detecting falls of people on stairs based on pose estimation
CN110084192A (en) Quick dynamic hand gesture recognition system and method based on target detection
CN113537019B (en) Detection method for identifying wearing of safety helmet of transformer substation personnel based on key points
CN109840478A (en) Action evaluation method, device, mobile terminal, and readable storage medium
KR102377767B1 (en) Handwriting and arm movement learning-based sign language translation system and method
CN106909890A (en) Human behavior recognition method based on body-part cluster features
Yang et al. Human exercise posture analysis based on pose estimation
CN109766782A (en) Real-time body action identification method based on SVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190531