CN109635644A - User action evaluation method, device and readable medium - Google Patents
- Publication number
- CN109635644A CN109635644A CN201811296176.0A CN201811296176A CN109635644A CN 109635644 A CN109635644 A CN 109635644A CN 201811296176 A CN201811296176 A CN 201811296176A CN 109635644 A CN109635644 A CN 109635644A
- Authority
- CN
- China
- Prior art keywords
- movement
- standard
- frame image
- posture information
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
The invention discloses a user action evaluation method, device, and readable medium, relating to the field of sports and health technology. In the method provided by the invention, several frames of images are captured while a user exercises along with a standard exercise video. For each frame, pose recognition is performed on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in that frame. For each frame, the gap between the determined pose information and the pose information presented in the corresponding standard image of the standard exercise video is then determined. Finally, an evaluation result for the user action is determined from the per-frame gaps. With this method, the user's actions while exercising along with a standard exercise video can be evaluated objectively and accurately.
Description
Technical field
The present invention relates to the field of sports and health technology, and in particular to a user action evaluation method, device, and readable medium.
Background art
The human body has a large number of joints with wide ranges of motion, so human movement is highly flexible and variable. At the same time, there are many types of exercise, and the criteria for judging an action differ widely across professional sports domains. At present, the accuracy of an action is judged mainly by experienced professionals. This method depends heavily on the individual judge and can therefore yield inaccurate evaluations of the actions a user performs.
Summary of the invention
Embodiments of the present invention provide a user action evaluation method, device, and readable medium to improve the accuracy of action evaluation results.
In a first aspect, an embodiment of the present invention provides a user action evaluation method, comprising:
capturing at least one frame of image while a user exercises along with a standard exercise video;
for each frame, performing pose recognition on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in the image;
determining, for each frame, the gap between the pose information determined from that frame and the pose information presented in the corresponding standard image of the standard exercise video;
determining an evaluation result for the user action based on the gaps determined for the frames.
Preferably, the pose information includes feature information of characteristic human body parts, and determining the gap between the pose information determined from each frame and the pose information presented in the corresponding standard image of the standard exercise video specifically includes performing the following process for each frame:
determining at least one basic action constituted by the feature information of the characteristic human body parts in the frame;
for each basic action, determining the gap between that basic action and the corresponding standard basic action in a standard action library, where the standard action library contains several standard basic actions obtained by decomposing the user actions in the standard exercise video;
determining, based on the gaps determined for the basic actions, the gap between the pose information of the frame and the pose information presented in the corresponding standard image of the standard exercise video.
Preferably, the pose recognition model is obtained by training a VGG network together with a convolutional layer group, where the convolutional layer group includes several consecutive convolutional layers.
Optionally, the method further includes:
quantizing the evaluation result to obtain a quantified result; and
outputting guidance for the user action based on the quantified result.
In a second aspect, an embodiment of the present invention provides a user action evaluation device, comprising:
an acquisition unit for capturing at least one frame of image while a user exercises along with a standard exercise video;
a pose recognition unit for performing, for each frame, pose recognition on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in the image;
a first determination unit for determining, for each frame, the gap between the pose information determined from that frame and the pose information presented in the corresponding standard image of the standard exercise video;
a second determination unit for determining an evaluation result for the user action based on the gaps determined for the frames.
Preferably, the pose information includes feature information of characteristic human body parts, and the first determination unit is specifically configured to perform the following process for each frame: determine at least one basic action constituted by the feature information of the characteristic human body parts in the frame; for each basic action, determine the gap between that basic action and the corresponding standard basic action in the standard action library, where the standard action library contains several standard basic actions obtained by decomposing the user actions in the standard exercise video; and, based on the gaps determined for the basic actions, determine the gap between the pose information of the frame and the pose information presented in the corresponding standard image of the standard exercise video.
Preferably, the pose recognition model is obtained by training a VGG network together with a convolutional layer group, where the convolutional layer group includes several consecutive convolutional layers.
Optionally, the device further includes:
a quantization unit for quantizing the evaluation result to obtain a quantified result; and
an output unit for outputting, based on the quantified result obtained by the quantization unit, guidance for the user action.
In a third aspect, an embodiment of the present invention provides a non-volatile computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the user action evaluation method provided by the present application.
In a fourth aspect, an embodiment of the present invention provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor, where
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the user action evaluation method provided by the present application.
The invention has the following advantages:
With the user action evaluation method, device, and readable medium provided by embodiments of the present invention, at least one frame of image is captured while the user exercises along with a standard exercise video; for each frame, pose recognition is performed on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in the image; for each frame, the gap between the determined pose information and the pose information presented in the corresponding standard image of the standard exercise video is determined; and an evaluation result for the user action is determined from the per-frame gaps. With this method, the user's actions while exercising along with the standard exercise video can be evaluated objectively and accurately.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained through the structure particularly pointed out in the written description, claims, and accompanying drawings.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of the present invention. The illustrative embodiments of the present invention and their descriptions serve to explain the invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a schematic structural diagram of a computing device 10 for implementing the user action evaluation method provided by an embodiment of the present invention;
Fig. 2 is a first schematic flowchart of the user action evaluation method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart, provided by an embodiment of the present invention, of determining the gap between the pose information determined from each frame and the pose information presented in the corresponding standard image of the standard exercise video;
Fig. 4 is a second schematic flowchart of the user action evaluation method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the user action evaluation device provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the hardware structure of an electronic device for implementing the user action evaluation method provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide a user action evaluation method, device, and readable medium to improve the accuracy of action evaluation results.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are intended only to illustrate and explain the present invention, not to limit it, and that, in the absence of conflict, the embodiments of the present invention and the features therein may be combined with each other.
For a better understanding of the present invention, technical terms used herein are explained first:
1. VGG network: the VGG network was developed on the basis of the AlexNet network. Its main contributions are the use of very small 3×3 convolution kernels in the network design and the increase of network depth to 16-19 layers.
2. Convolutional neural network (CNN): a CNN usually takes input in tensor form; for example, a color image corresponds to three two-dimensional matrices representing the pixel intensities of its three color channels. The structure of a typical CNN can be interpreted as a combination of a series of stages. The first few stages consist mainly of two kinds of layers: convolutional layers and pooling layers. The input and output of a convolutional layer are both sets of matrices. A convolutional layer contains multiple convolution kernels, each of which is a matrix acting as a filter; convolving the input with a kernel produces a specific feature map, and each feature map is one output unit of the convolutional layer. The feature maps are then passed through a nonlinear activation function (such as ReLU) to the next layer. Different feature maps use different convolution kernels, but the connections between different locations in the same feature map and the input share weights. A softmax layer typically computes the final classification output.
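The convolution-plus-activation stage described above can be sketched in a few lines. This is an illustrative toy (plain Python, single channel, stride 1, no padding), not the model used in the invention:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Slide the kernel (the shared weights) over the input.
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    """Nonlinear activation applied element-wise to a feature map."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A 3x3 vertical-edge kernel applied to a 4x4 single-channel "image".
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
feature_map = relu(conv2d(image, kernel))
```

Because every output position reuses the same kernel, the connections share weights exactly as described above.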
3. Image recognition based on deep learning: deep learning can extract features automatically and is a form of representation learning. Deep learning uses computation models composed of multiple processing layers to automatically obtain representations of data at multiple levels of abstraction. A deep learning method comprises many levels; each layer performs a linear transformation usually followed by a nonlinear transformation, expressing lower-level features as more abstract ones. More layers mean that more complex features may be learned. For image classification, the neural network automatically suppresses irrelevant features, such as background and position, and automatically amplifies useful features, such as shape. Subsequent network layers combine the learned features to perform object recognition, which is often completed through fully connected layers.
To solve the problem that the prior art cannot objectively evaluate a user's athletic actions, an embodiment of the present invention proposes a computing device 10 that implements the user action evaluation method provided by the present invention. The computing device can take the form of a general-purpose computing device, such as a terminal or a server. The computing device 10 according to the present invention is described with reference to Fig. 1. The computing device 10 shown in Fig. 1 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 1, computing device 10 takes the form of a general-purpose computing device. Its components may include, but are not limited to: at least one processing unit 11, at least one storage unit 12, and a bus 13 connecting the different system components (including the storage unit 12 and the processing unit 11).
Bus 13 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
Storage unit 12 may include readable media in the form of volatile memory, such as random access memory (RAM) 121 and/or cache memory 122, and may further include read-only memory (ROM) 123.
Storage unit 12 may also include a program/utility 125 with a set of (at least one) program modules 124, such program modules 124 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
Computing device 10 may also communicate with one or more external devices 14 (such as a keyboard or a pointing device), with one or more devices that enable a user to interact with computing device 10, and/or with any devices (such as a router or a modem) that enable computing device 10 to communicate with one or more other computing devices. Such communication can take place through input/output (I/O) interfaces 15. Moreover, computing device 10 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 16. As shown, network adapter 16 communicates with the other modules of computing device 10 through bus 13. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with computing device 10, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The application scenario of the user action evaluation method provided by embodiments of the present invention is as follows: when a user exercises along with a standard exercise video, the user wants to know how well the actions he or she performs match the actions in the video. For this purpose, the method provided by the present invention is proposed: computing device 10 captures at least one frame of image while the user exercises along with the standard exercise video; for each frame, it performs pose recognition on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in the image; for each frame, it determines the gap between the determined pose information and the pose information presented in the corresponding standard image of the standard exercise video; and it determines an evaluation result for the user action based on the per-frame gaps. By executing this method, an evaluation result for the athletic actions the user performs can be determined accurately and objectively, without relying on subjective judgment. In addition, based on the evaluation result determined by the present invention, the user can learn how far his or her own actions deviate from the actions in the standard exercise video, which helps the user manage his or her exercise efficiently.
Based on the above application scenario and the computing device of Fig. 1 for implementing the user action evaluation method provided by the present invention, the method is described next. As shown in Fig. 2, the flowchart of the user action evaluation method provided by an embodiment of the present invention comprises the following steps:
S21. Capture at least one frame of image while the user exercises along with a standard exercise video.
In specific implementation, computing device 10 can use a camera to capture at least one frame of image while the user exercises along with the standard exercise video. When capturing frames, one frame can be captured from the point at which the user begins a certain action, a few frames can be captured in between, and a final frame can be captured at the point where the user finishes the action. That is, several time points between the start and end of a standard basic action are selected as standard analysis points, and frames of the action the user performs are captured at the selected time points for comparison with the corresponding poses in the standard exercise video.
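The selection of standard analysis points can be sketched as follows; evenly spaced interior points are an assumption for illustration, since the description only requires the start point, the end point, and a few points in between:

```python
def analysis_points(start, end, n_interior=3):
    """Return capture timestamps for one basic action: its start point,
    n_interior evenly spaced interior points, and its end point."""
    if end <= start:
        raise ValueError("action must have positive duration")
    step = (end - start) / (n_interior + 1)
    return [start + i * step for i in range(n_interior + 2)]

# An action lasting from minute 11.0 to minute 12.0 of the standard video.
points = analysis_points(11.0, 12.0, n_interior=3)
```

The same timestamps are then used both to decompose the standard video and to trigger capture of the user's frames, so the two pose sequences line up.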
It should be noted that computing device 10 can be provided separately from the camera, the two being connected by a communication cable; after receiving a user action evaluation instruction, computing device 10 drives the camera to capture several frames of images while the user exercises along with the standard exercise video.
It should also be noted that the standard exercise video in the present invention refers to any exercise video the user follows while exercising; that video is then regarded as the standard exercise video used to guide the user's actions.
S22. For each frame, perform pose recognition on the image using a pre-trained pose recognition model to determine pose information characterizing the user action presented in the image.
In this step, the pose recognition model in the present invention is obtained by training a VGG network together with a convolutional layer group, where the convolutional layer group includes several consecutive convolutional layers.
Specifically, the open framework TensorFlow can be chosen for network construction and model training. TensorFlow is the open-source machine learning framework from Google. When used for deep learning research, it makes it convenient to build one's own models and automatically computes backward gradients, reducing coding difficulty at every step of the computation. In the present invention, a PAF (part affinity fields) based algorithm can be chosen to determine the pose information characterizing the user action presented in an image; this algorithm has high precision in computing human pose information. The pose recognition model in the present invention is a combination of a VGG network and a convolutional layer group, the convolutional layer group including several consecutive convolutional layers. The network is divided into two parallel branches that separately compute the human body's joint confidence maps and part affinity fields.
Preferably, a residual structure can also be applied to the pose recognition model. The model borrows the structure of residual networks; the use of residual structures and intermediate supervision can solve the problem of gradient degradation as the network deepens, making it possible to successfully train deeper neural networks, accelerate training, and obtain better expressive ability.
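The residual idea above can be summarized with a toy sketch: each stage adds its input back to its output, so information (and, during training, gradients) has a direct path through the sum even in a deep stack. This is a plain-Python illustration of the structure, not the pose model itself:

```python
def residual_stage(x, transform):
    """Apply a learned transform and add the input back (identity shortcut)."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

def deep_stack(x, transforms):
    """Chain several residual stages, as in a deep pose network."""
    for t in transforms:
        x = residual_stage(x, t)
    return x

# Even if every transform outputs zeros (a "dead" layer), the input still
# flows through unchanged -- the property that eases training of deep nets.
dead = lambda v: [0.0] * len(v)
out = deep_stack([1.0, 2.0, 3.0], [dead, dead, dead])
```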
Preferably, the training samples for the pose recognition model in the present invention are a large number of annotated athletic actions completed by users following standard exercise videos. The VGG network and the convolutional layer group are trained with these samples to obtain the pose recognition model. Through this training process, a pose recognition model with a deeper structure can be obtained, which plays an important role in extracting features of human body keypoints.
With the pose recognition model obtained from this training, the trained VGG network first performs preliminary feature extraction, and the trained convolutional layer group then progressively extracts the human pose information.
S23. For each frame, determine the gap between the pose information determined from the frame and the pose information presented in the corresponding standard image of the standard exercise video.
Specifically, when the user follows the standard exercise video, all of the user's characteristic body parts move along with it. The pose presented by the user's characteristic body parts in the image captured at a certain moment can therefore be determined by the pose recognition model, and that pose can then be compared against the pose presented at the same moment in the standard exercise video, thereby determining whether the user's action is in place.
Preferably, the pose information in the present invention includes feature information of characteristic human body parts, and for each captured frame, step S23 can be executed according to the process shown in Fig. 3, comprising the following steps:
S31. Determine at least one basic action constituted by the feature information of the characteristic human body parts in the frame.
Specifically, when the user follows the standard exercise video, all of the user's characteristic body parts move along with it; the feature information of the user's characteristic body parts can therefore be determined by the pose recognition model.
In practical applications, a user action is rarely a single simple movement; it is often a compound action. To facilitate subsequent analysis, a compound action is decomposed into a combination of simple athletic actions; therefore, in the present invention, at least one basic action is determined based on the feature information of the characteristic body parts. For example, a high leg lift is a compound action in which all four limbs move; it can be decomposed into several basic actions, such as stretching both hands forward and jumping upward with both legs bent.
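One simple way to represent such a decomposition is a mapping from compound actions to lists of basic actions. The action names and the helper below are hypothetical illustrations, not taken from the invention:

```python
# Hypothetical decomposition table: compound action -> its basic actions.
DECOMPOSITION = {
    "high_leg_lift": ["stretch_both_hands_forward", "bend_legs_and_jump_up"],
}

def decompose(compound_action):
    """Return the basic actions making up a compound action.
    A simple action decomposes into itself."""
    return DECOMPOSITION.get(compound_action, [compound_action])

basics = decompose("high_leg_lift")
```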
S32. For each basic action, determine the gap between that basic action and the corresponding standard basic action in the standard action library.
Here, the standard action library contains several standard basic actions, the standard basic actions being obtained by decomposing the user actions in the standard exercise video.
In specific implementation, a standard action specification is formulated by analyzing the existing standard exercise video. Compound actions are first decomposed into combinations of simple actions; then, according to the characteristics of each simple action, the simple actions are decomposed into standard basic actions, the standard basic action being the smallest unit of movement. Accordingly, the present invention decomposes each action in the standard exercise video in advance, breaks each compound action down into several standard basic actions, and stores the standard basic actions obtained from the decomposition in the standard action library.
Specifically, when decomposing the actions in the standard exercise video, the actions shown in frames captured at several selected time points are decomposed; the chosen time points are generally the start point, an intermediate point, and the end point of a certain action. For example, if a certain action in the standard exercise video begins at minute 11 and ends at minute 12, frames can be captured at minute 11, minute 11.5, and minute 12; other or additional time points can of course also be chosen, depending on the actual situation. Taking the frames captured at minutes 11, 11.5, and 12 of the standard exercise video as an example: when the user begins to practice the action in those frames, the camera captures a frame of the user practicing the action shown at minute 11, a frame of the user practicing the action shown at minute 11.5, and a frame of the user practicing the action shown at minute 12 of the standard exercise video; steps S22-S24 are then executed. That is, the time points at which frames of the user are captured depend on the time points at which frames of the standard exercise video were captured.
After the at least one basic action is determined in step S31, each basic action can be characterized by pixel values. The pixel values of a basic action can then be compared with the pixel values of the corresponding standard basic action in the standard action library, and the gap between the basic action and the standard basic action is determined from this comparison. Proceeding in this manner, the gap between each basic action and its corresponding standard basic action can be determined.
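As a sketch of this comparison, a basic action can be represented as a flat vector of values (stand-ins for the pixel values mentioned above, e.g. keypoint coordinates), and the gap taken as the Euclidean distance to the corresponding standard basic action in the library. Both the representation and the distance choice are assumptions for illustration:

```python
import math

def gap(user_action, standard_action):
    """Euclidean distance between two actions given as flat vectors."""
    return math.sqrt(sum((u - s) ** 2
                         for u, s in zip(user_action, standard_action)))

# Hypothetical standard action library: action name -> reference vector.
STANDARD_LIBRARY = {
    "stretch_both_hands_forward": [0.0, 1.0, 1.0, 1.0],
    "bend_legs_and_jump_up":      [0.0, 0.0, 0.5, 1.5],
}

# One basic action recognized from the user's frame.
user = {"stretch_both_hands_forward": [0.0, 1.0, 1.0, 0.0]}
gaps = {name: gap(vec, STANDARD_LIBRARY[name]) for name, vec in user.items()}
```

A gap of 0 would mean the user's basic action matches the standard exactly; larger values mean larger deviation.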
S33. Based on the gaps determined for the basic actions, determine the gap between the pose information of the frame and the pose information presented in the corresponding standard image of the standard exercise video.
In this step, step S33 can, for example, be executed by means of a weighted average.
S24. Determine the evaluation result of the user action based on the gaps determined for the frames.
After the gap for each frame is determined with the process shown in Fig. 3, a weighted average can be computed to obtain the evaluation result for the athletic actions the user performed. On this basis, the evaluation result for the user action can be determined objectively and accurately.
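The two weighted averages described above (first over the basic-action gaps within a frame for S33, then over the per-frame gaps for S24) might look as follows; the weight values are illustrative assumptions:

```python
def weighted_average(values, weights):
    """Weighted mean of gap values; weights need not be normalized."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# S33: combine the basic-action gaps of one frame into a frame gap,
# weighting the first basic action twice as heavily (assumed weights).
frame_gap = weighted_average([1.0, 0.5], weights=[2.0, 1.0])

# S24: combine the per-frame gaps into the overall evaluation result
# (a smaller value means the user tracked the standard video more closely).
evaluation = weighted_average([frame_gap, 0.4, 0.2], weights=[1.0, 1.0, 1.0])
```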
Preferably, the method provided by the present invention further includes the process shown in Fig. 4, which may comprise the following steps:
S41. Quantize the evaluation result to obtain a quantified result.
S42. Output guidance for the user action based on the quantified result.
Specifically, after the evaluation result of the user action is determined through the process shown in Fig. 2, in order to improve the user experience and help the user understand the gap between the completed action and the standard action, the present invention quantizes the evaluation result into a quantified result that the user can visualize and read intuitively, and outputs it for display. For example, the quantified result may be a score: the better the evaluation result, the higher the score. In this way, the user can intuitively and quickly determine from the quantified result how well the practiced action was performed.
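One possible quantization is to map the overall gap onto a 0-100 score, with smaller gaps scoring higher, and attach a piece of guidance per score band. The mapping, the `worst_gap` cutoff, and the guidance strings are all assumptions for illustration:

```python
def quantize(overall_gap, worst_gap=2.0):
    """Map an overall gap (0 = perfect match) to a 0-100 score."""
    clipped = min(max(overall_gap, 0.0), worst_gap)
    return round(100.0 * (1.0 - clipped / worst_gap))

def guidance(score):
    """Hypothetical guidance bands keyed on the quantized score."""
    if score >= 90:
        return "Excellent: action closely matches the standard video."
    if score >= 60:
        return "Good: check the limb positions flagged with large gaps."
    return "Keep practicing: review the standard basic actions slowly."

score = quantize(0.4)   # -> 80
advice = guidance(score)
```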
Preferably, in order to improve user satisfaction, the present invention can also provide guidance for the actions the user practices; in this way, while checking the practiced actions, the user can also adjust them based on the displayed guidance, which helps the user practice better and improves the user experience.
With the user action evaluation method provided by the present invention, on the one hand, the evaluation result of the user's actions while exercising along with the standard exercise video can be determined accurately and objectively, without the involvement of subjective judgment; on the other hand, it also helps the user manage his or her exercise efficiently and improves the user experience.
Based on the same inventive concept, an embodiment of the present invention further provides an evaluating apparatus of a user action. Since the principle by which the apparatus solves the problem is similar to that of the evaluation method of a user action described above, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
As shown in Fig. 5, a structural schematic diagram of the evaluating apparatus of a user action provided by an embodiment of the present invention comprises:
an acquisition unit 51, configured to acquire at least one frame image while a user follows a standard movement video;
a gesture recognition unit 52, configured to, for each frame image, perform gesture recognition on the image using a pre-trained gesture recognition model, and determine posture information characterizing the user action presented in the image;
a first determination unit 53, configured to determine, for each frame image, the gap between the posture information determined based on the frame image and the posture information presented in the corresponding standard image of the standard movement video;
a second determination unit 54, configured to determine an evaluation result of the user action based on the gap determined for each frame image.
Preferably, the posture information includes characteristic information of characteristic human body parts; and
the first determination unit 53 is specifically configured to perform the following process for each frame image: determining at least one basis action constituted by the characteristic information of the characteristic human body parts of the frame image; for each basis action, determining the gap between the basis action and the standard basis action corresponding to the basis action in a standard movement action library, wherein the standard movement action library contains several standard basis actions, and the standard basis actions are obtained by decomposing the user action in the standard movement video; and based on the gap determined for each basis action, determining the gap between the posture information of the frame image and the posture information presented in the standard image of the standard movement video corresponding to the frame image.
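A minimal sketch of this basis-action matching, assuming joint-angle vectors as the characteristic information and Euclidean distance as the gap measure (the text specifies neither), with a hypothetical two-entry standard movement action library:

```python
import math

# Hypothetical standard movement action library: each standard basis action is
# a vector of joint-angle features decomposed from the standard movement video.
STANDARD_LIBRARY = {
    "raise_left_arm": [170.0, 90.0, 10.0],
    "squat":          [80.0, 80.0, 95.0],
}

def basis_gap(basis_action, library=STANDARD_LIBRARY):
    """Gap between one basis action and its closest standard basis action."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(basis_action, std) for std in library.values())

def frame_gap(basis_actions):
    """Aggregate per-basis gaps into the gap for the whole frame image."""
    return sum(basis_gap(b) for b in basis_actions) / len(basis_actions)
```

Here the corresponding standard basis action is taken to be the nearest entry in the library; a deployed system might instead align basis actions by label or by timestamp.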
Preferably, the gesture recognition model is obtained by training a VGG network and a convolutional layer group, wherein the convolutional layer group includes several continuous convolutional layers.
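The shape bookkeeping for such a model (a VGG-style backbone followed by a group of continuous 3x3 convolutional layers that output keypoint heatmaps) can be traced in a few lines; the input resolution, the three pooling stages, and the five group convolutions are assumptions for illustration, not values given in the text:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial size after one convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size):
    """Spatial size after a 2x2 max pool."""
    return size // 2

def pose_model_shapes(input_size=368, vgg_pools=3, group_convs=5):
    """Trace the feature-map size through a VGG-style backbone (conv blocks,
    each ending in a pool) and a group of continuous 3x3 convolutional layers
    that keep the resolution and regress one heatmap per body keypoint."""
    size = input_size
    for _ in range(vgg_pools):
        size = conv_out(conv_out(size))  # two 3x3 convs preserve the size
        size = pool_out(size)            # pooling halves it
    for _ in range(group_convs):
        size = conv_out(size)            # 3x3, stride 1, pad 1: unchanged
    return size                          # final heatmap resolution
```

With these assumed settings a 368x368 input yields 46x46 heatmaps, from which the characteristic human body parts can be localized by taking per-heatmap maxima.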
Preferably, the evaluating apparatus of a user action provided by the present invention further includes:
a quantization processing unit 55, configured to perform quantification on the evaluation result to obtain a quantized motion result;
an output unit 56, configured to output, based on the quantized motion result obtained by the quantization processing unit, guidance for the user action.
For convenience of description, the above parts are divided by function and described as modules (or units). Of course, when implementing the present invention, the functions of the modules (or units) may be implemented in one or more pieces of software or hardware.
Based on the same inventive concept, an embodiment of the present invention further provides an electronic device for implementing the evaluation method of a user action. Fig. 6 shows a schematic diagram of the hardware structure of the electronic device. As shown in Fig. 6, the electronic device includes:
one or more processors 610 and a memory 620; one processor 610 is taken as an example in Fig. 6.
The electronic device for executing the evaluation method of a user action may further include an input device 660 and an output device 640.
The processor 610, the memory 620, the input device 660, and the output device 640 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 6.
As a non-volatile computer-readable storage medium, the memory 620 may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules/units corresponding to the evaluation method of a user action in the embodiments of the present application (for example, the acquisition unit 51, the gesture recognition unit 52, the first determination unit 53, and the second determination unit 54 shown in Fig. 5). By running the non-volatile software programs, instructions, and modules/units stored in the memory 620, the processor 610 executes the various functional applications and data processing of the server or intelligent terminal, thereby implementing the evaluation method of a user action of the above method embodiments.
The memory 620 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the evaluating apparatus of a user action, and the like. In addition, the memory 620 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some embodiments, the memory 620 optionally includes memories remotely located relative to the processor 610, and these remote memories may be connected to the evaluating apparatus of a user action through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input device 660 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the evaluating apparatus of a user action. The output device 640 may include a display device such as a display screen.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the evaluation method of a user action in any of the above method embodiments.
The above product may execute the method provided by the embodiments of the present application, and has the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
The electronic equipment of the embodiment of the present application exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions, with voice and data communication as the main goal. This type of terminal includes smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access characteristics. This type of terminal includes PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include audio and video players (such as the iPod), handheld game devices, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server is composed of a processor, a hard disk, a memory, a system bus, and the like; its architecture is similar to that of a general-purpose computer, but since highly reliable services must be provided, higher requirements are imposed on processing capability, stability, reliability, security, scalability, manageability, and the like.
(5) other electronic devices with data interaction function.
Based on the same inventive concept, the present invention further provides a non-volatile computer storage medium. The computer storage medium stores computer-executable instructions, and the computer-executable instructions can execute the evaluation method of a user action in any of the above method embodiments.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. In this way, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these modifications and variations.
Claims (10)
1. An evaluation method of a user action, characterized by comprising:
acquiring at least one frame image while a user follows a standard movement video;
for each frame image, performing gesture recognition on the image using a pre-trained gesture recognition model, and determining posture information characterizing the user action presented in the image;
determining, for each frame image, the gap between the posture information determined based on the frame image and the posture information presented in the corresponding standard image of the standard movement video;
determining an evaluation result of the user action based on the gap determined for each frame image.
2. The method according to claim 1, characterized in that the posture information comprises characteristic information of characteristic human body parts; and the determining, for each frame image, the gap between the posture information determined based on the frame image and the posture information presented in the corresponding standard image of the standard movement video specifically comprises:
for each frame image, performing the following process:
determining at least one basis action constituted by the characteristic information of the characteristic human body parts of the frame image;
for each basis action, determining the gap between the basis action and the standard basis action corresponding to the basis action in a standard movement action library, wherein the standard movement action library comprises several standard basis actions, and the standard basis actions are obtained by decomposing the user action in the standard movement video;
based on the gap determined for each basis action, determining the gap between the posture information of the frame image and the posture information presented in the standard image of the standard movement video corresponding to the frame image.
3. The method according to claim 1, characterized in that the gesture recognition model is obtained by training a VGG network and a convolutional layer group, wherein the convolutional layer group comprises several continuous convolutional layers.
4. The method according to any one of claims 1 to 3, characterized by further comprising:
performing quantification on the evaluation result to obtain a quantized motion result; and
outputting, based on the quantized motion result, guidance for the user action.
5. An evaluating apparatus of a user action, characterized by comprising:
an acquisition unit, configured to acquire at least one frame image while a user follows a standard movement video;
a gesture recognition unit, configured to, for each frame image, perform gesture recognition on the image using a pre-trained gesture recognition model, and determine posture information characterizing the user action presented in the image;
a first determination unit, configured to determine, for each frame image, the gap between the posture information determined based on the frame image and the posture information presented in the corresponding standard image of the standard movement video;
a second determination unit, configured to determine an evaluation result of the user action based on the gap determined for each frame image.
6. The apparatus according to claim 5, characterized in that the posture information comprises characteristic information of characteristic human body parts; and
the first determination unit is specifically configured to perform the following process for each frame image: determining at least one basis action constituted by the characteristic information of the characteristic human body parts of the frame image; for each basis action, determining the gap between the basis action and the standard basis action corresponding to the basis action in a standard movement action library, wherein the standard movement action library comprises several standard basis actions, and the standard basis actions are obtained by decomposing the user action in the standard movement video; and based on the gap determined for each basis action, determining the gap between the posture information of the frame image and the posture information presented in the standard image of the standard movement video corresponding to the frame image.
7. The apparatus according to claim 5, characterized in that the gesture recognition model is obtained by training a VGG network and a convolutional layer group, wherein the convolutional layer group comprises several continuous convolutional layers.
8. The apparatus according to any one of claims 5 to 7, characterized by further comprising:
a quantization processing unit, configured to perform quantification on the evaluation result to obtain a quantized motion result; and
an output unit, configured to output, based on the quantized motion result obtained by the quantization processing unit, guidance for the user action.
9. A non-volatile computer storage medium storing computer-executable instructions, characterized in that the computer-executable instructions are used to execute the method according to any one of claims 1 to 4.
10. An electronic device, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor is able to execute the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811296176.0A CN109635644A (en) | 2018-11-01 | 2018-11-01 | A kind of evaluation method of user action, device and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109635644A true CN109635644A (en) | 2019-04-16 |
Family
ID=66067055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811296176.0A Pending CN109635644A (en) | 2018-11-01 | 2018-11-01 | A kind of evaluation method of user action, device and readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109635644A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103390174A (en) * | 2012-05-07 | 2013-11-13 | 深圳泰山在线科技有限公司 | Physical education assisting system and method based on human body posture recognition |
CN108256433A (en) * | 2017-12-22 | 2018-07-06 | 银河水滴科技(北京)有限公司 | A kind of athletic posture appraisal procedure and system |
- 2018-11-01: CN application CN201811296176.0A, publication CN109635644A/en, status: active, Pending
Non-Patent Citations (1)
Title |
---|
YANG JIE ET AL.: "Video Object Detection Based on Convolutional Networks", Journal of University of South China (Science and Technology) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110600132A (en) * | 2019-08-31 | 2019-12-20 | 深圳市广宁股份有限公司 | Digital twin intelligent health prediction method and device based on vibration detection |
CN110600132B (en) * | 2019-08-31 | 2023-12-15 | 深圳市广宁股份有限公司 | Digital twin intelligent health prediction method and device based on vibration detection |
CN110801233A (en) * | 2019-11-05 | 2020-02-18 | 上海电气集团股份有限公司 | Human body gait monitoring method and device |
CN110801233B (en) * | 2019-11-05 | 2022-06-07 | 上海电气集团股份有限公司 | Human body gait monitoring method and device |
CN112101315A (en) * | 2019-11-20 | 2020-12-18 | 北京健康有益科技有限公司 | Deep learning-based exercise judgment guidance method and system |
CN111881867A (en) * | 2020-08-03 | 2020-11-03 | 北京融链科技有限公司 | Video analysis method and device and electronic equipment |
CN111967407A (en) * | 2020-08-20 | 2020-11-20 | 咪咕互动娱乐有限公司 | Action evaluation method, electronic device, and computer-readable storage medium |
CN111967407B (en) * | 2020-08-20 | 2023-10-20 | 咪咕互动娱乐有限公司 | Action evaluation method, electronic device, and computer-readable storage medium |
CN112749684A (en) * | 2021-01-27 | 2021-05-04 | 萱闱(北京)生物科技有限公司 | Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium |
CN113392746A (en) * | 2021-06-04 | 2021-09-14 | 北京格灵深瞳信息技术股份有限公司 | Action standard mining method and device, electronic equipment and computer storage medium |
CN113743237A (en) * | 2021-08-11 | 2021-12-03 | 北京奇艺世纪科技有限公司 | Follow-up action accuracy determination method and device, electronic device and storage medium |
CN113743237B (en) * | 2021-08-11 | 2023-06-02 | 北京奇艺世纪科技有限公司 | Method and device for judging accuracy of follow-up action, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190416 |