CN110349180A - Human body joint point prediction method and device and motion type identification method and device - Google Patents

Human body joint point prediction method and device and motion type identification method and device

Info

Publication number
CN110349180A
CN110349180A (application CN201910646542.9A)
Authority
CN
China
Prior art keywords
human body
training image
joint points
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910646542.9A
Other languages
Chinese (zh)
Other versions
CN110349180B (en)
Inventor
石芙源
王恺
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201910646542.9A priority Critical patent/CN110349180B/en
Publication of CN110349180A publication Critical patent/CN110349180A/en
Application granted
Publication of CN110349180B publication Critical patent/CN110349180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

The embodiments of the invention relate to the technical field of machine learning, and disclose a human body joint point prediction method and device and an action type identification method and device. The human body joint point prediction method comprises: acquiring a motion image and the positions of M specified joint points of a human body in the motion image, where M is greater than 0; and inputting the motion image and the positions of the M specified joint points into a pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body, where N is greater than M, the N human body joint points include the M specified joint points, and N and M are integers. The human body joint point prediction method and device and the action type identification method and device provided by the embodiments of the invention avoid the manpower, material resources, and time needed to acquire a large number of human body joint points or parent-child joint hierarchical relations, and reduce the cost of data acquisition.

Description

Human body joint point prediction method and device, and action type recognition method and device
Technical field
The embodiments of the present invention relate to the field of machine learning, and in particular to a human body joint point prediction method and device, and an action type recognition method and device.
Background technique
Human body joint point data plays an important role in fields such as human body recognition, robot driving, and behavior prediction. For the problem of computing human body joint point positions, the prior art mainly obtains them by reconstructing missing joints with machine learning, by forward kinematics, or by inverse kinematics (IK) solvers, but each approach is limited. Reconstructing missing joints with machine learning requires that most of the key human joint positions already be known, and reconstructs the missing joint information through training. Forward kinematics strictly follows the parent-child hierarchy of the joint points: to compute the position of a joint point, the position of its parent joint point must be given. An IK solver works in the opposite direction: from the position of a child joint point, it computes the rotation the parent joint point must make for the child to reach that position.
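As a concrete illustration of the forward-kinematics constraint described above — a child joint's position can only be computed once its parent's position is known — here is a minimal 2-D sketch; the function and its parameters are illustrative and not part of the patent:

```python
import math

def forward_kinematics(parent_pos, angle, bone_length):
    """Forward kinematics in 2-D: the child joint's position follows
    entirely from the parent's position, the joint angle, and the bone
    length, which is why the parent position must always be given."""
    px, py = parent_pos
    return (px + bone_length * math.cos(angle),
            py + bone_length * math.sin(angle))

# Elbow at the origin, forearm of length 30 at 90 degrees: wrist straight up.
wrist = forward_kinematics((0.0, 0.0), math.pi / 2, 30.0)
assert abs(wrist[0]) < 1e-9 and abs(wrist[1] - 30.0) < 1e-9
```

An IK solver would invert this map: given the wrist position, recover the angle.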
However, the inventors found that the prior art has at least the following problems: existing ways of obtaining human body joint points generally require acquiring a large number of joint point positions, or the parent-child hierarchical relations of the joint points, which demands considerable manpower, material resources, and time; the cost of data acquisition is high.
Summary of the invention
The embodiments of the present invention aim to provide a human body joint point prediction method and device, and an action type recognition method and device, which avoid the manpower, material resources, and time spent acquiring a large number of human body joint points or parent-child joint hierarchies, and thereby reduce the cost of data acquisition.
To solve the above technical problem, embodiments of the present invention provide a human body joint point prediction method, comprising: acquiring a motion image and the positions of M specified joint points of the human body in the motion image, where M is greater than 0; and inputting the motion image and the positions of the M specified joint points into a pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body, where N is greater than M, the N human body joint points include the M specified joint points, and N and M are integers.
Embodiments of the present invention also provide a human body joint point prediction device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the above human body joint point prediction method.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above human body joint point prediction method.
Embodiments of the present invention also provide an action type recognition method, comprising: obtaining the predicted positions of N human body joint points of the human body in a motion image using the above human body joint point prediction method; and inputting the motion image and the predicted positions of the N human body joint points into a pre-trained action classification model to obtain the action type of the motion image.
Embodiments of the present invention also provide an action type recognition device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the above action type recognition method.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above action type recognition method.
Compared with the prior art, the embodiments of the present invention provide a human body joint point prediction method comprising: acquiring a motion image and the positions of M specified joint points of the human body in the motion image, where M is greater than 0; and inputting the motion image and the positions of the M specified joint points into a pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body, where N is greater than M, the N joint points include the M specified joint points, and N and M are integers. In these embodiments, when a motion image and the positions of some of its human body joint points are known, the pre-trained human body joint point recognition model can predict the positions of most of the human body joint points consistent with the human pose in the motion image. This avoids the manpower, material resources, and time spent acquiring a large number of joint points or parent-child joint hierarchies, reduces the cost of data acquisition, and brings convenience to technologies such as human body recognition, robot driving, and behavior prediction.
In addition, the pre-trained human body joint point recognition model is trained as follows: obtain multiple frames of training images containing human actions, where each frame includes the positions of the real human body joint points and the real joint points specified from among them, the number of specified real joint points being smaller than the total number of real joint points; input the multiple frames of training images into the human body joint point recognition model to obtain, for each frame, the predicted positions of the real joint points, including the predicted positions of the specified joint points; compute a first loss function from the predicted positions, the positions of the real joint points, and the positions of the specified joint points; and end training when the first loss function satisfies a first preset condition.
In addition, the first loss function L1 is computed by a formula of the following form:

L1 = Σ_n Σ_i ‖A_ni − Q_ni‖² + Σ_n Σ_{j=1}^{k} ‖P_j − Q_j‖²

where the outer sums run over the n frames of training images, A_ni is the position of the i-th real human body joint point in each frame of the n training images, Q_ni is the i-th predicted position in each frame, k is the number of specified real joint points, P_j is the position of the j-th specified real joint point in each frame, and Q_j is the predicted position of the j-th specified real joint point in each frame.
In addition, the pre-trained human body joint point recognition model may instead be trained as follows: obtain multiple groups of training images containing human actions, where each group consists of the training images of consecutive frames, and each frame includes the positions of the real human body joint points and the real joint points specified from among them, the number of specified real joint points being smaller than the total number of real joint points; input the consecutive-frame training data set into the human body joint point recognition model to obtain, for each frame, the predicted positions of the real joint points, including the predicted positions of the specified joint points; compute a second loss function from the predicted positions, the positions of the real joint points, and the positions of the specified joint points; and end training when the second loss function satisfies a second preset condition.
In addition, the second loss function L2 is computed by a formula of the following form:

L2 = Σ_n Σ_{t=1}^{s} ( Σ_i ‖A^t_ni − Q^t_ni‖² + Σ_{j=1}^{k} ‖P^t_j − Q^t_j‖² )

where the outer sum runs over the n groups of training images, s is the number of frames of training images in each group, A^t_ni is the position of the i-th real human body joint point in frame t of a group, Q^t_ni is the corresponding i-th predicted position, k is the number of specified real joint points, P^t_j is the position of the j-th specified real joint point in frame t of the s frames, and Q^t_j is the predicted position of the j-th specified real joint point in frame t of the s frames.
Detailed description of the invention
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not limit the embodiments; elements with the same reference numerals in the drawings represent similar elements; and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flow diagram of the human body joint point prediction method according to the first embodiment of the present invention;
Fig. 2 is a structural diagram of the human body joint point prediction device according to the second embodiment of the present invention;
Fig. 3 is a flow diagram of the action type recognition method according to the fourth embodiment of the present invention;
Fig. 4 is a structural diagram of the action type recognition device according to the fifth embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, each embodiment of the present invention is explained in detail below with reference to the drawings. However, those skilled in the art will appreciate that many technical details are set forth in the embodiments to help the reader better understand the application; the technical solutions claimed in the application can nevertheless be implemented even without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a human body joint point prediction method. The core of this embodiment is to provide a method comprising: acquiring a motion image and the positions of M specified joint points of the human body in the motion image, where M is greater than 0; and inputting the motion image and the positions of the M specified joint points into a pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body, where N is greater than M, the N joint points include the M specified joint points, and N and M are integers. In this embodiment, when a motion image and the positions of some of its human body joint points are known, the pre-trained human body joint point recognition model can predict the positions of most of the human body joint points consistent with the human pose in the motion image. This avoids the manpower, material resources, and time spent acquiring a large number of joint points or parent-child joint hierarchies, reduces the cost of data acquisition, and brings convenience to technologies such as human body recognition, robot driving, and behavior prediction.
The implementation details of the human body joint point prediction method of this embodiment are described below. The following content is provided only to make those details easier to understand and is not necessary for implementing the solution.
The flow of the human body joint point prediction method in this embodiment is shown in Fig. 1:
Step 101: obtain a motion image and the positions of M specified joint points of the human body in the motion image.
Specifically, a motion image containing a human action pose is obtained, and the positions of M specified joint points of the human body are marked in the motion image, where the number M of specified joint points is an integer greater than 0. The M specified joint points may be joint positions of the same joint part, or may include joint positions of different joint parts. For example, the M specified joint points may be the joint positions of the elbow or the wrist, or joint positions of the upper or lower half of the body.
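The marked joint positions of step 101 can be represented very simply; the sketch below packs M specified joints as (joint index, x, y) rows for model input. The joint names, the 15-joint skeleton, and the packing format are all assumptions made for illustration, not specified by the patent:

```python
import numpy as np

# Hypothetical 15-joint skeleton; names and ordering are illustrative only.
JOINT_NAMES = ["head", "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
               "l_wrist", "r_wrist", "pelvis", "l_hip", "r_hip",
               "l_knee", "r_knee", "l_ankle", "r_ankle"]

def make_specified_joints(positions_2d, names):
    """Pack the M specified joints as (joint index, x, y) rows."""
    rows = []
    for name, (x, y) in zip(names, positions_2d):
        rows.append((JOINT_NAMES.index(name), float(x), float(y)))
    return np.array(rows, dtype=np.float64)

# M = 2 specified joints, e.g. an elbow and a wrist as in the text's example.
spec = make_specified_joints([(120.0, 88.5), (131.0, 140.2)],
                             ["l_elbow", "l_wrist"])
assert spec.shape == (2, 3)
```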
Step 102: input the motion image and the positions of the M specified joint points into the pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body.
Specifically, a human body joint point recognition model that derives the positions of most of the human body joint points in a motion image from the image and some of its joint point positions can be trained in advance. Then, to obtain the positions of most of the joint points in a given motion image, it suffices to collect the positions of M (M > 0) specified joint points in that image; the trained model yields the predicted positions of N joint points, where N is an integer greater than M. Thus, in scenarios such as human body recognition, robot driving, and behavior prediction that require a large number of joint point positions, the manpower, material resources, and time that the prior art spends acquiring many joint point positions or parent-child joint hierarchies are avoided, and the cost of data acquisition is reduced.
It is worth noting that the N human body joint points predicted by the pre-trained model include the M specified joint points; that is, the predicted positions of the N joint points contain both predicted positions for the M specified joint points and predicted positions for the N−M other joint points. Because model predictions often deviate from the true joint positions, the given positions of the M specified joint points combined with the predicted positions of the N−M other joint points may fail to form the human action pose in the motion image. Therefore, after the predicted positions of the N joint points have been obtained from the positions of the M specified joint points, the M given positions can be discarded, and the predicted positions of the M specified joint points together with those of the N−M other joint points can be used for subsequent robot driving, human body recognition, behavior prediction, and similar computations, ensuring that the human action pose is reproduced reliably.
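The prediction step, including the choice to keep the model's own predictions for the M specified joints rather than splicing the given positions back in, can be sketched as follows; the model interface and the joint count are assumptions for illustration:

```python
import numpy as np

N_JOINTS = 15  # N, the total number of joints the model predicts (assumed)

def predict_full_pose(model, image, specified_idx, specified_pos):
    """Predict all N joint positions from an image plus M known joints.

    `model` is any callable returning an (N, 2) array; the patent's trained
    joint-recognition network is assumed to have this interface."""
    pred = model(image, specified_idx, specified_pos)  # (N, 2)
    # Keep the model's own predictions for the M specified joints instead of
    # splicing the given positions back in: mixing the given positions with
    # the remaining N-M predictions could break the pose's consistency.
    return pred

# Toy stand-in model: ignores the image and returns a fixed pose.
def toy_model(image, idx, pos):
    return np.tile(np.asarray(pos).mean(axis=0), (N_JOINTS, 1))

pose = predict_full_pose(toy_model, None, [4, 6],
                         [(120.0, 88.5), (131.0, 140.2)])
assert pose.shape == (N_JOINTS, 2)
```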
Depending on the training images used during training, this embodiment distinguishes two training schemes:
When training uses multiple frames of training images containing human actions, the human body joint point recognition model is trained as follows:
Each frame of the multiple frames of training images containing human actions includes the positions of the real human body joint points and the real joint points specified from among them, the number of specified real joint points being smaller than the total number of real joint points. The multiple frames of training images are input into the human body joint point recognition model to obtain, for each frame, the predicted positions of the real joint points, including the predicted positions of the specified joint points. A first loss function is computed from the predicted positions, the positions of the real joint points, and the positions of the specified joint points; training ends when the first loss function satisfies a first preset condition.
Here the first loss function L1 is computed by formula (1), of the following form:

L1 = Σ_n Σ_i ‖A_ni − Q_ni‖² + Σ_n Σ_{j=1}^{k} ‖P_j − Q_j‖²    (1)

where the outer sums run over the n frames of training images, A_ni is the position of the i-th real human body joint point in each frame of the n training images, Q_ni is the i-th predicted position in each frame, k is the number of specified real joint points, P_j is the position of the j-th specified real joint point in each frame, and Q_j is the predicted position of the j-th specified real joint point in each frame.
Specifically, n frames of training images are collected; the n frames do not include sequences of consecutive frames. Each frame contains i real human body joint points, from which k joint points are specified, where 0 < k < i. For each frame, the training image, the positions of its i real joint points, and the positions of the k specified joint points are fed to the human body joint point recognition model for training, and the model predicts i positions for the frame, including predictions for the k specified joint points. The positions of the i real joint points, the positions of the k specified joint points, and the i predicted positions are substituted into the loss function L1 of formula (1); the loss value is computed, and the model parameters are adjusted according to it until L1 satisfies the first preset condition. The predictions of the trained model then match the human action pose in the training images, and the predicted positions of the k specified joint points are close to the given positions.
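Under the reading of formula (1) as a squared-error sum over all i joints in every frame plus an extra term over the k specified joints — an assumption, since the formula itself is not reproduced in this text — the loss can be computed as:

```python
import numpy as np

def loss_l1(true_pos, pred_pos, specified_idx):
    """Single-frame training loss, a plausible sketch of formula (1).

    true_pos, pred_pos: (n_frames, i_joints, 2) arrays of joint positions
    specified_idx: indices of the k specified joints"""
    # Squared error over all i joints in every frame.
    all_term = np.sum((true_pos - pred_pos) ** 2)
    # Extra squared-error term keeping the k specified joints close to
    # their given positions.
    spec = (true_pos[:, specified_idx] - pred_pos[:, specified_idx]) ** 2
    return all_term + np.sum(spec)

truth = np.zeros((4, 15, 2))
pred = np.ones((4, 15, 2))
# 4 frames * 15 joints * 2 coords, plus 4 * 2 * 2 for the k=2 specified joints
assert loss_l1(truth, pred, [4, 6]) == 120 + 16
```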
It is worth noting that the first preset condition can be set by the user. It may be a numerical threshold on the size of the loss function: when the loss value is equal to or smaller than the threshold, the human body joint point recognition model is considered successfully trained. It may also be a preset range on the variation of the loss function: when the change in the loss value during training stays within that range, the model is considered successfully trained.
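The two flavors of the preset condition just described — an absolute loss threshold, or the loss change staying within a preset range — can be checked as below; the function and parameter names are illustrative:

```python
def should_stop(loss_history, threshold=None, delta_range=None):
    """Return True when either stopping rule from the text is met:
    the latest loss is at or below `threshold`, or the change between
    the last two loss values is within `delta_range`."""
    if not loss_history:
        return False
    if threshold is not None and loss_history[-1] <= threshold:
        return True
    if delta_range is not None and len(loss_history) >= 2:
        if abs(loss_history[-1] - loss_history[-2]) <= delta_range:
            return True
    return False

assert should_stop([0.9, 0.4, 0.05], threshold=0.1)          # below threshold
assert should_stop([0.500, 0.4995], delta_range=0.001)       # loss plateaued
assert not should_stop([0.9, 0.4], threshold=0.1, delta_range=0.001)
```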
When training uses multiple groups of training images containing human actions, where each group consists of the training images of consecutive frames, the human body joint point recognition model is trained as follows:
Each group of the multiple groups of training images containing human actions consists of the training images of consecutive frames, and each frame includes the positions of the real human body joint points and the real joint points specified from among them, the number of specified real joint points being smaller than the total number of real joint points. The consecutive-frame training data set is input into the human body joint point recognition model to obtain, for each frame, the predicted positions of the real joint points, including the predicted positions of the specified joint points. A second loss function is computed from the predicted positions, the positions of the real joint points, and the positions of the specified joint points; training ends when the second loss function satisfies a second preset condition.
Here the second loss function L2 is computed by formula (2), of the following form:

L2 = Σ_n Σ_{t=1}^{s} ( Σ_i ‖A^t_ni − Q^t_ni‖² + Σ_{j=1}^{k} ‖P^t_j − Q^t_j‖² )    (2)

where the outer sum runs over the n groups of training images, s is the number of frames of training images in each group, A^t_ni is the position of the i-th real human body joint point in frame t of a group, Q^t_ni is the corresponding i-th predicted position, k is the number of specified real joint points, P^t_j is the position of the j-th specified real joint point in frame t of the s frames, and Q^t_j is the predicted position of the j-th specified real joint point in frame t of the s frames.
Specifically, n groups of training images are collected; each group contains s frames of training images, and these s frames are consecutive frames of a human action pose. Each frame contains i real human body joint points, from which k joint points are specified, where 0 < k < i. For each frame, the training image, the positions of its i real joint points, and the positions of the k specified joint points are fed to the human body joint point recognition model for training, and the model predicts i positions for the frame, including predictions for the k specified joint points. The positions of the i real joint points, the positions of the k specified joint points, and the i predicted positions are substituted into the loss function L2 of formula (2); the loss value is computed, and the model parameters are adjusted according to it until L2 satisfies the second preset condition. The predictions of the trained model then match the human action pose in the training images, and the predicted positions of the k specified joint points are close to the given positions.
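Analogously, under the same assumed reading of the un-reproduced formula, formula (2) extends the per-frame loss over n groups of s consecutive frames:

```python
import numpy as np

def loss_l2(true_pos, pred_pos, specified_idx):
    """Sequence training loss, a plausible sketch of formula (2).

    true_pos, pred_pos: (n_groups, s_frames, i_joints, 2) arrays
    specified_idx: indices of the k specified joints"""
    # Squared error over all joints in every frame of every group.
    all_term = np.sum((true_pos - pred_pos) ** 2)
    # Extra per-frame term over the k specified joints.
    spec = (true_pos[:, :, specified_idx] - pred_pos[:, :, specified_idx]) ** 2
    return all_term + np.sum(spec)

truth = np.zeros((2, 3, 15, 2))
pred = np.ones((2, 3, 15, 2))
# 2 groups * 3 frames * 15 joints * 2 coords, plus 2*3*2*2 for k=2 specified joints
assert loss_l2(truth, pred, [4, 6]) == 180 + 24
```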
It is worth noting that the second preset condition can likewise be set by the user. It may be a numerical threshold on the size of the loss function: when the loss value is equal to or smaller than the threshold, the human body joint point recognition model is considered successfully trained. It may also be a preset range on the variation of the loss function: when the change in the loss value during training stays within that range, the model is considered successfully trained.
Compared with the prior art, the embodiment of the present invention provides a human body joint point prediction method in which, given a motion image and the positions of some of its human body joint points, a pre-trained human body joint point recognition model predicts the positions of most of the human body joint points consistent with the human pose in the motion image. This avoids the manpower, material resources, and time spent acquiring a large number of joint points or parent-child joint hierarchies, reduces the cost of data acquisition, and brings convenience to technologies such as human body recognition, robot driving, and behavior prediction.
The second embodiment of the present invention relates to a human body joint point prediction device which, as shown in Fig. 2, includes at least one processor 201 and a memory 202 communicatively connected to the at least one processor 201. The memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 so that the at least one processor 201 can carry out the above human body joint point prediction method.
The memory 202 and the processor 201 are connected by a bus. The bus may include any number of interconnected buses and bridges, and links one or more processors 201 with the various circuits of the memory 202. The bus may also link various other circuits, such as peripheral devices, voltage regulators, and power management circuits, all of which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor 201 is transmitted over a wireless medium through an antenna; the antenna also receives data and passes it to the processor 201.
The processor 201 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 202 may be used to store data used by the processor 201 when performing operations.
The third embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above human body joint point prediction method.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the application. The storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The fourth embodiment of the present invention relates to an action type recognition method. The action type recognition method in this embodiment uses the human body joint point prediction method of the first embodiment to obtain the predicted positions of most of the human body joint points in a motion image.
The flow of the action type recognition method in this embodiment is shown in Fig. 3 and specifically includes:
Step 301: obtain a motion image and the positions of M specified joint points of the human body in the motion image.
Step 302: input the motion image and the positions of the M specified joint points into the pre-trained human body joint point recognition model to obtain the predicted positions of N human body joint points of the human body.
Above-mentioned steps 301 and step 302 in first embodiment step 301 and step 302 it is roughly the same, to avoid It repeats, details are not described herein.
Step 303: motion images and the predicted position of N number of human joint points are inputted into the trained classification of motion in advance Model obtains the type of action of motion images.
Specifically, in the prior art, when identifying the action type of a motion image to be classified, usually only some of the human body joint points of that image are obtained, so as to avoid the heavy cost of acquiring a large number of joint points. However, because the number of joint points obtained is small, the recognition accuracy for the action type of the image to be classified is not high. Therefore, to improve the recognition accuracy of the action classification model, most of the human body joint points in the image to be classified generally need to be obtained. In this embodiment, the human body joint point identification model of the first embodiment is used: after the motion image to be classified and the positions of the specified joint points in that image are input into the human body joint point identification model, the motion image and the obtained predicted positions of the N human body joint points are then input into the pre-trained action classification model to obtain the action type of the image. This both avoids the manpower, material, and time costs of acquiring a large number of human body joint points and improves the recognition accuracy of the action classification model.
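Purely as an illustrative sketch (the patent discloses no model architecture or API; `JointPredictor`, `ActionClassifier`, and all names below are hypothetical stand-ins for the pre-trained models), steps 301 to 303 amount to a two-stage pipeline:

```python
import numpy as np

M, N = 4, 17  # e.g. M = 4 specified joint points, expanded to N = 17 predicted ones


class JointPredictor:
    """Stand-in for the pre-trained human body joint point identification model."""

    def predict(self, image, specified_joints):
        # A real model would infer all N joint positions from the image plus
        # the M known joints; here we simply pad the known joints to N points.
        padded = np.zeros((N, 2))
        padded[: len(specified_joints)] = specified_joints
        return padded


class ActionClassifier:
    """Stand-in for the pre-trained action classification model."""

    def predict(self, image, joints):
        # A real model would score action classes from the image and joints.
        return "walking"


def identify_action(image, specified_joints):
    # Step 302: image + M specified joints -> predicted positions of N joints.
    joints = JointPredictor().predict(image, specified_joints)
    # Step 303: image + N predicted joints -> action type.
    return ActionClassifier().predict(image, joints)


image = np.zeros((224, 224, 3))   # placeholder motion image
specified = np.random.rand(M, 2)  # placeholder positions of the M specified joints
print(identify_action(image, specified))
```

The point of the two stages is that the classifier always receives a dense set of N joints, even though only M were ever annotated or measured.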
It should be noted that training an action classification model from motion images and the human body joint points in those images belongs to the prior art and is not described in detail in this embodiment.
Compared with the prior art, this embodiment provides an action type identification method: after the motion image to be classified and the positions of the specified joint points in that image are input into the human body joint point identification model, the motion image and the obtained predicted positions of the N human body joint points are input into a pre-trained action classification model to obtain the action type of the image. This both avoids the manpower, material, and time costs of acquiring a large number of human body joint points and improves the recognition accuracy of the action classification model.
The division of the above methods into steps is merely for clarity of description. In implementation, steps may be merged into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, such variations fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
A fifth embodiment of the present invention relates to an action type identification apparatus, as shown in Figure 4, comprising at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401. The memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401, so that the at least one processor 401 can perform the action type identification method described above.
The memory 402 and the processor 401 are connected by a bus. The bus may include any number of interconnected buses and bridges, linking the one or more processors 401 with various circuits of the memory 402. The bus may also link various other circuits, such as peripheral devices, voltage regulators, and power management circuits, all of which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, for example multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor 401 is transmitted over a wireless medium via an antenna; further, the antenna also receives data and transfers the data to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 402 may be used to store data used by the processor 401 when performing operations.
A sixth embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the action type identification method described above.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be completed by instructing relevant hardware through a program. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.

Claims (10)

1. A human body joint point prediction method, characterized by comprising:
obtaining a motion image and the positions of M specified joint points of a human body in the motion image, wherein M is greater than 0;
inputting the motion image and the positions of the M specified joint points into a pre-trained human body joint point identification model to obtain the predicted positions of N human body joint points of the human body, wherein N is greater than M, the N human body joint points comprise the M specified joint points, and N and M are integers.
2. The human body joint point prediction method according to claim 1, characterized in that the pre-trained human body joint point identification model is trained in the following manner:
obtaining multiple frames of training images containing human actions, wherein each frame of the multiple frames of training images comprises: the positions of real human body joint points, and real human body joint points specified from among the real human body joint points, the number of the specified real human body joint points being less than the number of the real human body joint points;
inputting the multiple frames of training images into the human body joint point identification model to obtain the predicted positions of the real human body joint points in each frame of training image, the predicted positions comprising the predicted positions of the specified real human body joint points;
calculating a first loss function according to the predicted positions, the positions of the real human body joint points, and the positions of the specified real human body joint points;
ending the training when the first loss function satisfies a first preset condition.
3. The human body joint point prediction method according to claim 2, characterized in that the first loss function L1 is calculated by the following formula:
wherein n denotes the n-th frame of training image, Ani is the position of the i-th real human body joint point in the n-th frame of training image, Qni is the i-th predicted position in the n-th frame of training image, k denotes the number of the specified real human body joint points, Pj is the position of the j-th specified real human body joint point in each frame of training image, and Qj is the predicted position of the j-th specified real human body joint point in each frame of training image.
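In the published document the formula for L1 was an image and did not survive text extraction; only the variable definitions above remain. As a hedged reconstruction consistent with those definitions (an assumption, not the granted claim language), a sum of squared errors over all real joint points, plus an extra penalty on the k specified joint points, would read:

```latex
L_1 = \sum_{n}\sum_{i} \left( A_{ni} - Q_{ni} \right)^2
    + \sum_{n}\sum_{j=1}^{k} \left( P_{j} - Q_{j} \right)^2
```

The actual claimed formula may differ, for example in normalization or in the weighting between the two terms.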
4. The human body joint point prediction method according to claim 1, characterized in that the pre-trained human body joint point identification model is trained in the following manner:
obtaining multiple groups of training images containing human actions, wherein each group of training images includes training images of consecutive frames, and each frame of the training images of consecutive frames comprises: the positions of real human body joint points, and real human body joint points specified from among the real human body joint points, the number of the specified real human body joint points being less than the number of the real human body joint points;
inputting the action training data set of the consecutive frames into the human body joint point identification model to obtain the predicted positions of the real human body joint points in each frame of training image, the predicted positions comprising the predicted positions of the specified real human body joint points;
calculating a second loss function according to the predicted positions, the positions of the real human body joint points, and the positions of the specified real human body joint points;
ending the training when the second loss function satisfies a second preset condition.
5. The human body joint point prediction method according to claim 4, characterized in that the second loss function L2 is calculated by the following formula:
wherein n denotes the n-th group of training images, s denotes that each group of training images has s frames of training images, As_ni denotes the position of the i-th real human body joint point in each frame of the s frames of training images in the n-th group, Qs_ni denotes the i-th predicted position in each frame of the s frames of training images in the n-th group, k is the number of the specified real human body joint points, Pt_j is the position of the j-th specified real human body joint point in the t-th frame of the s frames of training images, and Qt_j denotes the predicted position of the j-th specified real human body joint point in the t-th frame of the s frames of training images.
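Here too the formula was an image lost in extraction. Under the same caveat (a plausible reconstruction from the variable definitions above, not the granted claim language), L2 would extend L1 across the s consecutive frames of each of the n groups:

```latex
L_2 = \sum_{n}\sum_{s}\sum_{i} \left( A^{s}_{ni} - Q^{s}_{ni} \right)^2
    + \sum_{t=1}^{s}\sum_{j=1}^{k} \left( P^{t}_{j} - Q^{t}_{j} \right)^2
```

Normalization and any temporal-consistency weighting in the granted claim may differ.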
6. A human body joint point prediction apparatus, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the human body joint point prediction method according to any one of claims 1 to 5.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the human body joint point prediction method according to any one of claims 1 to 5.
8. An action type identification method, characterized by comprising:
obtaining the predicted positions of N human body joint points of a human body in a motion image using the human body joint point prediction method according to any one of claims 1 to 5;
inputting the motion image and the predicted positions of the N human body joint points into a pre-trained action classification model to obtain the action type of the motion image.
9. An action type identification apparatus, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the action type identification method according to claim 8.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the action type identification method according to claim 8.
CN201910646542.9A 2019-07-17 2019-07-17 Human body joint point prediction method and device and motion type identification method and device Active CN110349180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910646542.9A CN110349180B (en) 2019-07-17 2019-07-17 Human body joint point prediction method and device and motion type identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910646542.9A CN110349180B (en) 2019-07-17 2019-07-17 Human body joint point prediction method and device and motion type identification method and device

Publications (2)

Publication Number Publication Date
CN110349180A true CN110349180A (en) 2019-10-18
CN110349180B CN110349180B (en) 2022-04-08

Family

ID=68175622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910646542.9A Active CN110349180B (en) 2019-07-17 2019-07-17 Human body joint point prediction method and device and motion type identification method and device

Country Status (1)

Country Link
CN (1) CN110349180B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060053108A1 (en) * 2004-09-03 2006-03-09 Ulrich Raschke System and method for predicting human posture using a rules-based sequential approach
CN101093582A (en) * 2006-06-19 2007-12-26 索尼株式会社 Motion capture apparatus and method, and motion capture program
CN102682452A (en) * 2012-04-12 2012-09-19 西安电子科技大学 Human movement tracking method based on combination of production and discriminant
CN102906670A (en) * 2010-06-01 2013-01-30 索尼公司 Information processing apparatus and method and program
CN103324938A (en) * 2012-03-21 2013-09-25 日电(中国)有限公司 Method for training attitude classifier and object classifier and method and device for detecting objects
KR101469851B1 (en) * 2014-06-03 2014-12-09 한양대학교 산학협력단 Multi-objective optimization-based approach for human posture prediction
CN108230429A (en) * 2016-12-14 2018-06-29 上海交通大学 Real-time whole body posture reconstruction method based on head and two-hand positions and posture
CN108717531A (en) * 2018-05-21 2018-10-30 西安电子科技大学 Estimation method of human posture based on Faster R-CNN
CN108898063A (en) * 2018-06-04 2018-11-27 大连大学 A kind of human body attitude identification device and method based on full convolutional neural networks
CN109190686A (en) * 2018-08-16 2019-01-11 电子科技大学 A kind of human skeleton extracting method relied on based on joint
CN109635630A (en) * 2018-10-23 2019-04-16 百度在线网络技术(北京)有限公司 Hand joint point detecting method, device and storage medium
CN109829849A (en) * 2019-01-29 2019-05-31 深圳前海达闼云端智能科技有限公司 A kind of generation method of training data, device and terminal
CN109858407A (en) * 2019-01-17 2019-06-07 西北大学 A kind of video behavior recognition methods based on much information stream feature and asynchronous fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614214A (en) * 2020-12-18 2021-04-06 北京达佳互联信息技术有限公司 Motion capture method, motion capture device, electronic device and storage medium
CN112614214B (en) * 2020-12-18 2023-10-27 北京达佳互联信息技术有限公司 Motion capture method, motion capture device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110349180B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN109582793A (en) Model training method, customer service system and data labeling system, readable storage medium storing program for executing
CN110147456A (en) A kind of image classification method, device, readable storage medium storing program for executing and terminal device
CN111160350B (en) Portrait segmentation method, model training method, device, medium and electronic equipment
CN110287836B (en) Image classification method and device, computer equipment and storage medium
CN106326857A (en) Gender identification method and gender identification device based on face image
CN111126347B (en) Human eye state identification method, device, terminal and readable storage medium
CN107480575A (en) The training method of model, across age face identification method and corresponding device
KR20200145827A (en) Facial feature extraction model learning method, facial feature extraction method, apparatus, device, and storage medium
CN110781976B (en) Extension method of training image, training method and related device
CN110827236B (en) Brain tissue layering method, device and computer equipment based on neural network
CN108805058A (en) Target object changes gesture recognition method, device and computer equipment
CN109784368A (en) A kind of determination method and apparatus of application program classification
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN109740585A (en) A kind of text positioning method and device
WO2021169366A1 (en) Data enhancement method and apparatus
CN112419326B (en) Image segmentation data processing method, device, equipment and storage medium
CN111949795A (en) Work order automatic classification method and device
CN114565087B (en) Method, device and equipment for reasoning intention of people and storage medium
CN113191478A (en) Training method, device and system of neural network model
CN110349180A (en) Human body joint point prediction method and device and motion type identification method and device
CN117079339A (en) Animal iris recognition method, prediction model training method, electronic equipment and medium
CN116612339A (en) Construction device and grading device of nuclear cataract image grading model
CN105046193B (en) A kind of human motion recognition method based on fusion rarefaction representation matrix
CN106373121A (en) Fuzzy image identification method and apparatus
CN115795355A (en) Classification model training method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210207

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.

CP03 Change of name, title or address