CN109711374B - Human body bone point identification method and device - Google Patents

Human body bone point identification method and device

Info

Publication number
CN109711374B
Authority
CN
China
Prior art keywords
value
bone
sample
bone point
label
Prior art date
Legal status
Active
Application number
CN201811644854.8A
Other languages
Chinese (zh)
Other versions
CN109711374A (en)
Inventor
曲晓超
杨思远
姜浩
闫帅
张伟
Current Assignee
Shenzhen Meitu Innovation Technology Co ltd
Original Assignee
Shenzhen Meitu Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Meitu Innovation Technology Co ltd
Priority to CN201811644854.8A
Publication of CN109711374A
Application granted
Publication of CN109711374B
Status: Active
Anticipated expiration

Abstract

The embodiments of the application provide a human skeleton point identification method and device. The method includes: obtaining a human body image to be identified; extracting a feature image through a bone point identification model and obtaining a thermodynamic diagram and a label diagram of each bone point from the feature image; for each bone point, extracting the positions whose heat values exceed a threshold from the bone point's thermodynamic diagram as candidate positions, obtaining a candidate label value of each candidate position from the label diagram, and obtaining a first label value and a second label value from the candidate label values; and, for each bone point, determining the bone position of the bone point from its candidate positions according to the candidate label values and the first label value or the second label value, so as to obtain the bone position of every bone point. In this way the positions of the skeleton points of the left and right limbs can be determined accurately, recognition errors caused by the similarity of the left- and right-limb skeleton points are avoided, and the recognition precision of the skeleton points is improved.

Description

Human body bone point identification method and device
Technical Field
The application relates to the field of image processing, in particular to a human skeleton point identification method and device.
Background
In application, generally, human skeleton point identification is performed on a human image first, and then subsequent processing is performed by using each identified skeleton point, for example, a motion in the human image can be identified by using position information of each skeleton point, and accuracy of human skeleton point identification affects a result of the subsequent processing. Therefore, how to improve the accuracy of human skeletal point identification is a technical problem to be solved urgently by those skilled in the art.
Summary of the application
In view of the above, an object of the embodiments of the present application is to provide a method and an apparatus for identifying human bone points, so as to solve or at least alleviate the above-mentioned problems.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, the present application provides a human skeletal point identification method, comprising:
obtaining a human body image to be identified;
inputting the human body image to be recognized into a pre-trained bone point recognition model, extracting a characteristic image of the human body image to be recognized through the bone point recognition model, and obtaining a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the characteristic image;
for each bone point, extracting a position with a thermal value meeting a preset candidate condition from a thermodynamic diagram of the bone point as a candidate position of the bone point, and obtaining a candidate label value of each candidate position from a label diagram of the bone point to obtain each candidate position of each bone point and a candidate label value of each candidate position;
obtaining a first label value of a left limb and a second label value of a right limb in the human body image to be identified according to the candidate label value of each candidate position of each skeleton point;
for each bone point, determining the bone position of the bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value to obtain the bone position of each bone point.
Optionally, the step of obtaining a first tag value of a left limb and a second tag value of a right limb in the human body image to be recognized according to the candidate tag value of each candidate position of each bone point includes:
calculating a first candidate label mean value of candidate label values of each candidate position of each bone point of the left limb in the human body image to be identified, and taking the first candidate label mean value as the first label value;
and calculating a second candidate label mean value of the candidate label values of each candidate position of each bone point of the right limb in the human body image to be recognized, and taking the second candidate label mean value as the second label value.
Optionally, the step of determining the bone position of the bone point from each candidate position according to the candidate tag value of each candidate position of the bone point and the first tag value or the second tag value comprises:
determining whether the skeletal point is in the left limb or the right limb;
when the bone point is located on the left limb, calculating a first difference value between the candidate label value of each candidate position of the bone point and the first label value, and taking the candidate position whose first difference value has the smallest absolute value as the bone position of the bone point;
when the bone point is located on the right limb, calculating a second difference value between the candidate label value of each candidate position of the bone point and the second label value, and taking the candidate position whose second difference value has the smallest absolute value as the bone position of the bone point.
Optionally, before the step of inputting the human body image to be recognized into the pre-trained bone point recognition model, the method further includes:
training to obtain the bone point identification model;
the step of training to obtain the bone point identification model comprises the following steps:
obtaining a sample set, inputting each sample image in the sample set into an initial bone point identification model, obtaining a sample thermodynamic diagram and a sample label diagram of each bone point in the sample image through the initial bone point identification model, and obtaining a sample candidate position, a sample candidate label value of each sample candidate position, a first sample label value of a left limb in the sample image, a second sample label value of a right limb in the sample image, a sample bone position of each bone point and a sample bone label value of each sample bone position according to the sample thermodynamic diagram and the sample label diagram of each bone point;
obtaining a heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point, and obtaining a label loss value according to the sample bone label value of each sample bone position, the first sample label value and the second sample label value;
obtaining a loss function value according to the heat loss value and the label loss value, and updating a network parameter of the bone point identification model according to the loss function value;
and repeating the above steps, judging after each round of training whether the bone point recognition model reaches a training termination condition, and when a bone point recognition model is judged to meet the training termination condition, taking that model as the trained bone point recognition model.
Optionally, before the step of obtaining the heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point, the method further comprises:
generating a standard thermodynamic diagram of the bone point according to the position information marked by the bone point in the sample image;
the position information, any position in the standard thermodynamic diagram and the heat value of the position satisfy the following relations:
H(b) = exp(-((bx - ax)^2 + (by - ay)^2) / (2 * sigma^2))
wherein ax and ay are the coordinates to which the annotated position information is mapped in the standard thermodynamic diagram, bx and by are the coordinates of the position, H(b) is the heat value of the position, and sigma controls how quickly the heat value decays away from the annotated position.
Optionally, the step of obtaining the heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point comprises:
for each bone point, calculating the mean square error of the sample thermodynamic diagram of the bone point and the standard thermodynamic diagram of the bone point according to the thermodynamic value of each position in the sample thermodynamic diagram of the bone point and the thermodynamic value of the corresponding position in the standard thermodynamic diagram of the bone point to obtain the mean square error of each bone point;
and calculating the sum of the mean square errors of each bone point, and taking the sum of the mean square errors as the heat loss value.
Optionally, the step of obtaining the label loss value according to the sample label value of each bone point, the first sample label value and the second sample label value comprises:
for each bone point of the left limb, calculating a first difference value between the sample label value of the bone point and the first sample label value, calculating a first square value of the first difference value to obtain a first square value of each bone point of the left limb, and summing each first square value to obtain a first label loss value;
for each bone point on the right limb, calculating a second difference value between the sample label value of the bone point and the second sample label value, calculating a second square value of the second difference value to obtain a second square value of each bone point on the right limb, and summing the second square values to obtain a second label loss value;
calculating a third difference between the first sample label value and the second sample label value, and calculating a third square value of the third difference;
and calculating the sum of the first label loss value and the second label loss value to obtain a third label loss value, and calculating the difference between the third label loss value and the third square value to obtain the label loss value.
Optionally, the step of obtaining a sample candidate position of each bone point, a sample candidate label value of each sample candidate position, a first sample label value of a left limb in the sample image, a second sample label value of a right limb in the sample image, a sample bone position of each bone point, and a sample bone label value of each sample bone position according to the sample thermodynamic diagram and the sample label diagram of each bone point includes:
for each bone point, extracting a position meeting a preset candidate condition from the thermodynamic diagram of the bone point according to the thermal force value of each position in the thermodynamic diagram of the bone point to serve as a sample candidate position of the bone point, and obtaining a sample candidate label value of each sample candidate position from the label diagram of the bone point to obtain the sample candidate position of each bone point and the sample candidate label value of each sample candidate position;
obtaining a first sample label value of a left limb and a second sample label value of a right limb in the sample image according to the sample candidate label value of each sample candidate position;
for each bone point, determining a sample bone position of the bone point from each sample candidate position of the bone point according to the sample candidate label value of each sample candidate position of the bone point and the first sample label value or the second sample label value, and determining a sample bone label value of the sample bone position according to the sample bone position to obtain a sample bone position of each bone point and a sample bone label value of each sample bone position.
Optionally, after the step of obtaining the bone position of each bone point, the method further comprises:
respectively adjusting the bone positions of corresponding target bone points according to the received bone point adjusting instructions, and generating an adjusted human body image to be identified according to the adjusted target bone positions, wherein the bone point adjusting instructions comprise target bone points of which the bone positions need to be adjusted and an adjusting strategy of each target bone point.
In a second aspect, embodiments of the present application further provide a human bone point identification device, where the human bone point identification device includes:
the input module is used for obtaining a human body image to be recognized;
the characteristic extraction module is used for inputting the human body image to be recognized into a pre-trained bone point recognition model, extracting a characteristic image of the human body image to be recognized through the bone point recognition model, and obtaining a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the characteristic image;
the candidate module is used for extracting a position with a thermal value meeting a preset candidate condition from the thermodynamic diagram of each bone point as a candidate position of the bone point, and obtaining a candidate label value of each candidate position from the label diagram of the bone point so as to obtain each candidate position of each bone point and a candidate label value of each candidate position;
a threshold value generation module, configured to obtain, according to the candidate tag value of each candidate position of each bone point, a first tag value of a left limb and a second tag value of a right limb in the human body image to be identified; and
and the selecting module is used for determining the bone position of each bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value so as to obtain the bone position of each bone point.
Compared with the prior art, the beneficial effects provided by the application are that:
the human body bone point identification method and device provided by the embodiment of the application can accurately determine the bone points of the left and right limbs, avoid the bone point identification error caused by the similarity of the bone points of the left and right limbs, and improve the identification precision of the bone points.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope. For a person skilled in the art, it is possible to derive other relevant figures from these figures without inventive effort.
Fig. 1 is a block diagram illustrating a structure of an electronic device for implementing a human skeletal point identification method according to an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a human skeletal point identification method according to an embodiment of the present disclosure.
Fig. 3 is a schematic network structure diagram of a bone point identification model according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating a method for training a skeleton point recognition model according to an embodiment of the present disclosure.
Fig. 5 is a functional block diagram of a human bone point identification device according to an embodiment of the present application.
Reference numerals: 100-an electronic device; 110-a bus; 120-a processor; 130-a storage medium; 140-bus interface; 150-a network adapter; 160-a user interface; 200-human bone point identification means; 210-an input module; 220-a feature extraction module; 230-candidate modules; 240-a threshold generation module; 250-selecting module.
Detailed Description
Aiming at the technical problems in the prior art, a person skilled in the art generally utilizes a deep convolutional neural network to extract features of a human body image to be recognized, and maps the feature extraction result into thermodynamic diagrams of all bone points, wherein each bone point corresponds to one thermodynamic diagram. And for each bone point, setting the position with the highest heat value in the thermodynamic diagram of the bone point as the bone position of the bone point so as to obtain the bone position of each bone point.
The inventor of the application finds that the skeleton points of the left limb and the right limb of the human body have strong similarity, but each skeleton point in the prior art is considered independently, and no special treatment is carried out on the symmetrical skeleton points of the left limb and the right limb. Therefore, when a skeleton point of one side limb is obtained according to the thermodynamic diagram, the skeleton point is possibly mixed with a symmetrical skeleton point of the other side limb, so that the recognition of the skeleton point is wrong, and further, the subsequent processing is wrong.
For example, the texture of the left and right hands in the image of the human body to be recognized may be very similar, and the positions of the two may be very close to each other. When the bone position of the left hand is obtained by using the heat map of the left hand, because the texture features of the two are similar, the heat of the right hand position may be higher than that of the left hand position in the heat map of the left hand, thereby causing the misjudgment of the right hand position as the position of the left hand. In subsequent processing, for example, motion recognition, a human motion may be erroneously recognized because of an incorrect left-hand position.
Based on the above technical problems, the inventor of the present application found that a label map distinguishing the left and right limbs can be obtained at the same time as the thermodynamic diagram. Candidate positions of each bone point and the candidate label values of those positions are obtained from the thermodynamic diagram and the label map, label threshold values of the left and right limbs are determined from the candidate label values, and the bone position of each bone point is then determined from the candidate positions according to those label threshold values. By generating candidate positions and selecting the bone position from them through the label values, the positions of the bone points of the left and right limbs can be determined accurately, recognition errors caused by the similarity of the left- and right-limb bone points are avoided, and the recognition accuracy of the bone points is improved.
The drawbacks of the above prior art solutions are the result of the applicant's practical and careful study. Therefore, the process of discovering the above problems and the solutions proposed to them by the following embodiments should both be regarded as contributions made by the applicant in the course of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that the terms "upper", "lower", and the like refer to orientations or positional relationships based on orientations or positional relationships shown in the drawings or orientations or positional relationships that the products of the application usually place when using, are only used for convenience of description and simplification of description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
In the description of the present application, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features in the embodiments can be combined with each other without conflict.
Referring to fig. 1, a block diagram of an electronic device 100 for implementing a human skeletal point identification method described below according to an embodiment of the present disclosure is shown.
As shown in FIG. 1, electronic device 100 may be implemented by bus 110 as a general bus architecture. Bus 110 may include any number of interconnecting buses and bridges depending on the specific application of electronic device 100 and the overall design constraints. Bus 110 connects various circuits together, including processor 120, storage medium 130, and bus interface 140. Alternatively, the electronic apparatus 100 may connect a network adapter 150 or the like via the bus 110 using the bus interface 140. The network adapter 150 may be used to implement signal processing functions of a physical layer in the electronic device 100 and implement transmission and reception of radio frequency signals through an antenna. The user interface 160 may connect external devices such as: a keyboard, a display, a mouse or a joystick, etc. The bus 110 may also connect various other circuits such as timing sources, peripherals, voltage regulators, or power management circuits, which are well known in the art, and therefore, will not be described in detail.
Alternatively, the electronic device 100 may be configured as a general purpose processing system, for example, commonly referred to as a chip, including: one or more microprocessors providing processing functions, and an external memory providing at least a portion of storage medium 130, all connected together with other support circuits through an external bus architecture.
Alternatively, the electronic device 100 may be implemented using: an ASIC (application specific integrated circuit) having a processor 120, a bus interface 140, a user interface 160; and at least a portion of the storage medium 130 integrated in a single chip, or the electronic device 100 may be implemented using: one or more FPGAs (field programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gate logic, discrete hardware components, any other suitable circuitry, or any combination of circuitry capable of performing the various functions described throughout this application.
Among other things, processor 120 is responsible for managing bus 110 and general processing (including the execution of software stored on storage medium 130). Processor 120 may be implemented using one or more general-purpose processors and/or special-purpose processors. Examples of processor 120 include microprocessors, microcontrollers, DSP processors, and other circuits capable of executing software. Software should be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Storage medium 130 is shown in fig. 1 as being separate from processor 120, however, one skilled in the art will readily appreciate that storage medium 130, or any portion thereof, may be located external to electronic device 100. Storage medium 130 may include, for example, a transmission line, a carrier waveform modulated with data, and/or a computer product separate from the wireless node, which may be accessed by processor 120 via bus interface 140. Alternatively, the storage medium 130, or any portion thereof, may be integrated into the processor 120, e.g., may be a cache and/or general purpose registers.
The processor 120 may perform the following embodiments, and in particular, the storage medium 130 may store therein the human skeletal point identification apparatus 200, and the processor 120 may be configured to execute the human skeletal point identification apparatus 200.
Further, please refer to fig. 2, which is a flowchart illustrating a human bone point identification method according to an embodiment of the present application, in which the human bone point identification method is executed by the electronic device 100 shown in fig. 1, and the human bone point identification method according to the present application is described in detail below with reference to fig. 2. It should be noted that the human bone point identification method provided in the embodiment of the present application is not limited by fig. 2 and the following specific sequence. The method comprises the following specific processes:
and step S110, obtaining a human body image to be recognized.
In this embodiment, the electronic device 100 executing the human skeletal point identification method may be any device with computing processing capability, for example, a mobile terminal such as a mobile phone and a tablet, or a computing device such as a computer and a server.
When the electronic device 100 is a mobile terminal such as a mobile phone or a tablet, the electronic device 100 may call a camera module to obtain a human body image to be recognized in response to a user operation, or call the human body image to be recognized from the storage medium 130 in response to the user operation.
When the electronic device 100 is a computing device such as a computer or a server, the human body image to be recognized may be retrieved from the storage medium 130. The electronic device 100 may also be configured as a cloud server that obtains the human body image to be recognized from a terminal communicatively connected to the electronic device 100.
And S120, inputting the human body image to be recognized into a pre-trained bone point recognition model, extracting a characteristic image of the human body image to be recognized through the bone point recognition model, and obtaining a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the characteristic image.
In this embodiment, the bone point recognition model may include a feature extraction network and an output unit. In operation, after the human body image to be recognized is input into the pre-trained bone point recognition model, the feature image of the human body image to be recognized is extracted through the feature extraction network, the extracted feature image is then input into the output unit, and the output unit maps the feature image to obtain the thermodynamic diagram and the label diagram of each bone point in the human body image to be recognized.
As an embodiment, the feature extraction network may include a plurality of convolution units, and the step of extracting the feature image of the human body image to be recognized through the feature extraction network includes:
performing feature extraction on the human body image to be recognized through the convolution units in turn, with the feature extraction result of each convolution unit input to the next convolution unit, and taking the feature extraction result output by the last convolution unit as the feature image, wherein each convolution unit includes at least one convolution subunit;
in the step of performing feature extraction through each convolution unit, a feature extraction image is obtained through each convolution subunit of the convolution unit, and the feature extraction image is fused with the feature image that entered that convolution subunit (the feature before extraction) to obtain the feature extraction result of the convolution subunit.
As one embodiment, each convolution subunit may include a first convolution layer comprising a 1 × 1 convolution kernel, a second convolution layer comprising a 3 × 3 convolution kernel, and a third convolution layer comprising a 1 × 1 convolution kernel. It should be noted that the number of convolution kernels of each convolution layer may be adjusted according to the number of channels of the image to be identified, which is not limited in the present application.
Alternatively, the output unit may be composed of a plurality of 1 × 1 convolution kernels. In the step of mapping the feature image through the output unit to obtain the thermodynamic diagram and the label diagram of each bone point in the human body image to be recognized, each convolution kernel is used to convolve the feature image, yielding the thermodynamic diagram and the label diagram of each bone point. As an embodiment, the human body image to be recognized may include 14 bone points, namely the top of the head, the neck, the right shoulder, the left shoulder, the right elbow, the left elbow, the right wrist, the left wrist, the right hip, the left hip, the right knee, the left knee, the right ankle and the left ankle, in which case the output unit outputs 14 thermodynamic diagrams and 14 label diagrams.
In a possible implementation manner, as shown in fig. 3, which illustrates a network structure of the bone point identification model, the feature extraction network may include five convolution units. When feature extraction is performed on the human body image to be identified, the first three convolution units may down-sample it by adjusting their convolution step sizes, thereby changing the image size of the feature extraction result. Optionally, the size of the feature image in the present application is one eighth of the size of the human body image to be recognized.
Optionally, for each second convolution layer in the fourth convolution unit, the convolution kernel uses a dilation rate of 2 when performing the convolution operation, and for each second convolution layer in the fifth convolution unit, the convolution kernel uses a dilation rate of 4. This prevents the receptive field of the convolution kernels from shrinking as the feature image becomes smaller than the human body image to be recognized, and improves the feature extraction capability.
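For illustration only, the following is a simplified Python (PyTorch-style) sketch of such a network. The channel widths, the use of a single subunit per convolution unit, and the normalization layers are assumptions of this sketch and are not limited by the present application.

```python
import torch
import torch.nn as nn

class ConvSubunit(nn.Module):
    """Convolution subunit: 1x1 conv -> 3x3 conv -> 1x1 conv, fused with its input."""
    def __init__(self, channels, mid_channels, stride=1, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1, bias=False),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, stride=stride,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # If the 3x3 convolution strides, the input is resized before fusion.
        self.skip = nn.Identity() if stride == 1 else nn.Conv2d(
            channels, channels, 1, stride=stride, bias=False)

    def forward(self, x):
        # Fuse the feature extraction image with the feature image before extraction.
        return torch.relu(self.body(x) + self.skip(x))

class BonePointModel(nn.Module):
    """Feature extraction network (five convolution units) plus a 1x1 output unit that
    maps the feature image to 14 thermodynamic diagrams and 14 label diagrams."""
    def __init__(self, num_points=14, width=64):
        super().__init__()
        self.unit1 = nn.Conv2d(3, width, 3, stride=2, padding=1)   # 1/2 size
        self.unit2 = ConvSubunit(width, width // 2, stride=2)      # 1/4 size
        self.unit3 = ConvSubunit(width, width // 2, stride=2)      # 1/8 size
        self.unit4 = ConvSubunit(width, width // 2, dilation=2)    # dilation rate 2
        self.unit5 = ConvSubunit(width, width // 2, dilation=4)    # dilation rate 4
        self.out_heat = nn.Conv2d(width, num_points, 1)            # thermodynamic diagrams
        self.out_tag = nn.Conv2d(width, num_points, 1)             # label diagrams

    def forward(self, image):
        f = self.unit5(self.unit4(self.unit3(self.unit2(self.unit1(image)))))
        return self.out_heat(f), self.out_tag(f)
```

For example, heatmaps, tagmaps = BonePointModel()(torch.randn(1, 3, 256, 192)) returns two tensors of shape (1, 14, 32, 24), i.e. one eighth of the input size, as described above.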
Optionally, to obtain the above bone point recognition model, before step S110, the human bone point recognition method provided by the present application further includes a method of training to obtain the bone point recognition model.
Referring to fig. 4, the training of the bone point recognition model can be realized by the following steps:
step S210, a sample set is obtained, each sample image in the sample set is input into an initial bone point identification model, a sample thermodynamic diagram and a sample label diagram of each bone point in the sample image are obtained through the initial bone point identification model, and then a sample candidate position, a sample candidate label value of each sample candidate position, a first sample label value of a left limb in the sample image, a second sample label value of a right limb in the sample image, a sample bone position of each bone point and a sample bone label value of each sample bone position are obtained according to the sample thermodynamic diagram and the sample label diagram of each bone point.
Each sample image in the sample set includes position information of each bone point, and during labeling, left and right limbs of the human body can be distinguished according to the orientation of the human body in the sample image and are not simply labeled according to the left and right of the sample image.
The step of obtaining the sample candidate position of each bone point, the sample candidate label value of each sample candidate position, the first sample label value of the left limb in the sample image, the second sample label value of the right limb in the sample image, the sample bone position of each bone point, and the sample bone label value of each sample bone position in step S210 may be implemented by the following sub-steps:
firstly, for each bone point, extracting a position higher than a preset thermal threshold value from the thermodynamic diagram of the bone point as a sample candidate position of the bone point according to the thermal force value of each position in the thermodynamic diagram of the bone point, and obtaining the sample candidate label value of each sample candidate position from the label diagram of the bone point to obtain the sample candidate position of each bone point and the sample candidate label value of each sample candidate position.
Then, a first sample label value of the left limb and a second sample label value of the right limb in the sample image are obtained according to the sample candidate label value of each sample candidate position.
Finally, for each bone point, determining a sample bone position of the bone point from each sample candidate position of the bone point according to the sample candidate label value of each sample candidate position of the bone point and the first sample label value or the second sample label value, and determining a sample bone label value of the sample bone position according to the sample bone position to obtain the sample bone position of each bone point and the sample bone label value of each sample bone position.
Step S220, obtaining a heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point, and obtaining a label loss value according to the sample bone label value of each sample bone position, the first sample label value and the second sample label value.
Before further describing step S220, it can be understood that the standard thermodynamic diagram of each bone point may be labeled in the sample image in advance. However, in order to reduce the labeling workload, the method provided by the present application may further include, before step S220, a step of generating the standard thermodynamic diagram of each bone point according to the position information of the bone point labeled in the sample image.
The position information marked in the sample image, any position in the standard thermodynamic diagram and the heat value of the position meet the following relations:
H(b) = exp(-((bx - ax)^2 + (by - ay)^2) / (2 * sigma^2))
wherein ax and ay are the coordinates to which the annotated position information is mapped in the standard thermodynamic diagram, bx and by are the coordinates of the position, H(b) is the heat value of the position, and sigma controls how quickly the heat value decays away from the annotated position.
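As an illustrative sketch only, the standard thermodynamic diagram may be generated as follows in Python with NumPy; the spread sigma and the map size used here are assumptions of the sketch, not values fixed by the present application.

```python
import numpy as np

def standard_heatmap(ax, ay, height, width, sigma=2.0):
    """Standard thermodynamic diagram of one bone point: the heat value peaks at the
    mapped annotation (ax, ay) and decays with the squared distance from it."""
    bx = np.arange(width)[None, :]   # column coordinate of every position
    by = np.arange(height)[:, None]  # row coordinate of every position
    return np.exp(-((bx - ax) ** 2 + (by - ay) ** 2) / (2.0 * sigma ** 2))

# Example: a 32 x 24 standard thermodynamic diagram for an annotation mapped to column 6.5, row 10.0.
gt_heatmap = standard_heatmap(ax=6.5, ay=10.0, height=32, width=24)
```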
Based on the standard thermodynamic diagrams obtained in the above step, when obtaining the heat loss value in step S220, for each bone point, the mean square error between the sample thermodynamic diagram of the bone point and its standard thermodynamic diagram may be calculated from the heat value of each position in the sample thermodynamic diagram and the heat value of the corresponding position in the standard thermodynamic diagram, giving a mean square error for each bone point. The sum of the mean square errors of all bone points is then calculated and taken as the heat loss value.
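A minimal sketch of this heat loss computation, assuming both sets of thermodynamic diagrams are stored as NumPy arrays of shape (number of bone points, height, width):

```python
def heat_loss(sample_heatmaps, standard_heatmaps):
    """Mean square error per bone point between the sample thermodynamic diagram and
    the standard thermodynamic diagram, summed over all bone points."""
    per_point_mse = ((sample_heatmaps - standard_heatmaps) ** 2).mean(axis=(1, 2))
    return float(per_point_mse.sum())
```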
In step S220, when obtaining the label loss value, for each bone point on the left limb, a first difference between the sample label value of the bone point and the first sample label value is calculated, and the first square value of that first difference is computed, giving a first square value for each bone point on the left limb; the first square values are summed to obtain a first label loss value. Then, for each bone point on the right limb, a second difference between the sample label value of the bone point and the second sample label value is calculated, and the second square value of that second difference is computed, giving a second square value for each bone point on the right limb; the second square values are summed to obtain a second label loss value. Next, a third difference between the first sample label value and the second sample label value is calculated, and the third square value of the third difference is computed. Finally, the sum of the first label loss value and the second label loss value is calculated to obtain a third label loss value, and the difference between the third label loss value and the third square value is taken as the label loss value.
Wherein the sample label value of each bone point, the first sample label value and the second sample label value satisfy the following relationship:
Loss_tag = sum_{i in left} (t_i - T_left)^2 + sum_{j in right} (t_j - T_right)^2 - (T_left - T_right)^2
wherein Loss_tag is the label loss value, T_left is the first sample label value, T_right is the second sample label value, t_i (i in left) is the sample label value of a bone point located on the left limb, and t_j (j in right) is the sample label value of a bone point located on the right limb.
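A sketch of the label loss under the relation above, assuming the sample bone label values are collected in a NumPy array indexed by bone point and that left_idx / right_idx list the indices of the left-limb and right-limb bone points:

```python
def tag_loss(sample_tags, left_idx, right_idx, T_left, T_right):
    """Label loss: the first two terms pull each limb's sample bone label values toward
    that limb's sample label value, and the last term pushes the two limbs apart."""
    t_left = sample_tags[left_idx]    # sample bone label values of left-limb bone points
    t_right = sample_tags[right_idx]  # sample bone label values of right-limb bone points
    return float(((t_left - T_left) ** 2).sum()
                 + ((t_right - T_right) ** 2).sum()
                 - (T_left - T_right) ** 2)
```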
It is to be understood that the label values do not need to be assigned fixed characteristic values during training, for example by forcing the label values of left-limb bone points to be positive (the more positive, the more likely the point lies on the left limb) and those of right-limb bone points to be negative. As the loss value is iterated, the first two terms of Loss_tag become smaller and smaller while the last term, (T_left - T_right)^2, becomes larger, so that by the time training is complete a difference between the label values of the left limb and those of the right limb arises naturally. This difference may be the positive/negative split mentioned above, or the values on one side may simply be smaller numbers while those on the other side are larger numbers.
And step S230, obtaining a loss function value according to the heat loss value and the label loss value, and updating the network parameters of the bone point identification model according to the loss function value.
The network parameters of the bone point identification model may include convolution parameters of the feature extraction network and each convolution kernel of the output unit.
As an embodiment, the heat loss value, the label loss value and the loss function value satisfy the following relationship:
Loss = lambda * Loss_tag + Loss_heat
wherein Loss_heat is the heat loss value, Loss_tag is the label loss value, lambda is the weighting coefficient of the label loss value, and Loss is the loss function value.
The inventor of the present application considers that the label loss value is obtained based on the sample candidate positions, so its influence needs to be reduced when iterating the loss function value. For this reason, a weighting coefficient lambda is assigned to the label loss value when calculating the loss function value; lambda is generally set to 0.01.
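Continuing the loss sketches above, the combination itself is a single weighted sum (the default weight 0.01 is the typical value mentioned here):

```python
def total_loss(loss_heat_value, loss_tag_value, lam=0.01):
    """Loss = lambda * Loss_tag + Loss_heat, with the label loss down-weighted by lambda."""
    return lam * loss_tag_value + loss_heat_value
```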
And S240, repeating the above steps, judging after each round of training whether the bone point recognition model reaches a training termination condition, and when a bone point recognition model is judged to meet the training termination condition, taking that model as the trained bone point recognition model.
Based on the design, a bone point recognition model trained in advance can be obtained through the steps, and the loss function value is calculated according to the heat loss value and the label loss value, so that the information of the left and right limbs can be fully utilized in the training process, and the accuracy of outputting the thermodynamic diagram and the label diagram by the bone point recognition model is improved.
Step S130, extracting a position with a thermal value meeting a preset candidate condition from the thermodynamic diagram of each bone point as a candidate position of the bone point, and obtaining a candidate label value of each candidate position from the label diagram of the bone point so as to obtain each candidate position of each bone point and a candidate label value of each candidate position.
The inventor of the application considers that the bone points of the left and right limbs may be confused with each other. The prior-art practice of directly taking the position with the highest heat value as the bone position of a bone point is therefore replaced by extracting candidate positions from the thermodynamic diagram and then confirming the bone position from among those candidates. When extracting the candidate positions, the heat value of every position in the thermodynamic diagram may first be traversed, and the positions meeting the preset candidate condition are extracted and taken as the candidate positions of the bone point.
As an embodiment, the preset candidate condition may be a preset thermal threshold condition, in which case the positions meeting the condition are those whose heat value is higher than a preset thermal threshold; that is, the positions with heat values above the threshold are extracted from the thermodynamic diagram as candidate positions. In practice the preset thermal threshold may generally be set to 0.1, or it may be adjusted flexibly according to the actual situation.
As another embodiment, the preset candidate condition may be a heat value ranking condition, in which case the positions meeting the condition are the top several positions when all heat values are ranked.
Optionally, the foregoing embodiments may be used in combination. For example, it may first be determined whether any position has a heat value higher than the preset thermal threshold; if so, those positions are directly taken as candidate positions, and if not, the heat values of all positions are sorted and the top several positions in the ranking are taken as the candidate positions.
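For illustration only, a sketch of the candidate position extraction for one bone point, combining the two embodiments above; the threshold 0.1 and the fallback count are assumptions of the sketch.

```python
import numpy as np

def candidate_positions(heatmap, heat_threshold=0.1, top_k=5):
    """Candidate positions of a bone point: every position whose heat value is higher
    than the preset thermal threshold; if none qualifies, the top-k positions by heat value."""
    ys, xs = np.where(heatmap > heat_threshold)
    if len(ys) == 0:
        flat = np.argsort(heatmap, axis=None)[::-1][:top_k]  # indices of the highest heat values
        ys, xs = np.unravel_index(flat, heatmap.shape)
    return list(zip(ys.tolist(), xs.tolist()))
```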
It should be noted that the thermodynamic diagrams correspond one-to-one with the bone points and this correspondence is carried by the thermodynamic diagrams themselves, so there is no need to match thermodynamic diagrams to bone points after they are obtained; the correspondence between the label diagrams and the bone points is consistent with that between the thermodynamic diagrams and the bone points.
After the step of obtaining candidate locations, candidate tag values for the candidate locations may be extracted from corresponding locations in the tag map of the bone point according to the location coordinates of the candidate locations.
It should be noted that the thermodynamic diagram and the label diagram are generally the same size, so each position on the thermodynamic diagram can be mapped directly onto the label diagram by its position coordinates. When the thermodynamic diagram and the label diagram differ in size, a mapping between their position coordinates can be established according to the two sizes, and each position on the thermodynamic diagram can then be mapped onto the label diagram through this mapping to obtain the candidate label value.
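A sketch of reading the candidate label values, with the coordinate mapping applied only when the two diagrams differ in size; the assumption here is that the mapping is a simple proportional scaling.

```python
def candidate_tag_values(candidates, tag_map, heat_shape):
    """Candidate label value of each candidate position, read from the label diagram."""
    sy = tag_map.shape[0] / heat_shape[0]   # 1.0 when the two diagrams are the same size
    sx = tag_map.shape[1] / heat_shape[1]
    values = []
    for y, x in candidates:
        iy = min(int(round(y * sy)), tag_map.shape[0] - 1)
        ix = min(int(round(x * sx)), tag_map.shape[1] - 1)
        values.append(float(tag_map[iy, ix]))
    return values
```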
Alternatively, for bone points that do not belong to either the left or right limb, the position with the highest heat value in the heat map may be directly taken as the bone position, e.g., the vertex, the neck. Alternatively, in order to reduce the computation amount, the output unit may not output the label map of the bone points not belonging to the left or right limb, for example, may output only the label map of 12 bone points of the right shoulder, the left shoulder, the right elbow, the left elbow, the right wrist, the left wrist, the right hip, the left hip, the right knee, the left knee, the right ankle, and the left ankle.
Step S140, obtaining a first label value of the left limb and a second label value of the right limb in the human body image to be identified according to the candidate label value of each candidate position of each bone point.
When carrying out this statistic, it is first necessary to determine whether each bone point belongs to the left limb or the right limb; the first label value of the left limb and the second label value of the right limb are then obtained from the candidate label values of the candidate positions counted for the left limb and the right limb respectively.
As an embodiment, when step S140 is executed, a first candidate label mean value of the candidate label values of all candidate positions of all left-limb bone points in the human body image to be recognized may first be calculated and taken as the first label value, and a second candidate label mean value of the candidate label values of all candidate positions of all right-limb bone points may then be calculated and taken as the second label value.
Based on the design, the first label value and the second label value are obtained according to the candidate label values of the candidate positions of the left limb and the right limb, so that the left limb and the right limb can be more accurately marked by the first label value and the second label value.
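A sketch of this statistic, assuming tag_values_per_point[k] holds the candidate label values of bone point k (as produced by the lookup sketch above) and left_idx / right_idx list the left-limb and right-limb bone points:

```python
import numpy as np

def limb_label_values(tag_values_per_point, left_idx, right_idx):
    """First label value (left limb) and second label value (right limb): the mean of the
    candidate label values over all candidate positions of that limb's bone points."""
    left_vals = [v for k in left_idx for v in tag_values_per_point[k]]
    right_vals = [v for k in right_idx for v in tag_values_per_point[k]]
    return float(np.mean(left_vals)), float(np.mean(right_vals))
```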
Step S150, for each bone point, determining the bone position of the bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value, so as to obtain the bone position of each bone point.
The inventor considers that the candidate positions of a bone point may include both the bone position of the bone point itself and the bone position of its symmetric bone point. Therefore, the label value of each candidate position is compared with the label threshold of the limb on which the bone point is located (one of the first label value and the second label value), and the candidate position whose label value is closest to that threshold is taken as the bone position of the bone point, so that similar bone points are effectively distinguished and the accuracy of human bone point identification is improved.
As an embodiment, step S150 may first determine whether the bone point is located on the left limb or the right limb. When the bone point is located on the left limb, a first difference between the candidate label value of each candidate position of the bone point and the first label value is calculated, and the candidate position whose first difference has the smallest absolute value is taken as the bone position of the bone point; when the bone point is located on the right limb, a second difference between the candidate label value of each candidate position and the second label value is calculated, and the candidate position whose second difference has the smallest absolute value is taken as the bone position of the bone point.
It should be noted that, the present application is not limited to the method for comparing the tag value with the tag threshold, for example, in addition to the difference comparison, the comparison may be performed according to the ratio of the candidate tag value to the tag threshold, and a person skilled in the art may flexibly select the comparison method according to the situations of the first tag value and the second tag value.
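A sketch of the difference-based selection described above; the same function serves the left limb with the first label value and the right limb with the second label value.

```python
def select_bone_position(candidates, tag_values, limb_label_value):
    """Bone position of a bone point: the candidate position whose candidate label value
    has the smallest absolute difference from the label value of the bone point's limb."""
    best = min(range(len(candidates)),
               key=lambda i: abs(tag_values[i] - limb_label_value))
    return candidates[best]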
Optionally, after step S150, the method for identifying human bone points provided by the present application may further include:
and respectively adjusting the bone positions of the corresponding target bone points according to the received bone point adjusting instructions, and generating an adjusted human body image to be identified according to the adjusted target bone positions, wherein the bone point adjusting instructions comprise target bone points of which the bone positions need to be adjusted and an adjusting strategy of each target bone point.
Optionally, after step S150, the method for identifying human bone points provided by the present application may further include:
inputting the skeleton position of each skeleton point into a posture classification function, and calculating to obtain posture information of the human body image to be recognized;
and determining the human body action in the human body image to be recognized according to the posture information of the human body image to be recognized.
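As a purely hypothetical illustration of this follow-up step, assuming bone_positions holds the bone position of each bone point; posture_classifier and action_from_posture are placeholder names for application-specific functions and are not defined by the present application.

```python
import numpy as np

# Flatten the recognized bone positions into a feature vector for posture classification.
pose_vector = np.array(bone_positions, dtype=np.float32).reshape(-1)
posture = posture_classifier(pose_vector)      # placeholder posture classification function
action = action_from_posture(posture)          # placeholder mapping from posture to action
```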
In one embodiment, referring to fig. 5, which is a functional block diagram of the human bone point identification apparatus 200 according to an embodiment of the present application, the human bone point identification apparatus 200 may include:
the input module 210 is used for obtaining a human body image to be recognized.
The feature extraction module 220 is configured to input the human body image to be recognized into a pre-trained bone point recognition model, extract a feature image of the human body image to be recognized through the bone point recognition model, and obtain a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the feature image.
And a candidate module 230, configured to, for each bone point, extract, from the thermodynamic diagram of the bone point, a position with a thermal value higher than a preset thermal threshold as a candidate position of the bone point, and obtain a candidate label value of each candidate position from the label diagram of the bone point, so as to obtain each candidate position of each bone point and a candidate label value of each candidate position.
And a threshold generating module 240, configured to obtain a first label value of the left limb and a second label value of the right limb in the human body image to be recognized according to the candidate label value of each candidate position of each bone point. And
and a selecting module 250, configured to determine, for each bone point, a bone position of the bone point from each candidate position according to the candidate tag value of each candidate position of the bone point and the first tag value or the second tag value, so as to obtain the bone position of each bone point.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
Alternatively, all or part of the implementation may be in software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, including an integrated electronic device, server, data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (ssd)), among others.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments and that the present application may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A human skeletal point identification method is characterized by comprising the following steps:
obtaining a human body image to be identified;
inputting the human body image to be recognized into a pre-trained bone point recognition model, extracting a characteristic image of the human body image to be recognized through the bone point recognition model, and obtaining a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the characteristic image;
for each bone point, extracting a position with a thermal value meeting a preset candidate condition from a thermodynamic diagram of the bone point as a candidate position of the bone point, and obtaining a candidate label value of each candidate position from a label diagram of the bone point to obtain each candidate position of each bone point and a candidate label value of each candidate position, wherein the preset candidate condition is a preset thermal threshold condition or a thermal value sorting condition;
obtaining a first label value of a left limb and a second label value of a right limb in the human body image to be identified according to the candidate label value of each candidate position of each skeleton point;
for each bone point, determining the bone position of the bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value to obtain the bone position of each bone point.
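By way of illustration only, the following Python/NumPy sketch shows one way the candidate-extraction step of claim 1 could be realized for a single bone point; the heat threshold of 0.5, the top-k fallback, and all names are assumptions of this sketch rather than features recited in the claim.

    import numpy as np

    def extract_candidates(heatmap, tagmap, heat_thresh=0.5, top_k=5):
        """Return candidate (y, x) positions whose heat value exceeds the
        threshold, together with the label (tag) value read at each position."""
        ys, xs = np.where(heatmap > heat_thresh)
        if len(ys) == 0:
            # Fall back to a heat-value sorting condition: keep the top-k responses.
            flat = np.argsort(heatmap, axis=None)[-top_k:]
            ys, xs = np.unravel_index(flat, heatmap.shape)
        positions = list(zip(ys.tolist(), xs.tolist()))
        tag_values = [float(tagmap[y, x]) for y, x in positions]
        return positions, tag_values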
2. The human bone point identification method according to claim 1, wherein the step of obtaining a first label value of a left limb and a second label value of a right limb in the human image to be identified according to the candidate label value of each candidate position of each bone point comprises:
calculating a first candidate label mean value of candidate label values of each candidate position of each bone point of the left limb in the human body image to be identified, and taking the first candidate label mean value as the first label value;
and calculating a second candidate label mean value of the candidate label values of each candidate position of each bone point of the right limb in the human body image to be recognized, and taking the second candidate label mean value as the second label value.
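A minimal sketch of the averaging in claim 2, assuming `candidate_tags` maps each bone point index to the list of candidate label values obtained for it, and `left_ids` / `right_ids` are hypothetical index sets for the left-limb and right-limb bone points:

    import numpy as np

    def limb_label_values(candidate_tags, left_ids, right_ids):
        left = [t for i in left_ids for t in candidate_tags[i]]
        right = [t for i in right_ids for t in candidate_tags[i]]
        first_label_value = float(np.mean(left))    # mean over left-limb candidate tags
        second_label_value = float(np.mean(right))  # mean over right-limb candidate tags
        return first_label_value, second_label_value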
3. The human bone point identification method of claim 1, wherein the step of determining the bone position of the bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value comprises:
determining whether the skeletal point is in the left limb or the right limb;
when the bone point is located in the left limb, calculating a first difference value between the candidate label value of each candidate position of the bone point and the first label value, and taking the candidate position whose first difference value has the smallest absolute value as the bone position of the bone point;
when the bone point is located in the right limb, calculating a second difference value between the candidate label value of each candidate position of the bone point and the second label value, and taking the candidate position whose second difference value has the smallest absolute value as the bone position of the bone point.
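The selection rule of claim 3 reduces to a nearest-tag search; the sketch below assumes the candidates for one bone point and the reference label value (first or second, depending on the limb) are already available, and is not the claimed implementation itself:

    def select_bone_position(positions, tag_values, reference_label_value):
        """Return the candidate position whose tag value is closest to the
        reference label value (smallest absolute difference)."""
        best = min(range(len(positions)),
                   key=lambda i: abs(tag_values[i] - reference_label_value))
        return positions[best]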
4. The human skeletal point recognition method of claim 1, wherein before the step of inputting the human image to be recognized into a pre-trained skeletal point recognition model, the method further comprises:
training to obtain the bone point identification model;
the step of training to obtain the bone point identification model comprises the following steps:
obtaining a sample set, inputting each sample image in the sample set into an initial bone point identification model, obtaining a sample thermodynamic diagram and a sample label diagram of each bone point in the sample image through the initial bone point identification model, and obtaining a sample candidate position, a sample candidate label value of each sample candidate position, a first sample label value of a left limb in the sample image, a second sample label value of a right limb in the sample image, a sample bone position of each bone point and a sample bone label value of each sample bone position according to the sample thermodynamic diagram and the sample label diagram of each bone point;
obtaining a heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point, and obtaining a label loss value according to the sample bone label value, the first sample label value and the second sample label value of each sample bone position;
obtaining a loss function value according to the heat loss value and the label loss value, and updating a network parameter of the bone point identification model according to the loss function value;
and repeating the above steps, judging whether the bone point recognition model obtained in each round of training reaches a training termination condition, and, when a bone point recognition model is judged to reach the training termination condition, taking that bone point recognition model as the trained bone point recognition model.
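A high-level sketch of the training loop in claim 4; the model architecture, optimizer, data format, and termination test are not specified by the claim, so they are passed in as caller-supplied callables (PyTorch-style), and every name here is an assumption of the sketch:

    def train_bone_point_model(model, sample_loader, optimizer,
                               heat_loss_fn, label_loss_fn, stop_fn,
                               max_epochs=100):
        """Skeleton of the claimed training procedure; all callables are supplied
        by the caller."""
        for epoch in range(max_epochs):
            for sample_image, standard_heatmaps, annotations in sample_loader:
                # Forward pass: sample thermodynamic diagrams and sample label diagrams.
                sample_heatmaps, sample_tagmaps = model(sample_image)
                heat_loss = heat_loss_fn(sample_heatmaps, standard_heatmaps)
                label_loss = label_loss_fn(sample_tagmaps, annotations)
                loss = heat_loss + label_loss  # how the two terms are combined is an assumption
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            if stop_fn(model, epoch):  # training termination condition
                break
        return model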
5. The human bone point identification method of claim 4, wherein, before the step of obtaining a heat loss value according to the sample thermodynamic diagram of each bone point and the standard thermodynamic diagram of each bone point, the method further comprises:
generating, for each bone point, a standard thermodynamic diagram of the bone point according to the annotated position information of the bone point in the sample image;
wherein the position information, any position in the standard thermodynamic diagram and the heat value of that position satisfy the following relation:
[Formula shown as an image in the original publication; not reproduced here.]
wherein ax and ay are the coordinates in the standard thermodynamic diagram to which the annotated position information is mapped, bx and by are the coordinates of the position in question, and H(b) is the heat value of that position.
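The relation itself appears only as an image in the original filing. A common construction for such standard heatmaps, offered here purely as an assumption, is a 2-D Gaussian centred on the mapped coordinates (ax, ay), i.e. H(b) = exp(-((bx - ax)^2 + (by - ay)^2) / (2 * sigma^2)):

    import numpy as np

    def standard_heatmap(a_x, a_y, height, width, sigma=2.0):
        """Gaussian standard heatmap centred on the mapped annotation (a_x, a_y);
        the Gaussian form and sigma are assumptions, not taken from the claim."""
        b_y, b_x = np.mgrid[0:height, 0:width]
        return np.exp(-((b_x - a_x) ** 2 + (b_y - a_y) ** 2) / (2.0 * sigma ** 2))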
6. The human bone point identification method of claim 4 wherein said step of deriving a heat loss value from the sample thermodynamic diagram for each bone point and the standard thermodynamic diagram for each bone point comprises:
for each bone point, calculating the mean square error of the sample thermodynamic diagram of the bone point and the standard thermodynamic diagram of the bone point according to the thermodynamic value of each position in the sample thermodynamic diagram of the bone point and the thermodynamic value of the corresponding position in the standard thermodynamic diagram of the bone point to obtain the mean square error of each bone point;
and calculating the sum of the mean square errors of each bone point, and taking the sum of the mean square errors as the heat loss value.
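A minimal NumPy sketch of claim 6, assuming the sample and standard thermodynamic diagrams are stacked into (K, H, W) arrays for K bone points:

    import numpy as np

    def heat_loss(sample_heatmaps, standard_heatmaps):
        """Sum over bone points of the per-point mean squared error."""
        per_point_mse = np.mean((sample_heatmaps - standard_heatmaps) ** 2, axis=(1, 2))
        return float(np.sum(per_point_mse))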
7. The human bone point identification method of claim 4, wherein the step of obtaining a label loss value according to the sample bone label value of each sample bone position, the first sample label value and the second sample label value comprises:
for each bone point of the left limb, calculating a first difference value between the sample label value of the bone point and the first sample label value, calculating a first square value of the first difference value to obtain a first square value of each bone point of the left limb, and summing each first square value to obtain a first label loss value;
for each bone point on the right limb, calculating a second difference value between the sample label value of the bone point and the second sample label value, calculating a second square value of the second difference value to obtain a second square value of each bone point on the right limb, and summing the second square values to obtain a second label loss value;
calculating a third difference value between the first sample label value and the second sample label value, and calculating a third square value of the third difference value;
and calculating the sum of the first label loss value and the second label loss value to obtain a third label loss value, and calculating the difference between the third label loss value and the third square value to obtain the label loss value.
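A minimal sketch of the label loss in claim 7; `left_tags` and `right_tags` are the sample bone label values of the left- and right-limb bone points, and `first_value` / `second_value` are the corresponding sample label values (all names are illustrative):

    import numpy as np

    def label_loss(left_tags, right_tags, first_value, second_value):
        first_loss = float(np.sum((np.asarray(left_tags) - first_value) ** 2))
        second_loss = float(np.sum((np.asarray(right_tags) - second_value) ** 2))
        separation = (first_value - second_value) ** 2  # third square value
        # Pull same-limb tags toward their limb's label value while pushing the
        # two limb label values apart.
        return first_loss + second_loss - separation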
8. The human bone point identification method of claim 4, wherein the step of obtaining the sample candidate location of each bone point, the sample candidate label value of each sample candidate location, the first sample label value of the left limb in the sample image, the second sample label value of the right limb in the sample image, the sample bone location of each bone point, and the sample bone label value of each sample bone location according to the sample thermodynamic diagram and the sample label diagram of each bone point comprises:
for each bone point, extracting a position meeting a preset candidate condition from the thermodynamic diagram of the bone point according to the heat value of each position in the thermodynamic diagram of the bone point to serve as a sample candidate position of the bone point, and obtaining a sample candidate label value of each sample candidate position from the label diagram of the bone point, to obtain the sample candidate position of each bone point and the sample candidate label value of each sample candidate position;
obtaining a first sample label value of a left limb and a second sample label value of a right limb in the sample image according to the sample candidate label value of each sample candidate position;
for each bone point, determining a sample bone position of the bone point from each sample candidate position of the bone point according to the sample candidate label value of each sample candidate position of the bone point and the first sample label value or the second sample label value, and determining a sample bone label value of the sample bone position according to the sample bone position to obtain a sample bone position of each bone point and a sample bone label value of each sample bone position.
9. The human bone point identification method of claim 1, wherein after the step of obtaining the bone location of each bone point, the method further comprises:
respectively adjusting the bone positions of corresponding target bone points according to the received bone point adjusting instructions, and generating an adjusted human body image to be identified according to the adjusted target bone positions, wherein the bone point adjusting instructions comprise target bone points of which the bone positions need to be adjusted and an adjusting strategy of each target bone point.
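Claim 9 leaves the format of a bone point adjusting instruction open; the sketch below assumes, purely for illustration, that each instruction names a target bone point and a (dx, dy) offset to apply to its bone position:

    def apply_adjustments(bone_positions, adjustment_instructions):
        """Apply per-bone-point positional offsets and return the adjusted positions."""
        adjusted = dict(bone_positions)
        for target, (dx, dy) in adjustment_instructions.items():
            y, x = adjusted[target]
            adjusted[target] = (y + dy, x + dx)
        return adjusted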
10. A human skeletal point identification device, characterized in that the human skeletal point identification device comprises:
the input module is used for obtaining a human body image to be recognized;
the characteristic extraction module is used for inputting the human body image to be recognized into a pre-trained bone point recognition model, extracting a characteristic image of the human body image to be recognized through the bone point recognition model, and obtaining a thermodynamic diagram and a label diagram of each bone point in the human body image to be recognized according to the characteristic image;
the candidate module is used for extracting a position with a thermal value meeting a preset candidate condition from the thermodynamic diagram of each bone point as a candidate position of the bone point, and obtaining a candidate label value of each candidate position from the label diagram of the bone point so as to obtain each candidate position of each bone point and a candidate label value of each candidate position, wherein the preset candidate condition is a preset thermal threshold condition or a thermal value sorting condition;
a threshold value generation module, configured to obtain, according to the candidate tag value of each candidate position of each bone point, a first tag value of a left limb and a second tag value of a right limb in the human body image to be identified; and
and the selecting module is used for determining, for each bone point, the bone position of the bone point from each candidate position according to the candidate label value of each candidate position of the bone point and the first label value or the second label value, so as to obtain the bone position of each bone point.
CN201811644854.8A 2018-12-29 2018-12-29 Human body bone point identification method and device Active CN109711374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811644854.8A CN109711374B (en) 2018-12-29 2018-12-29 Human body bone point identification method and device

Publications (2)

Publication Number Publication Date
CN109711374A CN109711374A (en) 2019-05-03
CN109711374B true CN109711374B (en) 2021-06-04

Family

ID=66260332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811644854.8A Active CN109711374B (en) 2018-12-29 2018-12-29 Human body bone point identification method and device

Country Status (1)

Country Link
CN (1) CN109711374B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652983A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Augmented reality AR special effect generation method, device and equipment
CN112434639A (en) * 2020-12-03 2021-03-02 郑州捷安高科股份有限公司 Action matching method, device, equipment and storage medium
CN114463414A (en) * 2021-12-13 2022-05-10 北京长木谷医疗科技有限公司 Knee joint external rotation angle measuring method and device, electronic equipment and storage medium
CN117037221B (en) * 2023-10-08 2023-12-29 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156994A (en) * 2011-04-19 2011-08-17 上海摩比源软件技术有限公司 Joint positioning method of single-view unmarked human motion tracking
CN102402288A (en) * 2010-09-07 2012-04-04 微软公司 System for fast, probabilistic skeletal tracking
CN107492121A (en) * 2017-07-03 2017-12-19 广州新节奏智能科技股份有限公司 A kind of two-dimension human body bone independent positioning method of monocular depth video
CN108345004A (en) * 2018-02-09 2018-07-31 弗徕威智能机器人科技(上海)有限公司 A kind of human body follower method of mobile robot
CN108491754A (en) * 2018-02-02 2018-09-04 泉州装备制造研究所 A kind of dynamic representation based on skeleton character and matched Human bodys' response method
CN108549876A (en) * 2018-04-20 2018-09-18 重庆邮电大学 The sitting posture detecting method estimated based on target detection and human body attitude
CN108647639A (en) * 2018-05-10 2018-10-12 电子科技大学 Real-time body's skeletal joint point detecting method
CN108960212A (en) * 2018-08-13 2018-12-07 电子科技大学 Based on the detection of human joint points end to end and classification method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPQ600100A0 (en) * 2000-03-03 2000-03-23 Macropace Products Pty. Ltd. Animation technology
CN105184280A (en) * 2015-10-10 2015-12-23 东方网力科技股份有限公司 Human body identity identification method and apparatus
CN106056053B (en) * 2016-05-23 2019-04-23 西安电子科技大学 The human posture's recognition methods extracted based on skeleton character point
CN108520251A (en) * 2018-04-20 2018-09-11 北京市商汤科技开发有限公司 Critical point detection method and device, electronic equipment and storage medium
CN108647663B (en) * 2018-05-17 2021-08-06 西安电子科技大学 Human body posture estimation method based on deep learning and multi-level graph structure model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant