CN112507954A - Human body key point identification method and device, terminal equipment and storage medium - Google Patents

Human body key point identification method and device, terminal equipment and storage medium

Info

Publication number
CN112507954A
CN112507954A (application CN202011518131.0A)
Authority
CN
China
Prior art keywords
human body
key points
coordinates
heat map
human bodies
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011518131.0A
Other languages
Chinese (zh)
Other versions
CN112507954B (en)
Inventor
郭渺辰
程骏
张惊涛
胡淑萍
顾在旺
王东
庞建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202011518131.0A
Publication of CN112507954A
Application granted
Publication of CN112507954B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention is applicable to the technical field of artificial intelligence and provides a human body key point identification method and device, a terminal device, and a storage medium. Key points at N joint positions of a human body in a human body image are detected by a key point detection model, and the N heat maps and a background map output by the key point detection model are obtained; the regions where the key points of different human bodies are located are segmented in each heat map, and the contours of those regions are determined; a heat map peak is acquired within the contour of each key point region; the coordinates of the key points of the different human bodies in each heat map are obtained according to the coordinates of the heat map peaks within the contours of the regions where those key points are located; and a joint feature map of each human body is drawn according to the coordinates of its key points, so that the identification accuracy of human body key points can be effectively improved.

Description

Human body key point identification method and device, terminal equipment and storage medium
Technical Field
The invention belongs to the technical field of artificial intelligence (AI), and in particular relates to a human body key point identification method and device, a terminal device, and a storage medium.
Background
With the rapid development of artificial intelligence technology, neural-network-based object detection and classification techniques have matured and are widely applied in industry, for example to recognize human posture from images. Human postures can be divided into static and dynamic ones. Static actions (such as standing, sitting, or raising a hand) can be distinguished from a single-frame image, and a common static action recognition approach is based on object detection; dynamic actions (such as walking, jumping, or running) are distinguished from a continuous multi-frame image sequence, and common dynamic action recognition approaches are based on two-stream networks, 3D convolution, and the like. Human body key point identification is the basis on which static action recognition is realized, and ensuring the identification accuracy of human body key points is the key to solving the problems of existing static action recognition methods: the large amount of training data needed to cover clothing, backgrounds, and the like in the images, the resulting low training speed, and poor interpretability.
Disclosure of Invention
In view of this, embodiments of the present invention provide a human body key point identification method and device, a terminal device, and a storage medium, which can effectively improve the identification accuracy of human body key points.
The first aspect of the embodiments of the present invention provides a method for identifying key points of a human body, including:
detecting key points at N joint positions of a human body in a human body image through a key point detection model to obtain N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2;
segmenting the areas where the key points of different human bodies are located in each heat map, and determining the outline of the areas where the key points of different human bodies are located in each heat map;
acquiring a heat map peak value in the outline of the key point area;
obtaining the coordinates of the key points of different human bodies in each heat map according to the coordinates of the peak value of the heat map in the outline of the area where the key points of different human bodies are located in each heat map;
and drawing a joint feature map of each human body according to the coordinates of the key points of each human body.
A second aspect of an embodiment of the present invention provides a human body key point identification device, including:
the key point detection module is used for detecting key points at N joint positions of a human body in a human body image through a key point detection model to obtain N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2;
the key point segmentation module is used for segmenting the areas where the key points of different human bodies are located in each heat map and determining the outline of the area where the key points of different human bodies are located in each heat map;
the heat map peak acquisition module is used for acquiring heat map peaks in the outline of the key point area;
the key point positioning module is used for obtaining the coordinates of the key points of different human bodies in each heat map according to the coordinates of the heat map peak values in the outline of the area where the key points of different human bodies are located in each heat map;
and the feature map drawing module is used for drawing the joint feature map of each human body according to the coordinates of the key points of each human body.
A third aspect of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the human body keypoint identification method according to the first aspect of the present invention when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the human keypoint identification method according to the first aspect of the embodiments of the present invention.
In the human body key point identification method provided by the first aspect of the embodiments of the present invention, key points at N joint positions of a human body in a human body image are detected by a key point detection model, and the N heat maps and a background map output by the model are obtained; the regions where the key points of different human bodies are located are segmented in each heat map, and the contours of those regions are determined; a heat map peak is acquired within the contour of each key point region; the coordinates of the key points of the different human bodies in each heat map are obtained according to the coordinates of the heat map peaks within the contours; and a joint feature map of each human body is drawn according to the coordinates of its key points. In this way, the identification accuracy of human body key points can be effectively improved. Moreover, when the static posture of a human body is recognized from a joint feature map drawn from the key point coordinates, interference from clothing, backgrounds, and the like in the human body image is effectively eliminated, the amount of data required for training is reduced, the training speed is improved, and the interpretability is strong.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a method for identifying key points of a human body according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of 18 keypoint locations of a human body provided by an embodiment of the invention;
FIG. 3 is a diagram illustrating key points of all human bodies in a human body image according to an embodiment of the present invention;
FIG. 4 is a diagram of 18 heat maps provided by an embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect obtained by connecting key points according to an embodiment of the present invention;
fig. 6 is a second schematic flowchart of the human body key point identification method according to an embodiment of the present invention;
fig. 7 is a third flowchart illustrating a method for identifying key points in a human body according to an embodiment of the present invention;
FIG. 8 shows 10 skeleton diagrams with solid-color backgrounds provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a human body key point identification device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present invention and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments", or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", or the like in various places throughout this specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising", "including", "having", and variations thereof mean "including, but not limited to", unless expressly specified otherwise.
The method for identifying the key points of the human body provided by the embodiment of the invention can be applied to terminal devices such as robots, mobile phones, tablet computers, wearable devices, vehicle-mounted devices, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, Personal Digital Assistants (PDAs) and the like, and the embodiment of the invention does not limit the specific types of the terminal devices at all. The robot may be a service robot, an underwater robot, an entertainment robot, a military robot, an agricultural robot, or the like.
As shown in fig. 1, the method for identifying key points of a human body according to the embodiment of the present invention includes the following steps S101 to S105:
s101, detecting N key points of a human body in a human body image through a key point detection model, and obtaining N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2.
In application, the key point detection model can be constructed based on any human skeleton key point detection algorithm. A human body image containing a human body is input into the key point detection model, and the N heat maps and the background map output by the key point detection model can be obtained. A human body contains 18 key points, and the N key points include at least 2 of these 18 key points. The human body image can be an RGB image, an infrared image, or a depth image.
As shown in fig. 2, the locations of the 18 key points of a human body are illustrated as an example; the 18 key points are respectively labeled: Nose 0, Neck 1, Right Shoulder 2, Right Elbow 3, Right Wrist 4, Left Shoulder 5, Left Elbow 6, Left Wrist 7, Right Hip 8, Right Knee 9, Right Ankle 10, Left Hip 11, Left Knee 12, Left Ankle 13, Right Eye 14, Left Eye 15, Right Ear 16, and Left Ear 17.
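For illustration only, the index-to-name correspondence listed above can be written as a small lookup table; the snake_case names below are our own labels for the key points in Fig. 2, not terms used by the patent:

    # Illustrative lookup table for the 18 key point indices shown in Fig. 2.
    KEYPOINT_NAMES = [
        "nose", "neck",
        "right_shoulder", "right_elbow", "right_wrist",
        "left_shoulder", "left_elbow", "left_wrist",
        "right_hip", "right_knee", "right_ankle",
        "left_hip", "left_knee", "left_ankle",
        "right_eye", "left_eye", "right_ear", "left_ear",
    ]

    assert len(KEYPOINT_NAMES) == 18
    assert KEYPOINT_NAMES[2] == "right_shoulder"  # heat map 2 corresponds to the right shoulder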
In one embodiment, the N key points include at least 14 of the 18 key points of the human body, namely the nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, and left ankle, and optionally at least one of the right eye, left eye, right ear, and left ear.
In application, when the head actions of a human body do not need to be recognized, the N key points may include only the key points other than those of the head, which reduces the amount of data and increases the training speed.
In application, the key point detection model may detect only the N key points of the human bodies to be detected in the human body image; human bodies that are not to be detected need not be processed. Each human body in the image that requires key point detection is a human body to be detected, and each human body that does not is a human body not to be detected; the user may set any human body in the image as a human body to be detected according to actual needs, for example, all human bodies in the image may be set as human bodies to be detected. Each heat map contains the same key point of all human bodies to be detected in the image; that is, the same key point of all human bodies to be detected corresponds to one heat map. For example, when the N key points are the 18 key points of a human body, 18 heat maps and a background map are obtained: the 1st heat map contains the noses of all human bodies to be detected, the 2nd heat map contains their necks, and so on, until the 18th heat map contains their left ears. The background map is the image output by the key point detection model corresponding to the background of the human body image; since there are no key points in the background, the background map is blank. Because the human body image does not necessarily show a complete human body, some key points may be occluded and therefore not detected, so some of the heat maps output by the key point detection model may be blank images containing no key points.
Fig. 3 exemplarily shows the key points of all human bodies in one human body image.
Fig. 4 exemplarily shows the 18 output heat maps.
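The patent does not prescribe a specific network architecture, so the sketch below only illustrates the assumed output layout of step S101: a model that returns, for one input image, an array with N key point heat map channels plus one background channel. The function name and the (N + 1, H, W) shape are illustrative assumptions.

    import numpy as np

    N = 18  # number of key point types (see Fig. 2); N may also be smaller, e.g. 14

    def split_model_output(output: np.ndarray):
        """Split an assumed (N + 1, H, W) model output into N key point heat maps and a background map.

        Each of the N heat maps contains the same key point type for every human
        body to be detected; the background channel is expected to be blank.
        """
        assert output.shape[0] == N + 1, "expected N key point channels plus one background channel"
        heatmaps = output[:N]
        background = output[N]
        return heatmaps, background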
S102, segmenting areas where key points of different human bodies are located in each heat map, and determining outlines of the areas where the key points of the different human bodies are located in each heat map;
step S103, acquiring a heat map peak value in the outline of the key point area;
and step S104, obtaining the coordinates of the key points of different human bodies in each heat map according to the coordinates of the peak value of the heat map in the outline of the area where the key points of different human bodies are located in each heat map.
In application, after the N heat maps are obtained, the coordinates of the key points in each heat map are identified, and the coordinates of the key points belonging to the same human body are then assigned to the same person, so that the coordinates of the N key points of each human body can be obtained.
In application, the closer a position in the heat map is to the coordinates of the key point, the higher its confidence. Therefore, the region where the key point is located is first segmented in the heat map, the contour of that region is then identified by any contour identification method, the coordinates of the heat map peak are found within the contour (and thus within the region where the key point is located), and the coordinates of the heat map peak are taken as the coordinates of the key point.
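As a concrete illustration of steps S102 to S104 for a single heat map, the sketch below thresholds the heat map, extracts the contour of each key point region with OpenCV, and takes the heat map peak inside each contour as a key point coordinate. The threshold value and the use of cv2.findContours / cv2.minMaxLoc are implementation assumptions, not requirements of the patent.

    import cv2
    import numpy as np

    def peaks_in_heatmap(heatmap: np.ndarray, threshold: float = 0.1):
        """Return the peak coordinate (x, y) and confidence inside each key point region.

        One heat map contains the same key point type for every human body to be
        detected, so each contour normally corresponds to one person.
        """
        # Segment the regions where key points are located (step S102).
        binary = (heatmap > threshold).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        peaks = []
        for contour in contours:
            # Restrict the peak search to the current contour.
            mask = np.zeros_like(binary)
            cv2.drawContours(mask, [contour], -1, color=1, thickness=-1)
            masked = np.where(mask == 1, heatmap, 0.0).astype(np.float32)

            # The heat map peak inside the contour is taken as the key point (steps S103/S104).
            _, max_val, _, max_loc = cv2.minMaxLoc(masked)
            peaks.append((max_loc, float(max_val)))  # max_loc is (x, y)
        return peaks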
In one embodiment, step S104 includes:
assigning the key points of different human bodies in each heat map to corresponding human bodies according to the coordinates of the key points of different human bodies in each heat map, and obtaining the coordinates of the key points of different human bodies in each heat map.
In application, after obtaining the coordinates of each keypoint, the keypoints belonging to each human body are assigned to each human body according to the coordinates of each keypoint, and the coordinates of all the keypoints belonging to each human body are obtained.
In one embodiment, step S104 specifically includes:
and mapping the key points of different human bodies in each heat map to corresponding human bodies according to the coordinates of the key points of different human bodies in each heat map and the position of each human body, and obtaining the coordinates of the key points of different human bodies in each heat map.
In application, the position of each human body in the human body image can be obtained by analyzing the human body image with an object recognition method; the key points belonging to each human body can then be determined from the coordinates of the key points and the position of each human body, and each key point is mapped to the corresponding human body.
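One simple way to realize the mapping described above, assuming person bounding boxes have already been obtained from some object recognition method (the patent leaves the method unspecified), is to assign each heat map peak to the box that contains it. The data layout and function name below are illustrative assumptions.

    def assign_keypoints_to_bodies(peaks_per_type, person_boxes):
        """Assign key point peaks to human bodies via their positions (bounding boxes).

        peaks_per_type: list of length N; entry k is a list of (x, y) peaks found
                        in the k-th heat map.
        person_boxes:   list of (x1, y1, x2, y2) boxes, one per human body to be
                        detected, from any object recognition method.
        Returns a list (one dict per person) mapping key point index -> (x, y).
        """
        bodies = [dict() for _ in person_boxes]
        for kp_index, peaks in enumerate(peaks_per_type):
            for (x, y) in peaks:
                for body_id, (x1, y1, x2, y2) in enumerate(person_boxes):
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        bodies[body_id][kp_index] = (x, y)
                        break  # a peak belongs to at most one person
        return bodies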
And S105, drawing a joint feature map of each human body according to the coordinates of the key points of each human body.
In application, after obtaining the coordinates of the key points of each human body, the obtained key points of each human body can be connected according to the connection rule of adjacent key points in the joints of the human body, so as to obtain the joint feature map of each human body.
Fig. 5 exemplarily shows the effect obtained by assigning the key points to the corresponding human bodies and connecting them.
In one embodiment, step S105 includes:
drawing a joint feature map of each human body according to the coordinates of the key points of each human body and a preset key point connection rule.
In application, the preset key point connection rule is a rule preset according to the connection relations between adjacent key points among the joints of the human body, for example: the right shoulder is connected with the right elbow, the right elbow with the right wrist, the left shoulder with the left elbow, the left elbow with the left wrist, the neck with the right hip and the left hip, the right hip with the right knee, the right knee with the right ankle, the left hip with the left knee, the left knee with the left ankle, the right ear with the right eye, the left ear with the left eye, and the nose with the right eye and the left eye.
In application, the joint feature map may be an image whose background is a solid color and which contains only the key points of one human body, or an image whose background is the natural color of the original image and which contains all the key points of the human body.
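By way of illustration, the connection rule and the solid-color background described above can be realized as follows. The specific index pairs in CONNECTIONS, the colors, and the canvas size are implementation assumptions rather than values fixed by the patent; left-side and right-side joints are drawn in different colors, anticipating step S701 described later.

    import cv2
    import numpy as np

    # Illustrative connection rule following the pairs named above
    # (indices refer to the key point list under Fig. 2).
    CONNECTIONS = [
        (2, 3), (3, 4),      # right shoulder - right elbow - right wrist
        (5, 6), (6, 7),      # left shoulder - left elbow - left wrist
        (1, 8), (1, 11),     # neck - right hip, neck - left hip
        (8, 9), (9, 10),     # right hip - right knee - right ankle
        (11, 12), (12, 13),  # left hip - left knee - left ankle
        (16, 14), (17, 15),  # right ear - right eye, left ear - left eye
        (0, 14), (0, 15),    # nose - right eye, nose - left eye
    ]

    RIGHT_SIDE = {2, 3, 4, 8, 9, 10, 14, 16}  # indices of right-side key points

    def draw_joint_feature_map(keypoints, canvas_size=(256, 256)):
        """Draw one person's joint feature map on a solid-color (black) background.

        keypoints: dict mapping key point index -> (x, y); occluded points are absent.
        """
        canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
        for a, b in CONNECTIONS:
            if a in keypoints and b in keypoints:
                # Right-side limbs in one color, the rest in another (BGR).
                color = (0, 0, 255) if a in RIGHT_SIDE or b in RIGHT_SIDE else (0, 255, 0)
                cv2.line(canvas,
                         tuple(map(int, keypoints[a])),
                         tuple(map(int, keypoints[b])),
                         color, 2)
        return canvas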
As shown in fig. 6, in an embodiment, after step S105, the method further includes:
step S601, classifying the joint characteristic diagram of each human body through a classification network to obtain the posture category label of each human body.
In application, after the joint feature map of each human body is drawn, the joint feature maps of all human bodies are input into the classification network as training data to train the classification network, and the posture category label of each human body output by the classification network after classifying the joint feature maps is obtained. The classification network may be a lightweight classification network (shufflenet-v2) whose last layer is a normalized exponential loss function (softmax loss).
As shown in fig. 7, in one embodiment, step S601 includes the following steps S701 to S703:
S701, marking the left-side joints and the right-side joints in the joint feature map of each human body with different colors respectively;
S702, adjusting the marked joint feature map of each human body to a preset size;
S703, training the classification network with the adjusted joint feature map of each human body, and obtaining the posture category label of each human body output by the classification network.
In application, the left-side joints and the right-side joints in the joint feature map of each human body are marked with different colors, which makes it easier for the classification network to distinguish different human postures. Since the joint feature map carries no background semantics, it does not need to be large; an oversized joint feature map can therefore be shrunk or cropped to the preset size, for example 64 × 64 pixels. When the joint feature map is drawn from key points of body parts other than the head, it may also be called a skeleton map.
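A minimal sketch of step S702, assuming the preset size follows the 64 × 64 example above; whether an oversized map is shrunk or cropped is an implementation choice not fixed by the patent.

    import cv2

    def resize_feature_map(feature_map, preset_size=(64, 64)):
        """Resize a joint feature map to the preset input size of the classification network."""
        # INTER_AREA is a reasonable interpolation choice when shrinking an image.
        return cv2.resize(feature_map, preset_size, interpolation=cv2.INTER_AREA)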
Fig. 8 exemplarily shows 10 skeleton maps with solid-color backgrounds.
In one embodiment, step S702 includes:
adjusting the marked joint feature map of each human body to the preset size according to the input resolution of the classification network;
step S703 includes:
training the classification network with the adjusted joint feature maps of all human bodies, and obtaining the posture category label of each human body output by the normalized exponential loss function of the last layer of the classification network.
In application, the size of each joint feature map can be adjusted according to the input resolution supported by the classification network; the classification network is then trained with the joint feature maps, and the posture category labels are finally output through the normalized exponential loss function of the classification network, which completes the training of the classification network.
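The description names shufflenet-v2 with a final softmax loss as one possible classification network. The sketch below uses the torchvision implementation to illustrate that choice; the number of posture classes, the optimizer, and the 64 × 64 input size are illustrative assumptions (the architecture ends in global average pooling, so inputs other than 224 × 224 are accepted).

    import torch
    import torch.nn as nn
    from torchvision.models import shufflenet_v2_x1_0

    NUM_POSTURE_CLASSES = 10  # assumed number of posture category labels

    # Lightweight classification network; the final fully connected layer is replaced
    # so that its softmax (cross-entropy) output matches the posture classes.
    model = shufflenet_v2_x1_0()
    model.fc = nn.Linear(model.fc.in_features, NUM_POSTURE_CLASSES)

    criterion = nn.CrossEntropyLoss()  # cross-entropy over softmax probabilities
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(feature_maps: torch.Tensor, labels: torch.Tensor) -> float:
        """One training step on a batch of joint feature maps (B, 3, 64, 64) and posture labels (B,)."""
        optimizer.zero_grad()
        logits = model(feature_maps)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        return loss.item()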
In application, the human body image may be an image whose human postures need to be recognized, or a human body image used specifically for training the classification network. If other human body images need to be recognized, they are input into the key point detection model, and the human body key point identification method provided by the embodiment of the present invention is executed again.
The human body key point identification method provided by the embodiment of the present invention can effectively improve the identification accuracy of human body key points. Furthermore, when the static posture of a human body is recognized from a joint feature map drawn from the key point coordinates, interference from clothing, backgrounds, and the like in the human body image can be effectively eliminated, the amount of data required for training is reduced, the training speed is improved, and the interpretability is strong.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The embodiment of the invention also provides a human body key point identification device which is used for executing the steps in the human body key point identification method. The human body key point identification device may be a virtual device (virtual application) in the terminal device, and is run by a processor of the terminal device, or may be the terminal device itself.
As shown in fig. 9, the human body key point identification apparatus provided in the embodiment of the present invention includes:
the key point detection module 101 is configured to detect key points at N joint positions of a human body in a human body image through a key point detection model, and obtain N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2;
a key point segmentation module 102, configured to segment regions where key points of different human bodies are located in each heat map, and determine outlines of the regions where key points of different human bodies are located in each heat map;
a heat map peak acquisition module 103, configured to acquire a heat map peak within the contour of the keypoint region;
a key point positioning module 104, configured to obtain coordinates of key points of different human bodies in each heat map according to coordinates of a heat map peak in an outline of an area where the key points of different human bodies in each heat map are located;
and the feature map drawing module 105 is configured to draw a joint feature map of each human body according to the coordinates of the key points of each human body.
In one embodiment, the human body key point identification apparatus further includes:
and the classification module is used for classifying the skeleton map of each human body through a classification network to obtain the posture classification label of each human body.
In application, each module in the human body key point identification device may be a software program module, may be implemented by different logic circuits integrated in a processor, or may be implemented by a plurality of distributed processors.
As shown in fig. 10, an embodiment of the present invention further provides a terminal device 200, including: at least one processor 201 (only one processor is shown in fig. 10), a memory 202, and a computer program 203 stored in the memory 202 and operable on the at least one processor 201, the steps in the various human keypoint identification method embodiments described above being implemented when the computer program 203 is executed by the processor 201.
In an application, the terminal device may include, but is not limited to, a memory, a processor. Those skilled in the art will appreciate that fig. 10 is merely an example of a terminal device, and does not constitute a limitation of the terminal device, and may include more or less components than those shown, or combine some components, or different components, such as an input-output device, a network access device, etc.
In applications, the processor may include a central processing unit and a graphics processor, which may also be other general purpose processors, digital signal processors, application specific integrated circuits, off-the-shelf programmable gate arrays or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In some embodiments, the storage may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory may also be an external storage device of the terminal device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device. Further, the memory may also include both an internal storage unit of the terminal device and an external storage device. The memory is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of computer programs. The memory may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the contents of information interaction, execution process, and the like between the above-mentioned devices/modules are based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof can be referred to specifically in the method embodiment section, and are not described herein again.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module, and the integrated module may be implemented in a form of hardware, or in a form of software functional module. In addition, the specific names of the functional modules are only for convenience of distinguishing from each other and are not used for limiting the protection scope of the present invention. The specific working process of the modules in the system may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiment of the invention provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the human body key point identification method of any one of the embodiments is realized.
The embodiment of the invention provides a computer program product, and when the computer program product runs on a terminal device, the terminal device is enabled to execute the human body key point identification method of any one of the embodiments.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which is stored in a computer-readable storage medium and instructs related hardware to implement the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying computer program code to a terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), electrical carrier signals, telecommunications signals, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus and the terminal device are merely illustrative, and for example, the division of the modules is only one logical division, and there may be other divisions when the actual implementation is performed, for example, a plurality of modules or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A human body key point identification method is characterized by comprising the following steps:
detecting key points at N joint positions of a human body in a human body image through a key point detection model to obtain N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2;
segmenting the areas where the key points of different human bodies are located in each heat map, and determining the outline of the areas where the key points of different human bodies are located in each heat map;
acquiring a heat map peak value in the outline of the key point area;
obtaining the coordinates of the key points of different human bodies in each heat map according to the coordinates of the peak value of the heat map in the outline of the area where the key points of different human bodies are located in each heat map;
and drawing a joint feature map of each human body according to the coordinates of the key points of each human body.
2. The method of claim 1, wherein the obtaining the coordinates of the key points of the different human bodies in each of the heat maps according to the coordinates of the heat map peaks within the contours of the regions in which the key points of the different human bodies are located in each of the heat maps comprises:
assigning the key points of different human bodies in each heat map to corresponding human bodies according to the coordinates of the key points of different human bodies in each heat map, and obtaining the coordinates of the key points of different human bodies in each heat map.
3. The method of claim 2, wherein the assigning the key points of different human bodies in each of the heat maps to corresponding human bodies according to the coordinates of the key points of different human bodies in each of the heat maps, and obtaining the coordinates of the key points of different human bodies in each of the heat maps, comprises:
and mapping the key points of different human bodies in each heat map to corresponding human bodies according to the coordinates of the key points of different human bodies in each heat map and the position of each human body, and obtaining the coordinates of the key points of different human bodies in each heat map.
4. The method of claim 1, wherein the drawing the joint feature map of each human body according to the coordinates of the key points of each human body comprises:
drawing the joint feature map of each human body according to the coordinates of the key points of each human body and a preset key point connection rule.
5. The method according to any one of claims 1 to 4, wherein, after the drawing of the joint feature map of each human body according to the coordinates of the key points of each human body, the method further comprises:
classifying the joint characteristic diagram of each human body through a classification network to obtain the posture category label of each human body.
6. The method of claim 5, wherein the classifying the joint feature map of each of the human bodies through a classification network to obtain a posture category label of each of the human bodies comprises:
marking the left-side joints and the right-side joints in the joint feature map of each human body with different colors respectively;
adjusting the marked joint feature map of each human body to a preset size;
training the classification network through the adjusted joint feature map of each human body, and obtaining the posture category label of each human body output by the classification network.
7. The method of claim 6, wherein the classification network is a lightweight classification network;
the adjusting the marked joint feature map of each human body to a preset size comprises:
adjusting the marked joint feature map of each human body to the preset size according to the input resolution of the classification network;
and the training the classification network through the adjusted joint feature map of each human body to obtain the posture category label of each human body output by the classification network comprises:
training the classification network through the adjusted joint feature map of each human body, and obtaining the posture category label of each human body output by the normalized exponential loss function (softmax loss) of the last layer of the classification network.
8. A human body key point recognition device is characterized by comprising:
the key point detection module is used for detecting key points at N joint positions of a human body in a human body image through a key point detection model to obtain N heat maps and a background map output by the key point detection model; wherein N is an integer greater than or equal to 2;
the key point segmentation module is used for segmenting the areas where the key points of different human bodies are located in each heat map and determining the outline of the area where the key points of different human bodies are located in each heat map;
the heat map peak acquisition module is used for acquiring heat map peaks in the outline of the key point area;
the key point positioning module is used for obtaining the coordinates of the key points of different human bodies in each heat map according to the coordinates of the heat map peak values in the outline of the area where the key points of different human bodies are located in each heat map;
and the feature map drawing module is used for drawing the joint feature map of each human body according to the coordinates of the key points of each human body.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the human keypoint identification method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the human keypoint identification method according to any one of claims 1 to 7.
CN202011518131.0A 2020-12-21 2020-12-21 Human body key point identification method and device, terminal equipment and storage medium Active CN112507954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011518131.0A CN112507954B (en) 2020-12-21 2020-12-21 Human body key point identification method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011518131.0A CN112507954B (en) 2020-12-21 2020-12-21 Human body key point identification method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112507954A true CN112507954A (en) 2021-03-16
CN112507954B CN112507954B (en) 2024-01-19

Family

ID=74922700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011518131.0A Active CN112507954B (en) 2020-12-21 2020-12-21 Human body key point identification method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112507954B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115966016A (en) * 2022-12-19 2023-04-14 天翼爱音乐文化科技有限公司 Jumping state identification method and system, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985259A (en) * 2018-08-03 2018-12-11 百度在线网络技术(北京)有限公司 Human motion recognition method and device
CN110942056A (en) * 2018-09-21 2020-03-31 深圳云天励飞技术有限公司 Clothing key point positioning method and device, electronic equipment and medium
WO2020228217A1 (en) * 2019-05-13 2020-11-19 河北工业大学 Human body posture visual recognition method for transfer carrying nursing robot, and storage medium and electronic device
CN110532981A (en) * 2019-09-03 2019-12-03 北京字节跳动网络技术有限公司 Human body key point extracting method, device, readable storage medium storing program for executing and equipment
CN111339903A (en) * 2020-02-21 2020-06-26 河北工业大学 Multi-person human body posture estimation method
CN111860300A (en) * 2020-07-17 2020-10-30 广州视源电子科技股份有限公司 Key point detection method and device, terminal equipment and storage medium
CN112101312A (en) * 2020-11-16 2020-12-18 深圳市优必选科技股份有限公司 Hand key point identification method and device, robot and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115966016A (en) * 2022-12-19 2023-04-14 天翼爱音乐文化科技有限公司 Jumping state identification method and system, electronic equipment and storage medium
CN115966016B (en) * 2022-12-19 2024-07-05 天翼爱音乐文化科技有限公司 Jump state identification method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112507954B (en) 2024-01-19

Similar Documents

Publication Publication Date Title
JP7248799B2 (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, COMPUTER PROGRAM, AND IMAGE PROCESSING DEVICE
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
CN109902548B (en) Object attribute identification method and device, computing equipment and system
US10019655B2 (en) Deep-learning network architecture for object detection
CN112446919B (en) Object pose estimation method and device, electronic equipment and computer storage medium
CN109101946B (en) Image feature extraction method, terminal device and storage medium
CN111144348A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
CN110288715B (en) Virtual necklace try-on method and device, electronic equipment and storage medium
US20230334893A1 (en) Method for optimizing human body posture recognition model, device and computer-readable storage medium
US20220262093A1 (en) Object detection method and system, and non-transitory computer-readable medium
CN112336342A (en) Hand key point detection method and device and terminal equipment
Desai et al. Human Computer Interaction through hand gestures for home automation using Microsoft Kinect
CN112419326A (en) Image segmentation data processing method, device, equipment and storage medium
US11727605B2 (en) Method and system for creating virtual image based deep-learning
CN110222651A (en) A kind of human face posture detection method, device, terminal device and readable storage medium storing program for executing
CN114445853A (en) Visual gesture recognition system recognition method
CN112507954B (en) Human body key point identification method and device, terminal equipment and storage medium
CN116246343A (en) Light human body behavior recognition method and device
CN113724176B (en) Multi-camera motion capture seamless connection method, device, terminal and medium
CN112464753B (en) Method and device for detecting key points in image and terminal equipment
CN115147469A (en) Registration method, device, equipment and storage medium
CN113553884B (en) Gesture recognition method, terminal device and computer-readable storage medium
Bhuyan et al. Hand gesture recognition and animation for local hand motions
CN113435358A (en) Sample generation method, device, equipment and program product for training model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant