WO2022033264A1 - Method, device, electronic device and storage medium for screening human body feature points
- Publication number
- WO2022033264A1 (PCT/CN2021/106337)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- human body
- body feature
- feature point
- feature points
- confidence
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Definitions
- the present application relates to the technical field of electronic devices, and more particularly, to a method, device, electronic device, and storage medium for screening human body feature points.
- ASM: active shape model
- the present application proposes a method, device, electronic device and storage medium for screening human body feature points to solve the above problems.
- an embodiment of the present application provides a method for screening human body feature points. The method includes: acquiring an image to be detected; inputting the image to be detected into a trained human body feature point detection model; acquiring a plurality of first human body feature points output by the trained human body feature point detection model, the overall confidence of the plurality of first human body feature points, and the independent confidence of each of the plurality of first human body feature points; and screening the plurality of first human body feature points based on the overall confidence of the plurality of first human body feature points and the independent confidence of each first human body feature point.
- an embodiment of the present application provides a device for screening human body feature points. The device includes: a to-be-detected image acquisition module for acquiring the image to be detected; a to-be-detected image input module for inputting the image to be detected into a trained human body feature point detection model; and a confidence output module for acquiring a plurality of first human body feature points output by the trained human body feature point detection model, the overall confidence of the plurality of first human body feature points, and the independent confidence of each of the plurality of first human body feature points.
- the device also includes a feature point screening module configured to screen the plurality of first human body feature points based on the overall confidence of the plurality of first human body feature points and the independent confidence of each first human body feature point.
- embodiments of the present application provide an electronic device, including a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to execute the above method.
- an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to execute the above method.
- FIG. 1 shows a schematic flowchart of a method for screening human body feature points provided by an embodiment of the present application
- FIG. 2 shows a schematic flowchart of a method for screening human body feature points provided by another embodiment of the present application
- FIG. 3 shows a schematic flowchart of a method for screening human body feature points provided by still another embodiment of the present application
- FIG. 4 shows a schematic flowchart of step S370 of the method for screening human body feature points shown in FIG. 3 of the present application;
- FIG. 5 shows a schematic flowchart of a method for screening human body feature points provided by another embodiment of the present application.
- FIG. 6 shows a schematic flowchart of step S410 of the method for screening human body feature points shown in FIG. 5 of the present application;
- FIG. 7 shows a schematic flowchart of step S430 of the method for screening human body feature points shown in FIG. 5 of the present application;
- FIG. 8 shows a block diagram of a module of an apparatus for screening human body feature points provided by an embodiment of the present application
- FIG. 9 shows a block diagram of an electronic device for performing the method for screening human body feature points according to an embodiment of the present application.
- FIG. 10 shows a storage unit according to an embodiment of the present application for storing or carrying a program code for implementing a method for screening human body feature points according to an embodiment of the present application.
- Convolutional neural network is a type of neural network that includes convolutional computation and has a certain depth structure, and is one of the representative algorithms of deep learning.
- the development of convolutional neural networks so far has generally included the following types of stacked layers: an input layer, convolutional layers, pooling layers, normalization layers (also known as Batch Norm layers), activation function layers, fully connected layers, an output layer, and so on.
- the input layer is generally a color image with three RGB channels;
- the function of the convolution layer is to extract features from the input data, and the calculation form is a convolution operation, including weight coefficients and biases;
- the pooling layer is used to select and filter the extracted feature information.
- Commonly used pooling methods include maximum pooling and average pooling; the normalization layer normalizes the input data so that the distribution of each feature is similar, making the network easier to train; the activation function layer adds nonlinear factors to the model, giving the model stronger fitting ability; the fully connected layer is generally located in the last part of the convolutional neural network and nonlinearly combines the input features to obtain the output; the output layer outputs results of the type required by the model.
- the output layer uses functions such as softmax (the normalized exponential function, which is often used as an output layer in deep learning to obtain output of a specified type) to output classification labels; the output layer may also directly output the classification result of each pixel; for the human body feature point detection problem, the output layer outputs the human body feature points.
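A minimal sketch of how these stacked layer types fit together, written in PyTorch purely for illustration (the layer sizes and the 224x224 input are assumptions, not the network described in this application):

```python
import torch
import torch.nn as nn

# Illustrative stack of the layer types described above (hypothetical sizes).
class TinyConvNet(nn.Module):
    def __init__(self, num_outputs: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer (weights + bias)
            nn.BatchNorm2d(16),                          # normalization (Batch Norm) layer
            nn.ReLU(),                                   # activation function layer
            nn.MaxPool2d(2),                             # pooling layer (maximum pooling)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 112 * 112, num_outputs),      # fully connected (output) layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Input layer: a color image with three RGB channels (batch of 1, 224x224).
logits = TinyConvNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])
```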
- Human body feature point detection mainly detects certain feature points of the human body, such as the eyes, nose, elbows, and shoulders, connects them in sequence, and describes the human body information through these feature points; by extension, it can also describe the posture, gait, behavior, and other information of the human body.
- Human body feature point detection is one of the basic algorithms of computer vision, and it plays a fundamental role in research in related fields of computer vision, such as behavior recognition and intelligent composition.
- the scheme of the active shape model, the scheme of directly regressing human body feature points with a convolutional neural network, or the scheme of using a human body feature point heat map to assist feature point position prediction, among others, is generally adopted.
- the inventors have found through long-term research that current human body feature point detection schemes directly give the coordinates of the human body feature points or a heat map of the human body feature points. However, for most human body feature point detection schemes, the detection results are unsatisfactory in some cases, and if the detection results are trusted completely without any screening, errors may be introduced into other tasks that rely on the human body feature point detection task and cannot be resolved.
- After long-term research, the inventor has found and proposed the method, device, electronic device, and storage medium for screening human body feature points provided in the embodiments of the present application.
- In the method, device, electronic device, and storage medium for screening human body feature points provided in the embodiments of the present application, the overall confidence of multiple human body feature points and the independent confidence of each human body feature point are obtained, and the multiple human body feature points are screened based on the overall confidence and the independent confidence, so as to filter out falsely detected human body feature points and eliminate the influence of inaccurate human body feature point predictions on subsequent tasks.
- the specific method for screening human body feature points will be described in detail in the following embodiments.
- FIG. 1 shows a schematic flowchart of a method for screening human body feature points provided by an embodiment of the present application.
- the screening method for human body feature points is used to obtain the overall confidence of multiple human body feature points and the independent confidence of each human body feature point, and to screen the multiple human body feature points based on the overall confidence and the independent confidence, so as to filter out falsely detected human body feature points and eliminate the influence of inaccurate human body feature point predictions on subsequent tasks.
- the method for screening human body feature points is applied to the device 200 for screening human body feature points as shown in FIG. 8 and the electronic device 100 ( FIG. 9 ) equipped with the device 200 for screening human body feature points.
- the electronic device applied in this embodiment may be a smart phone, a tablet computer, a wearable electronic device, etc., which is not limited here.
- the flow process shown in Figure 1 will be described in detail below, and the screening method of the human body feature points can specifically include the following steps:
- Step S110 Acquire an image to be detected.
- an image to be detected may be acquired, wherein the acquired image to be detected may include at least one human body.
- the image to be detected may be a preview image collected by a camera of the electronic device, a photo captured by the camera of the electronic device and stored in an album, an image downloaded from a network and stored in an album, or the like, which is not limited here.
- the acquired image to be detected may be a static image or a dynamic image, which is not limited herein.
- Step S120 Input the image to be detected into the trained human body feature point detection model.
- the electronic device can input the image to be detected into a trained human body feature point detection model, where the trained human body feature point detection model is obtained through machine learning. Specifically, a training data set is first collected, in which the attributes or characteristics of one type of data differ from those of another type of data; a neural network is then trained and modeled with the collected training data set according to a preset algorithm, so that rules are summarized from the training data set and the trained human body feature point detection model is obtained.
- the trained human body feature point detection model may be stored locally on the electronic device after pre-training. Based on this, after acquiring the image to be detected, the electronic device can directly call the trained human body feature point detection model locally; for example, it can directly send an instruction to the trained human body feature point detection model instructing it to read the image to be detected from a target storage area, or the electronic device can directly input the image to be detected into the locally stored trained human body feature point detection model. This effectively avoids a reduction, caused by network factors, in the speed at which the image to be detected is input into the trained human body feature point detection model, thereby improving the speed at which the trained human body feature point detection model acquires the image to be detected and improving the user experience.
- the trained human body feature point detection model may also be stored in a server in communication with the electronic device after pre-training. Based on this, after obtaining the image to be detected, the electronic device can send an instruction through the network to the trained human body feature point detection model stored in the server, instructing it to read the image to be detected from the electronic device through the network, or the electronic device can send the image to be detected through the network to the trained human body feature point detection model stored in the server. Since the trained human body feature point detection model is stored in the server, the occupation of the storage space of the electronic device is reduced, which reduces the impact on the normal operation of the electronic device.
- Step S130 Obtain multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each of the multiple first human body feature points.
- the trained human body feature point detection model outputs corresponding information based on the read image to be detected, and the electronic device can obtain the information output by the trained human body feature point detection model. It can be understood that if the trained human body feature point detection model is stored locally on the electronic device, the electronic device can directly obtain the information it outputs; if the trained human body feature point detection model is stored in a server connected to the electronic device, the electronic device can obtain the information it outputs from the server through the network.
- the trained human body feature point detection model may output, based on the input image to be detected, a plurality of first human body feature points in the image to be detected, the overall confidence of the plurality of first human body feature points, and the independent confidence of each of the plurality of first human body feature points.
- the overall confidence of the multiple first human body feature points is used to represent the overall accuracy or reliability of the predictions of the multiple first human body feature points;
- the independent confidence of each first human body feature point is used to represent the accuracy or reliability of the prediction of that individual first human body feature point.
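For concreteness, the information obtained from the detection model in step S130 can be held in a small container like the following (a hypothetical Python sketch; the class and field names are illustrative and not terminology from this application):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KeypointDetection:
    """Output of the trained human body feature point detection model (illustrative)."""
    keypoints: List[Tuple[float, float]]   # first human body feature points as (x, y) coordinates
    overall_confidence: float              # reliability of the set of predictions as a whole
    independent_confidences: List[float]   # reliability of each individual feature point
```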
- Step S140 Screen the plurality of first human body feature points based on the overall confidence level of the plurality of first human body feature points and the independent confidence level of each of the first human body feature points.
- the overall confidence of the multiple first human body feature points and the independent confidence of each first human body feature point are used to screen the multiple first human body feature points, thereby eliminating the influence of inaccurate human body feature point predictions on subsequent tasks.
- the to-be-detected image may be screened based on the overall confidence of the plurality of first human body feature points. For example, when the overall confidence of the plurality of first human body feature points indicates that the plurality of first human body feature points are unreliable, the image to be detected can be deleted or filtered out and no longer used in subsequent tasks; when the overall confidence of the plurality of first human body feature points indicates that the plurality of first human body feature points are credible, the image to be detected can be retained and used to continue participating in subsequent tasks.
- a plurality of first human body feature points in the image to be detected may also be screened based on the independent confidence of each first human body feature point. For example, when the independent confidence of a certain first human body feature point indicates that the first human body feature point is unreliable, the first human body feature point can be deleted or filtered out of the image to be detected and no longer used in subsequent tasks; when the independent confidence of a first human body feature point indicates that the first human body feature point is credible, the first human body feature point can be retained in the image to be detected and used in subsequent tasks.
- a method for screening human body feature points provided by an embodiment of the present application acquires an image to be detected, inputs the image to be detected into a trained human body feature point detection model, obtains a plurality of first human body feature points output by the trained human body feature point detection model, the overall confidence of the plurality of first human body feature points, and the independent confidence of each of the plurality of first human body feature points, and screens the plurality of first human body feature points based on the overall confidence of the plurality of first human body feature points and the independent confidence of each first human body feature point. In this way, the overall confidence of multiple human body feature points and the independent confidence of each human body feature point are obtained, and the multiple human body feature points are screened based on the overall confidence and the independent confidence, so as to filter out falsely detected human body feature points and eliminate the influence of inaccurate human body feature point predictions on subsequent tasks.
- FIG. 2 shows a schematic flowchart of a method for screening human body feature points provided by another embodiment of the present application.
- the process shown in FIG. 2 will be described in detail below, and the screening method of the human body feature points may specifically include the following steps:
- Step S210 Acquire an image to be detected.
- Step S220 Input the to-be-detected image into a trained human body feature point detection model.
- Step S230 Obtain multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each of the multiple first human body feature points.
- steps S210 to S230 may refer to steps S110 to S130, which will not be repeated here.
- Step S240 Compare the overall confidence level of the plurality of first human body feature points with a first confidence level threshold to obtain a first comparison result.
- the electronic device may preset and store the first confidence threshold, or may temporarily set the first confidence threshold after acquiring the overall confidence of the plurality of first human body feature points, where the first confidence threshold is used as a basis for judging the overall confidence of the multiple human body feature points. Therefore, in this embodiment, after the overall confidence of the multiple first human body feature points is acquired, it can be compared with the first confidence threshold to obtain the first comparison result.
- the first confidence threshold may be set to one.
- Step S250 When the first comparison result indicates that the overall confidence of the plurality of first human body feature points is less than the first confidence threshold, delete the image to be detected.
- when the first comparison result indicates that the overall confidence of the multiple first human body feature points is less than the first confidence threshold, it may be determined that the overall confidence of the multiple first human body feature points is unreliable, which indicates that most of the multiple first human body feature points are unreliable and that the first human body feature point detection error for the image to be detected is relatively large; the image to be detected can therefore be deleted to avoid affecting subsequent tasks.
- Step S260 when the first comparison result indicates that the overall confidence of the plurality of first human body feature points is not less than the first confidence threshold, retain the to-be-detected image.
- when the first comparison result indicates that the overall confidence of the plurality of first human body feature points is not less than the first confidence threshold, it may be determined that the overall confidence of the plurality of first human body feature points is credible, which indicates that most of the first human body feature points are credible and that the first human body feature point detection error for the image to be detected is small; the image to be detected can therefore be retained to provide its first human body feature points for subsequent tasks.
- Step S270 Compare the independent confidence level of each first human body feature point with a second confidence level threshold to obtain a second comparison result.
- the electronic device may preset and store the second confidence threshold, or may temporarily set the second confidence threshold after acquiring the independent confidence of each first human body feature point in the plurality of first human body feature points, where the second confidence threshold is used as a basis for judging the independent confidence of each human body feature point. Therefore, in this embodiment, after the independent confidence of each first human body feature point is acquired, it may be compared with the second confidence threshold to obtain the second comparison result.
- when the second comparison result indicates that the independent confidence of a certain first human body feature point is less than the second confidence threshold, the independent confidence indicates that this first human body feature point is unreliable; when the second comparison result indicates that the independent confidence of a certain first human body feature point is not less than the second confidence threshold, the independent confidence indicates that this first human body feature point is credible.
- the second confidence threshold may be set to one.
- as one implementation, the independent confidence of each first human body feature point in the retained image to be detected can be compared with the second confidence threshold to obtain the second comparison result, so as to reduce the number of comparisons of first human body feature points.
- as another implementation, the independent confidence of each first human body feature point can be directly compared with the second confidence threshold to obtain the second comparison result.
- Step S280 Based on the second comparison result, delete from the plurality of first human body feature points the first human body feature points whose independent confidence is less than the second confidence threshold, and retain the first human body feature points whose independent confidence is not less than the second confidence threshold.
- the first human body feature points whose independent confidence is less than the second confidence threshold may be deleted from the plurality of first human body feature points based on the second comparison result, and the first human body feature points whose independent confidence is not less than the second confidence threshold may be retained.
- the second comparison result includes a comparison result between each first human body feature point in the plurality of first human body feature points and the second confidence threshold, that is, the magnitude relationship between the independent confidence of each first human body feature point and the second confidence threshold. It can be understood that if the independent confidence of a first human body feature point is less than the second confidence threshold, the first human body feature point is unreliable; if the independent confidence of a first human body feature point is not less than the second confidence threshold, the first human body feature point is credible. Therefore, in this embodiment, the first human body feature points whose independent confidence is less than the second confidence threshold may be deleted from the plurality of first human body feature points, and the first human body feature points whose independent confidence is not less than the second confidence threshold may be retained.
- for example, if the plurality of first human body feature points include human body feature point 1, human body feature point 2, human body feature point 3, human body feature point 4, and human body feature point 5, and the independent confidences of human body feature points 1, 2, 3, and 4 are not less than the second confidence threshold while the independent confidence of human body feature point 5 is less than the second confidence threshold, then human body feature point 5 can be deleted from the plurality of first human body feature points and human body feature points 1, 2, 3, and 4 are retained.
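The screening in steps S240-S280 can be sketched as follows (a hypothetical Python example; the 0.5 threshold defaults and the function name are illustrative assumptions, not values from the application):

```python
from typing import List, Optional, Tuple

def screen_keypoints(
    keypoints: List[Tuple[float, float]],
    overall_confidence: float,
    independent_confidences: List[float],
    first_threshold: float = 0.5,   # first confidence threshold (placeholder value)
    second_threshold: float = 0.5,  # second confidence threshold (placeholder value)
) -> Optional[List[Tuple[float, float]]]:
    # First comparison result: screen the whole image by the overall confidence.
    if overall_confidence < first_threshold:
        return None  # delete the image to be detected

    # Second comparison result: keep only the feature points whose independent
    # confidence is not less than the second confidence threshold.
    return [
        point
        for point, conf in zip(keypoints, independent_confidences)
        if conf >= second_threshold
    ]

# Example mirroring the five feature points above: point 5 falls below the threshold.
points = [(10.0, 20.0), (30.0, 25.0), (50.0, 40.0), (70.0, 60.0), (90.0, 80.0)]
print(screen_keypoints(points, 0.8, [0.9, 0.7, 0.8, 0.6, 0.2]))  # point 5 is deleted
```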
- an image to be detected is obtained, the image to be detected is input into a trained human body feature point detection model, and the plurality of first human body feature points output by the trained human body feature point detection model, the overall confidence of the plurality of first human body feature points, and the independent confidence of each of the plurality of first human body feature points are obtained. The overall confidence of the plurality of first human body feature points is compared with the first confidence threshold to obtain a first comparison result: when the first comparison result indicates that the overall confidence of the plurality of first human body feature points is less than the first confidence threshold, the image to be detected is deleted; otherwise, the image to be detected is retained. The independent confidence of each first human body feature point is then compared with the second confidence threshold to obtain a second comparison result, and based on the second comparison result, the first human body feature points whose independent confidence is less than the second confidence threshold are deleted from the plurality of first human body feature points, while the first human body feature points whose independent confidence is not less than the second confidence threshold are retained.
- this embodiment also sets a first confidence threshold for judging the overall confidence, so as to delete or retain the image to be detected, and also sets a second confidence threshold for judging the independent confidence, so as to delete or retain each first human body feature point, thereby improving the screening effect on human body feature points.
- FIG. 3 shows a schematic flowchart of a method for screening human body feature points provided by yet another embodiment of the present application.
- the flow shown in FIG. 3 will be described in detail below, wherein, in this embodiment, the image to be detected includes a plurality of regions to be detected, and the screening method for the human body feature points may specifically include the following steps:
- Step S310 Acquire an image to be detected.
- Step S320 Input the image to be detected into the trained human body feature point detection model.
- Step S330 Obtain multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each of the multiple first human body feature points.
- Step S340 Screen the plurality of first human body feature points based on the overall confidence level of the plurality of first human body feature points and the independent confidence level of each of the first human body feature points.
- steps S310 to S340 may refer to steps S110 to S140, which will not be repeated here.
- Step S350 Acquire the first human body feature points included in each of the multiple to-be-detected regions to obtain a plurality of first human body feature point sets.
- the image to be detected includes a plurality of regions to be detected, wherein the plurality of regions to be detected may be obtained by uniformly dividing the image to be detected, or obtained by non-uniformly dividing the image to be detected.
- the number of regions can be set fixedly or dynamically according to requirements, which is not limited here.
- the first human body feature points included in each of the multiple to-be-detected regions can be acquired, and the multiple first human body feature point sets can be obtained.
- the coordinate information of each first human body feature point in the plurality of first human body feature points and the coordinate area covered by each of the multiple to-be-detected regions may be acquired. Based on the coordinate information of each first human body feature point and the coordinate area covered by each to-be-detected region, the first human body feature points located in each to-be-detected region are obtained from the plurality of first human body feature points, and the first human body feature points included in each to-be-detected region are used as one first human body feature point set, so that a plurality of first human body feature point sets are obtained.
- Step S360 Based on the independent confidence level of each first human body feature point, obtain the set confidence level of each first human body feature point set in the plurality of first human body feature point sets.
- the set confidence of each first human body feature point set in the multiple first human body feature point sets may be obtained based on the independent confidence of each first human body feature point.
- the independent confidence of each first human body feature point included in each first human body feature point set may be acquired, and the set confidence of each corresponding first human body feature point set may be obtained based on the independent confidences of the first human body feature points included in that set.
- the set confidence level of each corresponding first human body feature point set may be obtained by summing or averaging the independent confidence levels of the first human body feature points included in each first human body feature point set.
- Step S370 Screen the multiple to-be-detected regions based on the set confidence of each first human body feature point set.
- the multiple to-be-detected regions are screened based on the set confidence of each first human body feature point set. For example, when the set confidence of a certain first human body feature point set among the multiple first human body feature point sets indicates that the set is unreliable, the to-be-detected region corresponding to that feature point set can be deleted or filtered out; when the set confidence of a certain first human body feature point set among the multiple first human body feature point sets indicates that the set is credible, the to-be-detected region corresponding to that feature point set can be retained.
- FIG. 4 shows a schematic flowchart of step S370 of the method for screening human body feature points shown in FIG. 3 of the present application.
- the process shown in FIG. 4 will be described in detail below, and the method may specifically include the following steps:
- Step S371 Compare the set confidence of each first human body feature point set with a third confidence threshold, respectively, to obtain a third comparison result.
- the electronic device may preset and store a third confidence threshold, or may temporarily set the third confidence threshold after acquiring the set confidence of each first human body feature point set, where the third confidence threshold is used as the basis for judging the set confidence of each first human body feature point set among the multiple first human body feature point sets.
- the set confidence level of each first human body feature point set may be compared with a third confidence level threshold to obtain a third comparison result.
- when the third comparison result indicates that the set confidence of a certain first human body feature point set is less than the third confidence threshold, the set confidence indicates that this first human body feature point set is unreliable; when the third comparison result indicates that the set confidence of a certain first human body feature point set is not less than the third confidence threshold, the set confidence indicates that this first human body feature point set is credible.
- the third confidence threshold may be set to one.
- Step S372 Based on the third comparison result, delete from the plurality of first human body feature point sets the first human body feature point sets whose set confidence is less than the third confidence threshold, and retain the first human body feature point sets whose set confidence is not less than the third confidence threshold.
- the first human body feature point sets whose set confidence is less than the third confidence threshold may be deleted from the plurality of first human body feature point sets based on the third comparison result, and the first human body feature point sets whose set confidence is not less than the third confidence threshold may be retained.
- the third comparison result includes a comparison result between each first human body feature point set in the plurality of first human body feature point sets and the third confidence threshold, that is, the magnitude relationship between the set confidence of each first human body feature point set and the third confidence threshold. It can be understood that if the set confidence of a first human body feature point set is less than the third confidence threshold, the first human body feature point set is unreliable; if the set confidence is not less than the third confidence threshold, the first human body feature point set is credible. Therefore, the first human body feature point sets whose set confidence is less than the third confidence threshold may be deleted from the multiple first human body feature point sets, and the first human body feature point sets whose set confidence is not less than the third confidence threshold may be retained.
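Steps S350-S372 can be sketched as follows (hypothetical Python; the rectangular region layout, the use of the average as the set confidence, and the 0.5 threshold are illustrative assumptions):

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def screen_regions(
    keypoints: List[Point],
    independent_confidences: List[float],
    regions: Dict[str, Tuple[float, float, float, float]],  # name -> (x0, y0, x1, y1)
    third_threshold: float = 0.5,  # third confidence threshold (placeholder value)
) -> List[str]:
    """Return the names of the to-be-detected regions that are retained."""
    retained = []
    for name, (x0, y0, x1, y1) in regions.items():
        # First human body feature point set: the points whose coordinates fall inside this region.
        confs = [
            conf
            for (x, y), conf in zip(keypoints, independent_confidences)
            if x0 <= x < x1 and y0 <= y < y1
        ]
        if not confs:
            continue
        # Set confidence: here the average of the independent confidences (summing is also possible).
        set_confidence = sum(confs) / len(confs)
        if set_confidence >= third_threshold:  # third comparison result
            retained.append(name)
    return retained

regions = {"upper": (0, 0, 100, 50), "lower": (0, 50, 100, 100)}
points = [(20.0, 10.0), (60.0, 30.0), (40.0, 80.0)]
print(screen_regions(points, [0.9, 0.8, 0.2], regions))  # ['upper']
```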
- Another embodiment of the present application provides a method for screening human body feature points, which includes acquiring an image to be detected, inputting the image to be detected into a trained human body feature point detection model, acquiring the multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each first human body feature point, and screening the multiple first human body feature points based on the overall confidence and the independent confidence of each first human body feature point. The first human body feature points contained in each of the multiple to-be-detected regions are acquired to obtain multiple first human body feature point sets, the set confidence of each first human body feature point set in the multiple first human body feature point sets is obtained based on the independent confidence of each first human body feature point, and the multiple to-be-detected regions are screened based on the set confidence of each first human body feature point set.
- a plurality of to-be-detected regions are also set in the image to be detected, and the multiple to-be-detected regions are screened based on the independent confidence of each first human body feature point, thereby improving the screening effect on human body feature points.
- FIG. 5 shows a schematic flowchart of a method for screening human body feature points provided by another embodiment of the present application.
- the process shown in FIG. 5 will be described in detail below, and the screening method of the human body feature points may specifically include the following steps:
- Step S410 Acquire a training image and real coordinate information of each second human body feature point in the plurality of second human body feature points included in the training image.
- the training image and the real coordinate information of each second human body feature point in the plurality of second human body feature points included in the training image can be obtained.
- the training image may only be a human body image annotated with the real coordinate information of each second human body feature point among the plurality of second human body feature points contained therein, without any other additional annotations.
- FIG. 6 shows a schematic flowchart of step S410 of the method for screening human body feature points shown in FIG. 5 of the present application.
- the flow shown in FIG. 6 will be described in detail below, and the method may specifically include the following steps:
- Step S411 Acquire multiple images to be selected.
- a plurality of images to be selected may be obtained, wherein the plurality of images to be selected may be obtained from a public dataset, for example, may be obtained from public datasets WFLW, AFLW, 300W, and the like.
- Step S412 Input the plurality of images to be selected into the trained human detection model.
- the electronic device can input the plurality of images to be selected into a trained human detection model, where the trained human detection model is obtained through machine learning. Specifically, a training data set is first collected, in which the attributes or characteristics of one type of data differ from those of another type of data; a neural network is then trained and modeled with the collected training data set according to a preset algorithm, so that rules are summarized from the training data set and the trained human detection model is obtained.
- Step S413 Obtain the human body confidence of each to-be-selected image among the multiple to-be-selected images output by the trained human detection model.
- the trained human detection model outputs corresponding information based on the multiple read images to be selected, and the electronic device can acquire the information output by the trained human detection model.
- the trained human detection model may output the human body confidence level of each to-be-selected image in the plurality of to-be-selected images based on the read multiple to-be-selected images.
- the human body confidence output by the trained human detection model is the probability that the detected area is a human body; its value can range from 0 to 1, where a larger value means the area is more likely to be a human body and a smaller value means it is less likely to be a human body.
- Step S414 Obtain an image to be selected whose human body confidence is less than a fourth confidence threshold from the plurality of images to be selected, as a target image.
- the electronic device may preset and store a fourth confidence threshold, or may temporarily set the fourth confidence threshold after acquiring the human body confidence of the plurality of images to be selected, where the fourth confidence threshold is used as the judgment basis for the human body confidence of the images to be selected.
- the setting standard of the fourth confidence threshold can be to obtain as many images without human bodies as possible, so the fourth confidence threshold can be set to 0.1; by taking the images to be selected whose human body confidence is less than the fourth confidence threshold as target images, it can be ensured that more than 90% of the target images are images that do not contain human bodies, achieving the purpose of selecting images that do not contain human bodies.
- the confidence level of the human body obtained from the multiple images to be selected can be compared with the fourth confidence threshold, and an image to be selected whose human body confidence is less than the fourth confidence threshold according to the comparison result is used as a target image.
- Step S415 Set the real coordinate information of the human body feature points included in the target image to zero, and use the target image after zeroing the real coordinate information as the training image.
- the real coordinate information of the human body feature points in the target image can be set to zero, and the target image whose real coordinate information has been set to zero can be used as a negative sample and added as a training image to participate in the training of the human body feature point detection model.
- in this way, the human body feature points detected in such a human body detection frame correspond to cases where the confidence is close to 0, which facilitates filtering.
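A sketch of the negative-sample construction in steps S411-S415 (hypothetical Python; the detector callable, the sample format, and the function name are illustrative assumptions; the 0.1 threshold is the example value given above):

```python
from typing import Callable, List, Tuple

Sample = Tuple[str, List[Tuple[float, float]]]  # (image path, real keypoint coordinates)

def build_negative_samples(
    candidate_images: List[str],
    human_detector: Callable[[str], float],  # returns the human body confidence in [0, 1]
    num_keypoints: int,
    fourth_threshold: float = 0.1,           # fourth confidence threshold
) -> List[Sample]:
    """Turn candidate images that most likely contain no human body into negative training samples."""
    negatives = []
    for path in candidate_images:
        if human_detector(path) < fourth_threshold:
            # Target image: zero out the real coordinate information and add it as a training image.
            zeroed_coords = [(0.0, 0.0)] * num_keypoints
            negatives.append((path, zeroed_coords))
    return negatives
```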
- Step S420 Perform a regression analysis on the coordinates of human body feature points on the training image, and obtain predicted coordinate information of each second human body feature point included in the training image.
- the training image may be subjected to coordinate regression analysis of the human body feature points to obtain the predicted coordinate information of each second human body feature point included in the training image.
- the predicted coordinate information of each second human body feature point included in the training image may also be acquired by using a heat map of human body feature points.
- Step S430 Based on the real coordinate information of each second human body feature point and the predicted coordinate information of each second human body feature point, obtain the independent confidence of each second human body feature point and the overall confidence of the multiple second human body feature points.
- the real coordinate information of each second human body feature point and the predicted coordinate information of each second human body feature point are used to obtain the independent confidence of each second human body feature point and the overall confidence of the multiple second human body feature points.
- FIG. 7 shows a schematic flowchart of step S430 of the method for screening human body feature points shown in FIG. 5 of the present application.
- the flow shown in FIG. 7 will be described in detail below, and the method may specifically include the following steps:
- Step S431 Calculate the Euclidean distance between the real coordinate information of each second human body feature point and the corresponding predicted coordinate information, and obtain a plurality of distance vectors.
- the Euclidean distance between the real coordinate information of each second human body feature point and the corresponding predicted coordinate information can be calculated; the result is an N-dimensional vector, where N is the number of second human body feature points, that is, multiple distance vectors are obtained.
- Step S432 Based on the magnitude relationship between the plurality of distance vectors and the first distance threshold, obtain the independent confidence level of each of the second human body feature points.
- the electronic device may preset and store the first distance threshold, or may temporarily acquire the first distance threshold when acquiring multiple distance vectors, which is not limited herein.
- the first distance threshold is used as a judgment basis for each distance vector in the plurality of distance vectors. Therefore, in this embodiment, after the plurality of distance vectors are obtained, each distance vector can be compared with the first distance threshold to obtain the magnitude relationship between each distance vector and the first distance threshold, and the independent confidence of each second human body feature point is obtained based on that magnitude relationship. The independent confidence of each second human body feature point includes credible (represented as 1) and unreliable (represented as 0).
- Step S433 Based on the magnitude relationship between the sum of the plurality of distance vectors and the second distance threshold, obtain the overall confidence of the plurality of second human body feature points.
- the electronic device may preset and store the second distance threshold, or may temporarily acquire the second distance threshold when acquiring multiple distance vectors, which is not limited herein.
- the second distance threshold is used as the judgment basis for the sum of the multiple distance vectors. Therefore, in this embodiment, after the multiple distance vectors are obtained, their sum can be calculated and compared with the second distance threshold to obtain the magnitude relationship between the distance vector sum and the second distance threshold, and the overall confidence of the multiple second human body feature points is obtained based on that magnitude relationship.
- likewise, the overall confidence of the multiple second human body feature points includes credible (represented as 1) and unreliable (represented as 0).
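The label construction in steps S431-S433 can be sketched as follows (hypothetical Python with NumPy; the example coordinates and the threshold values are placeholders which, as described below, would decrease during training):

```python
import numpy as np

def confidence_labels(
    real_coords: np.ndarray,       # shape (N, 2): real coordinates of the second feature points
    predicted_coords: np.ndarray,  # shape (N, 2): predicted coordinates from the regression branch
    first_distance_threshold: float,
    second_distance_threshold: float,
):
    # Euclidean distance between each real point and its prediction: an N-dimensional vector.
    distances = np.linalg.norm(real_coords - predicted_coords, axis=1)

    # Independent confidence per point: credible (1) when the distance is below the first threshold.
    independent = (distances < first_distance_threshold).astype(np.float32)

    # Overall confidence: credible (1) when the sum of the distance vector is below the second threshold.
    overall = np.float32(distances.sum() < second_distance_threshold)
    return independent, overall

real = np.array([[10.0, 20.0], [30.0, 25.0], [50.0, 40.0]])
pred = np.array([[11.0, 21.0], [29.0, 26.0], [80.0, 90.0]])
print(confidence_labels(real, pred, first_distance_threshold=5.0, second_distance_threshold=20.0))
# (array([1., 1., 0.], dtype=float32), 0.0) -- the third point and the set as a whole are unreliable
```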
- the first distance threshold and the second distance threshold decrease as the number of training steps increases.
- the first distance threshold and the second distance threshold need to be adjusted during the training process, that is, the first distance threshold and the second distance threshold decrease as the number of training steps increases.
- the first distance threshold and the second distance threshold are larger in the early stage of training, which indicates a larger tolerance for falsely detected human body feature points, so that the independent confidence branch and the overall confidence branch can learn from positive samples among the samples constructed by self-supervised learning and the numbers of positive samples and negative samples are balanced. Otherwise, if the first distance threshold and the second distance threshold are too small, the data received by the independent confidence branch and the overall confidence branch used for self-supervised learning in the early stage of training will all be negative samples and no positive samples will appear; the learning of the independent confidence branch and the overall confidence branch is then completely biased, and the two branches quickly enter a local minimum in a wrong direction, resulting in poor learning results. We call this situation extreme negative sample bias.
- as training progresses, the values of the first distance threshold and the second distance threshold gradually decrease. At this stage, the human body feature point detection model tends to predict the human body feature points correctly, and only in some cases are the prediction results poor, so the tolerance for falsely detected human body feature points should be small. Therefore, the first distance threshold and the second distance threshold should be reduced to balance the number of negative samples; otherwise, similar to the situation described above, the samples become extremely biased toward positive samples, and the independent confidence branch and the overall confidence branch quickly enter a minimum in a wrong direction, resulting in poor learning results.
- the setting of the initial values of the first distance threshold and the second distance threshold and how they decrease should be adjusted according to the specific human detection model, and is also related to the number of human body feature points in the task. The initial values and the decreasing schedule of the first distance threshold and the second distance threshold are hyperparameters of the model framework training; if NAS (neural architecture search) is used for searching, they can be included in the search items.
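One possible decreasing schedule (a hypothetical sketch; linear decay and the specific initial, final, and step values are assumptions to be tuned per detector and per number of feature points):

```python
def distance_threshold(step: int, total_steps: int, initial: float, final: float) -> float:
    """Linearly decay a distance threshold from `initial` to `final` as training proceeds."""
    progress = min(step / max(total_steps, 1), 1.0)
    return initial + (final - initial) * progress

# Example: a first distance threshold that starts loose and tightens over training.
for step in (0, 5000, 10000):
    print(step, distance_threshold(step, total_steps=10000, initial=10.0, final=2.0))
# 0 10.0 / 5000 6.0 / 10000 2.0
```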
- Step S440 Use the training image as input data and the predicted coordinate information of each second human body feature point, the independent confidence of each second human body feature point, and the overall confidence of the plurality of second human body feature points as output data, train through a machine learning algorithm, and obtain the trained human body feature point detection model.
- in this embodiment, the training image is used as input data, the predicted coordinate information of each second human body feature point, the independent confidence of each second human body feature point, and the overall confidence of the multiple second human body feature points are used as output data, and training is performed through a machine learning algorithm to obtain the trained human body feature point detection model.
- the training image can be used as input data, and the predicted coordinate information of each second human body feature point, the independent confidence of each second human body feature point, and the overall confidence of the multiple second human body feature points can be used as output data; training is carried out with TensorFlow or PyTorch to obtain the trained human body feature point detection model.
- settings such as the number of training steps and the learning rate of the human body feature point detection model can be adjusted according to the trained human body detector.
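A minimal PyTorch sketch of a detection model with the three output branches described here (coordinate regression, per-point independent confidence, and overall confidence); the backbone, layer sizes, and 17 keypoints are illustrative assumptions, not the architecture claimed in this application:

```python
import torch
import torch.nn as nn

class KeypointModel(nn.Module):
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(  # any convolutional feature extractor could be used here
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.coords = nn.Linear(64, num_keypoints * 2)    # predicted coordinate branch
        self.independent = nn.Linear(64, num_keypoints)   # independent confidence branch
        self.overall = nn.Linear(64, 1)                   # overall confidence branch

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)
        return (
            self.coords(feat),
            torch.sigmoid(self.independent(feat)),  # per-point confidence in [0, 1]
            torch.sigmoid(self.overall(feat)),      # overall confidence in [0, 1]
        )

model = KeypointModel()
coords, independent, overall = model(torch.randn(2, 3, 256, 256))
print(coords.shape, independent.shape, overall.shape)  # (2, 34) (2, 17) (2, 1)
```

During training, the two confidence branches could be supervised with the 0/1 labels from steps S431-S433 (for example with a binary cross-entropy loss), while the coordinate branch uses a regression loss.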
- Step S450 Acquire an image to be detected.
- Step S460 Input the image to be detected into the trained human body feature point detection model.
- Step S470 Obtain multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each of the multiple first human body feature points.
- Step S480 Screen the plurality of first human body feature points based on the overall confidence level of the plurality of first human body feature points and the independent confidence level of each of the first human body feature points.
- steps S450 to S480 may refer to steps S110 to S140, which will not be repeated here.
- a training image and the real coordinate information of each second human body feature point among the plurality of second human body feature points included in the training image are acquired, and coordinate regression analysis of the human body feature points is performed on the training image to obtain the predicted coordinate information of each second human body feature point. The training image is used as input data while the predicted coordinate information of each second human body feature point, the independent confidence of each second human body feature point, and the overall confidence of the multiple second human body feature points are used as output data, and training is performed through a machine learning algorithm to obtain the trained human body feature point detection model. The image to be detected is then acquired and input into the trained human body feature point detection model, the multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each of the multiple first human body feature points are obtained, and the multiple first human body feature points are screened based on the overall confidence of the multiple first human body feature points and the independent confidence of each first human body feature point.
- this embodiment also performs training using the training image, the predicted coordinate information of each second human body feature point, the independent confidence of each second human body feature point, and the overall confidence of the plurality of second human body feature points to obtain the trained human body feature point detection model, thereby improving the accuracy of the obtained human body feature points.
- FIG. 8 shows a block diagram of a module of an apparatus for screening human body feature points provided by an embodiment of the present application.
- the block diagram shown in FIG. 8 will be described below.
- the screening device 200 for human body feature points includes: a to-be-detected image acquisition module 210, a to-be-detected image input module 220, a confidence output module 230, and a feature point screening module 240, wherein:
- the to-be-detected image acquisition module 210 is configured to acquire the to-be-detected image.
- the to-be-detected image input module 220 is configured to input the to-be-detected image into the trained human body feature point detection model.
- a confidence output module 230 configured to acquire the multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each first human body feature point among the multiple first human body feature points.
- the feature point screening module 240 is configured to screen the plurality of first human body feature points based on the overall confidence level of the plurality of first human body feature points and the independent confidence level of each of the first human body feature points.
- the feature point screening module 240 includes: a first comparison result acquisition sub-module, a to-be-detected image deletion sub-module and a to-be-detected image retention sub-module, wherein:
- the first comparison result obtaining sub-module is configured to compare the overall confidence of the plurality of first human body feature points with a first confidence threshold to obtain a first comparison result.
- a to-be-detected image deletion sub-module, configured to delete the to-be-detected image when the first comparison result indicates that the overall confidence of the plurality of first human body feature points is less than the first confidence threshold.
- a to-be-detected image retention sub-module, configured to retain the to-be-detected image when the first comparison result indicates that the overall confidence of the plurality of first human body feature points is not less than the first confidence threshold.
- the feature point screening module 240 further includes: a second comparison result acquisition sub-module and a human body feature point retention sub-module, wherein:
- the second comparison result obtaining sub-module is configured to compare the independent confidence level of each first human body feature point with a second confidence level threshold, respectively, to obtain a second comparison result.
- the human body feature point retention sub-module is configured to, based on the second comparison result, delete from the plurality of first human body feature points the first human body feature points whose independent confidence is less than the second confidence threshold, and retain the first human body feature points whose independent confidence is not less than the second confidence threshold.
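- The following Python sketch illustrates, in a non-authoritative way, how the two comparisons described by the above sub-modules could be combined: the image is deleted when the overall confidence falls below the first confidence threshold, and otherwise only the feature points whose independent confidence is not below the second confidence threshold are retained. The function name and data layout are assumptions made for illustration; the embodiments mention 1 as one possible value for both thresholds.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]  # assumed (x, y) layout of a predicted feature point

def screen_feature_points(
    points: List[Point],
    independent_conf: List[float],   # independent confidence of each first feature point
    overall_conf: float,             # overall confidence of all first feature points
    first_threshold: float = 1.0,    # image-level (overall) confidence threshold
    second_threshold: float = 1.0,   # point-level (independent) confidence threshold
) -> Optional[List[Point]]:
    """Return the retained feature points, or None when the whole image is deleted."""
    # First comparison result: an overall confidence below the first threshold means the
    # detection for this image is not credible, so the to-be-detected image is deleted.
    if overall_conf < first_threshold:
        return None
    # Second comparison result: drop points whose independent confidence is below the
    # second threshold, keep the others for subsequent tasks.
    return [p for p, c in zip(points, independent_conf) if c >= second_threshold]
```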
- further, the to-be-detected image includes a plurality of to-be-detected regions.
- the human body feature point screening device 200 further includes: a feature point set acquisition module, a set confidence level acquisition module, and a to-be-detected region screening module, wherein:
- the feature point set obtaining module is configured to obtain the first human body feature points included in each of the multiple to-be-detected regions to obtain a plurality of first human body feature point sets.
- the set confidence level obtaining module is configured to acquire the set confidence level of each first human body feature point set in the plurality of first human body feature point sets based on the independent confidence level of each first human body feature point.
- a to-be-detected region screening module, configured to screen the multiple to-be-detected regions based on the set confidence of each first human body feature point set.
- the to-be-detected region screening module includes: a third comparison result acquisition sub-module and a human body feature point set screening sub-module, wherein:
- the third comparison result obtaining sub-module is configured to compare the set confidence of each first human body feature point set with a third confidence threshold, respectively, to obtain a third comparison result.
- a human body feature point set screening sub-module, configured to, based on the third comparison result, delete from the plurality of first human body feature point sets the first human body feature point sets whose set confidence is less than the third confidence threshold, and retain the first human body feature point sets whose set confidence is not less than the third confidence threshold.
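- As a rough sketch of the region-level screening described above, the to-be-detected regions could be filtered as follows. The rectangular region representation is an assumption, and the average is used as the set confidence here (the embodiments allow either summing or averaging the independent confidences of the points in a set).

```python
from typing import List, Tuple

Point = Tuple[float, float]
Region = Tuple[float, float, float, float]  # assumed (x_min, y_min, x_max, y_max)

def screen_regions(
    regions: List[Region],
    points: List[Point],
    independent_conf: List[float],
    third_threshold: float,
) -> List[Region]:
    """Keep regions whose set confidence is not less than the third confidence threshold."""
    retained = []
    for x_min, y_min, x_max, y_max in regions:
        # First feature point set of this region: the points whose coordinates fall inside it.
        confs = [
            c for (x, y), c in zip(points, independent_conf)
            if x_min <= x <= x_max and y_min <= y <= y_max
        ]
        # Set confidence: here the average of the member independent confidences.
        set_conf = sum(confs) / len(confs) if confs else 0.0
        if set_conf >= third_threshold:
            retained.append((x_min, y_min, x_max, y_max))
    return retained
```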
- the human body feature point screening device 200 further includes: a training image acquisition module, a predicted coordinate information acquisition module, a confidence level acquisition module, and a human body feature point detection model acquisition module, wherein:
- a training image acquisition module for acquiring a training image and the real coordinate information of each second human body feature point among the multiple second human body feature points included in the training image.
- the training image acquisition module includes: a to-be-selected image acquisition sub-module, a to-be-selected image input sub-module, a human body confidence acquisition sub-module, a target image acquisition sub-module and a training image acquisition sub-module, wherein:
- the to-be-selected image acquisition sub-module is used to acquire a plurality of to-be-selected images.
- the to-be-selected image input sub-module is configured to input the plurality of to-be-selected images into the trained human detection model.
- the human body confidence level acquisition sub-module is used for acquiring the human body confidence level of each to-be-selected image in the multiple to-be-selected images output by the trained human detection model.
- the target image acquisition sub-module is configured to acquire, from the plurality of to-be-selected images, a to-be-selected image whose human body confidence level is less than a fourth confidence level threshold, as a target image.
- the training image acquisition sub-module is used for zeroing the real coordinate information of the human body feature points contained in the target image, and using the target image after zeroing the real coordinate information as the training image.
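- A minimal sketch of the negative-sample construction performed by these sub-modules is given below. The `human_detector` callable stands in for the trained human body detection model and is assumed to map an image to a human confidence in [0, 1]; the default fourth confidence threshold of 0.1 follows the value mentioned in the embodiments, while the keypoint count of 17 is an illustrative assumption.

```python
import numpy as np

def build_negative_samples(candidate_images, human_detector,
                           fourth_threshold: float = 0.1, num_keypoints: int = 17):
    """Turn low-human-confidence candidate images into negative training samples."""
    training_samples = []
    for image in candidate_images:
        human_conf = human_detector(image)  # stand-in for the trained human detection model
        # Candidate images whose human confidence is below the fourth threshold are taken
        # as target images (very likely to contain no human body).
        if human_conf < fourth_threshold:
            # Zero the real coordinate information of the contained feature points and use
            # the result as a training image, so the confidence branches learn to output
            # confidences close to 0 for such images.
            zeroed_coords = np.zeros((num_keypoints, 2), dtype=np.float32)
            training_samples.append((image, zeroed_coords))
    return training_samples
```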
- the predicted coordinate information acquisition module is configured to perform a regression analysis on the coordinates of the human body feature points on the training image, and obtain the predicted coordinate information of each second human body feature point included in the training image.
- a confidence level acquisition module, configured to acquire, based on the real coordinate information of each second human body feature point and the predicted coordinate information of each second human body feature point, the independent confidence level of each second human body feature point and the overall confidence level of the plurality of second human body feature points.
- the confidence degree acquisition module includes: a distance vector acquisition sub-module, an independent confidence degree acquisition sub-module and an overall confidence degree acquisition sub-module, wherein:
- the distance vector obtaining sub-module is used to calculate the Euclidean distance between the real coordinate information of each second human body feature point and the corresponding predicted coordinate information, and obtain a plurality of distance vectors.
- the independent confidence level obtaining sub-module is configured to obtain the independent confidence level of each second human body feature point based on the magnitude relationship between the plurality of distance vectors and the first distance threshold.
- the overall confidence level obtaining sub-module is configured to obtain the overall confidence level of the plurality of second human body feature points based on the magnitude relationship between the sum of the plurality of distance vectors and the second distance threshold.
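- How the confidence labels could be derived from the distance vectors, as described by the three sub-modules above, is sketched below. The linear decay schedule is an assumption: the embodiments only state that both distance thresholds decrease as the number of training steps increases and treat the initial values and decay as hyper-parameters of the training framework.

```python
import numpy as np

def confidence_labels(real_xy, pred_xy, first_dist_thr: float, second_dist_thr: float):
    """Independent and overall confidence labels for the N second feature points.

    real_xy and pred_xy are arrays of shape (N, 2) holding the real and predicted
    coordinates of each second human body feature point.
    """
    # Euclidean distance per feature point -> an N-dimensional distance vector.
    distances = np.linalg.norm(np.asarray(real_xy) - np.asarray(pred_xy), axis=1)
    # Independent confidence: credible (1) when the distance is below the first distance
    # threshold, otherwise not credible (0).
    independent = (distances < first_dist_thr).astype(np.float32)
    # Overall confidence: based on the sum of the distance vector compared with the
    # second distance threshold.
    overall = np.float32(distances.sum() < second_dist_thr)
    return independent, overall

def decayed_threshold(initial_value: float, step: int, total_steps: int,
                      final_ratio: float = 0.1) -> float:
    """Illustrative linear decay of a distance threshold with the training step."""
    frac = min(step / float(total_steps), 1.0)
    return initial_value * (1.0 - (1.0 - final_ratio) * frac)
```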
- the human body feature point detection model obtaining module is configured to take the training image as input data, take the predicted coordinate information of each second human body feature point, the independent confidence level of each second human body feature point, and the overall confidence level of the plurality of second human body feature points as output data, and perform training through a machine learning algorithm to obtain the trained human body feature point detection model.
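- To make the training target concrete, the following hedged PyTorch-style sketch shows a model with a coordinate regression head, a per-point independent-confidence head, and an overall-confidence head, trained against the labels above. The backbone, layer sizes, loss terms, and loss weighting are placeholders chosen for illustration; the embodiments mention training through tensorflow or pytorch but do not fix a particular network architecture.

```python
import torch
import torch.nn as nn

class KeypointConfidenceNet(nn.Module):
    """Toy network: keypoint coordinates + independent confidences + overall confidence."""

    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.coord_head = nn.Linear(64, num_keypoints * 2)    # predicted (x, y) per point
        self.point_conf_head = nn.Linear(64, num_keypoints)   # independent confidences
        self.overall_conf_head = nn.Linear(64, 1)             # overall confidence

    def forward(self, images):
        feats = self.backbone(images)
        coords = self.coord_head(feats)
        point_conf = torch.sigmoid(self.point_conf_head(feats))
        overall_conf = torch.sigmoid(self.overall_conf_head(feats))
        return coords, point_conf, overall_conf

def training_step(model, optimizer, images, target_coords, target_point_conf, target_overall_conf):
    """One illustrative optimisation step; the loss combination is an assumption."""
    optimizer.zero_grad()
    coords, point_conf, overall_conf = model(images)
    loss = (
        nn.functional.smooth_l1_loss(coords, target_coords)
        + nn.functional.binary_cross_entropy(point_conf, target_point_conf)
        + nn.functional.binary_cross_entropy(overall_conf, target_overall_conf.view(-1, 1))
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```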
- in the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical, or other forms of coupling.
- each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
- FIG. 9 shows a structural block diagram of an electronic device 100 provided by an embodiment of the present application.
- the electronic device 100 may be an electronic device capable of running an application program, such as a smart phone, a tablet computer, an electronic book, or the like.
- the electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, wherein the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the method described in the foregoing method embodiments.
- the processor 110 may include one or more processing cores.
- the processor 110 uses various interfaces and lines to connect various parts of the entire electronic device 100, and performs various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling the data stored in the memory 120.
- the processor 110 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA).
- the processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
- the CPU mainly handles the operating system, the user interface, application programs, and the like.
- the GPU is responsible for rendering and drawing the content to be displayed.
- the modem is used to handle wireless communication. It can be understood that the above-mentioned modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
- the memory 120 may include random access memory (Random Access Memory, RAM), or may include read-only memory (Read-Only Memory). Memory 120 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
- the memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , instructions for implementing the following method embodiments, and the like.
- the storage data area may also store data (such as phone book, audio and video data, chat record data) created by the electronic device 100 during use.
- FIG. 10 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable medium 300 stores program codes, and the program codes can be invoked by the processor to execute the methods described in the above method embodiments.
- the computer-readable storage medium 300 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium.
- the computer-readable storage medium 300 has storage space for program code 310 for performing any of the method steps in the above-described methods. These program codes can be read from or written to one or more computer program products.
- the program code 310 may be compressed, for example, in a suitable form.
- to sum up, the method, device, electronic device, and storage medium for screening human body feature points provided in the embodiments of the present application acquire an image to be detected, input the to-be-detected image into a trained human body feature point detection model, acquire the multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each first human body feature point among the multiple first human body feature points, and screen the multiple first human body feature points based on the overall confidence of the multiple first human body feature points and the independent confidence of each first human body feature point. In this way, falsely detected human body feature points are filtered out and the impact of inaccurate human body feature point prediction on subsequent tasks is eliminated.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The present application discloses a method and device for screening human body feature points, an electronic device, and a storage medium, and relates to the technical field of electronic devices. The method includes: acquiring an image to be detected; inputting the image to be detected into a trained human body feature point detection model; acquiring multiple first human body feature points output by the trained human body feature point detection model, the overall confidence of the multiple first human body feature points, and the independent confidence of each first human body feature point among the multiple first human body feature points; and screening the multiple first human body feature points based on the overall confidence of the multiple first human body feature points and the independent confidence of each first human body feature point. By obtaining the overall confidence of multiple human body feature points and the independent confidence of each human body feature point, and screening the multiple human body feature points based on the overall confidence and the independent confidences, the present application filters out falsely detected human body feature points and eliminates the impact of inaccurate human body feature point prediction on subsequent tasks.
Description
相关申请的交叉引用
本申请要求于2020年08月12日提交的申请号为CN202010808012.2的中国申请的优先权,其在此出于所有目的通过引用将其全部内容并入本文。
本申请涉及电子设备技术领域,更具体地,涉及一种人体特征点的筛选方法、装置、电子设备以及存储介质。
随着人工智能技术的不断发展,人工智能技术也逐渐被应用到人体特征点的检测领域。目前,在通过人工智能技术对图像中的人体特征点进行检测时,一般采用主动形状模型(active shape model,ASM)的方案、卷积神经网络对人体特征点直接进行回归的方案、或通过人体特征点热力图对人体特征点位置预测做辅助的方案等。
发明内容
鉴于上述问题,本申请提出了一种人体特征点的筛选方法、装置、电子设备以及存储介质,以解决上述问题。
第一方面,本申请实施例提供了一种人体特征点的筛选方法,所述方法包括:获取待检测图像;将所述待检测图像输入已训练的人体特征点检测模型;获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度;基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
第二方面,本申请实施例提供了一种人体特征点的筛选装置,所述装置包括:待检测图像获取模块,用于获取待检测图像;待检测图像输入模块,用于将所述待检测图像输入已训练的人体特征点检测模型;置信度输出模块,用于获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度;特征点筛选模块,用于基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
第三方面,本申请实施例提供了一种电子设备,包括存储器和处理器,所述存储器耦接到所述处理器,所述存储器存储指令,当所述指令由所述处理器执行时所述处理器执行上述方法。
第四方面,本申请实施例提供了一种计算机可读取存储介质,所述计算机可读取存储介质中存储有程序代码,所述程序代码可被处理器调用执行上述方法。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1示出了本申请一个实施例提供的人体特征点的筛选方法的流程示意图;
图2示出了本申请又一个实施例提供的人体特征点的筛选方法的流程示意图;
图3示出了本申请再一个实施例提供的人体特征点的筛选方法的流程示意图;
图4示出了本申请的图3所示的人体特征点的筛选方法的步骤S370的流程示意图;
图5示出了本申请另一个实施例提供的人体特征点的筛选方法的流程示意图;
图6示出了本申请的图5所示的人体特征点的筛选方法的步骤S410的流程示意图;
图7示出了本申请的图5所示的人体特征点的筛选方法的步骤S430的流程示意图;
图8示出了本申请实施例提供的人体特征点的筛选装置的模块框图;
图9示出了本申请实施例用于执行根据本申请实施例的人体特征点的筛选方法的电子设备的框图;
图10示出了本申请实施例的用于保存或者携带实现根据本申请实施例的人体特征点的筛选方法的程序代码的存储单元。
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
卷积神经网络是一类包含卷积计算且具有一定深度结构的神经网络,是深度学习的代表算法之一。卷积神经网络发展至今,一般包含如下几种类型的堆叠层:输入层、卷积层、池化层、归一化层(又叫Batch Norm层)、激活函数层、全连接层、输出层等。在计算机视觉领域,输入层一般是RGB三通道的彩色图像;卷积层的功能是对输入数据进行特征提取,计算形式为卷积运算,包含权重系数和偏置;池化层用于对特征信息进行选择和过滤,常用的池化方式包括最大池化和平均池化;归一化层对输入数据进行归一化处理,使各个特征的分布相近,网络更容易训练;激活函数层用于给模型增加非线性因素,使得模型具有更强的拟合能力;全连接层一般位于卷积神经网络的最后部分,对输入特征进行非线性组合得到输出;输出层输出模型所需类型的结果,对图像分类问题,输出层使用softmax(归一化指数函数,在深度学习领域常用作输出层,得到指定类型的输出)等函数输出分类标签,对图像语义分割问题,输出层直接输出每个像素的分类结果,对人体特征点检测问题,输出层输出人体特征点。
人体特征点检测,即pose estimation,主要检测人体的一些特征点,如眼睛、鼻子、手肘、肩膀等,并将它们按照特征点顺序依次连接,通过特征点来描述人体信息。扩展开来,还可以描述人体的姿态、步态、行为等信息。人体特征点检测是计算机视觉的基础性算法之一,在计算机视觉的其他相关领域的研究中都起到了基础性的作用,如行为识别、智能构图等相关领域。目前,在通过人工智能技术对图像中的人体特征点进行检测时,一般采用主动形状模型的方案、卷积神经网络对人体特征点直接进行回归的方案、或通过人体特征点热图对人体特征点位置预测做辅助的方案等。但是,发明人经过长期发现,目前的人体特征点检测方案是直接给出人体特征点坐标或人体特征点热图,然而对于大部分人体特征点的检测方案来说,在某些情况下的人体特征点的检测结果并不尽如人意,如果完全信任人体特征点的检测结果而无任何过滤方法,可能会对接下来依赖人体特征点检测任务的其他任务造成错误而无法解决。
发明人经过长期的研究发现,并提出了本申请实施例提供的人体特征点的筛选方法、装置、电子设备以及存储介质,通过获取多个人体特征点的总体置信度和每个人体特征点的独立置信度,并基于总体置信度和独立置信度对多个人体特征点进行筛选,以对误检的人体特征点进行过滤,消除人体特征点预测不准对后续任务的影响。其中,具体的人体特征点的筛选方法在后续的实施例中进行详细的说明。
请参阅图1,图1示出了本申请一个实施例提供的人体特征点的筛选方法的流程示意图。所述人体特征点的筛选方法用于通过获取多个人体特征点的总体置信度和每个人体特征点的独立置信度,并基于总体置信度和独立置信度对多个人体特征点进行筛选,以对误检的人体特征点进行过滤,消除人体特征点预测不准对后续任务的影响。在具体的实施例中,所述人体特征点的筛选方法应用于如图8所示的人体特征点的筛选装置200以及配置有人体特征点的筛选装置200的电子设备100(图9)。下面将以电子设备为例,说明本实施例的具体流程,当然,可以理解的,本实施例所应用的电子设备可以为智能手机、平板电脑、穿戴式电子设备等,在此不做限定。下面将针对图1所示的流程进行 详细的阐述,所述人体特征点的筛选方法具体可以包括以下步骤:
步骤S110:获取待检测图像。
在本实施例中,可以获取待检测图像,其中,所获取的待检测图像中可以包括至少一个人体。在一些实施方式中,该待检测图像可以为通过电子设备的摄像头采集的预览图像、可以为通过电子设备的摄像头拍摄并存储在相册的照片、可以为从网络下载并存储在相册的图像等,在此不做限定。另外,在一些实施方式中,所获取的待检测图像可以为静态图像,也可以为动态图像,在此不做限定。
步骤S120:将所述待检测图像输入已训练的人体特征点检测模型。
在本实施例中,电子设备在获取到待检测图像后,可以将该待检测图像输入已训练的人体特征点检测模型,其中,该已训练的人体特征点检测模型是通过机器学习获得的,具体地,首先采集训练数据集,其中,训练数据集中的一类数据的属性或特征区别于另一类数据,然后通过将采集的训练数据集按照预设的算法对神经网络进行训练建模,从而基于该训练数据集总结出规律,得到已训练的人体特征点检测模型。
在一些实施方式中,该已训练的人体特征点检测模型可以预先训练完成后存储在电子设备的本地。基于此,电子设备在获取到待检测图像后,可以直接在本地调用该已训练的人体特征点检测模型,例如,可以直接发送指令至人体特征点检测模型,以指示该已训练的人体特征点检测模型在目标存储区域读取该待检测图像,或者,电子设备可以直接将该待检测图像输入存储在本地的已训练的人体特征点检测模型,从而有效避免由于网络因素的影响降低待检测图像输入已训练的人体特征点检测模型的速度,以提升已训练的人体特征点检测模型获取待检测图像的速度,提升用户体验。
在一些实施方式中,该已训练的人体特征点检测模型也可以预先训练完成后存储在与电子设备通信连接的服务器。基于此,电子设备在获取到待检测图像后,可以通过网络发送指令至存储在服务器的已训练的人体特征点检测模型,以指示该已训练的人体特征点检测模型通过网络读取电子设备获取的待检测图像,或者,电子设备可以通过网络将待检测图像发送至存储在服务器的已训练的人体特征点检测模型,从而通过将已训练的人体特征点检测模型存储在服务器的方式,减少对电子设备的存储空间的占用,降低对电子设备正常运行的影响。
步骤S130:获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度。
在本实施例中,已训练的人体特征点检测模型基于读取的待检测图像输出相应的信息,则电子设备可以获取该已训练的人体特征点检测模型输出的信息。可以理解的,若该已训练的人体特征点检测模型存储在电子设备的本地,则该电子设备可以直接获取该已训练的人体特征点检测模型输出的信息;若该已训练的人体特征点检测模型存储在与电子设备连接的服务器,则该电子设备可以通过网络从服务器获取该已训练的人体特征点检测模型输出的信息。
在一些实施方式中,已训练的人体特征点检测模型可以基于输入的待检测图像,输出该待检测图像中的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度。其中,多个第一人体特征点的总体置信度用于表征多个第一人体特征点预测的整体准确程度或可信程度,每个第一人体特征点的独立置信度用于表征每个第一人体特征点预测的准确程度或可信程度。
步骤S140:基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
在本实施例中,在获取多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度后,可以基于多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度,对多个第一人体特征点进行筛选,从而消除人体特征点预测不准时对后续任务的影响。
在一些实施方式中,在获取多个第一人体特征点的总体置信度后,可以基于多个第一人体特征点的总体置信度,对待检测图像进行筛选,例如,当多个第一人体特征点的总体置信度表征多个第一人体特征点不可信时,可以删除或过滤待检测图像并不再使用待检测图像参与后续的其他任务;当多个第一人体特征点的总体置信度表征多个第一人体特征点可信时,可以保留待检测图像并使用待检测图像继续参与后续的其他任务。
在一些实施方式中,在获取每个第一人体特征点的独立置信度后,可以基于每个第一人体特征点的独立置信度,对待检测图像中的多个第一人体特征点进行筛选,例如,当某个第一人体特征点的独立置信度表征该第一人体特征点不可信时,可以在待检测图像中删除或过滤该第一人体特征点并不再使用该第一人体特征点参与后续的其他任务;当某个第一人体特征点的独立置信度表征该第一人体特征点可信时,可以在待检测图像中保留该第一人体特征点并使用该第一人体特征点参与后续的其他任务。
本申请一个实施例提供的人体特征点的筛选方法,获取待检测图像,将待检测图像输入已训练的人体特征点检测模型,获取已训练的人体特征点检测模型输出的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度,基于多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度,对多个第一人体特征点进行筛选,从而通过获取多个人体特征点的总体置信度和每个人体特征点的独立置信度,并基于总体置信度和独立置信度对多个人体特征点进行筛选,以对误检的人体特征点进行过滤,消除人体特征点预测不准对后续任务的影响。
请参阅图2,图2示出了本申请又一个实施例提供的人体特征点的筛选方法的流程示意图。下面将针对图2所示的流程进行详细的阐述,所述人体特征点的筛选方法具体可以包括以下步骤:
步骤S210:获取待检测图像。
步骤S220:将所述待检测图像输入已训练的人体特征点检测模型。
步骤S230:获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度。
其中,步骤S210-步骤S230的具体描述请参阅步骤S110-步骤S130,在此不再赘述。
步骤S240:将所述多个第一人体特征点的总体置信度和第一置信度阈值进行比较,获得第一比较结果。
在一些实施方式中,电子设备可以预先设置并存储有第一置信度阈值,也可以在获取多个第一人体特征点的总体置信度后临时设置第一置信度阈值,其中,该第一置信度阈值用于作为多个人体特征点的总体置信度的判断依据,因此,在本实施例中,在获取多个第一人体特征点的总体置信度后,可以将多个第一人体特征点的总体置信度与第一置信度阈值进行比较,获得第一比较结果。其中,当第一比较结果表征多个第一人体特征点的总体置信度小于第一置信度阈值时,表征多个第一人体特征点的总体置信度不可信,当第一比较结果表征多个第一人体特征点的总体置信度不小于第一置信度阈值时,表征多个第一人体特征点的总体置信度可信。在一些实施方式中,第一置信度阈值可以设置为1。
步骤S250:当所述第一比较结果表征所述多个第一人体特征点的总体置信度小于所述第一置信度阈值时,删除所述待检测图像。
在一些实施方式中,当第一比较结果表征多个第一人体特征点的总体置信度小于第一置信度阈值时,可以确定多个第一人体特征点的总体置信度不可信,表征多个第一人体特征点中的大多数第一人体特征点均不可信,该待检测图像的第一人体特征点的检测误差较大,则可以删除该待检测图像,以避免对后续任务的影响。
步骤S260:当所述第一比较结果表征所述多个第一人体特征点的总体置信度不小于所述第一置信度阈值时,保留所述待检测图像。
在一些实施方式中,当第一比较结果表征多个第一人体特征点的总体置信度不小于第一置信度阈值时,可以确定多个第一人体特征点的总体置信度可信,表征多个第一人体特征点中的大多数第一人体特征点均可信,该待检测图像的第一人体特征点的检测误差较小,则可以保留该待检测图像,以为后续任务提供待检测图像的第一人体特征点。
步骤S270:将所述每个第一人体特征点的独立置信度分别和第二置信度阈值进行比较,获得第二比较结果。
在一些实施方式中,电子设备可以预先设置并存储有第二置信度阈值,也可以在获取多个第一人体特征点中的每个第一人体特征点的独立置信度后临时设置第二置信度阈值,其中,该第二置信度阈值用于作为每个人体特征点的独立置信度的判断依据,因此,在本实施例中,在获取多个第二人体特征点中的每个第二人体特征点的独立置信度后,可以将每个第二人体特征点的独立置信度分别与第二置信度阈值进行比较,获得第二比较结果。其中,当第二比较结果表征某个第一人体特征点的独立置信度小于第二置信度阈值时,表征该第一人体特征点的独立置信度不可信,当第二比较结果表征某个第一人体特征点的独立置信度不小于第二置信度阈值时,表征该第二人体特征点的独立置信度可信。在一些实施方式中,第二置信度阈值可以设置为1。
作为一种方式,可以在确定第一比较结果表征多个第一人体特征点的总体置信度不小于第一置信度阈值并保留待检测图像后,再将每个第一人体特征点的独立置信度分别和第二置信度阈值进行比较,获得第二比较结果,以达到减小第一人体特征点的比较次数的效果。作为另一种方式,可以在获得每个第一人体特征点的独立置信度后,就直接将每个第一人体特征点的独立置信度分别和第二置信度阈值进行比较,获得第二比较结果,以达到减小第一人体特征点的误判的效果。
步骤S280:基于所述第二比较结果,从所述多个第一人体特征点中删除独立置信度小于所述第二置信度阈值的第一人体特征点,并保留独立置信度不小于所述第二置信度阈值的第一人体特征点。
在本实施例中,在获得第二比较结果后,可以基于该第二比较结果,从多个第一人体特征点中删除独立置信度小于第二置信度阈值的第一人体特征点,并保留独立置信度不小于第二置信度的第一人体特征点。
在一些实施方式中,第二比较结果包括多个第一人体特征点中的每个第一人体特征点和第二置信度阈值的比较结果,即,第二比较结果包括每个第一人体特征点和第二置信度阈值之间的大小关系,可以理解的,第一人体特征点的独立置信度小于第二置信度阈值,则表征第一人体特征点不可信,第一人体特征点的独立置信度不小于第二置信度阈值,则表征第一人体特征点可信。因此,在本实施例中,可以从多个第一人体特征点中删除独立置信度小于第二置信度阈值的第一人体特征点,以及从多个第一人体特征点中保留独立置信度不小于第二置信度阈值的第一人体特征点。
例如,假设多个第一人体特征点包括人体特征点1、人体特征点2、人体特征点3、人体特征点4以及人体特征点5,当人体特征点1、人体特征点2、人体特征点3以及人体特征点4均不小于第二置信度阈值,且人体特征点5小于第二置信度阈值时,则可以从多个第一人体特征点中删除人体特征点5,并保留人体特征点1、人体特征点2、人体特征点3以及人体特征点4。
本申请又一个实施例提供的人体特征点的筛选方法,获取待检测图像,将待检测图像输入已训练的人体特征点检测模型,获取已训练的人体特征点检测模型输出的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度,将多个第一人体特征点的总体置信度和第一置信度阈值进行比较,获得第一比较结果,当第一比较结果表征多个第一人体特征点的总体置信度小于第一置信度阈值时,删除待检测图像,当第一比较结果表征多个第一人体特征点的总体置信度不小于第一置信度阈值时,保留待检测图像,将每个第一人体特征点的独立置 信度分别和第二置信度阈值进行比较,获得第二比较结果,基于第二比较结果,从多个第一人体特征点中删除独立置信度小于第二置信度阈值的第一人体特征点,并保留独立置信度不小于第二置信度阈值的第一人体特征点。相较于图1所示的人体特征点的筛选方法,本实施例还设置第一置信度阈值对总体置信度进行判定,以对待检测图像进行删除或保留,另外,本实施例还设置第二置信度阈值对独立置信度进行判定,以对每个第一人体特征点进行删除或保留,从而提升人体特征点筛选效果。
请参阅图3,图3示出了本申请再一个实施例提供的人体特征点的筛选方法的流程示意图。下面将针对图3所示的流程进行详细的阐述,其中,在本实施例中,待检测图像包括多个待检测区域,所述人体特征点的筛选方法具体可以包括以下步骤:
步骤S310:获取待检测图像。
步骤S320:将所述待检测图像输入已训练的人体特征点检测模型。
步骤S330:获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度。
步骤S340:基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
其中,步骤S310-步骤S340的具体描述请参阅步骤S110-步骤S140,在此不再赘述。
步骤S350:获取所述多个待检测区域中的每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合。
在本实施例中,待检测图像包括多个待检测区域,其中,多个待检测区域可以是对待检测图像进行均匀划分获得,也可以对待检测图像进行非均匀划分获得,另外,多个待检测区域的数量可以固定设置,也可以根据需求动态设置,在此不做限定。
在本实施例中,在获取多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度后,可以基于多个第一人体特征点和多个待检测区域,获取多个待检测区域中的每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合。在一些实施方式中,在获取多个第一人体特征点和多个待检测区域后,可以获取多个第一人体特征点中的每个第一人体特征点的坐标信息,以及获取多个待检测区域中的每个待检测区域所包含的坐标区域,基于每个第一人体特征点的坐标信息和每个待检测区域所包含的坐标区域,从多个第一人体特征点中获取位于每个待检测区域中的第一人体特征点,并将每个待检测区域所包含的第一人体特征点作为一个第一人体特征点集合,以获得多个第一人体特征点集合。
步骤S360:基于所述每个第一人体特征点的独立置信度,获取所述多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度。
在本实施例中,在获得多个第一人体特征点集合后,可以基于每个第一人体特征点的独立置信度,获取多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度。在一些实施方式中,在获取多个第一人体特征点集合后,可以获取每个第一人体特征点集合所包含的每个第一人体特征点的独立置信度,并基于每个第一人体特征点集合所包含的每个第一人体特征点的独立置信度,获得每个对应的第一人体特征点集合的集合置信度。例如,可以对每个第一人体特征点集合所包含的第一人体特征点的独立置信度求和或求平均值的方式,获得每个对应的第一人体特征点集合的集合置信度。
步骤S370:基于所述每个第一人体特征点集合的集合置信度,对所述多个待检测区域进行筛选。
在一些实施方式中,在获取多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度后,可以基于每个第一人体特征点集合的集合置信度,对多个待检测区域进行筛选,例如,当多个第一人体特征点集合中的某个第一人体特征点集合的集合置信度表征该某个第一人体特征点集合不可信时,可以删除或过滤人体特征点集合对应的待检测区域;当多个第 一人体特征点集合中的某个第一人体特征点集合的集合置信度表征该某个第一人体特征点集合可信时,可以保留人体特征点集合对应的待检测区域。
请参阅图4,图4示出了本申请的图3所示的人体特征点的筛选方法的步骤S370的流程示意图。下面将针对图4所示的流程进行详细的阐述,所述方法具体可以包括以下步骤:
步骤S371:将所述每个第一人体特征点集合的集合置信度分别和第三置信度阈值进行比较,获取第三比较结果。
在一些实施方式中,电子设备可以预先设置并存储有第三置信度阈值,也可以在获取每个第一人体特征点集合的集合置信度后临时设置第三置信度阈值,其中,该第三置信度阈值用于作为多个第一人体特征点集合的集合置信度中的每个第一人体特征点集合的集合置信度的判断依据,因此,在本实施例中,在获取多个第一人体特征点集合的集合置信度后,可以将每个第一人体特征点集合的集合置信度与第三置信度阈值进行比较,获得第三比较结果。其中,当第三比较结果表征某个第一人体特征点集合的集合置信度小于第三置信度阈值时,表征该第一人体特征点集合的集合置信度不可信,当第三比较结果表征某个第一人体特征点集合的集合置信度不小于第一置信度阈值时,表征该第一人体特征点集合的集合置信度可信。在一些实施方式中,第一置信度阈值可以设置为1。
步骤S372:基于所述第三比较结果,从所述多个第一人体特征点集合中删除集合置信度小于所述第三置信度阈值的第一人体特征点集合,并保留集合置信度不小于所述第三置信度阈值的第一人体特征点集合。
在本实施例中,在获得第三比较结果后,可以基于该第三比较结果,从多个第一人体特征点集合中删除集合置信度小于第三置信度阈值的第一人体特征点集合,并保留集合置信度不小于第三置信度的第一人体特征点集合。
在一些实施方式中,第三比较结果包括多个第一人体特征点集合中的第一人体特征点集合和第三置信度阈值的比较结果,即,第三比较结果包括每个第一人体特征点集合和第三置信度阈值之间的大小关系,可以理解的,第一人体特征点集合小于第三置信度阈值,则表征第一人体特征点集合不可信,第一人体特征点集合的集合置信度不小于第三置信度阈值,则表征第一人体特征点集合可信。因此,在本实施例中,可以从多个第一人体特征点集合中删除集合置信度小于第三置信度阈值的第一人体特征点集合,以及从多个第一人体特征点集合中保留集合置信度不小于第三置信度阈值的第一人体特征点集合。
本申请再一个实施例提供的人体特征点的筛选方法,获取待检测图像,将待检测图像输入已训练的人体特征点检测模型,获取已训练的人体特征点检测模型输出的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度,基于多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度,对多个第一人体特征点进行筛选,获取多个待检测区域中的每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合,基于每个人体特征点的独立置信度,获取多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度,基于每个第一人体特征点集合的集合置信度,对多个待检测区域进行筛选。相较于图1所示的人体特征点的筛选方法,本实施例还将待检测图像设置多个待检测区域,并基于每个第一人体特征点的独立置信度对多个待检测区域进行筛选,从而提升人体特征点的筛选效果。
请参阅图5,图5示出了本申请另一个实施例提供的人体特征点的筛选方法的流程示意图。下面将针对图5所示的流程进行详细的阐述,所述人体特征点的筛选方法具体可以包括以下步骤:
步骤S410:获取训练图像,以及所述训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息。
在本实施例中，可以获取训练图像，以及该训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息。其中，作为一种方式，该训练图像可以仅包括人体图像及其所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息的位置标注，而无需其他额外标注。
请参阅图6,图6示出了本申请的图5所示的人体特征点的筛选方法的步骤S410的流程示意图。下面将针对图6所示的流程进行详细的阐述,所述方法具体可以包括以下步骤:
步骤S411:获取多个待选择图像。
在一些实施方式中,可以获取多个待选择图像,其中,该多个待选择图像可以从采用公开数据集中获取,例如,可以从公开数据集WFLW、AFLW、300W等中获取。
步骤S412:将所述多个待选择图像输入已训练的人体检测模型。
在本实施例中,电子设备在获取到多个待选择图像后,可以将多个待选择图像输入已训练的人体检测模型,其中,该已训练的人体检测模型是通过机器学习获得的,具体地,首先采集训练数据集,其中,训练数据集中的一类数据的属性或特征区别于另一类数据,然后通过将采集的训练数据集按照预设的算法对神经网络进行训练建模,从而基于该训练数据集总结出规律,得到已训练的人体检测模型。
步骤S413:获取所述已训练的人体检测模型输出的多个待选择图像中的每个待选择图像的人体置信度。
在一些实施方式中，已训练的人体检测模型基于读取的多个待选择图像输出相应的信息，则电子设备可以获取该已训练的人体检测模型输出的信息。在一些实施方式中，已训练的人体检测模型可以基于读取的多个待选择图像，输出多个待选择图像中的每个待选择图像的人体置信度。其中，已训练的人体检测模型输出的人体置信度，即代表检测到的区域是人体的概率大小，取值可以从0~1，值越大，表示是人体的可能性越大，值越小，表示是人体的可能性越小。
步骤S414:从所述多个待选择图像中获取人体置信度小于第四置信度阈值的待选择图像,作为目标图像。
在一些实施方式中,电子设备可以预先设置并存储有第四置信度阈值,也可以在获取多个待选择图像的人体置信度后临时设置第四置信度阈值,其中,该第四置信度阈值用于作为待选择图像的人体置信度的判断依据。作为一种方式,该第四置信度阈值的设置标准可以为能够得到尽量多的非人体区域图像,所以可以将第四置信度阈值设置为0.1,将人体置信度小于第四置信度的待选择图像作为目标图像,那么能保证90%以上的图像都是不包含人体的图像,也就达到选取不包含人体的图像的目的。因此,在本实施例中,在获取多个待选择图像中获取人体置信度后,可以将多个待选择图像中获取人体置信度与第四置信度阈值进行比较,将比较结果为人体置信度小于第四置信度阈值的待选择图像作为目标图像。
步骤S415:将所述目标图像所包含的人体特征点的真实坐标信息置零,并将真实坐标信息置零后的目标图像作为所述训练图像。
在一些实施方式中,在从多个待选择图像中确定目标图像后,可以将目标图像中的人体特征点的真实坐标信息置零,并将真实坐标信息置零后的目标图像作为负样本,加入作为训练图像参与人体特征点的检测模型的训练。其中,通过负样本的设置,可以针对人体误检时,在任何情况下,人体检测框中检测到的人体特征点都是置信度趋近于0的情况,据此可以对人体检测结果进行二次过滤。
步骤S420:对所述训练图像进行人体特征点坐标回归分析,获取所述训练图像所包含的每个第二人体特征点的预测坐标信息。
在本实施例中,在获取训练图像后,可以将训练图像进行人体特征点的坐标回归分析,以获取训练图像所包含的每个第二特征点的预测坐标信息。在一些实施方式中,在获取训练图像后,还可以使用人体特征点热力图的方式,获取训练图像所包含的每个第二人体特征点的预测坐标信息。
步骤S430:基于所述每个第二人体特征点的真实坐标信息和所述每个第二人体特征点 的预测坐标信息,获取所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度。
在本实施例中,在训练图像所包含的每个第二人体特征点的真实坐标信息,以及训练图像所包含的每个第二人体特征点的预测坐标信息后,可以基于每个第二人体特征点的真实坐标信息和每个第二人体特征点的预测坐标信息,获取每个第二人体特征点的独立置信度和多个第二人体特征点的总体置信度。
请参阅图7,图7示出了本申请的图5所示的人体特征点的筛选方法的步骤S430的流程示意图。下面将针对图7所示的流程进行详细的阐述,所述方法具体可以包括以下步骤:
步骤S431:计算所述每个第二人体特征点的真实坐标信息和对应的预测坐标信息之间的欧式距离,获得多个距离向量。
在一些实施方式中,在训练图像所包含的每个第二人体特征点的真实坐标信息,以及训练图像所包含的每个第二人体特征点的预测坐标信息后,可以计算每个第二人体特征点的真实坐标信息和对应的预测坐标信息之间的欧式距离,得到的结果是一个N维向量,其中,N为第二人体特征点的数量,即获得多个距离向量。
步骤S432:基于所述多个距离向量和第一距离阈值的大小关系,获取所述每个第二人体特征点的独立置信度。
在一些实施方式中,电子设备可以预先设置并存储有第一距离阈值,也可以在获取多个距离向量时临时获取第一距离阈值,在此不做限定。其中,该第一距离阈值用于作为多个距离向量中的每个距离向量的判断依据,因此,在本实施例中,在获取多个距离向量后,可以将多个距离向量中的每个距离向量分别与第一距离阈值进行比较,获得多个距离向量中的每个距离向量和第一距离阈值之间的大小关系,并基于每个距离向量和第一距离阈值之间的大小关系,获取每个第二人体特征点的独立置信度。其中,每个第二人体特征点的独立置信度包括可信(表示为1)和不可信(表示为0)。
步骤S433：基于所述多个距离向量的距离向量和和第二距离阈值的大小关系，获取所述多个第二人体特征点的总体置信度。
在一些实施方式中,电子设备可以预先设置并存储有第二距离阈值,也可以在获取多个距离向量时临时获取第二距离阈值,在此不做限定。其中,该第二距离阈值用于作为多个距离向量的距离向量和的判断依据,因此,在本实施例中,在获取多个距离向量后,可以计算多个距离向量的距离向量和,可以将距离向量和与第二距离阈值进行比较,获得距离向量和和第二距离阈值之间的大小关系,并基于距离向量和和第二距离阈值之间的大小关系,获取多个第二人体特征点的总体置信度。其中,每个第二人体特征点的独立置信度包括可信(表示为1)和不可信(表示为0)。
在一些实施方式中，第一距离阈值和第二距离阈值随训练步数的增大而减小。其中，由于在人体特征点的检测模型的训练初期，人体特征点的预测结果较差，而在人体特征点的检测模型的训练后期，人体特征点的预测结果较好，因此，第一距离阈值与第二距离阈值在训练过程中需要进行调整，即，第一距离阈值和第二距离阈值随训练步数的增大而减小。
在训练前期,第一距离阈值与第二距离阈值较大,表示对人体特征点误检的宽容度较大,独立置信度的分支和总体置信度的分支从自监督学习构建的样本中能够进行正样本的学习,平衡正样本与负样本个数。否则,第一距离阈值与第二距离阈值太小会造成在训练前期用于自监督学习的独立置信度的分支和总体置信度的分支接收到的数据全部为负样本,无任何正样本释放,训练前期独立置信度的分支和总体置信度的分支学习具有完全的偏向性,独立置信度的分支和总体置信度的分支迅速进入某一错误方向的局部极小值中,导致学习结果很差,称此情况为负样本偏向性极大。
随着训练步数增加,第一距离阈值与第二距离阈值的值逐渐减小。随着训练过程的进行,人体特征点的检测模型对于人体特征点预测趋向于预测正确,仅在某些情况下预测结果不好。此时人体特征点误检宽容度应较小,因此要降低第一距离阈值与第二距离阈值,平衡负样本 的数量,否则会与上述描述的情况类似,造成正样本偏向性极大,独立置信度的分支和总体置信度的分支支迅速进入某一错误方向的极小值中,导致学习效果很差。
其中,第一距离阈值与第二距离阈值的初始值的设置与如何进行递减则要根据特定的人体检测模型进行调节,通常第一距离阈值和第二距离阈值的初始值的设置还应与该任务中的人体特征点个数有关,对于第一距离阈值与第二距离阈值的初始值与递减方式,属于模型框架训练的超参,若使用NAS进行搜索,可以纳入搜索项中。
步骤S440:将所述训练图像作为输入数据,将所述每个第二人体特征点的预测坐标信息、所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型。
在一些实施方式中,在获得训练图像、每个第二人体特征点的预测坐标信息、每个第二人体特征点的独立置信度以及多个第二人体特征点的总体置信度后,可以将训练图像作为输入数据,将每个第二人体特征点的预测坐标信息、每个第二人体特征点的独立置信度和多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型。作为一种方式,可以将训练图像作为输入数据,将每个第二人体特征点的预测坐标信息、每个第二人体特征点的独立置信度和多个第二人体特征点的总体置信度作为输出数据,通过tensorflow或者pytorch进行训练,获得已训练的人体特征点的检测模型。其中,人体特征点的检测模型的训练步数和学习率等设置可以根据已训练的人体检测器进行设置于调整。
步骤S450:获取待检测图像。
步骤S460:将所述待检测图像输入已训练的人体特征点检测模型。
步骤S470:获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度。
步骤S480:基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
其中,步骤S450-步骤S480的具体描述请参阅步骤S110-步骤S140,在此不再赘述。
本申请另一个实施例提供的人体特征点的筛选方法,获取训练图像,以及训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息,对训练图像进行人体特征点坐标回归分析,获取训练图像所包含的每个第二人体特征点的预测坐标信息,基于每个第二人体特征点的真实坐标信息和每个第二人体特征点的预测坐标信息,获取每个第二人体特征点的独立置信度和多个第二人体特征点的总体置信度,将训练图像作为输入数据,将每个第二人体特征点的预测坐标信息、每个第二人体特征点的独立置信度以及多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型,获取待检测图像,将待检测图像输入已训练的人体特征点检测模型,获取已训练的人体特征点检测模型输出的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度,基于多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度,对多个第一人体特征点进行筛选。相较于图1所示的人体特征点的筛选方法,本实施例还通过训练图像、每个第二人体特征点的预测坐标信息、每个第二人体特征点的独立置信度以及多个第二人体特征点的总体置信度进行训练,获得已训练的人体特征点的检测模型,从而提升获取的人体特征点的准确性。
请参阅图8,图8示出了本申请实施例提供的人体特征点的筛选装置的模块框图。下面将针对图8所示的框图进行阐述,所述人体特征点的筛选装置200包括:待检测图像获取模块210、待检测图像输入模块220、置信度输出模块230以及特征点筛选模块240,其中:
待检测图像获取模块210,用于获取待检测图像。
待检测图像输入模块220,用于将所述待检测图像输入已训练的人体特征点检测模型。
置信度输出模块230,用于获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度。
特征点筛选模块240,用于基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
进一步地,所述特征点筛选模块240包括:第一比较结果获得子模块、待检测图像删除子模块以及待检测图像保留子模块,其中:
第一比较结果获得子模块,用于将所述多个第一人体特征点的总体置信度和第一置信度阈值进行比较,获得第一比较结果。
待检测图像删除子模块,用于当所述第一比较结果表征所述多个第一人体特征点的总体置信度小于所述第一置信度阈值时,删除所述待检测图像。
待检测图像保留子模块,用于当所述第一比较结果表征所述多个第一人体特征点的总体置信度不小于所述第一置信度阈值时,保留所述待检测图像。
进一步地,所述特征点筛选模块240还包括:第二比较结果获得子模块和人体特征点保留子模块,其中:
第二比较结果获得子模块,用于将所述每个第一人体特征点的独立置信度分别和第二置信度阈值进行比较,获得第二比较结果。
人体特征点保留子模块,用于基于所述第二比较结果,从所述多个第一人体特征点中删除独立置信度小于所述第二置信度阈值的第一人体特征点,并保留独立置信度不小于所述第二置信度阈值的第一人体特征点。
进一步地,所述待检测图像包括多个待检测区域,所述人体特征点筛选装置200还包括:特征点集合获得模块、集合置信度获取模块以及待检测区域筛选模块,其中:
特征点集合获得模块,用于获取所述多个待检测区域中的每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合。
集合置信度获取模块,用于基于所述每个第一人体特征点的独立置信度,获取所述多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度。
待检测区域筛选模块,用于基于所述每个第一人体特征点集合的集合置信度,对所述多个待检测区域进行筛选。
进一步地,所述待检测区域筛选模块包括:第三比较结果获取子模块和人体特征点集合筛选子模块,其中:
第三比较结果获取子模块,用于将所述每个第一人体特征点集合的集合置信度分别和第三置信度阈值进行比较,获取第三比较结果。
人体特征点集合筛选子模块,用于基于所述第三比较结果,从所述多个第一人体特征点集合中删除集合置信度小于所述第三置信度阈值的第一人体特征点集合,并保留集合置信度不小于所述第三置信度阈值的第一人体特征点集合。
进一步地,所述人体特征点筛选装置200还包括:训练图像获取模块、预测坐标信息获取模块、置信度获取模块以及人体特征点检测模型获得模块,其中:
训练图像获取模块,用于获取训练图像,以及所述训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息;
进一步地,所述训练图像获取模块包括:待选择图像获取子模块、待选择图像输入子模块、人体置信度获取子模块、目标图像获取子模块以及训练图像获取子模块,其中:
待选择图像获取子模块,用于获取多个待选择图像。
待选择图像输入子模块,用于将所述多个待选择图像输入已训练的人体检测模型。
人体置信度获取子模块,用于获取所述已训练的人体检测模型输出的多个待选择图像中的每个待选择图像的人体置信度。
目标图像获取子模块,用于从所述多个待选择图像中获取人体置信度小于第四置信度阈值的待选择图像,作为目标图像。
训练图像获取子模块,用于将所述目标图像所包含的人体特征点的真实坐标信息置零,并将真实坐标信息置零后的目标图像作为所述训练图像。
预测坐标信息获取模块,用于对所述训练图像进行人体特征点坐标回归分析,获取所述训练图像所包含的每个第二人体特征点的预测坐标信息。
置信度获取模块,用于基于所述每个第二人体特征点的真实坐标信息和所述每个第二人体特征点的预测坐标信息,获取所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度。
进一步地,所述置信度获取模块包括:距离向量获得子模块、独立置信度获取子模块以及总体置信度获取子模块,其中:
距离向量获得子模块,用于计算所述每个第二人体特征点的真实坐标信息和对应的预测坐标信息之间的欧式距离,获得多个距离向量。
独立置信度获取子模块,用于基于所述多个距离向量和第一距离阈值的大小关系,获取所述每个第二人体特征点的独立置信度。
总体置信度获取子模块，用于基于所述多个距离向量的距离向量和和第二距离阈值的大小关系，获取所述多个第二人体特征点的总体置信度。
人体特征点检测模型获得模块,用于将所述训练图像作为输入数据,将所述每个第二人体特征点的预测坐标信息、所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,模块相互之间的耦合可以是电性,机械或其它形式的耦合。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
请参阅图9,其示出了本申请实施例提供的一种电子设备100的结构框图。该电子设备100可以是智能手机、平板电脑、电子书等能够运行应用程序的电子设备。本申请中的电子设备100可以包括一个或多个如下部件:处理器110、存储器120以及一个或多个应用程序,其中一个或多个应用程序可以被存储在存储器120中并被配置为由一个或多个处理器110执行,一个或多个程序配置用于执行如前述方法实施例所描述的方法。
其中,处理器110可以包括一个或者多个处理核。处理器110利用各种接口和线路连接整个电子设备100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行电子设备100的各种功能和处理数据。可选地,处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图形处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责待显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块通信芯片进行实现。
存储器120可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。存储器120可用于存储指令、程序、代码、代码集 或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可以存储电子设备100在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
请参阅图10,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质300中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质300可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质300包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质300具有执行上述方法中的任何方法步骤的程序代码310的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码310可以例如以适当形式进行压缩。
综上所述,本申请实施例提供的人体特征点的筛选方法、装置、电子设备以及存储介质,获取待检测图像,将待检测图像输入已训练的人体特征点检测模型,获取已训练的人体特征点检测模型输出的多个第一人体特征点、多个第一人体特征点的总体置信度以及多个第一人体特征点中的每个第一人体特征点的独立置信度,基于多个第一人体特征点的总体置信度和每个第一人体特征点的独立置信度,对多个第一人体特征点进行筛选,从而通过获取多个人体特征点的总体置信度和每个人体特征点的独立置信度,并基于总体置信度和独立置信度对多个人体特征点进行筛选,以对误检的人体特征点进行过滤,消除人体特征点预测不准对后续任务的影响。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。
Claims (20)
- 一种人体特征点的筛选方法,其特征在于,所述方法包括:获取待检测图像;将所述待检测图像输入已训练的人体特征点检测模型;获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度;基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
- 根据权利要求1所述的方法,其特征在于,所述基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选,包括:将所述多个第一人体特征点的总体置信度和第一置信度阈值进行比较,获得第一比较结果;当所述第一比较结果表征所述多个第一人体特征点的总体置信度小于所述第一置信度阈值时,删除所述待检测图像;当所述第一比较结果表征所述多个第一人体特征点的总体置信度不小于所述第一置信度阈值时,保留所述待检测图像。
- 根据权利要求2所述的方法,其特征在于,所述第一置信度阈值包括1。
- 根据权利要求2或3所述的方法,其特征在于,所述当所述第一比较结果表征所述多个第一人体特征点的总体置信度不小于所述第一置信度阈值时,保留所述待检测图像之后,还包括:将所述每个第一人体特征点的独立置信度分别和第二置信度阈值进行比较,获得第二比较结果;基于所述第二比较结果,从所述多个第一人体特征点中删除独立置信度小于所述第二置信度阈值的第一人体特征点,并保留独立置信度不小于所述第二置信度阈值的第一人体特征点。
- 根据权利要求4所述的方法,其特征在于,所述第二置信度阈值包括1。
- 根据权利要求1-5任一项所述的方法,其特征在于,所述待检测图像包括多个待检测区域,在获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度之后,所述方法还包括:获取所述多个待检测区域中的每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合;基于所述每个第一人体特征点的独立置信度,获取所述多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度;基于所述每个第一人体特征点集合的集合置信度,对所述多个待检测区域进行筛选。
- 根据权利要求6所述的方法,其特征在于,所述基于所述每个第一人体特征点集合的集合置信度,对所述多个待检测区域进行筛选,包括:将所述每个第一人体特征点集合的集合置信度分别和第三置信度阈值进行比较,获取第三比较结果;基于所述第三比较结果,从所述多个第一人体特征点集合中删除集合置信度小于所述第三置信度阈值的第一人体特征点集合,并保留集合置信度不小于所述第三置信度阈值的第一人体特征点集合。
- 根据权利要求6或7所述的方法,其特征在于,所述获取所述多个待检测区域中的 每个待检测区域所包含的第一人体特征点,获得多个第一人体特征点集合,包括:获取所述多个第一人体特征点中的每个第一人体特征点的坐标信息,并获取所述多个待检测区域中的每个待检测区域所包含的坐标区域;基于所述每个第一人体特征点的坐标信息和所述每个待检测区域所包含的坐标区域,从所述多个第一人体特征点中获取位于所述每个待检测区域中的第一人体特征点,并将所述每个待检测区域所包含的第一人体特征点作为一个第一人体特征点集合,获得所述多个第一人体特征点集合。
- 根据权利要求6-8任一项所述的方法,其特征在于,所述基于所述每个第一人体特征点的独立置信度,获取所述多个第一人体特征点集合中的每个第一人体特征点集合的集合置信度,包括:对所述每个第一人体特征点集合中所包含的第一人体特征点的独立置信度求和或求平均,获得所述每个第一人体特征点集合的集合置信度。
- 根据权利要求6-9任一项所述的方法,其特征在于,所述多个待检测区域由所述待检测图像进行均匀划分获得,或者,多个待检测区域由所述待检测图像进行非均匀划分获得。
- 根据权利要求1-10任一项所述的方法,其特征在于,在所述获取待检测图像之前,还包括:获取训练图像,以及所述训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息;对所述训练图像进行人体特征点坐标回归分析,获取所述训练图像所包含的每个第二人体特征点的预测坐标信息;基于所述每个第二人体特征点的真实坐标信息和所述每个第二人体特征点的预测坐标信息,获取所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度;将所述训练图像作为输入数据,将所述每个第二人体特征点的预测坐标信息、所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型。
- 根据权利要求11所述的方法,其特征在于,所述基于所述每个第二人体特征点的真实坐标信息和所述每个第二人体特征点的预测坐标信息,获取所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度,包括:计算所述每个第二人体特征点的真实坐标信息和对应的预测坐标信息之间的欧式距离,获得多个距离向量;基于所述多个距离向量和第一距离阈值的大小关系,获取所述每个第二人体特征点的独立置信度;基于所述多个距离向量的距离向量和和第二距离阈值的大小关系,获取所述多个第二人体特点的总体置信度。
- 根据权利要求12所述的方法,其特征在于,所述第一距离阈值和所述第二距离阈值随训练步数增加而减小。
- 根据权利要求11-13任一项所述的方法,其特征在于,所述获取训练图像,包括:获取多个待选择图像;将所述多个待选择图像输入已训练的人体检测模型;获取所述已训练的人体检测模型输出的多个待选择图像中的每个待选择图像的人体置信度;从所述多个待选择图像中获取人体置信度小于第四置信度阈值的待选择图像,作为目标图像;将所述目标图像所包含的人体特征点的真实坐标信息置零,并将真实坐标信息置零后 的目标图像作为所述训练图像。
- 根据权利要求11-14任一项所述的方法,其特征在于,在获取训练图像之后,还包括:对所述训练图像进行人体特征热力图处理,获取所述训练图像所包含的每个第二人体特征点的预测坐标信息。
- 根据权利要求11-15任一项所述的方法,其特征在于,所述机器学习算法包括tensorflow算法或者pytorch算法。
- 一种人体特征点的筛选装置,其特征在于,所述装置包括:待检测图像获取模块,用于获取待检测图像;待检测图像输入模块,用于将所述待检测图像输入已训练的人体特征点检测模型;置信度输出模块,用于获取所述已训练的人体特征点检测模型输出的多个第一人体特征点、所述多个第一人体特征点的总体置信度以及所述多个第一人体特征点中的每个第一人体特征点的独立置信度;特征点筛选模块,用于基于所述多个第一人体特征点的总体置信度和所述每个第一人体特征点的独立置信度,对所述多个第一人体特征点进行筛选。
- 根据权利要求17所述的装置,其特征在于,所述装置还包括:训练图像获取模块,用于获取训练图像,以及所述训练图像所包含的多个第二人体特征点中的每个第二人体特征点的真实坐标信息;预测坐标信息获取模块,用于对所述训练图像进行人体特征点坐标回归分析,获取所述训练图像所包含的每个第二人体特征点的预测坐标信息;置信度获取模块,用于基于所述每个第二人体特征点的真实坐标信息和所述每个第二人体特征点的预测坐标信息,获取所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度;人体特征点检测模块获得模块,用于将所述训练图像作为输入数据,将所述每个第二人体特征点的预测坐标信息、所述每个第二人体特征点的独立置信度和所述多个第二人体特征点的总体置信度作为输出数据,通过机器学习算法进行训练,获得已训练的人体特征点检测模型。
- 一种电子设备,其特征在于,包括存储器和处理器,所述存储器耦接到所述处理器,所述存储器存储指令,当所述指令由所述处理器执行时所述处理器执行如权利要求1-16任一项所述的方法。
- 一种计算机可读取存储介质,其特征在于,所述计算机可读取存储介质中存储有程序代码,所述程序代码可被处理器调用执行如权利要求1-16任一项所述的方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010808012.2 | 2020-08-12 | ||
CN202010808012.2A CN111814749A (zh) | 2020-08-12 | 2020-08-12 | 人体特征点的筛选方法、装置、电子设备以及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022033264A1 true WO2022033264A1 (zh) | 2022-02-17 |
Family
ID=72859290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/106337 WO2022033264A1 (zh) | 2020-08-12 | 2021-07-14 | 人体特征点的筛选方法、装置、电子设备以及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111814749A (zh) |
WO (1) | WO2022033264A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116665295A (zh) * | 2023-04-07 | 2023-08-29 | 奥视纵横(北京)科技有限公司 | 一种基于数字孪生的生产培训系统 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814749A (zh) * | 2020-08-12 | 2020-10-23 | Oppo广东移动通信有限公司 | 人体特征点的筛选方法、装置、电子设备以及存储介质 |
CN112613382B (zh) * | 2020-12-17 | 2024-04-30 | 浙江大华技术股份有限公司 | 对象完整性的确定方法及装置、存储介质、电子装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8254647B1 (en) * | 2012-04-16 | 2012-08-28 | Google Inc. | Facial image quality assessment |
CN106295567A (zh) * | 2016-08-10 | 2017-01-04 | 腾讯科技(深圳)有限公司 | 一种关键点的定位方法及终端 |
CN107808147A (zh) * | 2017-11-17 | 2018-03-16 | 厦门美图之家科技有限公司 | 一种基于实时人脸点跟踪的人脸置信度判别方法 |
CN110348370A (zh) * | 2019-07-09 | 2019-10-18 | 北京猫眼视觉科技有限公司 | 一种人体动作识别的增强现实系统及方法 |
CN111062239A (zh) * | 2019-10-15 | 2020-04-24 | 平安科技(深圳)有限公司 | 人体目标检测方法、装置、计算机设备及存储介质 |
CN111814749A (zh) * | 2020-08-12 | 2020-10-23 | Oppo广东移动通信有限公司 | 人体特征点的筛选方法、装置、电子设备以及存储介质 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046787A (zh) * | 2019-12-10 | 2020-04-21 | 华侨大学 | 一种基于改进YOLO v3模型的行人检测方法 |
- 2020-08-12: CN application CN202010808012.2A filed (published as CN111814749A); status: Pending
- 2021-07-14: PCT application PCT/CN2021/106337 filed (published as WO2022033264A1); status: Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8254647B1 (en) * | 2012-04-16 | 2012-08-28 | Google Inc. | Facial image quality assessment |
CN106295567A (zh) * | 2016-08-10 | 2017-01-04 | 腾讯科技(深圳)有限公司 | 一种关键点的定位方法及终端 |
CN107808147A (zh) * | 2017-11-17 | 2018-03-16 | 厦门美图之家科技有限公司 | 一种基于实时人脸点跟踪的人脸置信度判别方法 |
CN110348370A (zh) * | 2019-07-09 | 2019-10-18 | 北京猫眼视觉科技有限公司 | 一种人体动作识别的增强现实系统及方法 |
CN111062239A (zh) * | 2019-10-15 | 2020-04-24 | 平安科技(深圳)有限公司 | 人体目标检测方法、装置、计算机设备及存储介质 |
CN111814749A (zh) * | 2020-08-12 | 2020-10-23 | Oppo广东移动通信有限公司 | 人体特征点的筛选方法、装置、电子设备以及存储介质 |
Non-Patent Citations (1)
Title |
---|
CHENG XIANGHAO, FEIPENG DA, XING DENG: "Coarse-to-fine 3D facial landmark localization based on keypoints", CHINESE JOURNAL OF SCIENTIFIC INSTRUMENT, vol. 39, no. 10, 15 October 2018 (2018-10-15), pages 256 - 264, XP055900271, DOI: 10.19650/j.cnki.cjsi.J1702963 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116665295A (zh) * | 2023-04-07 | 2023-08-29 | 奥视纵横(北京)科技有限公司 | 一种基于数字孪生的生产培训系统 |
CN116665295B (zh) * | 2023-04-07 | 2024-01-02 | 奥视纵横(北京)科技有限公司 | 一种基于数字孪生的生产培训系统 |
Also Published As
Publication number | Publication date |
---|---|
CN111814749A (zh) | 2020-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021169723A1 (zh) | 图像识别方法、装置、电子设备及存储介质 | |
WO2022033264A1 (zh) | 人体特征点的筛选方法、装置、电子设备以及存储介质 | |
WO2022134337A1 (zh) | 人脸遮挡检测方法、系统、设备及存储介质 | |
WO2021008328A1 (zh) | 图像处理方法、装置、终端及存储介质 | |
WO2022033150A1 (zh) | 图像识别方法、装置、电子设备及存储介质 | |
US11062123B2 (en) | Method, terminal, and storage medium for tracking facial critical area | |
WO2019232862A1 (zh) | 嘴巴模型训练方法、嘴巴识别方法、装置、设备及介质 | |
WO2018103608A1 (zh) | 一种文字检测方法、装置及存储介质 | |
AU2021201933B2 (en) | Hierarchical multiclass exposure defects classification in images | |
WO2020199477A1 (zh) | 基于多模型融合的图像标注方法、装置、计算机设备及存储介质 | |
KR20210110823A (ko) | 이미지 인식 방법, 인식 모델의 트레이닝 방법 및 관련 장치, 기기 | |
JP6309549B2 (ja) | 変形可能な表現検出器 | |
US8873840B2 (en) | Reducing false detection rate using local pattern based post-filter | |
US20220084304A1 (en) | Method and electronic device for image processing | |
WO2022205937A1 (zh) | 特征信息提取方法、模型训练方法、装置及电子设备 | |
JP7260674B2 (ja) | ニューラルネットワークに基づくc/d比決定方法、コンピュータ機器及び記憶媒体 | |
WO2022082999A1 (zh) | 一种物体识别方法、装置、终端设备及存储介质 | |
WO2021129466A1 (zh) | 检测水印的方法、装置、终端及存储介质 | |
US20200272897A1 (en) | Learning device, learning method, and recording medium | |
WO2018082308A1 (zh) | 一种图像处理方法及终端 | |
US20210342593A1 (en) | Method and apparatus for detecting target in video, computing device, and storage medium | |
CN111666905B (zh) | 模型训练方法、行人属性识别方法和相关装置 | |
WO2023284182A1 (en) | Training method for recognizing moving target, method and device for recognizing moving target | |
WO2021179856A1 (zh) | 内容识别方法、装置、电子设备及存储介质 | |
WO2021189770A1 (zh) | 基于人工智能的图像增强处理方法、装置、设备及介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21855317 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21855317 Country of ref document: EP Kind code of ref document: A1 |