CN111274852B - Target object key point detection method and device - Google Patents
- Publication number: CN111274852B
- Application number: CN201811480075.9A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The application provides a method and device for detecting key points of a target object. The method includes: determining the predicted position of a target key point in an image to be detected according to the positions of the target key point of the target object in the N frames preceding the image to be detected, where N is greater than 1; determining a target area corresponding to the target object in the image to be detected according to the predicted position of the target key point; and performing key point detection on the target area. Because the position of the target key point in the image to be detected is predicted from its positions in the preceding frames, and the target area corresponding to the target object is then derived from that predicted position, the step of locating the target object with a separate detector is eliminated, which greatly increases the speed of key point detection.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a method and apparatus for detecting key points of a target object.
Background
With the development of computer technology, human body key points can be detected from an image containing a human body. When performing human body key point detection, the position of the human body is first found by a standalone human body detector, the human body is then cropped out of the background, and key point detection is performed on the cropped region.
In the related art, the position of the human body can be detected in one of three ways: (1) calling the human body detector on every frame of image; (2) calling the human body detector once every several frames; (3) detecting the first frame with the human body detector, and locating the human body in each subsequent frame with a tracking algorithm.
However, a standalone human body detector is computationally expensive, so calling it on every frame severely slows down key point detection. Frame-skipping detection (calling the detector once every n frames) reduces the average detection time only to a limited degree, and when the human body moves significantly within those n frames, the detected position becomes inaccurate, which in turn makes the human body key point detection inaccurate. Tracking algorithms are insensitive to small changes in the position and size of the tracked object, giving low key point detection precision, and their accumulated error prevents continuous long-term tracking.
Therefore, the human body key point detection methods in the related art suffer from both low detection speed and low detection precision.
Disclosure of Invention
The application provides a target object key point detection method and device, which are used for solving the problems of low detection speed and low detection precision of target object key point detection methods in the related art.
In one aspect, an embodiment of the present application provides a method for detecting a key point of a target object, including:
determining the predicted position of a target key point of a target object in the image to be detected according to the position of the target key point of the target object in the previous N frames of images of the image to be detected; wherein N is more than 1;
determining a target area corresponding to the target object from the image to be detected according to the predicted position of the target key point;
and detecting key points of the target area.
According to the target object key point detection method, the position of the target key point in the image to be detected is predicted according to the position of the target key point of the target object in the previous multi-frame image of the image to be detected, and then the target area of the target object in the image to be detected is determined according to the predicted position, so that a link of detecting the position of the target object by using a detector is omitted, the detection speed of the key point is greatly improved, the detection accuracy of the position of the target object is improved, and the detection accuracy of the key point is further improved.
As one possible implementation of an embodiment of an aspect of the application,
the determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of images of the image to be detected comprises:
determining the moving speed information of the target key point according to the position of the target key point in the previous N frames of images; wherein the moving speed information comprises a moving speed and/or a moving acceleration;
and determining the predicted position of the target key point in the image to be detected according to the moving speed information of the target key point and the position of the target key point in the image of the previous frame of the image to be detected.
As a possible implementation manner of an embodiment of an aspect of the present application, before determining, according to a position of a target key point of a target object in a previous N frames of images of an image to be detected, a predicted position of the target key point in the image to be detected, the method further includes:
determining the identification accuracy of each key point of the target object in the previous N frames of images;
and screening the target key points of the target object from the key points according to the identification accuracy of the key points.
As a possible implementation manner of an embodiment of an aspect of the present application, the determining accuracy of identifying each key point of the target object in the previous N frames of images includes:
acquiring confidence degrees of key points of the target object in the previous N frames of images; carrying out weighted summation on the confidence coefficient of the same key point in the previous N frames of images to obtain the identification accuracy of the corresponding key point;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the relative positions of the key points according to the positions of the key points in the same frame of image; determining the recognition accuracy of each key point according to the degree of difference between the relative positions of each key point and the standard relative positions of each key point of the target object;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the change of the moving speed information of each key point according to the positions of the same key point in different frame images; determining the recognition accuracy of each key point according to whether the change of the movement speed information of each key point accords with a movement continuity rule or not; wherein the movement speed information comprises a movement speed and/or a movement acceleration.
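The confidence-based screening strategy above can be sketched as follows. This is a minimal illustration, not the patented implementation: the weighting scheme (favouring more recent frames) and the accuracy threshold are assumptions.

```python
def keypoint_accuracy(confidences, weights=None):
    """Recognition accuracy of one key point as a weighted sum of its
    confidence scores over the previous N frames. The weighting scheme,
    which favours more recent frames, is an illustrative assumption."""
    if weights is None:
        weights = list(range(1, len(confidences) + 1))  # later frames weigh more
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, confidences)) / total

def screen_target_keypoints(accuracies, threshold=0.5):
    """Keep the indices of key points whose recognition accuracy passes a
    threshold; these become the target key points used for prediction."""
    return [i for i, acc in enumerate(accuracies) if acc >= threshold]
```

For example, a key point whose confidence stays high over the previous frames is retained as a target key point, while a point that is frequently occluded (low confidence) is screened out.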
As a possible implementation manner of an embodiment of an aspect of the present application, the determining, according to the predicted position of the target key point, a target area corresponding to the target object from the image to be detected includes:
according to the predicted position of the target key point and the standard relative position of each key point of the target object, determining the predicted position of each key point of the target object in the image to be detected;
and determining a target area corresponding to the target object in the image to be detected according to the predicted positions of the key points of the target object in the image to be detected.
As a possible implementation manner of an embodiment of an aspect of the present application, after performing key point detection on the target area, the method further includes:
obtaining a detection result obtained by detecting key points of the target area, wherein the detection result comprises the positions and/or the confidence degrees of all the key points in the target area;
checking whether the target area contains the target object according to the detection result;
and if the target area contains the target object, determining the position of each key point of the target object in the image to be detected according to the position of each key point in the target area.
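The last step, mapping the key-point positions found inside the target area back into the full image, can be sketched as follows. The helper is hypothetical and assumes the target area was cropped from the image without rescaling.

```python
def region_to_image_coords(region_origin, region_keypoints):
    """Map key-point positions detected inside the target area back to
    coordinates in the full image to be detected, by offsetting each
    point by the area's top-left corner (assumes no rescaling)."""
    ox, oy = region_origin
    return [(ox + x, oy + y) for (x, y) in region_keypoints]
```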
As a possible implementation manner of an embodiment of an aspect of the present application, the verifying, according to the detection result, whether the target area includes the target object includes:
determining the relative positions of the key points in the target area according to the positions of the key points in the target area;
and verifying whether the target object is contained in the target area according to the degree of difference between the relative positions of the key points in the target area and the standard relative positions of the key points of the target object and the confidence degree of the key points in the target area.
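A sketch of this verification, in which the key-point layout is compared as offsets normalised by the figure's extent; the deviation and confidence thresholds, and the normalisation itself, are illustrative assumptions rather than the patented formula.

```python
import math

def verify_region(keypoints, confidences, standard_rel,
                  max_deviation=0.2, min_confidence=0.5):
    """Check whether the target area actually contains the target object.
    `standard_rel` holds the standard relative positions of the key points
    (offsets from the first key point, normalised by the figure's extent).
    Returns True when the layout deviation is small and confidence is high.
    Assumes at least two key points; thresholds are assumptions."""
    ref = keypoints[0]
    scale = max(math.dist(ref, p) for p in keypoints[1:]) or 1.0
    rel = [((x - ref[0]) / scale, (y - ref[1]) / scale) for x, y in keypoints]
    deviation = sum(math.dist(r, s) for r, s in zip(rel, standard_rel)) / len(rel)
    mean_conf = sum(confidences) / len(confidences)
    return deviation <= max_deviation and mean_conf >= min_confidence
```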
As a possible implementation manner of the embodiment of an aspect of the present application, the method further includes:
if the target area does not contain the target object, re-determining a target area corresponding to the target object directly from the image to be detected itself;
and detecting key points of a target area determined according to the image to be detected to obtain the positions of all key points of the target object in the image to be detected.
Another embodiment of the present application provides a target object key point detection apparatus, including:
the first determining module is used for determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of the image to be detected; wherein N is more than 1;
the second determining module is used for determining a target area corresponding to the target object from the image to be detected according to the predicted position of the target key point;
and the detection module is used for detecting the key points of the target area.
As a possible implementation manner of the embodiment of another aspect of the present application, the first determining module may include:
a first determining unit, configured to determine movement speed information of the target key point according to the position of the target key point in the previous N-frame image; wherein the moving speed information comprises a moving speed and/or a moving acceleration;
and a second determining unit, configured to determine the predicted position of the target key point in the image to be detected according to the moving speed information of the target key point and the position of the target key point in the image of the previous frame of the image to be detected.
As a possible implementation manner of the embodiment of another aspect of the present application, the apparatus further includes:
the third determining module is used for determining the identification accuracy of each key point of the target object in the previous N frames of images;
and the screening module is used for screening the target key points of the target object from the key points according to the identification accuracy of the key points.
As a possible implementation manner of the embodiment of another aspect of the present application, the third determining module is specifically configured to:
acquiring confidence degrees of key points of the target object in the previous N frames of images; carrying out weighted summation on the confidence coefficient of the same key point in the previous N frames of images to obtain the identification accuracy of the corresponding key point;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the relative positions of the key points according to the positions of the key points in the same frame of image; determining the recognition accuracy of each key point according to the degree of difference between the relative positions of each key point and the standard relative positions of each key point of the target object;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the change of the moving speed information of each key point according to the positions of the same key point in different frame images; determining the recognition accuracy of each key point according to whether the change of the movement speed information of each key point accords with a movement continuity rule or not; wherein the movement speed information comprises a movement speed and/or a movement acceleration.
As a possible implementation manner of the embodiment of another aspect of the present application, the second determining module is specifically configured to:
according to the predicted position of the target key point and the standard relative position of each key point of the target object, determining the predicted position of each key point of the target object in the image to be detected;
and determining a target area corresponding to the target object in the image to be detected according to the predicted positions of the key points of the target object in the image to be detected.
As a possible implementation manner of the embodiment of another aspect of the present application, the apparatus may further include:
the acquisition module is used for acquiring a detection result obtained by detecting the key points of the target area, wherein the detection result comprises the positions and/or the confidence degrees of the key points in the target area;
the verification module is used for verifying whether the target area contains the target object or not according to the detection result;
and the fourth determining module is used for determining the positions of the key points of the target object in the image to be detected according to the positions of the key points in the target area when the target object is contained in the target area.
As a possible implementation manner of the embodiment of another aspect of the present application, the verification module is specifically configured to:
determining the relative positions of the key points in the target area according to the positions of the key points in the target area;
and verifying whether the target object is contained in the target area according to the degree of difference between the relative positions of the key points in the target area and the standard relative positions of the key points of the target object and the confidence degree of the key points in the target area.
As a possible implementation manner of the embodiment of another aspect of the present application, the apparatus may further include:
a fifth determining module, configured to determine, according to the image to be detected, a target area corresponding to the target object from the image to be detected when the target area does not include the target object;
and the detection module is also used for detecting key points of the target area determined according to the image to be detected, and obtaining the positions of all key points of the target object in the image to be detected.
In another aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the target object key point detection method according to the embodiment of the aspect.
Another embodiment of the present application proposes a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the target object key point detection method according to the embodiment of the above aspect.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flow chart of a target object key point detection method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another method for detecting key points of a target object according to an embodiment of the present application;
FIG. 3 is a flowchart of another method for detecting key points of a target object according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for detecting key points of a target object according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a target object key point detection device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another target object key point detection device according to an embodiment of the present application;
fig. 7 shows a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The following describes a target object key point detection method and device according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a target object key point detection method according to an embodiment of the present application.
The target object key point detection method provided by the embodiment of the application can be configured in an electronic device to determine the target area corresponding to the target object in the image to be detected according to the positions of the target key points of the target object in the previous N frames of images. The step of locating the target object with a detector is thereby omitted, which increases the speed of key point detection, improves the accuracy of the detected target object position, and in turn improves the accuracy of the key point detection.
In this embodiment, the electronic device may be a device having an operating system, such as a personal computer, a server, or the like.
As shown in fig. 1, the target object key point detection method includes:
step 101, determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of the image to be detected.
Wherein N is an integer greater than 1.
In this embodiment, the key point detection may be performed on the target object in each frame of image in the video, and then the image to be detected may be one frame of image in the video. Wherein the target object may be a human, an animal, etc.
A video is a sequence of frames, and when detecting key points of target objects in a video, each frame can be processed in turn as the object of detection. Specifically, frames are processed in playback order: after the current frame has been detected, the next frame becomes the image to be detected.
In this embodiment, the target key points may be part of the key points of the target object, or may be all the key points of the target object.
The motion of a target object is continuous; for example, human motion is continuous, so the positions of human body key points change little between adjacent frames. The embodiment of the application therefore predicts the position of the target key point in the image to be detected from its positions in the previous N frames.
Specifically, the position of the target key point in each frame of the previous N frames of images of the image to be detected may be acquired first, and then the position of the target key point in the image to be detected may be predicted by using the position of the target key point in the previous N frames of images, where the predicted position is referred to as a predicted position.
For example, the predicted position of the target key point of the target object in the image to be detected is determined using the positions of the target key points of the target object in the previous 2 frames of the image to be detected.
In determining the predicted position of the target key point in the image to be detected, the motion information of the target key point may be used for prediction, and the following embodiments will be described in detail.
And 102, determining a target area corresponding to the target object from the image to be detected according to the predicted position of the target key point.
In this embodiment, the target area may refer to an area where the target object in the image is located. For example, when the target object is a person, then the target area is the area of the image where the person is located.
After the predicted position of the target key point in the image to be detected is determined, the target area can be determined from the image to be detected according to the predicted position of the target key point. For example, the human body region in the image to be detected may be predicted from the predicted positions of key points at the nose, elbow, shoulder, knee joints of the person.
As an implementation manner, the predicted position of each key point of the target object in the image to be detected can be determined according to the predicted position of the target key point and the standard relative position of each key point of the target object; and determining a target area corresponding to the target object in the image to be detected according to the predicted positions of the key points of the target object in the image to be detected. For example, according to the predicted position of the target human body key point in the image to be detected and the standard relative position of each human body key point, the position of each human body key point in the image to be detected can be predicted, so as to obtain the distribution range of each human body key point, and then the area where the predicted position of each human body key point is located forms the target area corresponding to the target object in the image to be detected.
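The derivation of the target area from the predicted key-point positions can be sketched as an expanded bounding box over those positions. The margin value is an assumption, not specified by the patent; it keeps limbs near the edge from being cut off.

```python
def target_area_from_keypoints(predicted_points, margin=0.1):
    """Bounding region of the target object computed from the predicted
    positions of all its key points, expanded by a relative margin
    (the margin value is an illustrative assumption).
    Returns (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in predicted_points]
    ys = [p[1] for p in predicted_points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```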
As another possible implementation manner, the target area of the image to be detected may be determined according to the predicted position of the target key point in the image to be detected and the relative position of the target key point and the edge of the target object. For example, the area in which the person's forehead is located may be determined based on the predicted position of the person's forehead and the relative position of the forehead and the head edge.
In the related art, a detector detects only the overall position of the target object, whereas the present method uses the positions of multiple key points of the target object. Predicting the position of the target object from the target key points in the previous N frames of images therefore achieves higher detection precision.
And 103, detecting key points of the target area.
In this embodiment, the target area may be input into the keypoint identification model for keypoint detection. The key point identification model can be obtained by training a neural network and is used for detecting each key point of a target object in an image.
Alternatively, the positions of the key points of the target area in the image to be detected may be determined using the positions of those key points in the previous frame or frames of images.
Taking the target area as a human body area as an example, determining the human body area in the image to be detected according to the position of the target human body key point, and then detecting the key point of the human body area.
It should be noted that, in this embodiment, when the image to be detected is the first or second frame, so that fewer than N preceding frames are available, a conventional method may be used to detect the key points of the target object in that image and obtain the positions of each key point of the target object.
In the embodiment of the application, the position of the target key point in the image to be detected is predicted according to the position of the target key point of the target object in the previous N (N > 1) frame image, and then the target area corresponding to the target object in the image to be detected is determined according to the predicted position, and then the key point detection is carried out on the target area, so that the link of detecting the position of the target object by a detector is omitted, the detection speed of the key point is greatly improved, and the detection precision of the key point is improved.
In addition, in the related art, if the target object is a human body and multiple people appear in the image to be detected, the human body positions returned by the detector cannot easily be matched to the key points of the individual bodies in the previous frame of image. In the embodiment of the application, because each human body's position is predicted from that body's own key points in the previous N frames of images, the positions determined in a multi-person scene are inherently matched to the corresponding key points in the previous frame.
In one embodiment of the present application, when determining the predicted position of the target key point in the image to be detected, the position of the target key point in the previous N frames of images may be used to obtain the motion information of the target key point, and then the position of the target key point in the image to be detected is predicted according to the motion information. Fig. 2 is a schematic flow chart of another method for detecting key points of a target object according to an embodiment of the present application.
As shown in fig. 2, determining the predicted position of the target key point in the image to be detected according to the positions of the target key point in the previous N frames of the image to be detected includes:
step 201, determining moving speed information of a target key point according to the position of the target key point in the previous N frames of images; wherein the movement speed information comprises a movement speed and/or a movement acceleration.
In this embodiment, in the first N frames of images of the image to be detected, the moving speed of the target key point may be determined according to the position of the same target key point in the two adjacent frames of images, and then the moving acceleration of the target key point may be determined according to the position of the same key point in the three adjacent frames of images.
Specifically, for each target key point, the moving distance of the key point between every two adjacent frames is computed from its positions in the previous N frames of images. The moving speed between two adjacent frames is then obtained from this distance and the frame rate (the number of frames displayed per second of video): since the inter-frame interval is the inverse of the frame rate, the speed equals the moving distance divided by that interval. Finally, the average of the speeds computed for every two adjacent frames is taken as the moving speed of the target key point.
Then, the moving acceleration of the target key point can be determined according to the moving speeds of the target key point calculated for adjacent pairs of frames.
For example, when N=2, the moving speed of the target key point may be determined according to the positions of the target key point in the previous 2 frames of images of the image to be detected. When N=3, assuming that the previous 3 frames of images of the image to be detected are A, B and C respectively, the moving speed x1 of the target key point can be determined according to the positions of the target key point in the A image and the B image, the moving speed x2 can be determined according to its positions in the B image and the C image, and the moving acceleration of the target key point can then be determined from x1 and x2.
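The speed and acceleration estimation described above can be sketched as follows; this is a minimal illustration of the described computation, and the helper name and uniform-motion example are assumptions, not part of the patent:

```python
import numpy as np

def motion_info(positions, fps):
    """Estimate the moving speed and moving acceleration of one key point
    from its positions in N consecutive frames (hypothetical helper).

    positions: list of (x, y) coordinates, one per frame, oldest first.
    fps: frame rate; the inter-frame time interval is 1/fps.
    """
    pts = np.asarray(positions, dtype=float)
    dt = 1.0 / fps
    # Moving speed between each pair of adjacent frames:
    # displacement divided by the inter-frame interval.
    velocities = (pts[1:] - pts[:-1]) / dt          # shape (N-1, 2)
    speed = velocities.mean(axis=0)                 # averaged moving speed
    # Acceleration needs at least 3 frames (two speed samples).
    accel = (velocities[1:] - velocities[:-1]) / dt if len(velocities) > 1 else None
    return speed, accel

# N = 3 frames A, B, C at 25 fps: the point moves 2 px right per frame,
# so the speed is ~50 px/s and the acceleration is ~0 (uniform motion).
spd, acc = motion_info([(0, 0), (2, 0), (4, 0)], fps=25)
```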
Step 202, determining a predicted position of the target key point in the image to be detected according to the moving speed information of the target key point and the position of the target key point in the image of the previous frame of the image to be detected.
In this embodiment, the predicted position of the target key point in the image to be detected may be determined according to the position of the target key point in the previous frame of the image to be detected and the moving speed of the target key point. Specifically, the product of the speed of the target key point and the time interval between the previous frame image and the image to be detected can be calculated, and then the sum of the product result and the coordinates of the target key point in the previous frame image is calculated, namely the coordinates of the predicted position of the target key point in the image to be detected.
Or calculating the moving speed according to the moving acceleration of the target key point, and further determining the predicted position of the target key point in the image to be detected according to the moving speed and the position of the target key point in the previous frame of image.
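The extrapolation in steps above (product of speed and time interval, added to the previous-frame coordinates, optionally after updating the speed from the acceleration) can be sketched as follows; the function name and parameter layout are illustrative assumptions:

```python
def predict_position(prev_pos, speed, dt, accel=None):
    """Predict a key point's position in the image to be detected from its
    position in the previous frame (a sketch of the described computation).

    prev_pos: (x, y) of the target key point in the previous frame.
    speed:    (vx, vy) moving speed in px/s.
    dt:       time interval between the previous frame and the image to detect.
    accel:    optional (ax, ay); if given, the moving speed is first
              updated from the moving acceleration.
    """
    vx, vy = speed
    if accel is not None:
        # Calculate the moving speed from the moving acceleration first.
        vx += accel[0] * dt
        vy += accel[1] * dt
    x, y = prev_pos
    # Predicted coordinates = previous coordinates + speed * time interval.
    return (x + vx * dt, y + vy * dt)

# Point at (100, 40) moving 50 px/s rightward, next frame 0.04 s later.
print(predict_position((100, 40), (50, 0), 0.04))   # → (102.0, 40.0)
```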
If the target object is a human body, after the predicted position of the target human body key point in the image to be detected is determined, the human body region in the image to be detected can be determined according to the predicted position, and then the key point detection is performed on the human body region.
In the embodiment of the application, the moving speed information of the target key point is determined according to the position of the target key point in the previous N frames of images, and then the position of the target key point in the image to be detected is predicted according to the moving speed information and the position of the target key point in the previous frame of images.
Further, in order to improve the detection accuracy of the target area in the image to be detected, before determining the predicted position of the target key point, each key point of the target object may be screened according to the identification accuracy, so as to obtain a reliable key point as the target key point, and further, according to the position of the target key point in the previous N frames of images, determine the predicted position of the target key point in the image to be detected. Fig. 3 is a schematic flow chart of another target object key point detection method according to an embodiment of the present application.
As shown in fig. 3, before determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of the image to be detected, the target object key point detection method further includes:
step 301, determining the recognition accuracy of each key point of the target object in the previous N frames of images.
In this embodiment, the recognition accuracy of each key point may be determined according to the confidence of each key point of the target object in the previous N frames of images, the relative positions between the key points, the motion continuity rule, and the like. The recognition accuracy indicates how accurately the position of a key point has been detected.
When the key point detection is carried out, the image can be detected by adopting a key point identification model, and the key point identification model can output the position and the confidence of each key point in the image. Wherein the confidence level is used to indicate the confidence level of the detected location of the keypoint.
As one possible implementation manner, the confidence coefficient of each key point of the target object obtained by performing key point detection on the previous N frames of images by using a key point identification model is obtained, and then the confidence coefficient of the same key point in the previous N frames of images is weighted and summed to obtain the identification accuracy of the corresponding key point. Thus, the recognition accuracy of each key point of the target object can be obtained. The weight value of the same key point in each frame of image in the previous N frames of images can be determined according to actual needs.
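The weighted summation of per-frame confidences can be sketched as follows; the uniform default weighting is an assumed choice, since the patent leaves the weight values to actual needs:

```python
def recognition_accuracy(confidences, weights=None):
    """Weighted sum of one key point's confidence over the previous N
    frames, used as its recognition accuracy (illustrative helper).

    confidences: confidence of the same key point in each of the N frames.
    weights: per-frame weights summing to 1; uniform by default (assumption).
    """
    n = len(confidences)
    if weights is None:
        weights = [1.0 / n] * n   # uniform weighting as a default
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(c * w for c, w in zip(confidences, weights))

# Confidence of the same key point in the previous 3 frames.
print(recognition_accuracy([0.9, 0.8, 0.7]))   # ≈ 0.8
```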
Taking the target object as a human body as an example, the relative positions of the joints of the human body do not change greatly; for example, the key points at the two shoulders remain approximately a shoulder-width apart, and the relative position of the forehead and the nose is basically unchanged.
Thus, as another possible implementation, the recognition accuracy of each key point may also be determined according to the relative positions between the key points. Specifically, the positions of the key points obtained by performing key point detection on the previous N frames of images with the key point identification model are acquired. Then, for each frame in the previous N frames of images, the relative positions of the key points in that frame are determined according to the positions of the key points in the same frame, and the degree of difference between these relative positions and the standard relative positions of the key points of the target object is calculated. For each key point, the recognition accuracy is determined according to the N degrees of difference obtained from the previous N frames of images. The standard relative positions can be obtained from the structure of the target object; for example, the standard relative positions of the key points of the human body can be obtained from the human body structure.
For example, the recognition accuracy of the human body key point can be determined according to the degree of difference between the relative positions of the same human body key point and the adjacent key points in the previous N frames of images and the standard relative positions, and the mapping relation between the degree of difference and the recognition accuracy. Wherein, the greater the degree of difference, the lower the corresponding recognition accuracy.
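One way to realize this mapping is sketched below. The linear mapping from difference degree to accuracy is an illustrative assumption; the patent only requires that a greater degree of difference give a lower recognition accuracy, and the key point names and thresholds are hypothetical:

```python
import math

def relative_position_accuracy(positions_per_frame, standard, pair, max_diff):
    """Score one key point by how far its distance to an adjacent key point
    deviates from the standard relative position, averaged over N frames,
    then mapped linearly to an accuracy in [0, 1] (assumed mapping).

    positions_per_frame: per frame, a dict {keypoint_name: (x, y)}.
    standard: standard distance between the two key points in `pair`.
    pair: (name_a, name_b), e.g. ("left_shoulder", "right_shoulder").
    max_diff: difference at or beyond which the accuracy falls to 0.
    """
    a, b = pair
    diffs = []
    for frame in positions_per_frame:
        (xa, ya), (xb, yb) = frame[a], frame[b]
        dist = math.hypot(xa - xb, ya - yb)
        diffs.append(abs(dist - standard))    # degree of difference per frame
    mean_diff = sum(diffs) / len(diffs)
    # Greater difference => lower recognition accuracy.
    return max(0.0, 1.0 - mean_diff / max_diff)

frames = [{"left_shoulder": (0, 0), "right_shoulder": (40, 0)},
          {"left_shoulder": (1, 0), "right_shoulder": (43, 0)}]
# Standard shoulder distance 40 px; observed 40 and 42 → mean diff 1.0.
print(relative_position_accuracy(frames, 40.0,
                                 ("left_shoulder", "right_shoulder"), 20.0))  # → 0.95
```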
Since the motion of the target object has continuity, the speed or acceleration of the key point of the target object determined according to the adjacent frames of images should also be continuous, and if a sudden increase or decrease occurs, the detected key point position may be considered inaccurate.
Then, as another possible implementation manner, the moving speed information, such as moving speed and/or moving acceleration, of each key point may be determined according to the positions of the same key point in different frame images, so as to determine the moving speed information change, such as acceleration change and/or speed change, of each key point, and determine the recognition accuracy according to whether the moving speed information change of each key point accords with the motion continuity rule. If the change of the moving speed information accords with the motion continuity rule, the identification accuracy is high. Otherwise, the identification accuracy of the key points is low.
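A minimal sketch of this continuity check follows; reducing the speed to a scalar, the binary high/low output, and the jump threshold are all illustrative simplifications of the rule described above:

```python
def continuity_accuracy(speeds, jump_threshold):
    """Rate a key point's recognition accuracy by whether its moving speed
    changes continuously between adjacent frame pairs (illustrative).

    speeds: scalar moving speed of the key point for each adjacent frame pair.
    jump_threshold: largest speed change still considered continuous (assumed).
    """
    for prev, cur in zip(speeds, speeds[1:]):
        if abs(cur - prev) > jump_threshold:
            # A sudden increase or decrease violates motion continuity,
            # so the detected position is considered unreliable.
            return "low"
    return "high"

print(continuity_accuracy([50, 52, 51], jump_threshold=10))    # → high
print(continuity_accuracy([50, 52, 200], jump_threshold=10))   # → low
```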
It should be noted that the recognition accuracy of each key point of the target object may also be determined by combining any two of the above three methods, or all three of them.
In the embodiment of the application, the identification accuracy is determined from the confidence of each key point, the relative positions between the key points, the motion continuity rule, and the like, which makes the determined identification accuracy considerably more reliable.
And step 302, screening target key points of the target object from the key points according to the identification accuracy of the key points.
In this embodiment, the key points may be screened according to the recognition accuracy of each key point, and the key points with the recognition accuracy exceeding the preset threshold may be used as target key points, so as to determine the target area in the image to be detected by using the screened target key points, and then detect the key points in the target area.
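The screening step can be sketched in a few lines; the 0.8 default threshold is an assumed value, as the patent only specifies "exceeding a preset threshold":

```python
def screen_target_keypoints(accuracies, threshold=0.8):
    """Keep only the key points whose recognition accuracy exceeds a preset
    threshold; these become the target key points (threshold is assumed).

    accuracies: dict {keypoint_name: recognition_accuracy}.
    """
    return [name for name, acc in accuracies.items() if acc > threshold]

acc = {"nose": 0.95, "left_knee": 0.55, "forehead": 0.9}
print(screen_target_keypoints(acc))   # → ['nose', 'forehead']
```

The screened-out key point (here the low-accuracy knee) is simply not used when predicting the target area in the image to be detected.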
Taking a target object as a human body as an example, the human body region in the image to be detected can be determined by utilizing the screened target key points, and then the key point detection is carried out on the human body region.
In the embodiment of the application, the identification accuracy of each key point is determined, and the reliable key point is screened out by using the identification accuracy as the target key point, so that the target area in the image to be detected is determined by using the reliable key point, the detection accuracy of the target area in the image to be detected is greatly improved, and the detection accuracy of the key point is further improved.
In order to avoid the situation that the determined target area does not contain the target object and the detection is inaccurate, after the key point detection is performed on the target area, whether the target area contains the target object can be checked. Fig. 4 is a schematic flow chart of another method for detecting key points of a target object according to an embodiment of the present application.
As shown in fig. 4, after the target area is subjected to the keypoint detection, the target object keypoint detection method further includes:
step 401, obtaining a detection result obtained by performing key point detection on the target area, wherein the detection result comprises the position and/or the confidence coefficient of each key point in the target area.
In this embodiment, when the target area in the image to be detected is detected by the key point identification model, the key point identification model may output the position and the confidence of the key point, so that the confidence of the key point in the target area and the position of each key point in the target area may be obtained.
Step 402, checking whether the target area contains the target object according to the detection result. If the target object is included, go to step 403; if the target object is not included, step 404 is performed.
In this embodiment, the relative positions of the key points in the target area may be determined according to the positions of the key points in the target area, and then, the target area is checked to determine whether the target object is included by using the degree of difference between the relative positions of the key points and the standard relative positions of the key points of the target object and the confidence of the key points in the target area.
For example, if the target object is a human body, the above method may be used to check whether the target area contains a human body.
Specifically, if the key points in the target area whose degree of difference between their relative positions and the standard relative positions is within a preset range, and whose confidence exceeds a preset threshold, account for more than a preset proportion of the key points, the target area may be considered to contain the target object.
For example, if for more than 90% of the human body key points in the target area the difference between their relative positions and the standard relative positions is within the preset range, and the confidence of each of these key points is greater than the preset threshold, the target area may be considered to contain a human body.
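This combined check can be sketched as follows; for brevity a single scalar distance stands in for the full relative-position comparison, and the 0.9 proportion and thresholds are the example values assumed above:

```python
def contains_target(keypoints, standard, diff_range, conf_threshold, ratio=0.9):
    """Check whether the target area contains the target object: accept the
    area when more than `ratio` of the key points both (a) deviate from the
    standard relative position by no more than `diff_range` and (b) have
    confidence above `conf_threshold` (simplified sketch).

    keypoints: list of (relative_distance, confidence) per detected key point.
    standard:  standard relative distance for the corresponding key points.
    """
    ok = sum(1 for dist, conf in keypoints
             if abs(dist - standard) <= diff_range and conf > conf_threshold)
    return ok / len(keypoints) >= ratio

pts = [(40, 0.92), (41, 0.95), (39, 0.90), (40, 0.91), (70, 0.30)]
# Only 4 of 5 key points pass → 0.8 < 0.9, so the area is rejected.
print(contains_target(pts, standard=40, diff_range=3, conf_threshold=0.8))  # → False
```

When the check fails, the flow falls through to step 404: the detector re-determines the target area from the full image to be detected.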
Step 403, if the target area contains the target object, determining the position of each key point of the target object in the image to be detected according to the position of each key point in the target area.
If the target area contains the target object, the determined target area in the image to be detected is accurate, and then the position of each key point obtained by detecting the target area can be used as the position of the key point in the image to be detected.
Step 404, if the target area does not contain the target object, determining a target area corresponding to the target object from the image to be detected according to the image to be detected.
If the target area does not contain the target object, the determined target area in the image to be detected is inaccurate, and the image to be detected can be detected by using the detector to determine the target area.
For example, when the target object is a human body and the target area does not include a human body, the detector may be used to detect the image to be detected, and determine the human body area in the image to be detected.
And step 405, performing key point detection on the target area determined according to the image to be detected to obtain the positions of all key points of the target object in the image to be detected.
After the target area is determined by the detector, the target area is cropped out of the image to be detected and input into the key point identification model, and the key point identification model performs key point detection on the target area to detect the positions of the key points of the target object in the target area.
In the above embodiment, the positions and the confidence degrees of the key points in the target area are used to check whether the target area contains the target object. It will be appreciated that one of the location and confidence of each keypoint in the target region may also be used to verify whether the target region contains the target object.
For example, it is determined whether the proportion of key points whose degree of difference between their relative positions in the target area and the standard relative positions is within the preset range exceeds a preset proportion; if so, the target area may be considered to contain the target object.
For another example, it is determined whether the proportion of key points in the target area whose confidence exceeds the preset threshold is greater than a preset proportion; if so, the target area may be considered to contain the target object.
In the embodiment of the application, after the key point detection is carried out on the target area, whether the target area contains the target object is checked by utilizing the position and/or the confidence coefficient of the detected key point in the target area, so that the accuracy of the key point detection is further improved.
In order to achieve the above embodiment, the embodiment of the present application further provides a target object key point detection device. Fig. 5 is a schematic structural diagram of a target object key point detection device according to an embodiment of the present application.
As shown in fig. 5, the target object key point detection apparatus includes: a first determination module 510, a second determination module 520, a detection module 530.
A first determining module 510, configured to determine a predicted position of a target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of images of the image to be detected; wherein N is greater than 1;
the second determining module 520 is configured to determine, from the image to be detected, a target area corresponding to the target object according to the predicted position of the target key point;
And the detection module 530 is configured to detect a key point of the target area.
Fig. 6 is a schematic structural diagram of another target object key point detection device according to an embodiment of the present application.
In one possible implementation manner of the embodiment of the present application, as shown in fig. 6, the first determining module 510 may include:
a first determining unit 511 for determining movement speed information of the target key point according to the position of the target key point in the previous N-frame image; wherein the moving speed information comprises a moving speed and/or a moving acceleration;
the second determining unit 512 is configured to determine a predicted position of the target key point in the image to be detected according to the moving speed information of the target key point and the position of the target key point in the image of the previous frame of the image to be detected.
In one possible implementation manner of the embodiment of the present application, the apparatus further includes:
the third determining module is used for determining the recognition accuracy of each key point of the target object in the previous N frames of images;
and the screening module is used for screening the target key points of the target object from the key points according to the identification accuracy of the key points.
In one possible implementation manner of the embodiment of the present application, the third determining module is specifically configured to:
Acquiring the confidence coefficient of each key point of a target object in the previous N frames of images; carrying out weighted summation on the confidence coefficient of the same key point in the previous N frames of images to obtain the identification accuracy of the corresponding key point;
and/or, acquiring the positions of key points of the target object in the previous N frames of images; determining the relative positions of the key points according to the positions of the key points in the same frame of image; determining the identification accuracy of each key point according to the degree of difference between the relative position of each key point and the standard relative position of each key point of the target object;
and/or, acquiring the positions of key points of the target object in the previous N frames of images; determining the change of the moving speed information of each key point according to the positions of the same key point in different frame images; determining the identification accuracy of each key point according to whether the change of the movement speed information of each key point accords with the motion continuity rule; wherein the movement speed information comprises a movement speed and/or a movement acceleration.
In one possible implementation manner of the embodiment of the present application, the second determining module 520 is specifically configured to:
determining the predicted position of each key point of the target object in the image to be detected according to the predicted position of the target key point and the standard relative position of each key point of the target object;
And determining a target area corresponding to the target object in the image to be detected according to the predicted positions of the key points of the target object in the image to be detected.
In one possible implementation manner of the embodiment of the present application, the apparatus may further include:
the acquisition module is used for acquiring a detection result obtained by detecting the key points of the target area, wherein the detection result comprises the positions and/or the confidence degrees of the key points in the target area;
the verification module is used for verifying whether the target area contains a target object according to the detection result;
and the fourth determining module is used for determining the positions of the key points of the target object in the image to be detected according to the positions of the key points in the target area when the target object is contained in the target area.
In one possible implementation manner of the embodiment of the present application, the above verification module is specifically configured to:
determining the relative positions of the key points in the target area according to the positions of the key points in the target area;
and checking whether the target area contains the target object according to the degree of difference between the relative positions of the key points in the target area and the standard relative positions of the key points of the target object and the confidence degree of the key points in the target area.
In one possible implementation manner of the embodiment of the present application, the apparatus may further include:
a fifth determining module, configured to determine, according to the image to be detected, a target area corresponding to the target object from the image to be detected when the target area does not include the target object;
the detection module 530 is further configured to perform key point detection on the target area determined according to the image to be detected, so as to obtain positions of each key point of the target object in the image to be detected.
It should be noted that the foregoing explanation of the embodiment of the target object key point detection method is also applicable to the target object key point detection device of this embodiment, so that the description thereof is omitted herein.
In order to achieve the above embodiments, an embodiment of the present application further provides an electronic device, including a processor and a memory;
the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the target object key point detection method described in the above embodiment.
Fig. 7 shows a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the application. The electronic device 12 shown in fig. 7 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 7, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a memory 28, and a bus 18 that connects the various system components, including the memory 28 and the processor 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in fig. 7, commonly referred to as a "hard disk drive"). Although not shown in fig. 7, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network; hereinafter: LAN), a wide area network (Wide Area Network; hereinafter: WAN) and/or a public network, such as the Internet, via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 16 executes various functional applications and data processing, such as implementing the methods mentioned in the previous embodiments, by running programs stored in the memory 28.
In order to achieve the above embodiments, the embodiments of the present application also provide a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the target object key point detection method as described in the above embodiments.
In the description of this specification, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.
Claims (11)
1. A method for detecting a key point of a target object, the method comprising the steps of:
determining the predicted position of a target key point in an image to be detected according to the position of the target key point of a target object in the previous N frames of images of the image to be detected; wherein N is greater than 1;
predicting a target area corresponding to the target object from the image to be detected according to the predicted position of the target key point in the image to be detected;
and detecting key points of the target area.
2. The method for detecting a target object key point according to claim 1, wherein determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of the image to be detected comprises:
determining the moving speed information of the target key point according to the position of the target key point in the previous N frames of images; wherein the moving speed information comprises a moving speed and/or a moving acceleration;
and determining the predicted position of the target key point in the image to be detected according to the moving speed information of the target key point and the position of the target key point in the image of the previous frame of the image to be detected.
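One possible reading of claim 2's velocity-based prediction is sketched below. This is illustrative only: the helper name, the (N, 2) array layout, and the constant-acceleration extrapolation term are assumptions, not the patent's exact formulation.

```python
import numpy as np

def predict_keypoint(positions):
    """Predict a key point's position in the frame to be detected from its
    positions in the previous N frames (N > 1), using the moving speed and,
    when N >= 3, the moving acceleration.

    `positions` is an (N, 2) array ordered oldest to newest.
    """
    positions = np.asarray(positions, dtype=float)
    velocity = positions[-1] - positions[-2]            # displacement per frame
    if len(positions) >= 3:
        prev_velocity = positions[-2] - positions[-3]
        acceleration = velocity - prev_velocity         # change in velocity
    else:
        acceleration = np.zeros(2)
    # constant-acceleration extrapolation from the previous frame's position
    return positions[-1] + velocity + 0.5 * acceleration
```

With positions (0, 0), (1, 0), (2, 0) the velocity is constant, so the predicted position is (3, 0).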
3. The method for detecting a target object key point according to claim 1, wherein before determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of images of the image to be detected, the method further comprises:
determining the identification accuracy of each key point of the target object in the previous N frames of images;
and screening the target key points of the target object from the key points according to the identification accuracy of the key points.
4. The method for detecting key points of a target object according to claim 3, wherein determining the recognition accuracy of each key point of the target object in the previous N-frame image comprises:
acquiring confidence degrees of key points of the target object in the previous N frames of images; carrying out weighted summation on the confidence degrees of the same key point in the previous N frames of images to obtain the identification accuracy of the corresponding key point;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the relative positions of the key points according to the positions of the key points in the same frame of image; determining the recognition accuracy of each key point according to the degree of difference between the relative positions of each key point and the standard relative positions of each key point of the target object;
and/or, acquiring the positions of all key points of the target object in the previous N frames of images; determining the change of the moving speed information of each key point according to the positions of the same key point in different frame images; determining the recognition accuracy of each key point according to whether the change of the moving speed information of each key point accords with a movement continuity rule; wherein the moving speed information comprises a moving speed and/or a moving acceleration.
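The first branch of claim 4 (weighted confidence summation), combined with the screening of claim 3, might look like the following sketch; the equal-weight default and the 0.7 threshold are illustrative assumptions, not values from the patent.

```python
def recognition_accuracy(confidences, weights=None):
    """Weighted sum of one key point's confidence over the previous N frames.
    Equal weights by default; weighting recent frames more heavily is an
    equally valid choice."""
    n = len(confidences)
    if weights is None:
        weights = [1.0 / n] * n
    return sum(c * w for c, w in zip(confidences, weights))

def screen_target_keypoints(per_keypoint_confidences, threshold=0.7):
    """Keep the indices of key points whose recognition accuracy passes a
    threshold; these become the target key points used for prediction."""
    return [i for i, confs in enumerate(per_keypoint_confidences)
            if recognition_accuracy(confs) >= threshold]
```

A key point seen with confidences 0.9 and 0.9 passes the 0.7 screen, while one seen with 0.3 and 0.4 is dropped.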
5. The method for detecting a target object key point according to claim 1, wherein predicting a target area corresponding to the target object from the image to be detected according to a predicted position of the target key point in the image to be detected includes:
according to the predicted position of the target key point and the standard relative position of each key point of the target object, determining the predicted position of each key point of the target object in the image to be detected;
and determining a target area corresponding to the target object in the image to be detected according to the predicted positions of the key points of the target object in the image to be detected.
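Claim 5 places the remaining key points via the standard relative layout and then takes a region around them. A minimal sketch, assuming an axis-aligned bounding box with a fractional margin (the margin value and helper names are illustrative):

```python
def predict_all_keypoints(target_pred, target_idx, standard_relative):
    """Shift the standard relative layout so that its target key point lands
    on the predicted position, yielding a predicted position for every key
    point of the target object."""
    tx, ty = target_pred
    sx, sy = standard_relative[target_idx]
    return [(tx + x - sx, ty + y - sy) for x, y in standard_relative]

def target_region(predicted_points, margin=0.1):
    """Axis-aligned bounding box around the predicted key points, expanded
    by a fractional margin so the whole object is likely covered."""
    xs = [p[0] for p in predicted_points]
    ys = [p[1] for p in predicted_points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```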
6. The method for detecting a key point of a target object according to any one of claims 1 to 5, further comprising, after the key point detection of the target area:
obtaining a detection result obtained by detecting key points of the target area, wherein the detection result comprises the positions and/or the confidence degrees of all the key points in the target area;
checking whether the target area contains the target object according to the detection result;
and if the target area contains the target object, determining the position of each key point of the target object in the image to be detected according to the position of each key point in the target area.
7. The method for detecting a target object key point according to claim 6, wherein the verifying whether the target object is included in the target area according to the detection result includes:
determining the relative positions of the key points in the target area according to the positions of the key points in the target area;
and verifying whether the target object is contained in the target area according to the degree of difference between the relative positions of the key points in the target area and the standard relative positions of the key points of the target object and the confidence degree of the key points in the target area.
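One possible reading of the claim-7 check normalizes the detected layout by its centroid and size before comparing it against the standard relative positions; the normalization scheme and both thresholds below are assumptions for illustration.

```python
import numpy as np

def contains_target(keypoints, confidences, standard_relative,
                    max_diff=0.2, min_conf=0.5):
    """Verify that the target area contains the target object by combining
    (a) the difference between the detected relative layout and the standard
    relative layout and (b) the confidence of the detected key points."""
    pts = np.asarray(keypoints, dtype=float)
    rel = pts - pts.mean(axis=0)              # layout relative to the centroid
    scale = float(np.abs(rel).max()) or 1.0
    rel /= scale                              # normalize away the object's size
    diff = float(np.abs(rel - np.asarray(standard_relative, dtype=float)).mean())
    return diff <= max_diff and float(np.mean(confidences)) >= min_conf
```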
8. The target object key point detection method according to claim 6, further comprising:
if the target area does not contain the target object, determining a target area corresponding to the target object from the image to be detected according to the image to be detected;
and detecting key points of a target area determined according to the image to be detected to obtain the positions of all key points of the target object in the image to be detected.
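Claims 1, 6 and 8 together describe a track-then-verify loop with a full-image fallback. A sketch with hypothetical callables standing in for the detectors (none of these names come from the patent):

```python
def detect_frame(frame, history, predict_region, region_detect, verify, full_detect):
    """One pass of the pipeline in claims 1, 6 and 8. Every callable is a
    hypothetical stand-in: `predict_region` covers claims 1 and 5,
    `verify` covers claims 6 and 7, and `full_detect` is the claim-8
    fallback that finds the target area from the whole image."""
    if len(history) > 1:                     # need N > 1 previous frames
        keypoints, confs = region_detect(frame, predict_region(history))
        if verify(keypoints, confs):
            return keypoints                 # tracking succeeded
    # claim 8: verification failed (or too little history) -> detect from scratch
    return region_detect(frame, full_detect(frame))[0]
```

The fallback path re-runs key point detection on a region found directly from the image to be detected, exactly as claim 8 describes.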
9. A target object key point detection apparatus, characterized by comprising:
the first determining module is used for determining the predicted position of the target key point in the image to be detected according to the position of the target key point of the target object in the previous N frames of the image to be detected; wherein N is greater than 1;
the second determining module is used for predicting a target area corresponding to the target object from the image to be detected according to the predicted position of the target key point in the image to be detected;
and the detection module is used for detecting the key points of the target area.
10. An electronic device comprising a processor and a memory;
wherein the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code to implement the target object key point detection method according to any one of claims 1 to 8.
11. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the target object key point detection method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811480075.9A CN111274852B (en) | 2018-12-05 | 2018-12-05 | Target object key point detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111274852A CN111274852A (en) | 2020-06-12 |
CN111274852B true CN111274852B (en) | 2023-10-31 |
Family
ID=70998551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811480075.9A Active CN111274852B (en) | 2018-12-05 | 2018-12-05 | Target object key point detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111274852B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113034580B (en) * | 2021-03-05 | 2023-01-17 | Beijing Zitiao Network Technology Co., Ltd. | Image information detection method and device and electronic equipment |
CN113362370B (en) * | 2021-08-09 | 2022-01-11 | Suteng Innovation Technology Co., Ltd. | Method, device, medium and terminal for determining motion information of target object |
CN114373157A (en) * | 2022-03-21 | 2022-04-19 | NIO Automobile Technology (Anhui) Co., Ltd. | Safety monitoring method, device and medium for power swapping station and power swapping station |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014235743A (en) * | 2013-06-03 | 2014-12-15 | Ricoh Company, Ltd. | Method and equipment for determining position of hand on the basis of depth image |
CN107257980A (en) * | 2015-03-18 | 2017-10-17 | Intel Corporation | Local change detection in video |
CN107622252A (en) * | 2017-09-29 | 2018-01-23 | Baidu Online Network Technology (Beijing) Co., Ltd. | Information generating method and device |
CN108229308A (en) * | 2017-11-23 | 2018-06-29 | Beijing SenseTime Technology Development Co., Ltd. | Recognition of objects method, apparatus, storage medium and electronic equipment |
WO2018137623A1 (en) * | 2017-01-24 | 2018-08-02 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and apparatus, and electronic device |
Non-Patent Citations (1)
Title |
---|
Sun Weimin; Wang Hui; Gao Tao; Zhang Kai; Liu Aimin. Accelerating convolutional-neural-network-based insulator image target detection with a key point density algorithm. Dianzi Zhizuo (Electronics Production). 2018, (16), full text. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109871909B (en) | Image recognition method and device | |
CN109948542B (en) | Gesture recognition method and device, electronic equipment and storage medium | |
CN111274852B (en) | Target object key point detection method and device | |
CN108875533B (en) | Face recognition method, device, system and computer storage medium | |
KR102169309B1 (en) | Information processing apparatus and method of controlling the same | |
CN109344899A (en) | Multi-target detection method, device and electronic equipment | |
US10255673B2 (en) | Apparatus and method for detecting object in image, and apparatus and method for computer-aided diagnosis | |
CN112312001B (en) | Image detection method, device, equipment and computer storage medium | |
CN111126268A (en) | Key point detection model training method and device, electronic equipment and storage medium | |
CN109544516B (en) | Image detection method and device | |
CN114596440B (en) | Semantic segmentation model generation method and device, electronic equipment and storage medium | |
CN113012200A (en) | Method and device for positioning moving object, electronic equipment and storage medium | |
US20200211202A1 (en) | Fall detection method, fall detection apparatus and electronic device | |
CN118038335A (en) | Running timing detection method, running timing detection device and storage medium | |
CN112150508B (en) | Target tracking method, device and related equipment | |
CN110934565B (en) | Method and device for measuring pupil diameter and computer readable storage medium | |
CN111784660B (en) | Method and system for analyzing frontal face degree of face image | |
CN110647826B (en) | Method and device for acquiring commodity training picture, computer equipment and storage medium | |
CN111126101B (en) | Method and device for determining key point position, electronic equipment and storage medium | |
CN111753625B (en) | Pedestrian detection method, device, equipment and medium | |
CN109829440B (en) | Method and device for detecting road difference, electronic equipment and storage medium | |
CN114639056A (en) | Live content identification method and device, computer equipment and storage medium | |
CN114640807A (en) | Video-based object counting method and device, electronic equipment and storage medium | |
CN116745808A (en) | Job estimation device, job estimation method, and job estimation program | |
CN115641567B (en) | Target object detection method and device for vehicle, vehicle and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||