CN112686298A - Target detection method and device and electronic equipment - Google Patents

Target detection method and device and electronic equipment

Info

Publication number
CN112686298A
Authority
CN
China
Prior art keywords
identification
frame
type
frames
confidence
Prior art date
Legal status
Pending
Application number
CN202011591844.XA
Other languages
Chinese (zh)
Inventor
亓先军
唐健洋
吴俊豪
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202011591844.XA
Publication of CN112686298A
Legal status: Pending


Abstract

An embodiment of the invention provides a target detection method, a target detection device, and an electronic device, relating to the technical field of image processing. The method comprises the following steps: determining each identification frame of a target in an image to be detected, and the type and confidence corresponding to each identification frame, wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to that type; deleting, from the identification frames, each identification frame whose confidence is smaller than the preset confidence threshold of the type corresponding to that identification frame, to obtain candidate identification frames; and selecting, from the candidate identification frames whose corresponding types are the same, the calibration identification frames of the targets belonging to that type. Compared with the prior art, the scheme provided by the embodiment of the invention ensures the efficiency of target detection on the image even in the recognition situation where a large number of identification frames exist.

Description

Target detection method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a target detection method and apparatus, and an electronic device.
Background
With the continuous development of image processing technology, object detection on images is gradually applied to more and more technical fields, such as intelligent transportation, automatic driving, and the like.
In the related art, a method for detecting targets in an image to be detected generally includes: identifying, from the image to be detected, a plurality of identification frames about the targets in the image, and then selecting, from the identified frames, the calibration identification frames of the targets of each type in the image, that is, selecting a small number of frames from the identified identification frames as the target detection result.
However, as the required accuracy of target detection on images increases, a large number of identification frames about the targets in the image to be detected are usually identified when detection is performed. For such recognition situations with a large number of identification frames, how to ensure the efficiency of target detection on the image is a problem to be solved urgently.
Disclosure of Invention
An object of the embodiments of the invention is to provide a target detection method, a target detection device, and an electronic device, so as to ensure the efficiency of target detection on an image in the recognition situation where a large number of identification frames exist. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present invention provides a target detection method, where the method includes:
determining each identification frame of a target in an image to be detected, and the type and confidence corresponding to each identification frame; wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to the type corresponding to that identification frame;
deleting, from the identification frames, each identification frame whose confidence is smaller than the preset confidence threshold of the type corresponding to that identification frame, to obtain candidate identification frames;
and selecting, from the candidate identification frames whose corresponding types are the same, the calibration identification frames of the targets belonging to that type.
Optionally, in a specific implementation, the step of selecting the calibration identification frames of the targets belonging to a type from the candidate identification frames whose corresponding types are the same includes:
for each type, selecting the calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidence corresponding to each candidate identification frame of that type and the size of the overlapping areas between the candidate identification frames of that type.
Optionally, in a specific implementation, the step of selecting, for each type, the calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidences of those candidate identification frames and the size of the overlapping areas between them, includes:
for each type, performing the following steps:
taking each candidate identification frame corresponding to the type as an identification frame to be operated on, and taking the frame with the highest confidence among the identification frames to be operated on as the reference identification frame;
deleting, from the remaining identification frames, those whose IOU value with the reference identification frame is larger than a preset value, to obtain the current identification frames to be operated on; wherein the remaining identification frames are the identification frames to be operated on other than the reference identification frame, and the IOU value of a remaining identification frame with the reference identification frame is the ratio of the area of the overlapping region of the two frames to the area of the union region of the two frames;
taking, among the current identification frames to be operated on, the frame with the highest confidence that has not yet served as a reference identification frame as the next reference identification frame, and returning to the step of deleting the remaining identification frames whose IOU value is larger than the preset value;
and when no frame among the current identification frames to be operated on has yet to serve as a reference identification frame, determining each of the current identification frames to be operated on as a calibration identification frame of a target belonging to that type.
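The per-type iterative selection described above is the standard greedy non-maximum suppression (NMS) procedure. A minimal Python sketch under assumed conventions (boxes as `(x1, y1, x2, y2)` corner tuples; an illustrative IOU threshold of 0.5 as the preset value — neither is prescribed by this document):

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2); returns the ratio of the
    # overlapping (intersection) area to the union area of the two frames
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS over one type's candidate identification frames.
    boxes: list of (x1, y1, x2, y2); scores: matching confidences.
    Returns the indices of the surviving (calibration) frames."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        ref = order.pop(0)          # highest-confidence reference frame
        keep.append(ref)
        # delete remaining frames whose IOU with the reference exceeds the preset value
        order = [i for i in order if iou(boxes[ref], boxes[i]) <= iou_threshold]
    return keep
```

For example, of two heavily overlapping frames with confidences 0.9 and 0.8, only the first survives, while a distant third frame is kept regardless of its lower confidence.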
Optionally, in a specific implementation, the step of determining each identification frame related to the targets in the image to be detected, and the type and confidence corresponding to each identification frame, includes:
determining each detection frame of the image to be detected, and the type and confidence corresponding to each detection frame, based on the image features of the image to be detected; wherein the type corresponding to each detection frame is the type to which the target represented by that detection frame belongs, and the confidence corresponding to each detection frame is the confidence that the target represented by that detection frame belongs to the type corresponding to that detection frame;
and deleting, from the detection frames, those whose corresponding type is the image background of the image to be detected, to obtain each identification frame of the targets in the image to be detected, and the type and confidence corresponding to each identification frame.
In a second aspect, an embodiment of the present invention provides an object detection apparatus, where the apparatus includes:
an identification frame recognition module, configured to determine each identification frame of a target in an image to be detected, and the type and confidence corresponding to each identification frame; wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to the type corresponding to that identification frame;
an identification frame deletion module, configured to delete, from the identification frames, each identification frame whose confidence is smaller than the preset confidence threshold of the type corresponding to that identification frame, to obtain candidate identification frames;
and an identification frame determination module, configured to select, from the candidate identification frames whose corresponding types are the same, the calibration identification frames of the targets belonging to that type.
Optionally, in a specific implementation, the identification frame determination module is specifically configured to:
for each type, select the calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidence corresponding to each candidate identification frame of that type and the size of the overlapping areas between the candidate identification frames of that type.
Optionally, in a specific implementation, the identification frame determination module is specifically configured to:
for each type, perform the following steps:
taking each candidate identification frame corresponding to the type as an identification frame to be operated on, and taking the frame with the highest confidence among the identification frames to be operated on as the reference identification frame;
deleting, from the remaining identification frames, those whose IOU value with the reference identification frame is larger than a preset value, to obtain the current identification frames to be operated on; wherein the remaining identification frames are the identification frames to be operated on other than the reference identification frame, and the IOU value of a remaining identification frame with the reference identification frame is the ratio of the area of the overlapping region of the two frames to the area of the union region of the two frames;
taking, among the current identification frames to be operated on, the frame with the highest confidence that has not yet served as a reference identification frame as the next reference identification frame, and returning to the step of deleting the remaining identification frames whose IOU value is larger than the preset value;
and when no frame among the current identification frames to be operated on has yet to serve as a reference identification frame, determining each of the current identification frames to be operated on as a calibration identification frame of a target belonging to that type.
Optionally, in a specific implementation, the identification frame recognition module is specifically configured to:
determine each detection frame of the image to be detected, and the type and confidence corresponding to each detection frame, based on the image features of the image to be detected; wherein the type corresponding to each detection frame is the type to which the target represented by that detection frame belongs, and the confidence corresponding to each detection frame is the confidence that the target represented by that detection frame belongs to the type corresponding to that detection frame;
and delete, from the detection frames, those whose corresponding type is the image background of the image to be detected, to obtain each identification frame of the targets in the image to be detected, and the type and confidence corresponding to each identification frame.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of any one of the object detection methods provided in the first aspect when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any one of the object detection methods provided in the first aspect.
In a fifth aspect, embodiments of the present invention provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the steps of any of the object detection methods provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
as can be seen from the above, by applying the scheme provided by the embodiment of the present invention, when the image is subjected to the visual inspection, and when a large number of identification frames related to the target in the image to be inspected are determined, the type and the confidence degree corresponding to each identification frame can be determined at the same time; wherein, the type corresponding to each identification frame is as follows: the type of the target represented by the identification box belongs to, and the confidence corresponding to each identification box is as follows: the confidence that the target represented by the identification box belongs to the type corresponding to the identification box. Further, by using the confidence threshold preset for each type, the identification frame with the confidence smaller than the preset confidence threshold corresponding to the identification frame is deleted from the determined identification frames, so as to obtain each candidate identification frame. Further, the calibration frames of the targets belonging to the type can be selected from the candidate frames with the same type.
Therefore, after a large number of identification frames are obtained through recognition, the number of candidate identification frames needing further processing can be greatly reduced through screening of the preset confidence threshold values of all types, and therefore the calibration identification frames of all types of targets in the image to be detected can be selected from the candidate identification frames with a small number. Based on the scheme provided by the embodiment of the invention, the identification condition of a large number of identification frames can be realized by reducing the number of candidate identification frames to be selected, and the efficiency of target detection on the image is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a target detection method according to an embodiment of the present invention;
FIG. 2(a) is a schematic diagram of determining each identification frame about an object in an image to be detected according to an embodiment of the present invention;
FIG. 2(b) is a schematic diagram of determining each detection frame of the image to be detected according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an embodiment of S101 in FIG. 1;
fig. 4 is a schematic flowchart of a specific implementation of selecting, for each type, the calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidences of those candidate identification frames and the size of the overlapping areas between them;
fig. 5 is a schematic diagram of an embodiment of a target detection method according to the present invention;
fig. 6 is a schematic structural diagram of an object detection apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the related art, a method for detecting targets in an image to be detected generally includes: identifying, from the image to be detected, a plurality of identification frames about the targets in the image, and then selecting, from the identified frames, the calibration identification frames of the targets of each type in the image, that is, selecting a small number of frames from the identified identification frames as the target detection result. However, as the required accuracy of target detection on images increases, a large number of identification frames about the targets in the image to be detected are usually identified when detection is performed. For such recognition situations with a large number of identification frames, how to ensure the efficiency of target detection on the image is a problem to be solved urgently.
In order to solve the above technical problem, an embodiment of the present invention provides a target detection method.
The target detection method can be applied to any application scenario requiring target detection on images, such as intelligent transportation and automatic driving, and the detected targets may be targets of any type appearing in an image, for example, face targets or license plate targets in the detected image. The types of detected targets may be completely unrelated types: for a road traffic image, for example, the types may include license plates and faces, so that both the face targets and the license plate targets in the image are detected. Alternatively, the types of detected targets may be different subcategories of the same object category: for an image containing several kinds of animals, such as cats and dogs, the types may be the different kinds of animals, where cats and dogs are evidently different subcategories of the same category, animal.
Accordingly, the embodiments of the present invention do not limit the application scenario of the target detection method, or the specific content and division manner of the types of detected targets.
In addition, the method can be applied to various types of electronic devices, such as servers, desktop computers, and mobile phones, hereinafter referred to simply as the electronic device. That is, the embodiments of the present invention also do not limit the execution subject of the target detection method.
The electronic device can execute the target detection method through an installed client having the function of performing target detection on images, or through a configured module having that function. When the electronic device executes the method through a client, the module implementing the method can serve as a plug-in of the client, or the client itself may be dedicated to executing the target detection method. Both are reasonable.
Furthermore, a target detection method provided by the embodiment of the present invention may include the following steps:
determining each identification frame of a target in an image to be detected, and the type and confidence corresponding to each identification frame; wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to the type corresponding to that identification frame;
deleting, from the identification frames, each identification frame whose confidence is smaller than the preset confidence threshold of the type corresponding to that identification frame, to obtain candidate identification frames;
and selecting, from the candidate identification frames whose corresponding types are the same, the calibration identification frames of the targets belonging to that type.
As can be seen from the above, by applying the scheme provided by the embodiments of the present invention, when target detection is performed on an image and a large number of identification frames related to the targets in the image to be detected are determined, the type and confidence corresponding to each identification frame can be determined at the same time; wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to the type corresponding to that identification frame. Then, using the confidence threshold preset for each type, every identification frame whose confidence is smaller than the preset confidence threshold of its corresponding type is deleted from the determined identification frames, to obtain the candidate identification frames. Further, the calibration identification frames of the targets belonging to each type can be selected from the candidate identification frames whose corresponding types are the same.
Therefore, after a large number of identification frames are obtained through recognition, the screening by the preset per-type confidence thresholds greatly reduces the number of candidate identification frames requiring further processing, so that the calibration identification frames of the targets of each type in the image to be detected can be selected from this smaller number of candidate identification frames. Based on the scheme provided by the embodiments of the present invention, even in the recognition situation with a large number of identification frames, the efficiency of target detection on the image is ensured by reducing the number of candidate identification frames to be selected from.
A target detection method provided in an embodiment of the present invention is specifically described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present invention, and as shown in fig. 1, the method may include the following steps S101 to S103:
s101: determining each identification frame of a target in an image to be detected, and the corresponding type and confidence of each identification frame;
wherein the type corresponding to each identification frame is the type to which the target represented by that identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by that identification frame belongs to the type corresponding to that identification frame.
After the image to be detected for target detection is determined, the target in the image to be detected can be identified, so as to determine each identification frame related to the target in the image to be detected, and the type and confidence degree corresponding to each identification frame.
The identification frame can be understood as follows: when the targets in the image to be detected are recognized, if it is determined that a target of a certain type exists in a certain region of the image, that region can be marked with a graphic frame; the graphic frame marking the region in which the target of that type exists is the identification frame.
For example, as shown in fig. 2(a), when target detection is performed on the image to detect the cats and dogs in the image to be detected, and the targets in the image are recognized, each rectangular frame in fig. 2(a) is an identification frame determined for a target in the image; the type and confidence corresponding to each identification frame can be determined at the same time.
Optionally, in a specific implementation manner, as shown in fig. 3, the step S101 may include the following steps S1011 to S1012:
S1011: determining each detection frame of the image to be detected, and the type and confidence corresponding to each detection frame, based on the image features of the image to be detected;
wherein the type corresponding to each detection frame is the type to which the target represented by that detection frame belongs, and the confidence corresponding to each detection frame is the confidence that the target represented by that detection frame belongs to the type corresponding to that detection frame;
S1012: deleting, from the detection frames, those whose corresponding type is the image background of the image to be detected, to obtain each identification frame of the targets in the image to be detected, and the type and confidence corresponding to each identification frame.
After the image to be detected is obtained, the target in the image to be detected can be identified based on the image characteristics of the image to be detected, so that each detection frame of the image to be detected, and the type and the confidence coefficient corresponding to each detection frame can be determined.
Some regions of the image to be detected may contain no target of any type to be detected; in the process of target detection, such regions can be regarded as the image background of the image to be detected. When target detection is performed on the image to be detected and its detection frames are determined, false detections may occur, yielding detection frames whose corresponding type is the image background of the image to be detected.
For example, as shown in fig. 2(b), when target detection is performed on the image to detect the cats and dogs in the image to be detected, several detection frames whose content contains neither a cat nor a dog may exist among the detection frames shown in fig. 2(b); the type corresponding to these detection frames is the image background of the image to be detected.
Furthermore, a detection frame whose corresponding type is the image background cannot be determined as a calibration identification frame of any type of target in the image to be detected. Therefore, after the detection frames and the type and confidence corresponding to each detection frame are obtained based on the image features of the image to be detected, every detection frame whose corresponding type is the image background can be deleted regardless of its confidence, together with its corresponding type and confidence.
In this way, after the detection frames whose corresponding type is the image background are deleted from the recognized detection frames, the remaining detection frames can be used as the identification frames of the targets in the image to be detected, thereby obtaining each identification frame of the targets, and the type and confidence corresponding to each identification frame.
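The deletion of background-type detection frames in step S1012 can be sketched as follows; the dict layout `{'box', 'type', 'conf'}` and the `'background'` label are illustrative assumptions, not a format prescribed by this document:

```python
def drop_background_frames(det_frames, background_type='background'):
    """S1012: delete every detection frame whose corresponding type is the
    image background, regardless of its confidence; the remaining frames
    become the identification frames of the targets in the image.
    det_frames: list of dicts {'box': ..., 'type': ..., 'conf': ...}."""
    return [f for f in det_frames if f['type'] != background_type]
```

Note that a background frame is dropped even when its confidence is very high, since it cannot become a calibration identification frame of any target type.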
Optionally, as an embodiment of the specific implementation shown in fig. 3, the step S1011 may include the following step 1011A:
step 1011A: and inputting the image to be detected into a preset detection model for carrying out target detection on the image to obtain each detection frame output by the target detection model, and the type and the confidence coefficient corresponding to each detection frame.
In this embodiment, a target detection model for performing target detection on an image may be obtained by using a sample image through pre-training, so that when the target detection method provided by the embodiment of the present invention is executed, the preset target detection model may be used to perform target detection on the image to be detected, so as to obtain each detection frame related to the image to be detected, and a type and a confidence corresponding to each detection frame. Therefore, when the target detection model outputs each detection frame, and the type and the confidence degree corresponding to each detection frame, the electronic equipment can acquire each detection frame of the image to be detected, and the type and the confidence degree corresponding to each detection frame.
The target detection model may be any type of model, such as a CNN (Convolutional Neural Network) model or an RNN (Recurrent Neural Network) model, both of which are reasonable.
S102: deleting the identification frames with the corresponding confidence degrees smaller than a preset confidence degree threshold value of the type corresponding to the identification frame from the identification frames to obtain candidate identification frames;
after the identification frames of the target in the image to be detected, the type and the confidence degree corresponding to each identification frame are obtained through identification, the identification frame with the corresponding confidence degree smaller than a preset confidence degree threshold value of the type corresponding to the identification frame can be deleted from the identification frames, and each candidate identification frame is obtained.
In this way, the number of candidate identification frames needing further processing can be greatly reduced, and the calibration identification frames of each type of target in the image to be detected can be selected from these fewer candidate identification frames. On this basis, for a recognition situation in which a large number of identification frames exist, because the number of candidate identification frames needing further processing is reduced, the efficiency of selecting the calibration identification frames of each type of target from the candidate identification frames can be ensured, and therefore the efficiency of performing target detection on the image can be ensured.
For the image to be detected, since the detection rates of targets belonging to different types may differ, in order to reduce the number of retained candidate identification frames while ensuring the detection performance for each type of target, a confidence threshold may be set separately for each type. Moreover, the confidence thresholds set for different types may reasonably be the same or different.
For a type of target with a higher detection rate, the preset confidence threshold of that type may be set higher, so that more identification frames of that type are deleted, the number of retained candidate identification frames is reduced further, and the target detection efficiency is better ensured.
Correspondingly, for a type of target with a lower detection rate, the preset confidence threshold of that type may be set lower, so that the detection performance for that target can be ensured.
Based on this, in step S102, for identification frames whose corresponding types are the same, whether to delete each frame may be determined according to the same preset confidence threshold.
Thus, for each identification frame, it can be determined whether the confidence corresponding to the identification frame is smaller than the preset confidence threshold of the type corresponding to the identification frame; when the determination result is yes, that is, the confidence corresponding to the identification frame is smaller than the preset confidence threshold of its type, the identification frame is deleted.
Correspondingly, if the determination result is no, that is, the confidence corresponding to the identification frame is not smaller than the preset confidence threshold of its type, the identification frame is retained; the retained identification frames are the candidate identification frames.
For example, when the image is subjected to target detection to expect to detect a cat and a dog in the image to be detected, the confidence threshold of the cat may be set to 0.8, and the confidence threshold of the dog may be set to 0.9 in advance.
Then, supposing that identification frames A, B, C, D and E are determined for the image to be detected, the type and confidence corresponding to the identification frame A are respectively cat and 0.9, the type and confidence corresponding to the identification frame B are respectively cat and 0.8, the type and confidence corresponding to the identification frame C are respectively dog and 0.5, the type and confidence corresponding to the identification frame D are respectively dog and 0.9, and the type and confidence corresponding to the identification frame E are respectively dog and 0.8;
then, for identification frame A, since 0.9 > 0.8, identification frame A is retained as a candidate identification frame; for identification frame B, since 0.8 = 0.8, identification frame B is retained as a candidate identification frame; for identification frame C, since 0.5 < 0.9, identification frame C is deleted; for identification frame D, since 0.9 = 0.9, identification frame D is retained as a candidate identification frame; for identification frame E, since 0.8 < 0.9, identification frame E is deleted.
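The cat-and-dog example above can be reproduced with a short sketch (the dictionary keys and function name are illustrative assumptions):

```python
# Preset per-type confidence thresholds from the example above.
THRESHOLDS = {"cat": 0.8, "dog": 0.9}

def confidence_filter(frames, thresholds):
    """Retain a frame only when its confidence is not less than the preset
    threshold of its own type; frames below the threshold are deleted."""
    return [f for f in frames if f["conf"] >= thresholds[f["type"]]]

frames = [
    {"name": "A", "type": "cat", "conf": 0.9},
    {"name": "B", "type": "cat", "conf": 0.8},
    {"name": "C", "type": "dog", "conf": 0.5},
    {"name": "D", "type": "dog", "conf": 0.9},
    {"name": "E", "type": "dog", "conf": 0.8},
]
candidates = confidence_filter(frames, THRESHOLDS)
print([f["name"] for f in candidates])  # ['A', 'B', 'D']
```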
S103: and selecting the calibration identification frames of all targets belonging to the type from all the candidate identification frames with the same corresponding type.
After each candidate identification frame is obtained, the candidate identification frames may be divided according to their corresponding types. In this way, the calibration identification frames of the targets belonging to each type can be selected based on the positional relationship among the candidate identification frames whose corresponding types are the same.
Thus, the selected calibration identification frame of each target belonging to each type is the detection result obtained by performing target detection on the image to be detected.
For example, among the candidate identification frames whose corresponding types are the same, candidate identification frames that overlap one another may be grouped into one group; in some cases, one candidate identification frame may be divided into multiple groups, because it may overlap with several different candidate identification frames.
Further, for each group of candidate identification frames obtained by dividing, the candidate identification frame whose corresponding confidence coefficient in the group meets the preset confidence coefficient condition may be determined as the calibration identification frame belonging to the target of the type. For example, the candidate identification frame with the highest confidence coefficient in the group may be determined as the calibration identification frame belonging to the target of the type, and for example, the candidate identification frame with the confidence coefficient greater than a preset value in the group may be determined as the calibration identification frame belonging to the target of the type.
Optionally, in a specific implementation manner, the step S103 may include the following step 1031:
step 1031: and aiming at each type, selecting a calibration identification frame of each target belonging to the type from each candidate identification frame corresponding to the type based on the confidence corresponding to each candidate identification frame corresponding to the type and the size of an overlapping area between each candidate identification frame corresponding to the type.
In this specific implementation manner, after each candidate identification frame is obtained and divided according to its corresponding type, for each type, the size of the overlapping area between the candidate identification frames corresponding to the type may be determined. Further, based on the confidence corresponding to each candidate identification frame of the type and the sizes of the overlapping areas between them, the calibration identification frames of the targets belonging to the type are selected from the candidate identification frames corresponding to the type.
Optionally, in a specific implementation manner, as shown in fig. 4, for each type, the implementation manner of the step 1031 may include the following steps S31-S34:
s31: taking each candidate identification frame corresponding to the type as an identification frame to be operated, and taking the identification frame with the highest corresponding confidence coefficient in the identification frames to be operated as a reference identification frame;
s32: deleting, from the remaining identification frames, the identification frames whose IOU value with the reference identification frame is larger than a preset value, to obtain the current identification frames to be operated;
wherein the remaining identification frames are: the identification frames other than the reference identification frame among the identification frames to be operated, and the IOU value of each remaining identification frame and the reference identification frame is: the ratio of the area of the overlapping region of the remaining identification frame and the reference identification frame to the area of the union region of the remaining identification frame and the reference identification frame;
s33: taking the identification frame which is not taken as the reference identification frame and has the highest corresponding confidence coefficient in the current identification frames to be operated as the next reference identification frame, and returning to the step of deleting the identification frames with the IOU value larger than the preset numerical value in the rest identification frames;
s34: and when the current to-be-operated identification frame does not have an identification frame which is not used as a reference identification frame, determining each identification frame in the current to-be-operated identification frame as a calibration identification frame of each target belonging to the type.
In this embodiment, for each type, each candidate identification frame corresponding to the type may be used as an identification frame to be operated, and then the identification frame with the highest corresponding confidence among the identification frames to be operated is used as the reference identification frame.
Further, for each remaining identification frame, the IOU value of the remaining identification frame and the reference identification frame can be calculated. IOU is the abbreviation of Intersection over Union, a standard for measuring the accuracy of detecting a corresponding object in a specific data set; it may be understood as an overlap ratio.
The IOU value of each remaining identification frame and the reference identification frame is: the ratio of the area of the overlapping region of the remaining identification frame and the reference identification frame to the area of their union region. That is, for each remaining identification frame, the area of the overlapping region between the remaining identification frame and the reference identification frame, and the area of the union region between them, may first be calculated; the ratio of the obtained overlap area to the union area is then calculated, and this ratio is the IOU value of the remaining identification frame and the reference identification frame.
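The IOU calculation described above can be sketched as follows (box coordinates given as corner pairs are an illustrative assumption):

```python
def iou(a, b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2):
    the overlap area divided by the union area."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 -> 1/7
```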
After the IOU value of each remaining identification frame and the reference identification frame is obtained by calculation, the relationship between the IOU value and the preset value can be further determined, so that when the IOU value is larger than the preset value, the remaining identification frame can be deleted.
Thus, after traversing each remaining identification frame, the remaining identification frames that are retained because their IOU value with the reference identification frame is not larger than the preset value form the current identification frames to be operated. Compared with the identification frames to be operated determined in step S31, the current identification frames to be operated may be reduced by at least one identification frame, or may not be reduced at all.
Furthermore, each identification frame in the current identification frames to be operated that has not been used as a reference identification frame may be determined, and among these, the identification frame with the highest corresponding confidence may be found. This identification frame, not yet used as a reference identification frame and with the highest corresponding confidence among the current identification frames to be operated, is taken as the next reference identification frame, and the process returns to step S32 to obtain the updated identification frames to be operated.
Furthermore, when the updated identification frames to be operated are obtained, the identification frame that has not been used as a reference identification frame and has the highest corresponding confidence among the updated identification frames to be operated may be determined as the next reference identification frame, and the process returns to step S32 again.
Steps S32-S33 are repeated until there is no identification frame in the updated identification frames to be operated that has not been used as a reference identification frame; at that point, each identification frame retained in the updated identification frames to be operated can be determined as a calibration identification frame of a target belonging to the type.
The processing method adopted in this specific implementation manner may be referred to as NMS (Non-Maximum Suppression) processing.
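The loop of steps S31-S34 amounts to standard per-type NMS; a minimal sketch, assuming frames are dictionaries with `box` and `conf` keys (an illustrative layout, not from the patent):

```python
def iou(a, b):
    # intersection-over-union of (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(frames, iou_limit):
    """Steps S31-S34 for one type: repeatedly take the highest-confidence
    frame not yet used as a reference, keep it, and delete every remaining
    frame whose IOU with it exceeds the preset value."""
    pending = sorted(frames, key=lambda f: f["conf"], reverse=True)
    kept = []
    while pending:
        ref = pending.pop(0)  # next reference identification frame
        kept.append(ref)
        pending = [f for f in pending if iou(f["box"], ref["box"]) <= iou_limit]
    return kept

frames = [
    {"box": (0, 0, 10, 10), "conf": 0.90},
    {"box": (1, 1, 10, 10), "conf": 0.85},   # IOU 0.81 with the first box
    {"box": (20, 20, 30, 30), "conf": 0.95},
]
print(len(nms(frames, 0.5)))  # 2
```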
Further, optionally, when the identification frames of the targets in the image to be detected are obtained through identification, the position information of each identification frame in the image to be detected can be obtained through identification, so that after the calibration identification frames of the targets belonging to each type are determined, the image to be detected marked with each calibration identification frame can be output according to the position information of each calibration identification frame in the image to be detected.
In this way, the image content included in each calibration identification frame marked in the output image to be detected is each target of each type detected by performing target detection on the image to be detected.
As can be seen from the above, by applying the scheme provided by the embodiment of the present invention, after a large number of identification frames are obtained through recognition, the number of candidate identification frames that need to be further processed can be greatly reduced through the screening of the preset confidence thresholds of each type, so that the calibration identification frame of each type of target in the image to be detected can be selected from a small number of candidate identification frames. Based on the scheme provided by the embodiment of the invention, the identification condition of a large number of identification frames can be realized by reducing the number of candidate identification frames to be selected, and the efficiency of target detection on the image is ensured.
In order to facilitate understanding of the target detection method provided in the embodiment of the present invention, an example of the target detection method provided in the embodiment of the present invention is described below.
As shown in fig. 5, the image to be detected is subjected to target detection, so as to obtain cats and dogs in the image to be detected through expected detection.
The method comprises the following steps: acquiring an image to be detected, as shown in a diagram on the left of a first row in FIG. 5;
step two: performing target detection on the image to be detected through a preset CNN model, that is, "passing through the CNN network" in fig. 5, to obtain each detection frame of the image to be detected, as shown in the diagram on the right of the first row in fig. 5; each obtained detection frame of the image to be detected is the output result of the preset CNN model;
step three: according to the type corresponding to each detection frame, deleting each detection frame whose corresponding type is the image background of the image to be detected, that is, performing background filtering ("background filtering on the CNN network output" in fig. 5), to obtain each identification frame of the image to be detected, as shown in the diagram on the right of the second row in fig. 5;
step four: for a cat of the type, deleting each detection frame of which the corresponding confidence is smaller than the confidence threshold of the preset cat of the type, and for a dog of the type, deleting each detection frame of which the corresponding confidence is smaller than the confidence threshold of the preset dog of the type, that is, "confidence filtering for classification" in fig. 5, to obtain each candidate identification frame, as shown in the diagram on the left of the second row in fig. 5;
step five: and performing NMS processing on each candidate identification frame corresponding to the cat type, and performing NMS processing, namely "NMS" in fig. 5 on each candidate identification frame corresponding to the dog type, thereby obtaining a calibration identification frame of the cat and a calibration identification frame of the dog in the image to be detected, as shown in the third line of the diagram in fig. 5.
As shown in fig. 5, it can be clearly found that, by applying the target detection method provided by the embodiment of the present invention, the number of the determined candidate identification frames that need to be further analyzed is small, so that the efficiency of target detection on the image can be ensured for the recognition situation where a large number of identification frames exist.
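Steps three to five of the example above can be strung together in one sketch (the data layout, labels, and threshold values are illustrative assumptions):

```python
def iou(a, b):
    # intersection-over-union of (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detect_targets(detections, thresholds, iou_limit):
    # step three: delete detection frames identified as image background
    # (any type without a threshold is treated as background here)
    frames = [d for d in detections if d["type"] in thresholds]
    # step four: per-type confidence filtering
    frames = [f for f in frames if f["conf"] >= thresholds[f["type"]]]
    # step five: NMS within each type
    results = []
    for t in thresholds:
        pending = sorted((f for f in frames if f["type"] == t),
                         key=lambda f: f["conf"], reverse=True)
        while pending:
            ref = pending.pop(0)
            results.append(ref)
            pending = [f for f in pending
                       if iou(f["box"], ref["box"]) <= iou_limit]
    return results

detections = [
    {"type": "cat", "box": (0, 0, 10, 10), "conf": 0.90},
    {"type": "cat", "box": (1, 1, 10, 10), "conf": 0.85},  # suppressed by NMS
    {"type": "cat", "box": (0, 0, 10, 10), "conf": 0.50},  # below threshold
    {"type": "dog", "box": (20, 20, 30, 30), "conf": 0.95},
    {"type": "background", "box": (5, 5, 15, 15), "conf": 0.99},
]
final = detect_targets(detections, {"cat": 0.8, "dog": 0.9}, 0.5)
print(len(final))  # 2
```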
Corresponding to the target detection method provided by the embodiment of the invention, the embodiment of the invention also provides a target detection device.
Fig. 6 is a schematic structural diagram of an object detection apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus may include the following modules:
the identification frame recognition module 610 is configured to determine each identification frame about the target in the image to be detected, and a type and a confidence corresponding to each identification frame; wherein, the type corresponding to each identification frame is as follows: the type of the target represented by the identification box belongs to, and the confidence corresponding to each identification box is as follows: the confidence that the target represented by the identification box belongs to the type corresponding to the identification box;
a mark frame deleting module 620, configured to delete, from each mark frame, a mark frame whose corresponding confidence is smaller than a preset confidence threshold of a type corresponding to the mark frame, so as to obtain each candidate mark frame;
the identification frame determining module 630 is configured to select a calibration identification frame of each target belonging to the type from the candidate identification frames with the same corresponding type.
As can be seen from the above, by applying the scheme provided by the embodiment of the present invention, when target detection is performed on an image and a large number of identification frames of targets in the image to be detected are determined, the type and confidence corresponding to each identification frame can be determined at the same time; wherein the type corresponding to each identification frame is: the type to which the target represented by the identification frame belongs, and the confidence corresponding to each identification frame is: the confidence that the target represented by the identification frame belongs to the type corresponding to the identification frame. Further, by using the confidence threshold preset for each type, the identification frames whose confidence is smaller than the preset confidence threshold of the corresponding type are deleted from the determined identification frames, to obtain each candidate identification frame. Further, the calibration identification frames of targets belonging to each type can be selected from the candidate identification frames whose corresponding types are the same.
Therefore, after a large number of identification frames are obtained through recognition, the number of candidate identification frames needing further processing can be greatly reduced through screening of the preset confidence threshold values of all types, and therefore the calibration identification frames of all types of targets in the image to be detected can be selected from the candidate identification frames with a small number. Based on the scheme provided by the embodiment of the invention, the identification condition of a large number of identification frames can be realized by reducing the number of candidate identification frames to be selected, and the efficiency of target detection on the image is ensured.
Optionally, in a specific implementation manner, the identification box determining module 630 is specifically configured to:
and aiming at each type, selecting a calibration identification frame of each target belonging to the type from each candidate identification frame corresponding to the type based on the confidence corresponding to each candidate identification frame corresponding to the type and the size of an overlapping area between each candidate identification frame corresponding to the type.
Optionally, in a specific implementation manner, the identification box determining module 630 is specifically configured to:
for each type, the following steps are performed:
taking each candidate identification frame corresponding to the type as an identification frame to be operated, and taking the identification frame with the highest corresponding confidence coefficient in the identification frames to be operated as a reference identification frame;
deleting, from the remaining identification frames, the identification frames whose IOU value with the reference identification frame is larger than a preset value, to obtain the current identification frames to be operated; wherein the remaining identification frames are: the identification frames other than the reference identification frame among the identification frames to be operated, and the IOU value of each remaining identification frame and the reference identification frame is: the ratio of the area of the overlapping region of the remaining identification frame and the reference identification frame to the area of the union region of the remaining identification frame and the reference identification frame;
taking the identification frame which is not taken as the reference identification frame and has the highest corresponding confidence coefficient in the current identification frames to be operated as the next reference identification frame, and returning to the step of deleting the identification frames with the IOU value larger than the preset value in the rest identification frames;
and when the current identification frame to be operated does not have an identification frame which is not used as a reference identification frame, determining each identification frame in the current identification frame to be operated as a calibration identification frame of each target belonging to the type.
Optionally, in a specific implementation manner, the identification box identifying module 610 is specifically configured to:
determining each detection frame of the image to be detected, and the type and the confidence corresponding to each detection frame, based on the image characteristics of the image to be detected; wherein, the type corresponding to each detection frame is: the type to which the target represented by the detection frame belongs, and the confidence corresponding to each detection frame is: the confidence that the target represented by the detection frame belongs to the type corresponding to the detection frame;
And deleting the detection frames of which the corresponding types are the image backgrounds of the images to be detected in the detection frames to obtain each identification frame of the targets in the images to be detected, and the type and the confidence degree corresponding to each identification frame.
Corresponding to the target detection method provided by the above embodiment of the present invention, an embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702 and the memory 703 complete mutual communication through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to implement the steps of any target detection method provided in the above embodiments of the present invention when executing the program stored in the memory 703.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a Graphics Processing Unit (GPU), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the object detection methods provided in the embodiments of the present invention.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the steps of any of the object detection methods provided in the embodiments of the present invention described above.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, apparatus embodiments, electronic device embodiments, computer-readable storage medium embodiments, and computer program product embodiments are described for simplicity because they are substantially similar to method embodiments, as may be found in some descriptions of method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of object detection, the method comprising:
determining each identification frame of a target in an image to be detected, and the corresponding type and confidence of each identification frame; wherein, the type corresponding to each identification frame is as follows: the type of the target represented by the identification box belongs to, and the confidence corresponding to each identification box is as follows: the confidence that the target represented by the identification box belongs to the type corresponding to the identification box;
deleting the identification frames with the corresponding confidence degrees smaller than a preset confidence degree threshold value of the type corresponding to the identification frame from the identification frames to obtain candidate identification frames;
and selecting the calibration identification frames of all targets belonging to the type from all the candidate identification frames with the same corresponding type.
2. The method according to claim 1, wherein the step of selecting the calibration frames belonging to each target of the type from the candidate frames with the same type includes:
and aiming at each type, selecting a calibration identification frame of each target belonging to the type from each candidate identification frame corresponding to the type based on the confidence corresponding to each candidate identification frame corresponding to the type and the size of an overlapping area between each candidate identification frame corresponding to the type.
3. The method according to claim 2, wherein the step of selecting, for each type, calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidence corresponding to each of those candidate identification frames and the size of the overlapping area between them, comprises:
for each type, performing the following steps:
taking the candidate identification frames corresponding to the type as identification frames to be operated on, and taking, among the identification frames to be operated on, the identification frame with the highest confidence as a reference identification frame;
deleting, from the remaining identification frames, each identification frame whose IOU value with the reference identification frame is greater than a preset value, to obtain the current identification frames to be operated on; wherein the remaining identification frames are the identification frames to be operated on other than the reference identification frame, and the IOU value of a remaining identification frame with the reference identification frame is the ratio of the area of the overlapping region of the two frames to the area of the union region of the two frames;
taking, among the current identification frames to be operated on, the identification frame that has not yet served as a reference identification frame and has the highest confidence as the next reference identification frame, and returning to the step of deleting the identification frames whose IOU value is greater than the preset value from the remaining identification frames; and
when none of the current identification frames to be operated on remains that has not served as a reference identification frame, determining each of the current identification frames to be operated on as a calibration identification frame of a target belonging to the type.
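The per-type loop in claim 3 is the standard non-maximum suppression (NMS) procedure. The sketch below is an illustration under assumed names and box format, not the claimed implementation:

```python
# Sketch of the per-type selection loop in claim 3 (standard NMS).
# Function names and the (x1, y1, x2, y2) box format are assumptions.

def iou(a, b):
    """IOU: area of the overlapping region divided by area of the union."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def nms_per_type(candidates, iou_threshold=0.5):
    """For each type: repeatedly take the highest-confidence frame as the
    reference, delete remaining frames whose IOU with it exceeds the
    threshold, until every kept frame has served as a reference."""
    result = []
    for t in {b["type"] for b in candidates}:
        pending = sorted((b for b in candidates if b["type"] == t),
                         key=lambda b: b["conf"], reverse=True)
        kept = []
        while pending:
            ref = pending.pop(0)  # highest remaining confidence
            kept.append(ref)
            pending = [b for b in pending
                       if iou(b["xyxy"], ref["xyxy"]) <= iou_threshold]
        result.extend(kept)
    return result
```

Running NMS separately per type, as the claim specifies, prevents a confident box of one class from suppressing an overlapping box of a different class.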
4. The method according to any one of claims 1-3, wherein the step of determining each identification frame of a target in the image to be detected, and the type and confidence corresponding to each identification frame, comprises:
determining each detection frame of the image to be detected, and the type and confidence corresponding to each detection frame, based on image features of the image to be detected; wherein the type corresponding to each detection frame is the type to which the target represented by the detection frame belongs, and the confidence corresponding to each detection frame is the confidence that the target represented by the detection frame belongs to that type; and
deleting, from the detection frames, the detection frames whose corresponding type is the image background of the image to be detected, to obtain each identification frame of a target in the image to be detected, and the type and confidence corresponding to each identification frame.
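The background-frame removal in claim 4 reduces to dropping detections whose predicted class is the background class. A minimal sketch, assuming the detector emits a literal "background" label (an assumption for illustration):

```python
# Sketch of the background-frame removal in claim 4.
# The "background" label string is an illustrative assumption.

def drop_background(detections, background_label="background"):
    """Delete detection frames whose predicted type is the image
    background, leaving only identification frames of real targets."""
    return [d for d in detections if d["type"] != background_label]
```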
5. An object detection apparatus, characterized in that the apparatus comprises:
an identification frame recognition module, configured to determine each identification frame of a target in an image to be detected, and the type and confidence corresponding to each identification frame; wherein the type corresponding to each identification frame is the type to which the target represented by the identification frame belongs, and the confidence corresponding to each identification frame is the confidence that the target represented by the identification frame belongs to that type;
an identification frame deletion module, configured to delete, from the identification frames, each identification frame whose confidence is smaller than a preset confidence threshold for the type corresponding to that identification frame, to obtain candidate identification frames; and
an identification frame determination module, configured to select, from the candidate identification frames corresponding to the same type, calibration identification frames of the targets belonging to that type.
6. The apparatus of claim 5, characterized in that the identification frame determination module is specifically configured to:
for each type, select calibration identification frames of the targets belonging to that type from the candidate identification frames corresponding to that type, based on the confidence corresponding to each of those candidate identification frames and the size of the overlapping area between them.
7. The apparatus of claim 6, characterized in that the identification frame determination module is specifically configured to:
for each type, perform the following steps:
taking the candidate identification frames corresponding to the type as identification frames to be operated on, and taking, among the identification frames to be operated on, the identification frame with the highest confidence as a reference identification frame;
deleting, from the remaining identification frames, each identification frame whose IOU value with the reference identification frame is greater than a preset value, to obtain the current identification frames to be operated on; wherein the remaining identification frames are the identification frames to be operated on other than the reference identification frame, and the IOU value of a remaining identification frame with the reference identification frame is the ratio of the area of the overlapping region of the two frames to the area of the union region of the two frames;
taking, among the current identification frames to be operated on, the identification frame that has not yet served as a reference identification frame and has the highest confidence as the next reference identification frame, and returning to the step of deleting the identification frames whose IOU value is greater than the preset value from the remaining identification frames; and
when none of the current identification frames to be operated on remains that has not served as a reference identification frame, determining each of the current identification frames to be operated on as a calibration identification frame of a target belonging to the type.
8. The apparatus according to any one of claims 5-7, characterized in that the identification frame recognition module is specifically configured to:
determine each detection frame of the image to be detected, and the type and confidence corresponding to each detection frame, based on image features of the image to be detected; wherein the type corresponding to each detection frame is the type to which the target represented by the detection frame belongs, and the confidence corresponding to each detection frame is the confidence that the target represented by the detection frame belongs to that type; and
delete, from the detection frames, the detection frames whose corresponding type is the image background of the image to be detected, to obtain each identification frame of a target in the image to be detected, and the type and confidence corresponding to each identification frame.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1-4 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-4.
CN202011591844.XA 2020-12-29 2020-12-29 Target detection method and device and electronic equipment Pending CN112686298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011591844.XA CN112686298A (en) 2020-12-29 2020-12-29 Target detection method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN112686298A true CN112686298A (en) 2021-04-20

Family

ID=75453806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011591844.XA Pending CN112686298A (en) 2020-12-29 2020-12-29 Target detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112686298A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960266A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Image object detection method and device
CN111368600A (en) * 2018-12-26 2020-07-03 北京眼神智能科技有限公司 Method and device for detecting and identifying remote sensing image target, readable storage medium and equipment
CN112052787A (en) * 2020-09-03 2020-12-08 腾讯科技(深圳)有限公司 Target detection method and device based on artificial intelligence and electronic equipment


Non-Patent Citations (1)

Title
Chen Huiyan et al.: "Theory and Application of Intelligent Vehicles", Beijing Institute of Technology Press, page 71 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113761245A (en) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 Image recognition method and device, electronic equipment and computer readable storage medium
CN113761245B (en) * 2021-05-11 2023-10-13 腾讯科技(深圳)有限公司 Image recognition method, device, electronic equipment and computer readable storage medium
CN113128247A (en) * 2021-05-17 2021-07-16 阳光电源股份有限公司 Image positioning identification verification method and server
CN113128247B (en) * 2021-05-17 2024-04-12 阳光电源股份有限公司 Image positioning identification verification method and server


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210420