CN111507958B - Target detection method, training method of detection model and electronic equipment - Google Patents

Target detection method, training method of detection model and electronic equipment

Info

Publication number
CN111507958B
CN111507958B (application CN202010295474.9A)
Authority
CN
China
Prior art keywords
image
detected
target
detection model
candidate region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010295474.9A
Other languages
Chinese (zh)
Other versions
CN111507958A (en)
Inventor
刘思言
王博
夏卫尚
陈江琦
王万国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Shandong Electric Power Co Ltd
Global Energy Interconnection Research Institute
Original Assignee
State Grid Corp of China SGCC
State Grid Shandong Electric Power Co Ltd
Global Energy Interconnection Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Shandong Electric Power Co Ltd and Global Energy Interconnection Research Institute
Priority to CN202010295474.9A
Publication of CN111507958A
Application granted
Publication of CN111507958B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention relates to the technical field of image processing, and in particular to a target detection method, a training method of a detection model and electronic equipment. The detection method comprises: acquiring an image to be detected; inputting the image to be detected into a detection model to obtain position information of target candidate regions, the target candidate regions being those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value; extracting an image of a preset size from the image to be detected based on the position information of the target candidate region to obtain a sub-image to be detected; and inputting the sub-image to be detected into the detection model to obtain the category of the target corresponding to the target candidate region. Because only candidate regions smaller than the preset size are extracted from the image to be detected and detected again, the detection method reduces the amount of data to be processed and improves target detection efficiency.

Description

Target detection method, training method of detection model and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a target detection method, a training method of a detection model and electronic equipment.
Background
Targets in an image can be detected manually, but manual detection is affected by subjective factors and therefore has low accuracy. For this reason, methods for automatically detecting an image have also been proposed in the prior art, so as to automatically detect targets in the image. However, when targets of multiple sizes exist in an image to be detected, in order to accurately detect small-size targets, the image to be detected is often divided into a plurality of smaller sub-pictures which are then detected separately.
Taking transmission line detection as an example, transmission line inspection is one of the important tasks in operating and maintaining a transmission network. At present, inspection pictures of a transmission line are collected by a combination of helicopter, unmanned aerial vehicle and robot inspection, and artificial intelligence methods are used to identify the collected inspection pictures, which greatly increases the efficiency of reviewing the pictures and reduces the workload of inspection workers. However, inspection pictures of a power transmission line contain numerous targets of small size, and the prior art mostly either scales the original picture directly to a larger resolution or uniformly divides it into a plurality of smaller pictures in order to identify these small-size targets. Although such techniques can identify small-size targets more accurately, using larger-resolution pictures or directly cutting and detecting the original image requires more computing resources, resulting in lower target detection efficiency.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a target detection method, a training method of a detection model and electronic equipment, so as to solve the problem of low target detection efficiency.
According to a first aspect, an embodiment of the present invention provides a target detection method, including:
acquiring an image to be detected;
inputting the image to be detected into a detection model to obtain the position information of a target candidate region; the target candidate regions are those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value;
extracting an image with a preset size from the image to be detected based on the position information of the target candidate region to obtain a sub-image to be detected;
and inputting the sub-image to be detected into the detection model to obtain the category of the target corresponding to the target candidate region.
According to the target detection method provided by the embodiment of the invention, after the position information of the target candidate area is obtained by using the detection model, an image with a preset size is extracted from the image to be detected to obtain a sub-image to be detected, and the sub-image to be detected is detected again by using the detection model; that is, only the candidate region with the size smaller than the preset size is extracted from the image to be detected for re-detection, so that the data processing amount can be reduced, and the target detection efficiency can be improved.
With reference to the first aspect, in a first implementation manner of the first aspect, the inputting the image to be detected into a detection model to obtain location information of a target candidate area includes:
predicting candidate areas corresponding to all targets in the image to be detected by using a candidate area prediction structure in the detection model;
judging whether the size of the candidate region is smaller than the preset value by utilizing a region judging structure in the detection model;
and when the size of the candidate region is smaller than the preset value, predicting the first position information of the candidate region by using a first region prediction structure in the detection model so as to obtain the position information of the target candidate region.
According to the target detection method provided by the embodiment of the invention, the size of the candidate region is judged by the region judgment structure, and when the size is smaller than the preset value, only the first position information of the candidate region is predicted by the first region prediction structure while the category is not predicted; this improves target detection efficiency while ensuring detection accuracy.
With reference to the first aspect and the first implementation manner, in a second implementation manner of the first aspect, the first position information is central position information of the target candidate area.
According to the target detection method provided by the embodiment of the invention, since the other position information of the target candidate region can be calculated from its central position information, selecting the central position information of the target candidate region as the first position information reduces the amount of data to be processed and improves target detection efficiency.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the inputting the image to be detected into a detection model to obtain location information of the target candidate area further includes:
and when the size of the candidate region is larger than or equal to the preset value, predicting the category and the second position information of the target corresponding to the candidate region by using a second region prediction structure in the detection model.
According to the target detection method provided by the embodiment of the invention, when the size of the candidate region is larger than or equal to the preset value, the category and the second position information of the target corresponding to the candidate region are predicted by the second region prediction structure in the detection model; the detection method can ensure the accuracy of large-size target detection and the efficiency of small-size target detection.
With reference to the first aspect, or any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect, the inputting the image to be detected into a detection model to obtain location information of a target candidate area, further includes:
scaling the image to be detected to an image to be detected with preset resolution;
and inputting the image to be detected with the preset resolution into the detection model to obtain the position information of the target candidate region.
According to the target detection method provided by the embodiment of the invention, before the image to be detected is detected, the image to be detected is zoomed to the image to be detected with the preset resolution, so that the detection accuracy is improved.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the inputting the sub-image to be detected into the detection model to obtain a class of the target corresponding to the target candidate region includes:
scaling the sub-image to be detected to the sub-image to be detected with the preset resolution;
inputting the sub-image to be detected with the preset resolution into the detection model to obtain the category of the target corresponding to the target candidate region.
With reference to the first aspect, in a sixth implementation manner of the first aspect, the image to be detected is an electric transmission line image.
According to the target detection method provided by the embodiment of the invention, the power transmission line image contains targets of multiple sizes. When this detection method is used for target detection on the power transmission line image, finer-grained detection is performed only on the regions that may contain small-size targets, and the identification of defects at multiple scales is supported. This solves the problem of excessive computation caused by existing power transmission line defect detection methods that directly detect a large-resolution image or a segmented image, greatly reducing the amount of computation for picture identification and improving detection efficiency without reducing detection precision.
According to a second aspect, an embodiment of the present invention further provides a training method of a detection model, including:
acquiring a sample image with labeling information; the labeling information is the category corresponding to each target in the sample image and the position information of the target area, and the size of the target area is smaller than a preset value;
inputting the sample image into an initial detection model to obtain position information of a predicted area;
and updating parameters in the initial detection model based on the position information of the target area marked in the sample image and the position information of the prediction area to obtain the detection model.
According to the training method of the detection model provided by the embodiment of the invention, the detection model is trained to predict small-size targets, so the accuracy of small-size target detection can be ensured, providing a guarantee of accuracy for the subsequent detection of small-size targets using the detection model.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: the device comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the target detection method in the first aspect or any implementation manner of the first aspect, or executing the training method of the detection model in the second aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the object detection method described in the first aspect or any implementation manner of the first aspect, or to perform the training method of the detection model described in the second aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a target detection method according to an embodiment of the invention;
FIG. 2 is a flow chart of a target detection method according to an embodiment of the invention;
FIG. 3 is a flow chart of a target detection method according to an embodiment of the invention;
FIGS. 4a-4c are schematic diagrams of a target detection method according to embodiments of the invention;
FIG. 5 is a flow chart of a training method of a detection model according to an embodiment of the present invention;
FIG. 6 is a block diagram of a target detection apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of a training apparatus for a detection model according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, the target detection method described in the embodiment of the present invention may be used in various scenarios, for example: transmission lines, video scenes, etc., and the specific application scene is not limited in any way. The target may be an object in the image to be detected, or a defect of the object in the image to be detected, or the like.
When the target detection method is applied to a power transmission line, since the power transmission line image contains targets of multiple sizes, the detection method performs finer-grained detection only on the regions that may contain small-size targets while supporting the identification of defects at multiple scales. This solves the problem of excessive computation caused by existing power transmission line defect detection methods that directly detect a large-resolution image or a segmented image, greatly reducing the amount of computation for image identification and improving detection efficiency without reducing detection precision.
According to an embodiment of the present invention, there is provided an object detection method embodiment, it being noted that the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
In this embodiment, a target detection method is provided, which may be used in an electronic device, such as a mobile phone, a tablet computer, a computer, etc., fig. 1 is a flowchart of a target detection method according to an embodiment of the present invention, and as shown in fig. 1, the flowchart includes the following steps:
s11, acquiring an image to be detected.
The image to be detected can be stored in the electronic device, or obtained by the electronic device from an external source. For example, when the target detection method is applied to a power transmission line, the image to be detected can be an original image shot by a camera carried by inspection equipment such as an unmanned aerial vehicle, a helicopter or a robot.
S12, inputting the image to be detected into a detection model to obtain the position information of the target candidate region.
The target candidate regions are those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value.
After the electronic device acquires the image to be detected, the image to be detected can be input into the detection model directly, or input into the detection model after preprocessing.
The detection model may be a model based on Faster R-CNN, a model based on SSD, or other classification detection models, etc., and the specific form of the detection model is not limited in any way.
The electronic equipment inputs the image to be detected into a detection model, and the detection model automatically processes the image to be detected to obtain the position information of the target candidate region. The target candidate areas are candidate areas corresponding to small-size targets, wherein the small-size targets are determined by the sizes of the candidate areas corresponding to the targets.
The size of the candidate region may be determined by using the number of pixels corresponding to each side length of the candidate region, or the area of the candidate region, or may be set accordingly according to actual situations, which is not limited in any way.
S13, extracting an image with a preset size from the image to be detected based on the position information of the target candidate region, and obtaining a sub-image to be detected.
After the electronic device obtains the position information of the target candidate area in S12, the position information may be corresponding to the image to be detected, and an image with a preset size is extracted from the image to be detected, so as to obtain a sub-image to be detected.
For example, if the image to be detected input into the detection model is the original image, the electronic device can directly extract an image of a preset size from the image to be detected after obtaining the position information of the target region. If the image to be detected input into the detection model is an image that has been scaled, then after obtaining the position information of the target region in the scaled image, the electronic device processes that position information to obtain the corresponding position in the original image to be detected, and extracts the sub-image to be detected using the corresponding position information.
S14, inputting the sub-images to be detected into a detection model to obtain the category of the target corresponding to the target candidate region.
After the electronic device obtains the sub-image to be detected in S13, the sub-image to be detected is input into the detection model and detected a second time. If the image to be detected input into the detection model in S12 was a scaled image, the sub-image to be detected also needs to be processed correspondingly before it is input into the detection model.
For example, if the image to be detected is scaled to 512×512 pixels before the image to be detected is input into the detection model, the electronic device extracts the sub-image to be detected from the image to be detected, and similarly, the sub-image to be detected needs to be scaled to 512×512 pixels before the sub-image to be detected is input into the detection model.
It should be noted that if, after the sub-image to be detected is input into the detection model, the model still outputs position information of target candidate regions, then S13-S14 need to be executed again until all targets in the image to be detected are detected. That is, the electronic device loops over S12-S14 until all targets in the image to be detected are detected.
According to the target detection method provided by the embodiment, after the position information of the target candidate area is obtained by using the detection model, an image with a preset size is extracted from the image to be detected to obtain a sub-image to be detected, and the sub-image to be detected is detected again by using the detection model; that is, only the candidate region with the size smaller than the preset size is extracted from the image to be detected for re-detection, so that the data processing amount can be reduced, and the target detection efficiency can be improved.
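The flow of S11-S14 can be summarized with a minimal sketch. The `model.predict()` interface, the queue-based loop and the helper `extract_subimage` below are illustrative assumptions, not the exact interfaces of this embodiment; a possible implementation of `extract_subimage` is sketched later in this description.

```python
# Minimal sketch of the S11-S14 loop (assumed interfaces, for illustration only).
# `model.predict(img)` is assumed to return two lists: finished detections
# (category + box) for large targets, and center positions of candidate regions
# whose size is below the preset value (small targets needing re-detection).
def detect_all(image, model, extract_subimage):
    results = []
    queue = [image]                       # S11: start from the full image to be detected
    while queue:
        img = queue.pop()
        detections, small_centers = model.predict(img)     # S12
        results.extend(detections)        # large targets: category and position output directly
        for center in small_centers:      # S13: cut a preset-size sub-image around each center
            queue.append(extract_subimage(img, center))
        # S14: each queued sub-image is re-detected on the next loop iteration
    return results
```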
In this embodiment, a target detection method is provided, which may be used in an electronic device, such as a mobile phone, a tablet computer, a computer, etc., fig. 2 is a flowchart of a target detection method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
s21, acquiring an image to be detected.
Please refer to S11 of the embodiment shown in fig. 1 in detail, and no limitation is made herein.
S22, inputting the image to be detected into a detection model to obtain the position information of the target candidate region.
The target candidate regions are those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value.
The detection model comprises a candidate region prediction structure, a region judgment structure and a first region prediction structure. The input of the region judgment structure is connected with the output of the candidate region prediction structure, and the output of the region judgment structure is connected with the input of the first region prediction structure.
Taking a detection model based on Faster R-CNN as an example, the detection model in this embodiment realizes the detection of small-size targets on the basis of Faster R-CNN. Essentially, another head similar to the RPN (region proposal network) is connected after stage 5 of the Faster R-CNN ResNet backbone network and is used to predict whether there is a small-size target in each pixel-centered candidate region of the feature map output by stage 5.
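As one possible realization of the extra head described above, the following PyTorch sketch attaches a small convolutional head to the stage-5 feature map of a ResNet backbone; it outputs, for each feature-map location, a score for "small-size target present" and an offset of that target's center. The layer widths and the exact outputs are assumptions for illustration, not the patent's exact architecture.

```python
# Rough sketch of an RPN-like head for small-target prediction (assumed design).
import torch
import torch.nn as nn

class SmallTargetHead(nn.Module):
    def __init__(self, in_channels=2048, mid_channels=256):  # 2048 = ResNet stage-5 channels
        super().__init__()
        self.conv = nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1)
        self.objectness = nn.Conv2d(mid_channels, 1, kernel_size=1)  # small target present?
        self.center = nn.Conv2d(mid_channels, 2, kernel_size=1)      # predicted center offset (dx, dy)

    def forward(self, stage5_features):
        x = torch.relu(self.conv(stage5_features))
        return self.objectness(x), self.center(x)
```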
The candidate region prediction structure is used for predicting candidate regions of all targets in the image to be detected; the region judging structure is used for judging whether the size of the candidate region of each target is smaller than a preset value so as to determine the target candidate region; the first region prediction structure is used for determining position information of the target candidate region.
Specifically, the step S22 includes the following steps:
s221, predicting candidate areas corresponding to the targets in the image to be detected by using a candidate area prediction structure in the detection model.
Optionally, taking the image to be detected as a power transmission line image as an example, the electronic device inputs the power transmission line image into the detection model, and the candidate region prediction structure in the detection model detects the candidate regions of each target in the power transmission line image, for example, a candidate region corresponding to an insulator string, a candidate region corresponding to an insulator self-explosion, a candidate region corresponding to a bird nest, and so on.
S222, judging whether the size of the candidate region is smaller than a preset value by utilizing a region judging structure in the detection model.
After obtaining the candidate areas corresponding to the targets in the image to be detected, the electronic equipment compares the sizes of the candidate areas corresponding to the targets with a preset value. This is because, when the size of the target is large, the size of its corresponding candidate region is also large; when the size of the target is smaller, the size of its corresponding candidate region is smaller. Therefore, the size of the target size can be reflected by using the size of the candidate region, that is, when the size of the candidate region is smaller than the preset value, the target corresponding to the candidate region is indicated to be a small-size target, and secondary detection is required.
When the size of the candidate region is smaller than the preset value, S223 is performed; otherwise, S224 is performed.
The electronic device may compare the sizes of the candidate regions using the number of pixels along at least one side of each candidate region, or using the area of each candidate region, and so on. When the preset value is a number of pixels, the electronic device compares the number of pixels along each side of the candidate region with the preset value; for example, if the preset value is 50 pixels, S223 is performed when at least one side of the candidate region is shorter than 50 pixels, and S224 is performed otherwise. That is, when at least one side length of the candidate region is smaller than 50 pixels, the target corresponding to the candidate region is a small-size target; otherwise, the target corresponding to the candidate region is a large-size target.
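The 50-pixel rule described above amounts to a simple per-box check; the box format (corner coordinates in pixels of the scaled image) and the constant name below are assumptions for illustration.

```python
PRESET_VALUE = 50  # pixels, as in the example above

def is_small_target(box):
    x1, y1, x2, y2 = box                      # candidate region corners in the scaled image
    width, height = x2 - x1, y2 - y1
    # the candidate region counts as small when at least one side is below the preset value
    return min(width, height) < PRESET_VALUE
```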
S223, predicting the first position information of the candidate region by using the first region prediction structure in the detection model to obtain the position information of the target candidate region.
When the electronic device determines in S222 that the size of the candidate region is smaller than the preset value, the candidate region is the target candidate region, and then the electronic device predicts the first position information of the candidate region by using the first region prediction structure, so as to obtain the first position information of the target candidate region. The first position information may be central position information of the target candidate region, or may be an upper left corner coordinate, a lower right corner coordinate, or the like of the target candidate region.
In this embodiment, the first position information is taken to be the central position information of the target candidate region. Since the other position information of the target candidate region can be calculated from its central position information, selecting the central position information as the first position information reduces the amount of data to be processed and improves target detection efficiency.
S224, predicting the category and the second position information of the target corresponding to the candidate region by using the second region prediction structure in the detection model.
When the electronic device determines that the size of the candidate region is larger than or equal to the preset value, the target corresponding to the candidate region is a large-size target, and the electronic device directly predicts the category and the second position information of that target using the second region prediction structure in the detection model.
Wherein the input of the second region prediction structure is connected with the output of the region judgment structure. The second position information may be the center point coordinate of the candidate region, or may be the upper left corner coordinate and the lower right corner coordinate of the candidate region.
S23, extracting an image with a preset size from the image to be detected based on the position information of the target candidate region, and obtaining a sub-image to be detected.
Please refer to the embodiment S13 shown in fig. 1 in detail, which is not described herein.
S24, inputting the sub-images to be detected into a detection model to obtain the category of the target corresponding to the target candidate region.
Please refer to the embodiment S14 in fig. 1 in detail, which is not described herein.
According to the target detection method provided by this embodiment, the size of the candidate region is judged by the region judgment structure, and when the size is smaller than the preset value, only the first position information of the candidate region is predicted by the first region prediction structure while the category is not predicted; this improves target detection efficiency while ensuring detection accuracy.
In this embodiment, a target detection method is provided, which may be used in an electronic device, such as a mobile phone, a tablet computer, a computer, etc., fig. 3 is a flowchart of a target detection method according to an embodiment of the present invention, and as shown in fig. 3, the flowchart includes the following steps:
s31, acquiring an image to be detected.
Please refer to the embodiment S21 shown in fig. 2 in detail, which is not described herein.
S32, inputting the image to be detected into a detection model to obtain the position information of the target candidate region.
The target candidate regions are those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value.
Specifically, the step S32 includes the following steps:
s321, scaling the image to be detected to the image to be detected with the preset resolution.
After the electronic equipment acquires the image to be detected, the image to be detected is scaled to the image to be detected with preset resolution. The preset resolution may be specifically set according to the actual situation, for example, the image to be detected may be scaled to an image to be detected with a resolution of 512×512.
Taking the image to be detected as a power transmission line image as an example, referring to fig. 4a, fig. 4a shows a specific schematic diagram of the image to be detected. The targets in the power transmission line image are grading rings, insulator strings, bird nests and vibration dampers.
S322, inputting the image to be detected with the preset resolution into a detection model to obtain the position information of the target candidate region.
The electronic device inputs the image to be detected shown in fig. 4a into the detection model and obtains the position information of the target candidate regions shown in fig. 4b. Fig. 4a is a schematic diagram of a typical power transmission line image, which contains 7 targets of 4 categories: grading rings, insulator strings, a bird nest and vibration dampers. After the electronic device scales the image to be detected shown in fig. 4a to 512×512 pixels and inputs it into the detection model of this embodiment, the detection model detects and outputs the class numbers and the upper-left and lower-right corner coordinates of the two insulator string targets shown by the bold borders in fig. 4b. Since the sides of the candidate regions corresponding to the grading rings, the bird nest and the vibration dampers in fig. 4b are all shorter than 50 pixels in the scaled picture, the detection model recognizes and outputs the center coordinates of the 3 candidate regions containing small-size targets indicated by the arrows in fig. 4b, as shown by the dotted lines in fig. 4b.
The candidate region indicated by the arrow in fig. 4b is the target candidate region.
Please refer to S22 in the embodiment shown in fig. 2 for the rest details, which will not be described herein.
S33, extracting an image with a preset size from the image to be detected based on the position information of the target candidate region, and obtaining a sub-image to be detected.
After the electronic device obtains the position information of the target candidate regions in S32, images of a preset size are extracted from the image to be detected. Specifically, as described above, a target candidate region is a candidate region corresponding to a small-size target. Referring to fig. 4c, the small-size targets in the image to be detected are the grading rings, the bird nest and the vibration dampers, so the target candidate regions are the candidate regions corresponding to the grading rings, the bird nest and the vibration dampers.
The position information of the target candidate region obtained by the electronic device in S32 is its position in the scaled image to be detected. The electronic device therefore maps this position information back onto the original image to be detected and extracts an image of 128×128 pixels, obtaining the sub-image to be detected shown in fig. 4c.
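The mapping back to the original image and the 128×128 crop can be sketched as follows; the clamping to the image border and the array layout (height × width × channels) are assumptions for illustration.

```python
# Sketch of S33: map a center predicted on the 512x512 scaled image back to the
# original image and cut a 128x128 sub-image around it (assumed helper).
def extract_subimage(original, center_scaled, input_size=512, crop_size=128):
    h, w = original.shape[:2]
    cx = int(center_scaled[0] * w / input_size)   # back to original-image coordinates
    cy = int(center_scaled[1] * h / input_size)
    half = crop_size // 2
    x0 = min(max(cx - half, 0), max(w - crop_size, 0))   # keep the crop inside the image
    y0 = min(max(cy - half, 0), max(h - crop_size, 0))
    return original[y0:y0 + crop_size, x0:x0 + crop_size]
```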
S34, inputting the sub-images to be detected into a detection model to obtain the category of the target corresponding to the target candidate region.
Specifically, the step S34 includes the following steps:
and S341, scaling the sub-image to be detected to the sub-image to be detected with the preset resolution.
After obtaining the sub-image to be detected in S33, the electronic device scales the sub-image to be detected to the preset resolution in S321, so as to obtain the sub-image to be detected with the preset resolution.
S342, inputting the sub-image to be detected with the preset resolution into a detection model to obtain the category of the target corresponding to the target candidate region.
The electronic device inputs the sub-image to be detected with the preset resolution into the detection model, and predicts the candidate region corresponding to the target in the sub-image to be detected, so as to obtain the category of the target corresponding to the target candidate region.
As described above, the sub-image to be detected is 128×128 pixels; it is scaled to 512×512 pixels and input into the detection model to obtain the positions of the target candidate regions in the scaled sub-image, from which the relative positions of the corresponding targets in the sub-image are obtained. Further sub-images to be detected are then extracted from the sub-image according to each piece of position information, and steps S32-S34 are repeated (i.e. each new sub-image to be detected is scaled to 512×512 pixels and input into the detection model) until sub-images at all levels have been detected.
Specifically, the electronic device cuts the regions of the image to be detected shown in fig. 4a that correspond to the three target candidate regions shown in fig. 4b into sub-images to be detected, scales them to 512×512 pixels, and inputs them in turn into the detection model of this embodiment. The detection model identifies and outputs the class numbers and the upper-left and lower-right corner coordinates of the two grading rings in the first sub-image and the two vibration dampers in the third sub-image, shown by the bold borders in fig. 4c. The target candidate region corresponding to the bird nest in the second sub-image is smaller than 50 pixels in the scaled sub-image to be detected, so the detection model identifies and outputs the center coordinates of the block shown by the dashed line in that sub-image.
The corresponding region of the block shown in fig. 4c in the image to be detected is cut into sub-images to be detected, scaled to 512×512 pixels, and input into the detection model described in this embodiment, so that the bird nest in the sub-images to be detected can be identified.
Please refer to S14 in the embodiment of fig. 1 for the rest details, which will not be described herein.
According to the target detection method provided by the embodiment, before the image to be detected is detected, the image to be detected is zoomed to the image to be detected with the preset resolution, so that the detection accuracy is improved.
In the example shown in fig. 4a, the total number of pixels processed by the method of this embodiment is only 4×512×512 (1,048,576 pixels in total), whereas the small-size recognition effect is approximately equal to that of scaling the original image to 8192×8192 pixels (67,108,864 pixels in total) and recognizing it. Therefore, while guaranteeing identification precision, the method of this embodiment greatly reduces the amount of computation for detecting power transmission line parts and defects and improves detection efficiency.
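The pixel-count comparison works out as follows (one pass over the full image plus three sub-image passes, all at 512×512, versus one pass over the up-scaled original):

$$4 \times 512 \times 512 = 1{,}048{,}576 \qquad \text{versus} \qquad 8192 \times 8192 = 67{,}108{,}864,$$

i.e. the number of pixels actually processed is reduced by a factor of $67{,}108{,}864 / 1{,}048{,}576 = 64$.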
According to an embodiment of the present invention, there is provided an embodiment of a training method for a detection model, it should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that herein.
In this embodiment, a training method of a detection model is provided, which may be used in an electronic device, such as a mobile phone, a tablet computer, a computer, etc. Fig. 5 is a flowchart of a training method of a detection model according to an embodiment of the present invention, and as shown in fig. 5, the flow includes the following steps:
s41, acquiring a sample image with labeling information.
The labeling information is the category corresponding to each target in the sample image and the position information of the target area, and the size of the target area is smaller than a preset value.
The sample image can be labeled manually or automatically using a labeling tool. That is, the sample image is marked with the category corresponding to each target and the position information of the target regions, where the target regions correspond to the target candidate regions in the embodiments shown in fig. 1 to fig. 3, i.e. to the small-size targets.
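For illustration, one labeled sample could look like the record below; the field names, file name and coordinate values are assumptions, not a format defined by this embodiment.

```python
# Hypothetical annotation record for one sample image (illustrative format only).
sample_annotation = {
    "image": "transmission_line_0001.jpg",
    "targets": [
        {"category": "vibration_damper", "box": [1034, 662, 1078, 701]},  # small-size target region
        {"category": "bird_nest",        "box": [2200, 1410, 2243, 1455]},
    ],
}
```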
S42, inputting the sample image into the initial detection model to obtain the position information of the prediction area.
The electronic equipment inputs the sample image with the labeling information into an initial detection model to obtain the position information of a predicted area, wherein the predicted area is the predicted area corresponding to the small-size target.
S43, updating parameters in the initial detection model based on the position information of the target area marked in the sample image and the position information of the prediction area, to obtain the detection model.
The electronic device calculates a loss function by using the position information of the predicted area obtained in S42 and the position information of the target area marked in S41, and updates parameters in the initial detection model to obtain the detection model.
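A conceptual training step matching S41-S43 could look like the sketch below, assuming PyTorch and a model such as the SmallTargetHead sketched earlier (wrapped together with its backbone); the particular loss terms (binary cross-entropy for objectness, smooth L1 for the center position) are common choices assumed here, not prescribed by this embodiment.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, sample_batch, gt_objectness, gt_centers):
    optimizer.zero_grad()
    pred_obj, pred_center = model(sample_batch)             # S42: predicted regions
    loss = F.binary_cross_entropy_with_logits(pred_obj, gt_objectness) \
         + F.smooth_l1_loss(pred_center, gt_centers)        # S43: compare with labeled target areas
    loss.backward()
    optimizer.step()                                         # update the initial detection model's parameters
    return loss.item()
```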
According to the training method of the detection model provided in this embodiment, the detection model is trained to predict small-size targets, which ensures the accuracy of small-size target detection and provides a guarantee of accuracy for subsequent detection of small-size targets using the detection model.
The embodiment also provides a target detection device, which is used for implementing the above embodiment and the preferred implementation manner, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides an object detection apparatus, as shown in fig. 6, including:
a first acquiring module 51, configured to acquire an image to be detected;
the detection module 52 is configured to input the image to be detected into a detection model, so as to obtain location information of a target candidate region; the target candidate areas are candidate areas with the sizes of all the candidate areas output by the detection model smaller than a preset value;
an extracting module 53, configured to extract an image of a preset size from the image to be detected based on the position information of the target candidate region, so as to obtain a sub-image to be detected;
the first input module 54 is configured to input the sub-image to be detected into the detection model, so as to obtain a category of the target corresponding to the target candidate region.
According to the target detection device provided by the embodiment, after the position information of the target candidate area is obtained by using the detection model, an image with a preset size is extracted from the image to be detected to obtain a sub-image to be detected, and the sub-image to be detected is detected again by using the detection model; that is, only the candidate region with the size smaller than the preset size is extracted from the image to be detected for re-detection, so that the data processing amount can be reduced, and the target detection efficiency can be improved.
The embodiment also provides a training device for a detection model, as shown in fig. 7, including:
a second obtaining module 61, configured to obtain a sample image with labeling information; the labeling information is the category corresponding to each target in the sample image and the position information of the target area, and the size of the target area is smaller than a preset value;
a second input module 62, configured to input the sample image into an initial detection model to obtain location information of a prediction area;
and an updating module 63, configured to update parameters in the initial detection model based on the position information of the target area and the position information of the prediction area, so as to obtain the detection model.
The training device for the detection model provided in this embodiment predicts small-size targets with the detection model, which ensures the accuracy of small-size target detection and provides a guarantee of accuracy for subsequent detection of small-size targets using the detection model.
The object detection device or the training device for the detection model in this embodiment is presented in the form of functional units, where a functional unit may be an ASIC circuit, a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functionality.
Further functional descriptions of the above respective modules are the same as those of the above corresponding embodiments, and are not repeated here.
The embodiment of the invention also provides electronic equipment, which is provided with the target detection device shown in the figure 6 or the training device of the detection model shown in the figure 7.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention, as shown in fig. 8, the electronic device may include: at least one processor 71, such as a CPU (Central Processing Unit ), at least one communication interface 73, a memory 74, at least one communication bus 72. Wherein the communication bus 72 is used to enable connected communication between these components. The communication interface 73 may include a Display screen (Display) and a Keyboard (Keyboard), and the optional communication interface 73 may further include a standard wired interface and a wireless interface. The memory 74 may be a high-speed RAM memory (Random Access Memory, volatile random access memory) or a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 74 may alternatively be at least one memory device located remotely from the processor 71. Where the processor 71 may be a device as described in connection with fig. 6 or fig. 7, the memory 74 stores an application program, and the processor 71 invokes the program code stored in the memory 74 for performing any of the method steps described above.
The communication bus 72 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. The communication bus 72 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 8, but not only one bus or one type of bus.
The memory 74 may include a volatile memory, such as random-access memory (RAM); the memory may also include a non-volatile memory, such as flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 74 may also include a combination of the above types of memory.
The processor 71 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
The processor 71 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
Optionally, the memory 74 is also used for storing program instructions. Processor 71 may invoke program instructions to implement the target detection method as shown in the embodiments of fig. 1-3 of the present application, or the training method of the detection model as shown in the embodiment of fig. 5.
The embodiment of the invention also provides a non-transitory computer storage medium, which stores computer executable instructions, and the computer executable instructions can execute the target detection method or the training method of the detection model in any of the method embodiments. Wherein the storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Flash Memory (Flash Memory), a Hard Disk (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of memories of the kind described above.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (8)

1. A method of detecting an object, comprising:
acquiring an image to be detected;
inputting the image to be detected into a detection model to obtain the position information of a target candidate region; the target candidate regions are those candidate regions, among all candidate regions output by the detection model, whose sizes are smaller than a preset value;
extracting an image with a preset size from the image to be detected based on the position information of the target candidate region to obtain a sub-image to be detected;
inputting the sub-image to be detected into the detection model to obtain the category of the target corresponding to the target candidate region;
the inputting the image to be detected into a detection model to obtain the position information of the target candidate region includes:
predicting candidate areas corresponding to all targets in the image to be detected by using a candidate area prediction structure in the detection model;
judging whether the size of the candidate region is smaller than the preset value by utilizing a region judging structure in the detection model;
when the size of the candidate region is smaller than the preset value, predicting first position information of the candidate region by using a first region prediction structure in the detection model so as to obtain the position information of the target candidate region;
and when the size of the candidate region is larger than or equal to the preset value, predicting the category and the second position information of the target corresponding to the candidate region by using a second region prediction structure in the detection model.
2. The method of claim 1, wherein the first location information is center location information of the target candidate region.
3. The method according to claim 1 or 2, wherein the inputting the image to be detected into a detection model to obtain the position information of the target candidate region further comprises:
scaling the image to be detected to an image to be detected with preset resolution;
and inputting the image to be detected with the preset resolution into the detection model to obtain the position information of the target candidate region.
4. A method according to claim 3, wherein said inputting the sub-image to be detected into the detection model to obtain the category of the target corresponding to the target candidate region comprises:
scaling the sub-image to be detected to the sub-image to be detected with the preset resolution;
inputting the sub-image to be detected with the preset resolution into the detection model to obtain the category of the target corresponding to the target candidate region.
5. The method of claim 1, wherein the image to be detected is a transmission line image.
6. A method of training a test model, comprising:
acquiring a sample image with labeling information; the labeling information is the category corresponding to each target in the sample image and the position information of the target area, and the size of the target area is smaller than a preset value;
inputting the sample image into an initial detection model to obtain position information of a predicted area;
updating parameters in the initial detection model based on the position information of the target area marked in the sample image and the position information of the prediction area to obtain the detection model;
the detection model comprises a candidate region prediction structure, a region judgment structure and a first region prediction structure, wherein the candidate region prediction structure is used for predicting candidate regions of all targets in an image to be detected, the region judgment structure is used for judging whether the sizes of the candidate regions of all the targets are smaller than a preset value so as to determine target candidate regions, and the first region prediction structure is used for determining position information of the target candidate regions; when the size of the candidate region is smaller than the preset value, predicting first position information of the candidate region by using a first region prediction structure in the detection model so as to obtain the position information of the target candidate region; and when the size of the candidate region is larger than or equal to the preset value, predicting the category and the second position information of the target corresponding to the candidate region by using a second region prediction structure in the detection model.
7. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the object detection method of any one of claims 1-5 or the training method of the detection model of claim 6.
8. A computer-readable storage medium storing computer instructions for causing a computer to perform the target detection method according to any one of claims 1-5 or the training method of the detection model according to claim 6.
CN202010295474.9A 2020-04-15 2020-04-15 Target detection method, training method of detection model and electronic equipment Active CN111507958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010295474.9A CN111507958B (en) 2020-04-15 2020-04-15 Target detection method, training method of detection model and electronic equipment

Publications (2)

Publication Number Publication Date
CN111507958A CN111507958A (en) 2020-08-07
CN111507958B true CN111507958B (en) 2023-05-26

Family

ID=71877591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010295474.9A Active CN111507958B (en) 2020-04-15 2020-04-15 Target detection method, training method of detection model and electronic equipment

Country Status (1)

Country Link
CN (1) CN111507958B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001902A (en) * 2020-08-19 2020-11-27 上海商汤智能科技有限公司 Defect detection method and related device, equipment and storage medium
CN112069959A (en) * 2020-08-27 2020-12-11 北京锐安科技有限公司 Human body detection method, human body detection device, electronic equipment and storage medium
CN112346614B (en) * 2020-10-28 2022-07-29 京东方科技集团股份有限公司 Image display method and device, electronic device, and storage medium
CN112749735B (en) * 2020-12-30 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Converter tapping steel flow identification method, system, medium and terminal based on deep learning
CN112985263B (en) * 2021-02-09 2022-09-23 中国科学院上海微系统与信息技术研究所 Method, device and equipment for detecting geometrical parameters of bow net
CN113011297A (en) * 2021-03-09 2021-06-22 全球能源互联网研究院有限公司 Power equipment detection method, device, equipment and server based on edge cloud cooperation
CN113111852B (en) * 2021-04-30 2022-07-01 苏州科达科技股份有限公司 Target detection method, training method, electronic equipment and gun and ball linkage system
CN113705565A (en) * 2021-08-10 2021-11-26 北京中星天视科技有限公司 Ship detection method, device, electronic equipment and computer readable medium
CN113506293B (en) * 2021-09-08 2021-12-07 成都数联云算科技有限公司 Image processing method, device, equipment and storage medium
CN114495195B (en) * 2021-12-17 2023-02-28 珠海视熙科技有限公司 Face detection method applied to video conference system and video conference system
CN113936199B (en) * 2021-12-17 2022-05-13 珠海视熙科技有限公司 Image target detection method and device and camera equipment
CN114332456A (en) * 2022-03-16 2022-04-12 山东力聚机器人科技股份有限公司 Target detection and identification method and device for large-resolution image
CN115546483B (en) * 2022-09-30 2023-05-12 哈尔滨市科佳通用机电股份有限公司 Deep learning-based method for measuring residual usage amount of carbon slide plate of subway pantograph

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978580A (en) * 2015-06-15 2015-10-14 国网山东省电力公司电力科学研究院 Insulator identification method for unmanned aerial vehicle polling electric transmission line
WO2018121690A1 (en) * 2016-12-29 2018-07-05 北京市商汤科技开发有限公司 Object attribute detection method and device, neural network training method and device, and regional detection method and device
WO2018120038A1 (en) * 2016-12-30 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for target detection
CN107610113A (en) * 2017-09-13 2018-01-19 北京邮电大学 The detection method and device of Small object based on deep learning in a kind of image
CN110555347A (en) * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 Vehicle target identification method and device with dangerous cargo carrying behavior and electronic equipment
CN110598512A (en) * 2018-06-13 2019-12-20 杭州海康威视数字技术股份有限公司 Parking space detection method and device
JP2020046706A (en) * 2018-09-14 2020-03-26 トヨタ自動車株式会社 Object detection apparatus, vehicle control system, object detection method and computer program for object detection
CN109583321A (en) * 2018-11-09 2019-04-05 同济大学 The detection method of wisp in a kind of structured road based on deep learning
CN109815868A (en) * 2019-01-15 2019-05-28 腾讯科技(深圳)有限公司 A kind of image object detection method, device and storage medium
CN109948616A (en) * 2019-03-26 2019-06-28 北京迈格威科技有限公司 Image detecting method, device, electronic equipment and computer readable storage medium
CN110084175A (en) * 2019-04-23 2019-08-02 普联技术有限公司 A kind of object detection method, object detecting device and electronic equipment
CN110610123A (en) * 2019-07-09 2019-12-24 北京邮电大学 Multi-target vehicle detection method and device, electronic equipment and storage medium
CN110443159A (en) * 2019-07-17 2019-11-12 新华三大数据技术有限公司 Digit recognition method, device, electronic equipment and storage medium
CN110781887A (en) * 2019-10-25 2020-02-11 上海眼控科技股份有限公司 License plate screw detection method and device and computer equipment
CN110852209A (en) * 2019-10-29 2020-02-28 腾讯科技(深圳)有限公司 Target detection method and apparatus, medium, and device
CN110837789A (en) * 2019-10-31 2020-02-25 北京奇艺世纪科技有限公司 Method and device for detecting object, electronic equipment and medium
CN110852233A (en) * 2019-11-05 2020-02-28 上海眼控科技股份有限公司 Hand-off steering wheel detection and training method, terminal, device, medium, and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yushi Chen et al. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(10): 6232-6251. *
Zhong-Qiu Zhao et al. Object Detection With Deep Learning: A Review. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(11): 3212-3232. *
Yao Zhuyu. Research and Application of Object Detection Based on Deep Learning. China Master's Theses Full-text Database, Information Science and Technology, 2019(9): I138-1172. *
Cui Shanbo. Small Object Detection Based on Deep Learning. China Master's Theses Full-text Database, Information Science and Technology, 2020(2): I138-1476. *

Also Published As

Publication number Publication date
CN111507958A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111507958B (en) Target detection method, training method of detection model and electronic equipment
CN107545262B (en) Method and device for detecting text in natural scene image
CN111797890A (en) Method and system for detecting defects of power transmission line equipment
US11481862B2 (en) System and method for real-time, simultaneous object detection and semantic segmentation
CN109272060B (en) Method and system for target detection based on improved darknet neural network
US20210272272A1 (en) Inspection support apparatus, inspection support method, and inspection support program for concrete structure
CN112560698A (en) Image processing method, apparatus, device and medium
CN116168351B (en) Inspection method and device for power equipment
CN111598889A (en) Grading ring inclination fault identification method and device and computer equipment
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN113435407A (en) Small target identification method and device for power transmission system
CN115880260A (en) Method, device and equipment for detecting base station construction and computer readable storage medium
CN114429640A (en) Drawing segmentation method and device and electronic equipment
CN111178445A (en) Image processing method and device
CN112560791B (en) Recognition model training method, recognition method and device and electronic equipment
CN112001336A (en) Pedestrian boundary crossing alarm method, device, equipment and system
CN111784667A (en) Crack identification method and device
CN113439227A (en) Capturing and storing magnified images
CN112990350B (en) Target detection network training method and target detection network-based coal and gangue identification method
CN112668637B (en) Training method, recognition method and device of network model and electronic equipment
CN110634124A (en) Method and equipment for area detection
CN111931721B (en) Method and device for detecting color and number of annual inspection label and electronic equipment
CN112749702B (en) Image recognition method, device, terminal and storage medium
CN115222017A (en) Method and system for training machine learning method for determining predetermined point in image
CN112348835B (en) Material quantity detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant