CN112580542A - Steel bar counting method based on target detection - Google Patents

Steel bar counting method based on target detection

Info

Publication number
CN112580542A
CN112580542A (application CN202011550390.1A)
Authority
CN
China
Prior art keywords
frame
constructing
center
anchor
circle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011550390.1A
Other languages
Chinese (zh)
Inventor
李志鹏 (Li Zhipeng)
郑小青 (Zheng Xiaoqing)
郑松 (Zheng Song)
孔亚广 (Kong Yaguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202011550390.1A
Publication of CN112580542A
Legal status: Pending

Links

Images

Classifications

    • G06V 20/00 Scenes; scene-specific elements
    • G06F 18/24 Classification techniques
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30242 Counting objects in image
    • G06V 2201/07 Target detection


Abstract

The invention discloses a steel bar counting method based on target detection, comprising the following steps: acquiring steel bar sample images and preprocessing them to obtain a sample data set; constructing a feature pyramid; constructing predicted circular boxes and calculating a position loss function; and screening the predicted circular boxes with a multi-threshold non-maximum suppression (NMS) method and training the network model. The beneficial effects of the invention are: 1. it overcomes the error-proneness of manual counting and improves counting accuracy; 2. the circular prediction box fits the bar ends better, so the bars can be counted more reliably; 3. repeated network training yields fast, accurate prediction with strong adaptability and robustness.

Description

Steel bar counting method based on target detection
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a steel bar counting method based on target detection.
Background
On a construction site, for each truck of steel bars entering the site, inspection and acceptance personnel must count the bars on the truck by hand; only after the quantity is confirmed can the truck enter and unload. At present this counting is done manually on site. The process is tedious, labor-intensive and slow, the accuracy of the count is hard to guarantee, and missed counts are difficult to avoid, causing economic losses to enterprises.
For example, Chinese patent publication No. CN111126415A discloses a system and method for detecting and counting tunnel rebars based on radar detection images. It includes an image preprocessing module, a rebar key point detection module, a rebar layer key curve fitting module, a rebar layer key curve peak position identification module, and a rebar counting module, and can automatically identify and count rebars in raw geological radar images, improving the efficiency and accuracy of rebar identification and detection. However, that scheme is complex and tedious to implement.
Disclosure of Invention
The technical problem solved by the invention is that existing steel bar counting cannot avoid missed and false detections; a steel bar counting method based on target detection is therefore provided to realize quick and accurate counting.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A steel bar counting method based on target detection comprises the following steps:
acquiring steel bar sample images and preprocessing them to obtain a sample data set;
constructing a feature pyramid;
constructing predicted circular boxes and calculating a position loss function;
screening the predicted circular boxes with a multi-threshold non-maximum suppression (NMS) method, and training the network model.
The NMS method uses three thresholds: an upper limit, a lower limit, and the mean of the two. Constructing circular prediction boxes allows the bars to be counted more reliably; training the network model repeatedly yields fast, accurate prediction with strong adaptability and robustness.
Preferably, the image preprocessing includes image cropping, scaling, and brightness/contrast adjustment.
This series of preprocessing steps highlights the detection target and enlarges the data set.
Preferably, constructing the feature pyramid comprises:
convolving the image multiple times, reducing its size at each stage to obtain the convolutional layers;
upsampling a convolutional layer to obtain a feature map and adding it to another convolved feature map, changing the number of channels of the feature map.
Repeated pooling, upsampling and convolution of the input pictures yields feature maps of several sizes, which are used to train the network model.
Preferably, constructing the predicted circular box comprises:
constructing a circular anchor box from a rectangular box;
recording the offset of the grid cell from the top-left corner of the anchor box, the anchor radius, and the anchor center coordinates;
calculating the center coordinates and radius of the predicted circular box.
For this detection target, the intersection-over-union of two circles matches the bar ends far better than that of rectangular boxes, adapts better to the target, and guarantees the recognition rate of the circular shape, so the network model can recognize the steel bars accurately.
Preferably, the abscissa of the predicted circle center is calculated as
b_x = σ(t_x) + c_x
where σ is the sigmoid function, t_x is the network's predicted offset for the center abscissa, and c_x is the abscissa offset of the grid cell from the top-left corner;
the ordinate of the predicted circle center is calculated as
b_y = σ(t_y) + c_y
where t_y is the network's predicted offset for the center ordinate and c_y is the ordinate offset of the grid cell from the top-left corner;
and the predicted circle radius is calculated as
b_r = p_r · e^(t_r)
where t_r is the network's predicted radius offset and p_r is the anchor radius.
Preferably, calculating the position loss function comprises:
constructing real (ground-truth) boxes, and recording each real box's radius and center coordinates;
recording the number of grid cells and the number of anchor boxes generated per cell;
setting a balance parameter;
calculating the position loss function.
The balance parameter is set to balance positive and negative samples.
Preferably, constructing the real box comprises:
selecting a target from the sample data set;
constructing the minimum circular box that encloses the target.
Preferably, the position loss function is calculated as
L_loc = λ Σ_{i=0}^{S²−1} Σ_{j=0}^{B−1} I_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)² + (r_i − r̂_i)²]
where x_i and y_i are the center coordinates of the real box, r_i is its radius, x̂_i, ŷ_i and r̂_i are the corresponding predicted values, S² is the number of grid cells, B is the number of anchor boxes generated per cell, λ is the balance parameter, and I_{ij}^{obj} is 1 if the j-th anchor box of the i-th cell is responsible for the target and 0 otherwise.
The loss function changes as the circular prediction box and the real box change, and is used to train the network model.
Preferably, the non-maximum suppression method comprises:
setting upper and lower threshold limits;
outputting the specific class of each prediction box, predicting the position offsets, and outputting accurate target detection boxes.
Because many occluded steel bar bounding boxes cannot be correctly identified, and are wrongly suppressed, when boxes are selected with a single threshold, the non-maximum suppression method uses a large threshold and a small threshold together to select the prediction boxes.
Preferably, training the network model comprises:
setting the number of iterations and the learning rate;
inputting the position loss function;
inputting the sample data set for training;
when 80% of the iterations are reached, outputting a validation accuracy and fine-tuning the parameters if the accuracy is below target; when 90% of the iterations are reached, outputting the validation accuracy and fine-tuning again; and finishing training when the iteration count reaches the set value.
The beneficial effects of the invention are: 1. it overcomes the error-proneness of manual counting and improves counting accuracy; 2. the circular prediction box fits the bar ends better, so the bars can be counted more reliably; 3. repeated network training yields fast, accurate prediction with strong adaptability and robustness.
Drawings
FIG. 1 is a flowchart of a method according to a first embodiment.
Detailed Description
The embodiments of the present invention are further described below by means of specific examples, in conjunction with the accompanying drawings.
The first embodiment is as follows:
A steel bar counting method based on target detection comprises the following steps:
acquiring steel bar sample images and preprocessing them to obtain a sample data set;
constructing a feature pyramid;
constructing predicted circular boxes and calculating a position loss function;
screening the predicted circular boxes with a multi-threshold non-maximum suppression (NMS) method, and training the network model.
Constructing circular prediction boxes allows the bars to be counted more reliably; training the network model repeatedly yields fast, accurate prediction with strong adaptability and robustness.
Image preprocessing includes image cropping, scaling, and brightness/contrast adjustment.
This series of preprocessing steps highlights the detection target and enlarges the data set.
Picture cropping: in part of the training set the target steel bars occupy a small area and non-bar regions a large one, so the effective area after the image enters the network is small, which harms model training. The steel bar pictures are therefore cropped to remove most of the non-target area, which effectively improves the training effect.
Scaling: because shooting distance and angle vary between photographs, the apparent bar diameter changes considerably. The steel bar pictures are therefore scaled at multiple scales, which improves the model's detection accuracy for multi-scale bars.
Brightness/contrast adjustment: the environments in which bar pictures are collected are complex, lighting conditions differ, and bright and shadowed regions vary widely. Brightness and contrast adjustment is therefore applied so that the detection method adapts to brightness and contrast changes under various conditions.
Constructing the feature pyramid comprises:
convolving the image multiple times, reducing its size at each stage to obtain the convolutional layers;
upsampling a convolutional layer to obtain a feature map and adding it to another convolved feature map, changing the number of channels of the feature map.
Repeated pooling, upsampling and convolution of the input pictures yields feature maps of several sizes, which are used to train the network model.
Because most of the steel bars to be detected are small targets, a feature pyramid module for small targets is designed that fuses only the high-resolution features. A shallow network attends more to detail while a deep network attends more to semantics; features with few downsampling steps have small receptive fields, and such large-resolution information suits small targets. Since the task detects many targets of a single class, the bars carry no strong semantic information, so a deep feature pyramid structure is unnecessary.
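The size bookkeeping behind such a shallow pyramid can be sketched as follows; the 416-pixel input, the toy feature maps, and the nearest-neighbour upsample-and-add are assumptions for illustration (real layers would be convolutions):

```python
def backbone_sizes(input_size, num_stages):
    """Spatial size of the feature map after each stride-2 stage."""
    sizes = []
    size = input_size
    for _ in range(num_stages):
        size //= 2
        sizes.append(size)
    return sizes

def upsample_add(fine, coarse):
    """2x nearest-neighbour upsample of the coarse map, added to the fine one."""
    up = [[coarse[r // 2][c // 2] for c in range(2 * len(coarse[0]))]
          for r in range(2 * len(coarse))]
    return [[f + u for f, u in zip(frow, urow)]
            for frow, urow in zip(fine, up)]

sizes = backbone_sizes(416, 5)            # [208, 104, 52, 26, 13]
high_res = sizes[:2]                      # only the large maps are fused
fine = [[1.0] * 4 for _ in range(4)]      # toy 4x4 high-resolution map
coarse = [[0.5] * 2 for _ in range(2)]    # toy 2x2 coarser map
merged = upsample_add(fine, coarse)       # 4x4 map, every entry 1.5
```

Fusing only the two finest maps keeps the small receptive fields that suit small bar ends while skipping the deep, semantically heavy levels the text says are unnecessary.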
Constructing the predicted circular box comprises:
constructing a circular anchor box from a rectangular box;
recording the offset of the grid cell from the top-left corner of the anchor box, the anchor radius, and the anchor center coordinates;
calculating the center coordinates and radius of the predicted circular box.
For this detection target, the intersection-over-union of two circles matches the bar ends far better than that of rectangular boxes, adapts better to the target, and guarantees the recognition rate of the circular shape, so the network model can recognize the steel bars accurately.
The intersection area of two circular bounding boxes is calculated as follows:
S_inter = αR² + βr² − R² sin α cos α − r² sin β cos β
By the law of cosines,
α = arccos((L² + R² − r²) / (2LR))
β = arccos((L² + r² − R²) / (2Lr))
and the intersection-over-union of the two circles is
IoU = S_inter / (πR² + πr² − S_inter)
where α is the angle between the line joining the two centers and the radius of the large circle drawn to an intersection point; β is the corresponding angle for the small circle; R is the large-circle radius; r is the small-circle radius; and L is the distance between the two circle centers.
The abscissa of the predicted circle center is calculated as
b_x = σ(t_x) + c_x
where σ is the sigmoid function, t_x is the network's predicted offset for the center abscissa, and c_x is the abscissa offset of the grid cell from the top-left corner;
the ordinate of the predicted circle center is calculated as
b_y = σ(t_y) + c_y
where t_y is the network's predicted offset for the center ordinate and c_y is the ordinate offset of the grid cell from the top-left corner;
and the predicted circle radius is calculated as
b_r = p_r · e^(t_r)
where t_r is the network's predicted radius offset and p_r is the anchor radius.
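Written out as code, decoding one predicted circle against its anchor looks as follows; the sigmoid for the center and the exponential for the radius follow the usual YOLO-style convention assumed here:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_circle(t_x, t_y, t_r, c_x, c_y, p_r):
    """Map raw network outputs (t_x, t_y, t_r) to an absolute circle."""
    b_x = sigmoid(t_x) + c_x      # centre abscissa stays inside the cell
    b_y = sigmoid(t_y) + c_y      # centre ordinate stays inside the cell
    b_r = p_r * math.exp(t_r)     # radius rescales the anchor radius
    return b_x, b_y, b_r

# Zero outputs reproduce the anchor radius at the cell centre.
b_x, b_y, b_r = decode_circle(0.0, 0.0, 0.0, 3.0, 4.0, 2.5)  # (3.5, 4.5, 2.5)
```

The sigmoid keeps each predicted center within its grid cell, so every cell is responsible only for circles whose centers fall inside it.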
Calculating the position loss function comprises:
constructing real (ground-truth) boxes, and recording each real box's radius and center coordinates;
recording the number of grid cells and the number of anchor boxes generated per cell;
setting a balance parameter;
calculating the position loss function.
The balance parameter is set to balance positive and negative samples.
Constructing the real box comprises:
selecting a target from the sample data set;
constructing the minimum circular box that encloses the target.
The position loss function is calculated as
L_loc = λ Σ_{i=0}^{S²−1} Σ_{j=0}^{B−1} I_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)² + (r_i − r̂_i)²]
where x_i and y_i are the center coordinates of the real box, r_i is its radius, x̂_i, ŷ_i and r̂_i are the corresponding predicted values, S² is the number of grid cells, B is the number of anchor boxes generated per cell, λ is the balance parameter, and I_{ij}^{obj} is 1 if the j-th anchor box of the i-th cell is responsible for the target and 0 otherwise.
The loss function changes as the circular prediction box and the real box change, and is used to train the network model.
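A minimal sketch of this position loss over a flattened list of anchor slots; the default balance parameter value of 5.0 is an assumption, not a value given in the text:

```python
def circle_position_loss(truths, preds, responsible, lam=5.0):
    """Sum of squared centre and radius errors over responsible anchors.

    truths, preds: (x, y, r) tuples, one per anchor slot; `responsible`
    plays the role of the indicator I_ij^obj.
    """
    loss = 0.0
    for (x, y, r), (xh, yh, rh), obj in zip(truths, preds, responsible):
        if obj:
            loss += (x - xh) ** 2 + (y - yh) ** 2 + (r - rh) ** 2
    return lam * loss

truths = [(1.0, 1.0, 0.5), (2.0, 2.0, 0.5)]
preds = [(1.0, 1.0, 0.5), (3.0, 2.0, 0.5)]
perfect = circle_position_loss(truths, truths, [True, True])    # 0.0
off_by_one = circle_position_loss(truths, preds, [True, True])  # 5.0
```

Only anchors marked responsible for a ground-truth circle contribute, which is what lets the balance parameter weight the comparatively rare positive samples.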
The non-maximum suppression method comprises:
setting upper and lower threshold limits;
outputting the specific class of each prediction box, predicting the position offsets, and outputting accurate target detection boxes.
Because many occluded steel bar bounding boxes cannot be correctly identified, and are wrongly suppressed, when boxes are selected with a single threshold, the non-maximum suppression method uses a large threshold and a small threshold together to select the prediction boxes.
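The text does not spell out exactly how the two thresholds cooperate, so the sketch below simply runs ordinary greedy NMS at the lower and at the upper threshold to show the effect the paragraph describes: a heavily occluded bar survives only the looser (upper) setting:

```python
import math

def circle_iou(c1, c2):
    """IoU of two circles (x, y, radius); special cases handled first."""
    (x1, y1, R), (x2, y2, r) = c1, c2
    L = math.hypot(x2 - x1, y2 - y1)
    if L >= R + r:
        return 0.0
    if L <= abs(R - r):
        inter = math.pi * min(R, r) ** 2
    else:
        a = math.acos((L * L + R * R - r * r) / (2 * L * R))
        b = math.acos((L * L + r * r - R * R) / (2 * L * r))
        inter = (a * R * R + b * r * r
                 - R * R * math.sin(a) * math.cos(a)
                 - r * r * math.sin(b) * math.cos(b))
    return inter / (math.pi * (R * R + r * r) - inter)

def nms_circles(detections, iou_threshold):
    """Greedy NMS: keep best-scoring circles, drop overlaps above threshold."""
    kept = []
    for circle, score in sorted(detections, key=lambda d: -d[1]):
        if all(circle_iou(circle, k) <= iou_threshold for k, _ in kept):
            kept.append((circle, score))
    return [c for c, _ in kept]

# Two nearly coincident bar ends (heavy occlusion) plus one distinct bar.
dets = [((0.0, 0.0, 1.0), 0.9), ((0.1, 0.0, 1.0), 0.8), ((5.0, 0.0, 1.0), 0.7)]
loose = nms_circles(dets, 0.9)   # upper threshold: all 3 circles survive
strict = nms_circles(dets, 0.3)  # lower threshold: duplicates merged, 2 left
```

Comparing the two outcomes per cluster is one plausible way a detector could trade duplicate suppression against occlusion tolerance.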
Training the network model comprises:
setting the number of iterations and the learning rate;
inputting the position loss function;
inputting the sample data set for training;
when 80% of the iterations are reached, outputting a validation accuracy and fine-tuning the parameters if the accuracy is below target; when 90% of the iterations are reached, outputting the validation accuracy and fine-tuning again; and finishing training when the iteration count reaches the set value.
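A runnable sketch of this schedule; the gradient step and the validation routine are stubs, and shrinking the learning rate is an assumed form of the "fine-tuning" the text mentions:

```python
def train(num_iters, lr, validate, target_acc, step):
    """Run `num_iters` updates; validate at 80% and 90% of the budget
    and fine-tune (here: decay the learning rate) if accuracy is low."""
    checkpoints = {int(0.8 * num_iters), int(0.9 * num_iters)}
    for it in range(1, num_iters + 1):
        step(lr)                           # one training update (stub)
        if it in checkpoints and validate() < target_acc:
            lr *= 0.1                      # parameter fine-tuning
    return lr

updates = []
final_lr = train(10, 1e-3, validate=lambda: 0.5, target_acc=0.9,
                 step=updates.append)      # decayed at both checkpoints
```

In a real run `step` would perform a gradient update against the position loss and `validate` would score a held-out set.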
The beneficial effects of the invention are: 1. it overcomes the error-proneness of manual counting and improves counting accuracy; 2. the circular prediction box fits the bar ends better, so the bars can be counted more reliably; 3. repeated network training yields fast, accurate prediction with strong adaptability and robustness.
The above embodiment is only a preferred embodiment of the invention and does not limit it in any way; other variations and modifications may be made without departing from the technical scope of the claims.

Claims (10)

1. A steel bar counting method based on target detection, characterized by comprising the following steps:
acquiring steel bar sample images and preprocessing them to obtain a sample data set;
constructing a feature pyramid;
constructing predicted circular boxes and calculating a position loss function;
screening the predicted circular boxes with a multi-threshold non-maximum suppression method, and training a network model.
2. The steel bar counting method based on target detection according to claim 1, characterized in that the image preprocessing comprises image cropping, scaling, and brightness/contrast adjustment.
3. The method according to claim 1 or 2, characterized in that constructing the feature pyramid comprises:
convolving the image multiple times, reducing its size to obtain convolutional layers;
upsampling a convolutional layer to obtain a feature map and adding it to another convolved feature map, changing the number of channels of the feature map.
4. The method according to claim 1, characterized in that constructing the predicted circular box comprises:
constructing a circular anchor box from a rectangular box;
recording the offset of the grid cell from the top-left corner of the anchor box, the anchor radius, and the anchor center coordinates;
calculating the center coordinates and radius of the predicted circular box.
5. The method according to claim 4, characterized in that the abscissa of the predicted circle center is calculated as
b_x = σ(t_x) + c_x
where σ is the sigmoid function, t_x is the network's predicted offset for the center abscissa, and c_x is the abscissa offset of the grid cell from the top-left corner;
the ordinate of the predicted circle center is calculated as
b_y = σ(t_y) + c_y
where t_y is the network's predicted offset for the center ordinate and c_y is the ordinate offset of the grid cell from the top-left corner;
and the predicted circle radius is calculated as
b_r = p_r · e^(t_r)
where t_r is the network's predicted radius offset and p_r is the anchor radius.
6. The method according to claim 5, characterized in that calculating the position loss function comprises:
constructing real boxes, and recording each real box's radius and center coordinates;
recording the number of grid cells and the number of anchor boxes generated per cell;
setting a balance parameter;
calculating the position loss function.
7. The method according to claim 6, characterized in that constructing the real box comprises:
selecting a target from the sample data set;
constructing the minimum circular box that encloses the target.
8. The method according to claim 6, characterized in that the position loss function is calculated as
L_loc = λ Σ_{i=0}^{S²−1} Σ_{j=0}^{B−1} I_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)² + (r_i − r̂_i)²]
where x_i and y_i are the center coordinates of the real box, r_i is its radius, x̂_i, ŷ_i and r̂_i are the corresponding predicted values, S² is the number of grid cells, B is the number of anchor boxes generated per cell, λ is the balance parameter, and I_{ij}^{obj} is 1 if the j-th anchor box of the i-th cell is responsible for the target and 0 otherwise.
9. The method according to claim 1 or 2, characterized in that the non-maximum suppression method comprises:
setting upper and lower threshold limits;
outputting the specific class of each prediction box, predicting the position offsets, and outputting accurate target detection boxes.
10. The method according to claim 8, characterized in that training the network model comprises:
setting the number of iterations and the learning-rate parameters;
inputting the position loss function;
inputting the sample data set for training;
outputting a validation accuracy when 80% of the iterations are reached and fine-tuning the parameters if the accuracy is below target; outputting the validation accuracy and fine-tuning again at 90% of the iterations; and finishing training when the iteration count reaches the set value.
CN202011550390.1A 2020-12-24 2020-12-24 Steel bar counting method based on target detection Pending CN112580542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011550390.1A CN112580542A (en) 2020-12-24 2020-12-24 Steel bar counting method based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011550390.1A CN112580542A (en) 2020-12-24 2020-12-24 Steel bar counting method based on target detection

Publications (1)

Publication Number Publication Date
CN112580542A true CN112580542A (en) 2021-03-30

Family

ID=75139491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011550390.1A Pending CN112580542A (en) 2020-12-24 2020-12-24 Steel bar counting method based on target detection

Country Status (1)

Country Link
CN (1) CN112580542A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821033A (en) * 2021-09-18 2021-12-21 鹏城实验室 Unmanned vehicle path planning method, system and terminal
CN113888513A (en) * 2021-09-30 2022-01-04 电子科技大学 Reinforcing steel bar detection counting method based on deep neural network model
CN114694032A (en) * 2022-06-02 2022-07-01 中建电子商务有限责任公司 Reinforcing steel bar counting processing method based on dense target detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173036A1 (en) * 2019-02-26 2020-09-03 博众精工科技股份有限公司 Localization method and system based on deep learning
CN111639740A (en) * 2020-05-09 2020-09-08 武汉工程大学 Steel bar counting method based on multi-scale convolution neural network
CN112001388A (en) * 2020-10-29 2020-11-27 南京大量数控科技有限公司 Method for detecting circular target in PCB based on YOLOv3 improved model


Similar Documents

Publication Publication Date Title
CN112580542A (en) Steel bar counting method based on target detection
CN113469177B (en) Deep learning-based drainage pipeline defect detection method and system
CN112084869B (en) Compact quadrilateral representation-based building target detection method
CN101807352B (en) Method for detecting parking stalls on basis of fuzzy pattern recognition
CN113822247B (en) Method and system for identifying illegal building based on aerial image
CN111145174A (en) 3D target detection method for point cloud screening based on image semantic features
CN110796048A (en) Ship target real-time detection method based on deep neural network
CN109948415A (en) Remote sensing image object detection method based on filtering background and scale prediction
CN111753682B (en) Hoisting area dynamic monitoring method based on target detection algorithm
WO2018000252A1 (en) Oceanic background modelling and restraining method and system for high-resolution remote sensing oceanic image
CN114973002A (en) Improved YOLOv 5-based ear detection method
CN109389105B (en) Multitask-based iris detection and visual angle classification method
CN115564771A (en) Concrete crack identification method based on building foundation column
CN110674674A (en) Rotary target detection method based on YOLO V3
CN110852164A (en) YOLOv 3-based method and system for automatically detecting illegal building
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
CN117495735B (en) Automatic building elevation texture repairing method and system based on structure guidance
CN106600613A (en) Embedded GPU-based improved LBP infrared target detection method
CN107748361A (en) Based on the two-parameter CFAR detection methods of SAR image for blocking clutter statistics
CN112036404B (en) Marine ship target detection method and system
CN111627018B (en) Steel plate surface defect classification method based on double-flow neural network model
CN110826364A (en) Stock position identification method and device
CN114882375A (en) Intelligent identification method and device for tailing pond
CN114927236A (en) Detection method and system for multiple target images
CN114821165A (en) Track detection image acquisition and analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination