CN115222652A - Method for identifying, counting and centering end faces of bundled steel bars and memory thereof - Google Patents

Method for identifying, counting and centering end faces of bundled steel bars and memory thereof

Info

Publication number
CN115222652A
Authority
CN
China
Prior art keywords
image
frame
loss
counting
preset algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210478695.9A
Other languages
Chinese (zh)
Inventor
黄思博
邱嘉伟
黄剑锋
崔晗
魏晓慧
蔡昭权
罗中良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou University
Original Assignee
Huizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou University
Priority to CN202210478695.9A
Publication of CN115222652A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Abstract

The invention relates to the technical field of machine vision, and in particular to a method for identifying, counting and centering the end faces of bundled reinforcing steel bars, and a memory thereof. The method comprises: S1, shooting an image of the steel bar end faces and processing it to obtain an image to be recognized; S2, performing a data enhancement operation on the image to be recognized using a first preset algorithm; S3, forming final detection frames in the image to be recognized using a second preset algorithm with a lightweight convolutional neural network, and calculating the number of final detection frames; and S4, generating a counting result. The invention addresses the problem that existing steel bar end face recognition, which generally relies on common machine vision algorithms, produces inaccurate counting results that cannot meet practical requirements.

Description

Method for identifying, counting and centering end faces of bundled steel bars and memory thereof
Technical Field
The invention relates to the technical field of machine vision, in particular to a method for identifying, counting and centering end faces of bundled reinforcing steel bars and a memory thereof.
Background
Machine vision uses machines in place of human eyes for measurement and judgment. A machine vision system converts a photographed target into an image signal through its vision hardware and transmits that signal to a dedicated image processing system, which derives the target's morphological information and converts it into digital signals according to pixel distribution, brightness, color and similar information; the image system then performs various computations on these signals to extract the target's features, and the on-site equipment is controlled according to the discrimination result.
The YOLOv3 algorithm can detect two similar objects, or objects of different classes, that lie close together, and it is robust to closely spaced and small objects.
Existing steel bar end face recognition is usually performed with common machine vision algorithms. Because such algorithms cannot obtain accurate results, machine vision applied to steel bar end face counting has not produced accurate counts and cannot meet practical requirements; hence this method for identifying, counting and centering the end faces of bundled steel bars, and its memory.
Disclosure of Invention
The invention aims to provide a method for identifying, counting and centering the end faces of bundled reinforcing steel bars, and a memory thereof, mainly solving the problem that existing steel bar end face recognition, which generally relies on common machine vision algorithms, produces inaccurate counting results that cannot meet practical requirements.
The invention provides a method for identifying, counting and centering end faces of bundled reinforcing steel bars, which comprises the following steps:
S1, shooting an image of the steel bar end faces and processing it to obtain an image to be recognized;
S2, performing a data enhancement operation on the image to be recognized using a first preset algorithm;
S3, forming final detection frames in the image to be recognized using a second preset algorithm with a lightweight convolutional neural network, and calculating the number of final detection frames;
and S4, generating a counting result.
Preferably, the step S3 specifically includes:
S31, pre-forming a second preset algorithm with a lightweight convolutional neural network;
and S32, forming final detection frames in the image to be recognized using the second preset algorithm, and calculating the number of final detection frames.
The second preset algorithm in step S31 specifically improves the backbone feature extraction network by replacing the Darknet53 backbone feature extraction network of the original YOLOv3 network with a ShuffleNetV2 backbone feature extraction network.
Preferably, the step S31 specifically includes:
S311, clustering the training images to form anchor frames;
S312, dividing the training image to form a number of small blocks;
S313, generating several rectangular frames in each small block, the length and width of each rectangular frame being determined by the anchor frames;
S314, fine-tuning the several rectangular frames within the same small block to form primary detection frames;
S315, judging whether any small block contains a target detection object; if so, calculating the IOU values between the several primary detection frames in the small block and the real frame of the training image, and if all of these IOU values exceed a set threshold, selecting the primary detection frame with the largest IOU value as a positive sample;
and S316, generating the second preset algorithm with the lightweight convolutional neural network after the frame shape of the positive sample is saved.
Preferably, the step S314 specifically includes:
S314a, acquiring several parameter values of the anchor frame;
and S314b, adjusting the rectangular frame according to the acquired parameter values to form a primary detection frame;
wherein the parameter values in step S314a comprise four coordinate values, respectively denoted t_x, t_y, t_w and t_h, together with the offset (c_x, c_y) of the anchor frame relative to the training image;
In the step S314b, the adjustment of the rectangular frame includes the following formula,
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^{t_w}
b_h = p_h · e^{t_h}
σ(t) = 1 / (1 + e^{−t})
Note that the width of the rectangular frame is p_w and its height is p_h; the real (ground-truth) values of the rectangular frame's coordinates are denoted t̂_*, and the preset coordinate values are denoted t_*.
Preferably, after step S315 and before step S316, the following step is provided:
SX, performing a loss calculation using the following formulas,
loss_box = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · W_box · [(σ(t_x) − t̂_x)² + (σ(t_y) − t̂_y)² + (t_w − t̂_w)² + (t_h − t̂_h)²]
loss_conf = −Σ_{i=0}^{S²} Σ_{j=0}^{B} [1_{ij}^{obj} · log C_{ij} + (1 − 1_{ij}^{obj}) · log(1 − C_{ij})]
loss_class = −Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · Σ_{c ∈ classes} [p̂_{ij}(c) · log p_{ij}(c) + (1 − p̂_{ij}(c)) · log(1 − p_{ij}(c))]
W_box = 2.0 − t_w · t_h
Loss = loss_box + loss_conf + loss_class
where S² indicates that the training image is divided into an S×S grid, B denotes the boxes, and the indicator 1_{ij}^{obj} takes the value 1 if the box at coordinate [i, j] contains a target and 0 otherwise. Three loss terms are computed: the loss between the predicted box and the real box over the center coordinates, width and height; the confidence loss, i.e., whether the prediction box contains a detection object; and the prediction class loss. These three losses are summed to give the loss value of one level, and in the final loss calculation the average of the three levels' loss values is taken as the final loss.
Preferably, in step S1, the processing that produces the image to be recognized comprises random image flipping, random scaling, random cropping, and random brightness adjustment.
Preferably, in step S2, the data enhancement operation performed on the image to be recognized by the first preset algorithm comprises FMix enhancement mixing with a data set such as dirty data or texture data.
The invention also proposes a computer-readable memory comprising a stored computer program, wherein, when executed, the computer program controls a device in which the computer-readable memory is located to perform the method described above.
From the above, the following beneficial effects can be obtained by applying the technical scheme provided by the invention:
in the method provided by the invention, through model training, the second preset algorithm formed with the lightweight convolutional neural network can be used in a targeted manner to form the detection frames, ensuring the accuracy of the counting result.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, shall fall within the scope of protection of the present invention.
Existing steel bar end face recognition is generally performed with common machine vision algorithms, so the counting results are inaccurate and cannot meet practical requirements.
It should be emphasized that the counting and centering method proposed in this embodiment applies not only to steel bar end faces but also to counting tasks with different backgrounds and recognition objects.
In order to solve the above problems, the present embodiment provides a method for identifying, counting and centering end faces of bundled steel bars, which mainly includes the following steps:
S1, shooting an image of the steel bar end faces and processing it to obtain an image to be recognized;
S2, performing a data enhancement operation on the image to be recognized using a first preset algorithm;
S3, forming final detection frames in the image to be recognized using a second preset algorithm with a lightweight convolutional neural network, and calculating the number of final detection frames;
and S4, generating a counting result.
Preferably but not restrictively, in this embodiment the processing performed on the captured steel bar end face image in step S1 comprises random image flipping, random scaling, random cropping, and random brightness adjustment, which keeps the edges of the many bar cross-sections in the end face image clear and benefits accurate counting.
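As a minimal sketch of this preprocessing stage (assuming a PyTorch/torchvision pipeline, which the patent does not specify, and an illustrative 416×416 input size), the four random operations could be composed as follows:

```python
import torchvision.transforms as T

# Hypothetical preprocessing pipeline covering the four operations named
# above: random flipping, random scaling, random cropping, random brightness.
preprocess = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                      # random flipping
    T.RandomAffine(degrees=0, scale=(0.8, 1.2)),        # random scaling
    T.RandomCrop(size=(416, 416), pad_if_needed=True),  # random cropping
    T.ColorJitter(brightness=0.3),                      # random brightness
    T.ToTensor(),                                       # HWC uint8 -> CHW float in [0, 1]
])
```

Since this is a detection task, the bounding-box annotations must be transformed alongside the image (the embodiment notes this below when discussing the VOC labels), so in practice a detection-aware augmentation library would be used rather than image-only transforms.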
Preferably but not restrictively, in this embodiment the first preset algorithm in step S2 performs a data enhancement operation on the image to be recognized, comprising FMix enhancement mixing with a data set such as dirty data or texture data. The mixing function is the FMix function, which is realized as follows: 1) randomly extract a picture from the dirty data set; 2) threshold a low-frequency grayscale image sampled from Fourier space to obtain a mask; 3) mask-mix the image randomly acquired in the first step with the mask obtained in the second step. The Fourier transform and mask blending functions comprise:
Z[i, j] = N(0, 1) + i · N(0, 1)
G = Re(F^{−1}(Z[i, j] / freq(i, j)^λ))
mask[i, j] = 1 if G[i, j] lies in the top proportion of values, 0 otherwise
That is: first, a random complex tensor is sampled whose real and imaginary parts are independent and Gaussian-distributed; each component is then scaled according to its frequency by a parameter λ, so that a higher λ attenuates high-frequency information more strongly; the complex tensor is passed through an inverse Fourier transform, and taking the real part yields a grayscale image; finally, the image is turned into a binary mask by a set threshold (a top proportion of the image), values above the threshold being set to 1 and values below it to 0.
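A NumPy sketch of this mask-generation procedure is given below; the function name and the `decay`/`top_prop` parameters are illustrative rather than taken from the patent, with `decay` playing the role of λ and `top_prop` the thresholded top proportion:

```python
import numpy as np

def fmix_mask(h, w, decay=3.0, top_prop=0.5):
    """Sample a binary FMix-style mask: low-pass-filtered Gaussian noise,
    thresholded so that roughly `top_prop` of the pixels are set to 1."""
    # 1) random complex tensor, real and imaginary parts i.i.d. Gaussian
    z = np.random.randn(h, w) + 1j * np.random.randn(h, w)
    # 2) attenuate each component according to its frequency (a larger
    #    decay suppresses high-frequency information more strongly)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    freq = np.sqrt(fy ** 2 + fx ** 2)
    freq[0, 0] = 1.0 / max(h, w)          # avoid division by zero at DC
    z /= freq ** decay
    # 3) inverse Fourier transform; the real part is a grayscale image
    gray = np.real(np.fft.ifft2(z))
    # 4) binarize: values in the top proportion become 1, the rest 0
    thresh = np.quantile(gray, 1.0 - top_prop)
    return (gray > thresh).astype(np.float32)

# Mask-mix a training image with a randomly drawn dirty-data image:
# mask = fmix_mask(416, 416)[..., None]
# mixed = mask * train_img + (1.0 - mask) * dirty_img
```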
More specifically, the step S3 specifically includes:
S31, pre-forming a second preset algorithm with a lightweight convolutional neural network;
and S32, forming final detection frames in the image to be recognized using the second preset algorithm, and calculating the number of final detection frames.
The second preset algorithm in step S31 specifically improves the backbone feature extraction network by replacing the Darknet53 backbone feature extraction network of the original YOLOv3 network with a ShuffleNetV2 backbone feature extraction network.
Preferably, the network in step S31 is improved on the basis of YOLOv3, mainly in the backbone feature extraction network. Channel split is introduced so that the numbers of input and output channels of the network are equal, reducing the memory access cost of the model. Pointwise group convolution, i.e., grouped convolution with 1×1 kernels, is introduced to reduce the amount of computation: pointwise convolution reduces the parameter count, and grouping the pointwise convolution reduces computation further. To counter the loss of network parallelism caused by excessive grouping, channel shuffle is introduced; its role is to enrich the information available to each group so that more features can be extracted.
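To make the channel split / pointwise convolution / channel shuffle mechanics concrete, here is a sketch of a ShuffleNetV2-style basic unit in PyTorch (a simplification under stated assumptions: the patent gives no layer-level details, and in the standard ShuffleNetV2 design the 1×1 convolutions inside a branch are ungrouped, the grouping effect coming from the channel split plus the shuffle):

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # Re-interleave channels so information mixes across groups,
    # enriching what each group sees at the next layer.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class ShuffleUnit(nn.Module):
    """ShuffleNetV2 basic unit (stride 1). The channel split keeps the
    input and output channel counts equal, lowering memory access cost."""
    def __init__(self, channels):
        super().__init__()
        branch = channels // 2
        self.branch2 = nn.Sequential(
            nn.Conv2d(branch, branch, 1, bias=False),   # pointwise conv
            nn.BatchNorm2d(branch), nn.ReLU(inplace=True),
            nn.Conv2d(branch, branch, 3, padding=1,
                      groups=branch, bias=False),       # depthwise conv
            nn.BatchNorm2d(branch),
            nn.Conv2d(branch, branch, 1, bias=False),   # pointwise conv
            nn.BatchNorm2d(branch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)                      # channel split
        out = torch.cat((x1, self.branch2(x2)), dim=1)
        return channel_shuffle(out, groups=2)
```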
preferably, step S31 specifically includes:
S311, clustering the training images to form anchor frames;
S312, dividing the training image to form a number of small blocks;
S313, generating several rectangular frames in each small block, the length and width of each rectangular frame being determined by the anchor frames;
S314, fine-tuning the several rectangular frames within the same small block to form primary detection frames;
S315, judging whether any small block contains a target detection object; if so, calculating the IOU values between the several primary detection frames in the current small block and the real frame of the training image, and if all of these IOU values exceed a set threshold, selecting the primary detection frame with the largest IOU value as a positive sample;
and S316, generating the second preset algorithm with the lightweight convolutional neural network after the frame shape of the positive sample is saved.
Preferably but not restrictively, this embodiment provides, before S3, a step of converting the preset data into a format that the convolutional network can process. In this embodiment the original data set is presumed to be annotated in VOC format. In step S1, when the data is enhanced and the image size changes, the image's annotation coordinates change accordingly; before the data is fed into the convolutional neural network, the image data and the annotation data must each be processed. The image data is normalized as a whole and converted to NCHW, i.e., the (number, channels, height, width) format. The annotation data is first clustered with k-means, generating 9 cluster centers that serve as the lengths and widths of the anchor frames, thereby obtaining the anchor frames used in training.
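A sketch of the k-means anchor clustering step, using the customary 1 − IOU distance over label widths and heights (the patent does not state the distance metric; this is the standard choice in YOLO practice, and the function name is illustrative):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100):
    """Cluster (width, height) pairs from the annotations into k anchors."""
    def iou(boxes, anchors):
        # IOU between boxes and anchors assuming a shared top-left corner.
        inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
                 np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
        union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
                 (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
        return inter / union

    wh = wh.astype(np.float64)
    anchors = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou(wh, anchors), axis=1)    # highest IOU wins
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)
    # sort by area so the 9 anchors split into three scale groups of three
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# wh = np.array([[w1, h1], [w2, h2], ...])   # box sizes from the VOC labels
# anchors = kmeans_anchors(wh, k=9)
```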
Preferably but not restrictively, in this embodiment the small blocks in step S312 may form an N×N grid, and three rectangular frames are selected in step S313, all of different shapes; correspondingly, the anchor frame sizes are divided, from small to large, into three groups according to the network's downsampling magnification, each group containing three rectangular frames ordered from small to large.
In this embodiment, when the IOU value of a primary detection frame in step S315 is smaller than the set threshold, the overlap between that frame and the real frame is judged to be low; and if the small block contains no object, it is set as a negative sample.
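Under these rules, the positive/negative assignment of step S315 might look like the following sketch (the helper name and the 0.5 threshold are illustrative; the all-exceed condition follows the embodiment's wording):

```python
import numpy as np

def assign_sample(pred_boxes, gt_box, iou_thresh=0.5):
    """Pick the positive sample among one small block's primary frames.

    pred_boxes: (B, 4) primary detection frames, (x1, y1, x2, y2)
    gt_box:     (4,)   real frame of the training image, (x1, y1, x2, y2)
    Returns the index of the positive frame, or -1 for a negative sample.
    """
    x1 = np.maximum(pred_boxes[:, 0], gt_box[0])
    y1 = np.maximum(pred_boxes[:, 1], gt_box[1])
    x2 = np.minimum(pred_boxes[:, 2], gt_box[2])
    y2 = np.minimum(pred_boxes[:, 3], gt_box[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = ((pred_boxes[:, 2] - pred_boxes[:, 0]) *
              (pred_boxes[:, 3] - pred_boxes[:, 1]))
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    ious = inter / (area_p + area_g - inter)
    if np.all(ious > iou_thresh):       # every frame clears the threshold
        return int(np.argmax(ious))     # largest-IOU frame becomes positive
    return -1                           # otherwise treated as negative
```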
More specifically, step S314 specifically includes:
S314a, acquiring several parameter values of the anchor frame;
and S314b, adjusting the rectangular frame according to the acquired parameter values to form a primary detection frame;
wherein the parameter values in step S314a comprise four coordinate values, respectively denoted t_x, t_y, t_w and t_h, together with the offset (c_x, c_y) of the anchor frame relative to the training image;
In step S314b, the adjustment of the rectangular frame includes the following formula,
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^{t_w}
b_h = p_h · e^{t_h}
σ(t) = 1 / (1 + e^{−t})
Note that the width of the current rectangular frame is p_w and its height is p_h; the real (ground-truth) values of the rectangular frame's coordinates are denoted t̂_*, and the preset coordinate values are denoted t_*.
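A NumPy sketch of this decoding step follows directly from the formulas above (the grid offset c_x, c_y and anchor size p_w, p_h are as defined; the function names are illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_box(t, c_xy, p_wh):
    """Fine-tune one anchor-shaped rectangular frame into a primary
    detection frame, following b_x = sigma(t_x) + c_x and friends.

    t:    (4,) raw predictions (t_x, t_y, t_w, t_h)
    c_xy: (2,) offset of the grid cell (c_x, c_y)
    p_wh: (2,) anchor width and height (p_w, p_h)
    """
    bx = sigmoid(t[0]) + c_xy[0]    # box center x, in grid units
    by = sigmoid(t[1]) + c_xy[1]    # box center y, in grid units
    bw = p_wh[0] * np.exp(t[2])     # width rescales the anchor width
    bh = p_wh[1] * np.exp(t[3])     # height rescales the anchor height
    return np.array([bx, by, bw, bh])
```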
More specifically, after step S315 and before step S316, the following step is provided:
SX, performing a loss calculation using the following formulas,
loss_box = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · W_box · [(σ(t_x) − t̂_x)² + (σ(t_y) − t̂_y)² + (t_w − t̂_w)² + (t_h − t̂_h)²]
loss_conf = −Σ_{i=0}^{S²} Σ_{j=0}^{B} [1_{ij}^{obj} · log C_{ij} + (1 − 1_{ij}^{obj}) · log(1 − C_{ij})]
loss_class = −Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · Σ_{c ∈ classes} [p̂_{ij}(c) · log p_{ij}(c) + (1 − p̂_{ij}(c)) · log(1 − p_{ij}(c))]
W_box = 2.0 − t_w · t_h
Loss = loss_box + loss_conf + loss_class
where S² indicates that the training image is divided into an S×S grid, B denotes the boxes, and the indicator 1_{ij}^{obj} takes the value 1 if the box at coordinate [i, j] contains a target and 0 otherwise. Three loss terms are computed: the loss between the predicted box and the real box over the center coordinates, width and height; the confidence loss, i.e., whether the prediction box contains a detection object; and the prediction class loss. These three losses are summed to give the loss value of one level, and in the final loss calculation the average of the three levels' loss values is taken as the final loss.
In this embodiment, the coordinate values obtained from the several parameter values can be used to calibrate the coordinates of the steel bars inside the current detection frame. The preset coordinate value is a verified coordinate value obtained by manually inspecting the steel bar end face image or by checking it in some other way. The difference between the two can be used to judge whether the deviation of the current primary detection frame lies within an allowable range: if so, the frame can be selected as the detection standard; if not, it is discarded.
It should be emphasized that a memory incorporating the foregoing method also falls within the scope of the present embodiment.
In summary, this embodiment provides a method for identifying, counting and centering the end faces of bundled steel bars. It obtains the most suitable detection frames mainly through iterative screening of multiple detection frames at different scales, and determines the number of steel bar end faces in the current image from the number of detection frames identified in it, making the count more accurate.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the above-described embodiments should be included in the protection scope of the technical solution.

Claims (8)

1. A method for identifying, counting and centering the end faces of bundled reinforcing steel bars, characterized by comprising the following steps:
S1, shooting an image of the steel bar end faces and processing it to obtain an image to be recognized;
S2, performing a data enhancement operation on the image to be recognized using a first preset algorithm;
S3, forming final detection frames in the image to be recognized using a second preset algorithm with a lightweight convolutional neural network, and calculating the number of final detection frames;
and S4, generating a counting result.
2. The method for identifying, counting and centering end faces of bundles of steel bars according to claim 1, wherein the step S3 specifically comprises:
S31, pre-forming a second preset algorithm with a lightweight convolutional neural network;
and S32, forming final detection frames in the image to be recognized using the second preset algorithm, and calculating the number of final detection frames;
the second preset algorithm in step S31 is specifically to replace the Darknet53 backbone feature extraction network in the YoloV3 original network with the Shfflenetv2 backbone feature extraction network by improving the backbone feature extraction network.
3. The method as claimed in claim 2, wherein the step S31 specifically includes:
S311, clustering the training images to form anchor frames;
S312, dividing the training image to form a number of small blocks;
S313, generating several rectangular frames in each small block, the length and width of each rectangular frame being determined by the anchor frames;
S314, fine-tuning the several rectangular frames within the same small block to form primary detection frames;
S315, judging whether any small block contains a target detection object; if so, calculating the IOU values between the several primary detection frames in the small block and the real frame of the training image, and if all of these IOU values exceed a set threshold, selecting the primary detection frame with the largest IOU value as a positive sample;
and S316, generating the second preset algorithm with the lightweight convolutional neural network after the frame shape of the positive sample is saved.
4. The method as claimed in claim 3, wherein the step S314 specifically includes:
S314a, acquiring several parameter values of the anchor frame;
and S314b, adjusting the rectangular frame according to the acquired parameter values to form a primary detection frame;
wherein the parameter values in step S314a comprise four coordinate values, respectively denoted t_x, t_y, t_w and t_h, together with the offset (c_x, c_y) of the anchor frame relative to the training image;
In step S314b, the adjustment of the rectangular frame includes the following formula,
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^{t_w}
b_h = p_h · e^{t_h}
σ(t) = 1 / (1 + e^{−t})
note that the width of the rectangular frame is p_w and its height is p_h; the real (ground-truth) values of the rectangular frame's coordinates are denoted t̂_*, and the preset coordinate values are denoted t_*.
5. The method for identifying, counting and centering the end faces of bundled steel bars according to claim 4, characterized in that after step S315 and before step S316 the following step is provided:
SX, performing a loss calculation using the following formulas,
loss_box = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · W_box · [(σ(t_x) − t̂_x)² + (σ(t_y) − t̂_y)² + (t_w − t̂_w)² + (t_h − t̂_h)²]
loss_conf = −Σ_{i=0}^{S²} Σ_{j=0}^{B} [1_{ij}^{obj} · log C_{ij} + (1 − 1_{ij}^{obj}) · log(1 − C_{ij})]
loss_class = −Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · Σ_{c ∈ classes} [p̂_{ij}(c) · log p_{ij}(c) + (1 − p̂_{ij}(c)) · log(1 − p_{ij}(c))]
W_box = 2.0 − t_w · t_h
Loss = loss_box + loss_conf + loss_class
where S² indicates that the training image is divided into an S×S grid, B denotes the boxes, and the indicator 1_{ij}^{obj} takes the value 1 if the box at coordinate [i, j] contains a target and 0 otherwise. Three loss terms are computed: the loss between the predicted box and the real box over the center coordinates, width and height; the confidence loss, i.e., whether the prediction box contains a detection object; and the prediction class loss. These three losses are summed to give the loss value of one level, and in the final loss calculation the average of the three levels' loss values is taken as the final loss.
6. The method for identifying, counting and centering the end faces of bundled steel bars according to any one of claims 1 to 5, characterized in that:
in step S1, the processing that produces the image to be recognized comprises random image flipping, random scaling, random cropping, and random brightness adjustment.
7. The method for identifying, counting and centering the end faces of bundled steel bars according to any one of claims 1 to 5, characterized in that:
in step S2, the data enhancement operation performed on the image to be recognized by the first preset algorithm comprises FMix enhancement mixing with a dirty or texture data set.
8. A computer-readable memory, characterized in that: the computer-readable memory comprises a stored computer program, wherein the computer program when executed controls an apparatus in which the computer-readable memory is located to perform the method of any of claims 1-7.
CN202210478695.9A 2022-05-05 2022-05-05 Method for identifying, counting and centering end faces of bundled steel bars and memory thereof Pending CN115222652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210478695.9A CN115222652A (en) 2022-05-05 2022-05-05 Method for identifying, counting and centering end faces of bundled steel bars and memory thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210478695.9A CN115222652A (en) 2022-05-05 2022-05-05 Method for identifying, counting and centering end faces of bundled steel bars and memory thereof

Publications (1)

Publication Number Publication Date
CN115222652A true CN115222652A (en) 2022-10-21

Family

ID=83608616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210478695.9A Pending CN115222652A (en) 2022-05-05 2022-05-05 Method for identifying, counting and centering end faces of bundled steel bars and memory thereof

Country Status (1)

Country Link
CN (1) CN115222652A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546221A (en) * 2022-12-05 2022-12-30 广东广物互联网科技有限公司 Method, device and equipment for counting reinforcing steel bars and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination