CN114998269A - Reinforcing steel bar binding control system and binding positioning identification method - Google Patents

Reinforcing steel bar binding control system and binding positioning identification method

Info

Publication number
CN114998269A
CN114998269A (application CN202210643581.5A)
Authority
CN
China
Prior art keywords
module
bound
image
steel bar
bounding box
Prior art date
Legal status
Pending
Application number
CN202210643581.5A
Other languages
Chinese (zh)
Inventor
凤若成
王怀东
王诗宇
马仲举
贾有权
王露鸣
李志明
王启迪
曹继伟
Current Assignee
China Railway No 9 Group Co Ltd
Original Assignee
China Railway No 9 Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China Railway No 9 Group Co Ltd
Priority to CN202210643581.5A
Publication of CN114998269A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04G SCAFFOLDING; FORMS; SHUTTERING; BUILDING IMPLEMENTS OR AIDS, OR THEIR USE; HANDLING BUILDING MATERIALS ON THE SITE; REPAIRING, BREAKING-UP OR OTHER WORK ON EXISTING BUILDINGS
    • E04G21/00 Preparing, conveying, or working-up building materials or building elements in situ; Other devices or measures for constructional work
    • E04G21/12 Mounting of reinforcing inserts; Prestressing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30136 Metal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a steel bar binding control system, comprising: an image acquisition device for acquiring a depth image and an RGB image of the area where the steel bars to be bound are located; a central processing unit comprising an image preprocessing module and a to-be-bound position identification module, wherein the image preprocessing module replaces the background information and the bottom-layer steel bar information in the RGB image with a single color according to the spatial depth data of the depth image to obtain a preprocessed RGB image; and an actuator driver for controlling an actuator to bind the steel bars at the positions to be bound. The system accurately identifies the positions to be bound and controls the actuator to complete high-quality binding.

Description

Reinforcing steel bar binding control system and binding positioning identification method
Technical Field
The invention relates to the technical field of steel bar binding control, in particular to a steel bar binding control system and a binding positioning identification method.
Background
In the construction industry, reinforcing bars act as the skeleton of a building, supporting and tying the structure together. Before concrete is poured, large numbers of bars must be fixed in sequence according to the engineering drawings so that the bar framework is reliably secured. With the development of construction industrialization and prefabricated buildings, the traditional way of processing PC-component reinforcement, which mainly relies on workers ruling lines on paper on the ground and binding by hand, is inefficient; laminated slabs of different models must be repeatedly re-marked, the error rate is high, a high level of professional skill is required of the workers, and the next process requires the steel bars to be inspected and rechecked, so defective products are very easily produced.
Therefore, a steel bar binding control system is needed to realize mechanized control, identification, and positioning, and thereby complete the binding action.
Disclosure of Invention
Technical problem to be solved
In view of the above problems in the art, the present invention at least partially addresses them. Accordingly, a first object of the invention is to provide a steel bar binding control system that facilitates accurate identification of the positions to be bound, so that an actuator can be controlled to complete a high-quality binding operation.
A second object of the invention is to provide a binding positioning and identification method.
(II) Technical solution
In order to achieve the above object, the present invention provides a reinforcement bar binding control system comprising:
the image acquisition equipment is used for acquiring a depth image and an RGB image of an area where the steel bar to be bound is located;
the central processing unit comprises an image preprocessing module and a to-be-bound position identification module;
the image preprocessing module is used for removing the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and filling the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image;
the to-be-bound position identification module is used for inputting the preprocessed RGB image into a pre-trained to-be-bound position identification model and outputting bounding boxes of the intersections to be bound, processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges of the steel bars in each bounding box as a position to be bound;
and an actuator driver for controlling the actuator to bind the steel bars at the position to be bound.
Optionally, the image capture device is an RGB-D camera.
Optionally, the to-be-bound position identification module is configured to input the preprocessed RGB image into a pre-trained Ghost-YOLOX network model and output the bounding boxes of the intersections to be bound; the construction of the Ghost-YOLOX network model comprises: replacing the convolution in the CBL module of the YOLOX network with a Ghost module.
Optionally, the construction of the Ghost-YOLOX network model further comprises: introducing a channel-attention-based SE module into the backbone feature extraction network of the YOLOX network, inputting the feature layers extracted by the backbone feature extraction network into the SE module, and performing feature fusion according to the output of the SE module.
Optionally, the step of the to-be-bound position identification module processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box comprises: setting a plurality of horizontal lines and a plurality of vertical lines in the bounding box region, the horizontal lines serving as a first group of Line ROIs and the vertical lines as a second group of Line ROIs; and extracting the image pixel values along each straight line in the Line ROIs and computing the position of the largest pixel-value change on each line as an edge feature point of the steel bar.
Optionally, the to-be-bound position identification module is further configured to remove outliers from the steel bar edge feature points using the RANSAC algorithm through iterative optimization, an outlier being an edge feature point at a noise position; correspondingly, the module fits the straight-line edges of the steel bars from the remaining edge feature points, and the intersection point of the straight-line edges of the steel bars in each bounding box is taken as the position to be bound.
In a second aspect, the invention provides a binding positioning and identification method, comprising the following steps:
S1, acquiring a depth image and an RGB image of the area where the steel bars to be bound are located;
S2, removing the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and filling the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image;
S3, inputting the preprocessed RGB image into a pre-trained to-be-bound position identification model, outputting bounding boxes of the intersections to be bound, processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges of the steel bars in each bounding box as a position to be bound.
As an improvement of the method of the invention, the to-be-bound position identification model is a Ghost-YOLOX network model; the construction of the Ghost-YOLOX network model comprises: replacing the convolution in the CBL module of the YOLOX network with a Ghost module.
As an improvement of the method of the invention, the construction of the Ghost-YOLOX network model further comprises: introducing a channel-attention-based SE module into the Backbone feature extraction network of the YOLOX network, inputting the feature layers extracted by the backbone feature extraction network into the SE module, and performing feature fusion according to the output of the SE module.
As an improvement of the method of the invention, processing each bounding box based on the local feature point method to obtain the edge feature points of each steel bar in the bounding box comprises: setting a plurality of horizontal lines and a plurality of vertical lines in the bounding box region, the horizontal lines serving as a first group of Line ROIs and the vertical lines as a second group of Line ROIs; and extracting the image pixel values along each straight line in the Line ROIs and computing the position of the largest pixel-value change on each line as an edge feature point of the steel bar.
(III) Advantageous effects
The invention has the beneficial effects that:
according to the reinforcement bar binding control system, the depth image and the RGB image of the area where the reinforcement bar to be bound is located are obtained, and the RGB image is preprocessed according to the depth image, so that the contrast ratio of the reinforcement bar to be identified on the top layer of the RGB image and the background is increased, and the accurate identification of the position to be bound is facilitated; and the rough position of each non-binding intersection point is extracted through the identification model of the position to be bound, and then the non-binding intersection points are identified in the rough position of each non-binding intersection point based on the method of the local characteristic points, so that the accurate coordinates of the position to be bound can be obtained.
Drawings
The invention is described with the aid of the following figures:
FIG. 1 is a schematic structural view of a reinforcement bar binding control system according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a Ghost module according to an embodiment of the present invention;
FIG. 3 is a diagram of the two residual structures in the GBL module according to an embodiment of the invention, wherein BN denotes batch normalization, SiLU denotes the SiLU activation function, Add denotes element-wise summation, and DWConv denotes depthwise separable convolution;
FIG. 4 is a schematic diagram of the Ghost-YOLOX network according to an embodiment of the invention, wherein Input denotes the input, Focus denotes the image slicing operation, SPP denotes spatial pyramid pooling, Upsample denotes image upsampling, and Concat denotes feature concatenation;
FIG. 5 is a schematic block diagram of a SE module according to an embodiment of the present invention;
fig. 6 is a schematic view showing a result of recognition of a binding position of a reinforcing bar in a bounding box region according to an embodiment of the present invention.
[ description of reference ]
1: an image acquisition device;
2: a central processing unit; 21: an image preprocessing module; 22: a module for identifying a position to be bound; 23: a communication module;
3: an actuator driver.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a reinforcement bar binding control system. The reinforcement bar binding control system comprises an image acquisition device 1, a central processing unit 2 and an actuator driver 3.
The image acquisition device 1 is used for acquiring a depth image and an RGB image of the area where the steel bars to be bound are located. The RGB image is acquired because the YOLOX network can only process RGB images and cannot process depth images; the depth image is acquired in order to preprocess the RGB image, since the RGB image itself contains no spatial depth data.
Preferably, the image acquisition device 1 is an RGB-D camera, which can simultaneously acquire the depth image and the RGB image of the area where the steel bars to be bound are located, simplifying the subsequent image processing.
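As a concrete illustration of this acquisition step, the sketch below grabs one aligned depth/RGB pair. The camera model and the pyrealsense2 SDK are assumptions made purely for illustration; the patent only requires an RGB-D device that can produce a pixel-aligned depth and color pair.

```python
# Illustrative sketch only: the patent does not name a camera model.
# This assumes an Intel RealSense RGB-D camera via the pyrealsense2 SDK.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align the depth frame to the color frame so that pixel (u, v) in the
# depth image corresponds to pixel (u, v) in the RGB image.
align = rs.align(rs.stream.color)
try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth units
    color = np.asanyarray(frames.get_color_frame().get_data())  # uint8 BGR image
finally:
    pipeline.stop()
```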
The central processing unit 2 includes an image preprocessing module 21 and a to-be-bound position identification module 22.
The image preprocessing module 21 is configured to remove the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and to fill the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image. The background information refers to everything in the RGB image other than the steel bars, and the bottom-layer steel bar information refers to steel bars other than the top-layer bars to be identified. This increases the contrast between the top-layer steel bars to be identified and the background, facilitating accurate identification of the positions to be bound.
Specifically, as an example, the image preprocessing module 21 removes the background information and the bottom-layer steel bar information according to the spatial depth data of the depth image and fills their positions in the RGB image with black to obtain the preprocessed RGB image, as sketched below.
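A minimal sketch of this preprocessing, assuming the depth and RGB images are pixel-aligned (as in the capture sketch above) and that the top-layer bars occupy a known depth band; the 800-900 mm thresholds are hypothetical values that would be calibrated for the actual rig:

```python
import numpy as np

def preprocess_rgb(color, depth, near_mm=800, far_mm=900):
    """Replace background and lower-layer bar pixels with a single color
    (black), keeping only pixels whose depth lies within the band occupied
    by the top-layer bars. near_mm/far_mm are illustrative thresholds."""
    keep = (depth >= near_mm) & (depth <= far_mm)  # top-layer depth band
    out = color.copy()
    out[~keep] = 0                                 # single-color fill: black
    return out

# Usage: pre = preprocess_rgb(color, depth) on the aligned pair from above.
```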
The to-be-bound position identification module 22 is used for inputting the preprocessed RGB image into a pre-trained to-be-bound position identification model and outputting bounding boxes of the intersections to be bound, processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges in each bounding box as a position to be bound. In this way, the identification model extracts the approximate position of each unbound intersection, and the local feature point method then locates the unbound intersection within each approximate position, yielding the precise coordinates of the position to be bound.
Preferably, the to-be-bound position identification module 22 is configured to input the preprocessed RGB image into a pre-trained Ghost-YOLOX network model and output the bounding boxes of the intersections to be bound, where the construction of the Ghost-YOLOX network model comprises: replacing the convolution in the CBL module of the YOLOX network with a Ghost module.
The YOLOX network can be divided into three parts: the backbone feature extraction network (Backbone), the context feature fusion part (Neck), and the result prediction part (Prediction). In the backbone, a Focus operation (image slicing) is first used to obtain a twice-downsampled feature map without information loss while expanding the number of channels to four times the original, after which CBL and CSP modules extract features. In the context feature fusion part, a feature pyramid is constructed from the three feature layers extracted by the backbone and effective feature fusion is performed. In the result prediction part, a 1 × 1 convolution first decouples the classification and regression predictions, and the class and position of the target are then predicted separately. The invention provides a Ghost-YOLOX network obtained by improving the YOLOX network: the convolution in the CBL module of the YOLOX network is replaced with a Ghost module, so that the CBL module becomes a GBL module, as shown in FIG. 4. On the one hand, the YOLOX network is chosen as the baseline model because its detection precision and speed for binding positions are high; on the other hand, replacing the convolution in the CBL module with the Ghost module further improves the execution efficiency of the model and the accuracy of feature extraction.
The structure of the Ghost module is shown in FIG. 2. For an input $X \in \mathbb{R}^{c \times h \times w}$, where $c$ is the number of channels, $h$ the height, and $w$ the width of the input data, a convolution operation first maps the input $X$ to an intrinsic feature map $Y' \in \mathbb{R}^{m \times h' \times w'}$, where $m$ is the number of channels, $h'$ the height, and $w'$ the width of the feature layer. A depthwise separable convolution is then applied to the intrinsic feature map to obtain the Ghost feature map, and the intrinsic feature map and the Ghost feature map are concatenated along the channel dimension to obtain the final output $Y$. Because the intrinsic feature maps are generated with a smaller number of conventional convolutions and the Ghost feature maps with depthwise separable convolutions, the parameter count of the model is greatly reduced and its inference speed is improved. The two residual structures constructed by combining the Ghost module with the batch normalization operation BN, the SiLU activation function, and the depthwise separable convolution DWConv are shown in FIG. 3; the GBL module is formed by serially stacking these two residual structures, and residual structures arranged in this way extract features from images more accurately than an ordinary convolution structure.
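The following PyTorch sketch illustrates the Ghost module just described. It follows the general GhostNet design (a cheap primary convolution producing the intrinsic feature map, a depthwise branch producing the Ghost feature map, concatenated on the channel dimension); the kernel sizes, the ratio parameter, and the use of SiLU after each branch are illustrative assumptions rather than values taken from the patent:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: an ordinary 1x1 convolution produces the
    intrinsic feature map Y', a depthwise convolution produces the Ghost
    feature map, and the two are concatenated along the channel dimension.
    Hyperparameters here are illustrative, not taken from the patent."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic_ch = out_ch // ratio       # m channels from the ordinary conv
        ghost_ch = out_ch - intrinsic_ch     # remaining channels from DWConv
        self.primary = nn.Sequential(        # intrinsic branch
            nn.Conv2d(in_ch, intrinsic_ch, 1, bias=False),
            nn.BatchNorm2d(intrinsic_ch),
            nn.SiLU(),
        )
        self.cheap = nn.Sequential(          # depthwise "ghost" branch
            # with ratio=2, ghost_ch == intrinsic_ch, so groups= is valid
            nn.Conv2d(intrinsic_ch, ghost_ch, dw_kernel,
                      padding=dw_kernel // 2, groups=intrinsic_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.SiLU(),
        )

    def forward(self, x):
        y_intrinsic = self.primary(x)        # intrinsic feature map Y'
        y_ghost = self.cheap(y_intrinsic)    # Ghost feature map
        return torch.cat([y_intrinsic, y_ghost], dim=1)  # final output Y
```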
Preferably, the construction of the Ghost-YOLOX network model further comprises: introducing a channel-attention-based SE module into the backbone feature extraction network of the YOLOX network, inputting the feature layers extracted by the backbone feature extraction network into the SE module, and performing feature fusion according to the output of the SE module, as shown in FIG. 4. The SE module thus allows the different channels of the feature layers extracted by the backbone to exploit information from the global receptive field.
The structure of the SE module is shown in FIG. 5. An input $X$ is first mapped to a feature map $U$ by a transformation $F_{tr}$. A squeeze operation $F_{sq}$ then compresses the feature map $U$ over its spatial dimensions ($H \times W$) to generate a channel descriptor, producing a globally distributed representation of the channel-wise feature responses. An excitation operation $F_{ex}$ next converts this global representation into a set of adjusted weights, one per channel. Finally, a scale operation $F_{scale}$ rescales the channels of the feature map $U$ according to this weight set to produce the output $\tilde{X}$. In this way, the detection accuracy of the Ghost-YOLOX network model is further improved.
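A PyTorch sketch of such an SE block, with $F_{sq}$ realized as global average pooling, $F_{ex}$ as a two-layer bottleneck, and $F_{scale}$ as channel-wise multiplication; the reduction ratio of 16 is a conventional choice assumed for illustration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: pool each channel to one
    value (squeeze), pass through a bottleneck MLP ending in a sigmoid
    (excitation), then rescale the feature map channel-wise (scale)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)      # F_sq over H x W
        self.excite = nn.Sequential(                # F_ex
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, u):
        b, c, _, _ = u.shape
        w = self.excite(self.squeeze(u).view(b, c)).view(b, c, 1, 1)
        return u * w                                # F_scale
```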
Preferably, the to-be-bound position identification module 22 processes each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, as follows: the module sets a plurality of horizontal lines and a plurality of vertical lines within the bounding box region, the horizontal lines serving as a first group of Line ROIs and the vertical lines as a second group of Line ROIs; it then extracts the image pixel values along each straight line in the Line ROIs and computes the position of the largest pixel-value change on each line as an edge feature point of the steel bar.
Each region of interest (i.e. bounding box) extracted by the Ghost-YOLOX network contains only one transverse steel bar, one longitudinal steel bar, and a small amount of interference noise. A group of transverse Line ROIs and a group of longitudinal Line ROIs can therefore be set in each rectangular region of interest (as shown in FIG. 6, the region of interest contains a number of lines; all transverse lines form the group of transverse Line ROIs and all vertical lines the group of longitudinal Line ROIs). The edge feature points of the longitudinal and transverse bars are extracted separately, the straight-line edges of the bars are then accurately fitted from these edge feature points, and the precise coordinates of the position to be bound are obtained by computing the intersection of the transverse and longitudinal bar edges.
As an example, a bounding box is described by the horizontal and vertical coordinates of its top-left and bottom-right vertices relative to the whole picture.
Further preferably, the to-be-bound position identification module 22 extracts the image pixel values along each straight line in the Line ROIs and uses a first-order difference algorithm to compute the position of the largest pixel-value change on each line as an edge feature point of the steel bar. Specifically, in this embodiment, as shown in FIG. 6, a set of Line ROIs contains 15 straight lines, so each set of Line ROIs yields 15 independent edge feature points.
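A sketch of this first-order difference extraction along one Line ROI; the sampling density and the absence of smoothing or subpixel refinement are simplifications assumed for illustration:

```python
import numpy as np

def edge_point_along_line(gray, p0, p1, n_samples=200):
    """Sample pixel values along one Line ROI from p0=(x0, y0) to p1=(x1, y1)
    and return the sample where the first-order difference of the intensity
    profile is largest, i.e. the strongest edge, as an (x, y) feature point."""
    xs = np.linspace(p0[0], p1[0], n_samples).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n_samples).round().astype(int)
    profile = gray[ys, xs].astype(np.float32)   # intensity along the line
    diff = np.abs(np.diff(profile))             # first-order difference
    k = int(np.argmax(diff))                    # largest pixel-value change
    return xs[k], ys[k]

# 15 horizontal Line ROIs inside a bounding box (x0, y0, x1, y1) yield
# 15 independent edge feature points, as in the embodiment:
# pts = [edge_point_along_line(gray, (x0, y), (x1, y))
#        for y in np.linspace(y0, y1, 15).astype(int)]
```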
Since a Line ROI may also pick up noise locations, the edge feature points at noise positions can be regarded as outliers. It is therefore further preferable that the to-be-bound position identification module 22 uses the RANSAC algorithm to remove the outliers from the steel bar edge feature points through iterative optimization; correspondingly, the module fits the straight-line edges of the steel bars from the remaining edge feature points, and the intersection point of the straight-line edges in each bounding box is taken as the position to be bound. Further, the number of rejected outliers is 30-50% of the total number of edge feature points; specifically, in this embodiment, it is 40%.
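A minimal sketch of the outlier rejection and intersection computation, using a plain RANSAC line fit; the iteration count, inlier tolerance, and line parameterization are illustrative choices, not values from the patent:

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=1.5):
    """Fit a line ax + by + c = 0 (with a^2 + b^2 = 1) to edge feature
    points, discarding outliers at noise positions. Assumes >= 2 points."""
    pts = np.asarray(points, dtype=np.float64)
    rng = np.random.default_rng(0)
    best, best_inliers = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.array([-d[1], d[0]])          # normal to the candidate line
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n /= norm
        c = -n @ pts[i]
        dist = np.abs(pts @ n + c)           # point-to-line distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (n[0], n[1], c), inliers
    return best, pts[best_inliers]

def line_intersection(l1, l2):
    """Intersection of a1x+b1y+c1=0 and a2x+b2y+c2=0 by Cramer's rule;
    this point is taken as the position to be bound."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return (b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det
```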
The actuator driver 3 is used for controlling the actuator to bind the steel bars at the position to be bound.
Specifically, in this embodiment, the actuator driver 3 is a binding servo motor driver.
Preferably, the central processing unit 2 further comprises a communication module 23, which converts the to-be-bound position information into a protocol format supported by the servo motor driver. Specifically, in this embodiment, the communication module 23 converts the to-be-bound position information into the Modbus protocol format supported by the servo motor driver, and the central processing unit 2 sends it to the driver through a serial port over RS-485, thereby controlling the actuator to bind the steel bars.
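For illustration, the sketch below assembles a Modbus RTU "write multiple registers" (function 0x10) frame that could carry a to-be-bound coordinate to the servo motor driver over RS-485. The slave address, register map, and the encoding of (x, y) as two holding registers are hypothetical; the actual layout depends on the driver in use:

```python
import struct

def crc16_modbus(frame: bytes) -> bytes:
    """Standard Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF), low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return struct.pack('<H', crc)

def write_registers_frame(slave, start_reg, values):
    """Build a Modbus function 0x10 (write multiple registers) RTU frame:
    slave, function, start register, register count, byte count, data, CRC.
    Slave address and register map are illustrative assumptions."""
    body = struct.pack('>BBHHB', slave, 0x10, start_reg,
                       len(values), 2 * len(values))
    body += b''.join(struct.pack('>H', v & 0xFFFF) for v in values)
    return body + crc16_modbus(body)

# e.g. send the coordinate (x=1234, y=567) to slave 1, starting register 0;
# writing the frame to the RS-485 serial port is left to the serial library.
frame = write_registers_frame(slave=1, start_reg=0, values=[1234, 567])
```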
The invention also provides a binding positioning and identification method, comprising the following steps:
Step S1: acquiring a depth image and an RGB image of the area where the steel bars to be bound are located, and aligning the depth image with the RGB image.
Step S2: removing the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and filling the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image.
Step S3: inputting the preprocessed RGB image into the pre-trained to-be-bound position identification model, outputting bounding boxes of the intersections to be bound, processing each bounding box based on the local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges in each bounding box as a position to be bound.
Before step S1, the method further comprises: collecting pictures of bound and unbound steel bars on site with an industrial camera, marking the bound or to-be-bound positions in the pictures with the LabelImg image annotation tool, and labeling the bound and unbound pictures to obtain training data for training the to-be-bound position identification model, for example as sketched below.
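A sketch of reading one such annotation file; LabelImg saves Pascal VOC XML by default, and the class names 'bound'/'unbound' are assumed labels for illustration:

```python
import xml.etree.ElementTree as ET

def load_labelimg_boxes(xml_path):
    """Parse one Pascal VOC XML file produced by LabelImg into
    (label, xmin, ymin, xmax, ymax) tuples for training the
    to-be-bound position identification model."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter('object'):
        name = obj.findtext('name')      # e.g. 'bound' / 'unbound' (assumed)
        bb = obj.find('bndbox')
        boxes.append((name,
                      int(float(bb.findtext('xmin'))),
                      int(float(bb.findtext('ymin'))),
                      int(float(bb.findtext('xmax'))),
                      int(float(bb.findtext('ymax')))))
    return boxes
```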
It should be noted that, for the image processing flow of the binding positioning identification method provided by the invention, reference may be made to the detailed description of the steel bar binding control system in the foregoing embodiment, which is not repeated here.
It should be understood that the above description of specific embodiments is intended only to illustrate the technical solutions and features of the present invention so that those skilled in the art can understand and implement it; the invention is not limited to these specific embodiments. All changes and modifications falling within the scope of the appended claims are intended to be covered.

Claims (10)

1. A reinforcement bar binding control system, comprising:
the image acquisition equipment is used for acquiring a depth image and an RGB image of an area where the steel bar to be bound is located;
the central processing unit comprises an image preprocessing module and a to-be-bound position identification module;
the image preprocessing module is used for removing the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and filling the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image;
the to-be-bound position identification module is used for inputting the preprocessed RGB image into a pre-trained to-be-bound position identification model and outputting bounding boxes of the intersections to be bound, processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges of the steel bars in each bounding box as a position to be bound;
and an actuator driver for controlling the actuator to bind the steel bars at the position to be bound.
2. The reinforcement bar binding control system according to claim 1,
the image acquisition device is an RGB-D camera.
3. The reinforcement bar binding control system according to claim 1,
the to-be-bound position identification module is used for inputting the preprocessed RGB image into a pre-trained Ghost-YOLOX network model and outputting bounding boxes of the intersections to be bound;
the construction of the Ghost-YOLOX network model comprises: replacing the convolution in the CBL module of the YOLOX network with a Ghost module.
4. The reinforcement bar binding control system according to claim 3,
the method for constructing the Ghost-Yolox network model further comprises the following steps: an SE module based on an attention channel mechanism is introduced into a backbone feature extraction network of the YOLOX network, a feature layer extracted by the backbone feature extraction network is input into the SE module, and feature fusion is carried out according to the output of the SE module.
5. The reinforcement bar binding control system according to claim 1, wherein the to-be-bound position identification module processes each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box by:
setting a plurality of horizontal lines and a plurality of vertical lines in the bounding box region, the horizontal lines serving as a first group of Line ROIs and the vertical lines as a second group of Line ROIs;
and extracting the image pixel values along each straight line in the Line ROIs and computing the position of the largest pixel-value change on each line as an edge feature point of the steel bar.
6. The reinforcement bar binding control system according to claim 5,
the binding position identification module is also used for eliminating outliers in the edge characteristic points of the reinforcing steel bars by using an RANSAC algorithm through an iterative optimization method; the outlier is an edge feature point at the noise position;
correspondingly, the identification module of the positions to be banded fits the straight line edges of the steel bars according to the residual edge characteristic points, and the intersection points of the straight line edges of the steel bars in each bounding box are used as the positions to be banded.
7. A binding positioning and identification method, characterized by comprising the following steps:
S1, acquiring a depth image and an RGB image of the area where the steel bars to be bound are located;
S2, removing the background information and the bottom-layer steel bar information in the RGB image according to the spatial depth data of the depth image, and filling the positions of the background information and the bottom-layer steel bar information in the RGB image with a single color to obtain a preprocessed RGB image;
S3, inputting the preprocessed RGB image into a pre-trained to-be-bound position identification model, outputting bounding boxes of the intersections to be bound, processing each bounding box based on a local feature point method to obtain the edge feature points of each steel bar in the bounding box, fitting the straight-line edges of the steel bars from the edge feature points, and taking the intersection point of the straight-line edges of the steel bars in each bounding box as a position to be bound.
8. The binding positioning and identification method according to claim 7, wherein
the to-be-bound position identification model is a Ghost-YOLOX network model;
the construction of the Ghost-YOLOX network model comprises: replacing the convolution in the CBL module of the YOLOX network with a Ghost module.
9. The binding positioning and identification method according to claim 8, wherein
the construction of the Ghost-YOLOX network model further comprises: introducing a channel-attention-based SE module into the Backbone feature extraction network of the YOLOX network, inputting the feature layers extracted by the backbone feature extraction network into the SE module, and performing feature fusion according to the output of the SE module.
10. The binding positioning and identification method according to claim 7, wherein
processing each bounding box based on the local feature point method to obtain the edge feature points of each steel bar in the bounding box comprises:
setting a plurality of horizontal lines and a plurality of vertical lines in the bounding box region, the horizontal lines serving as a first group of Line ROIs and the vertical lines as a second group of Line ROIs;
and extracting the image pixel values along each straight line in the Line ROIs and computing the position of the largest pixel-value change on each line as an edge feature point of the steel bar.
CN202210643581.5A 2022-06-09 2022-06-09 Reinforcing steel bar binding control system and binding positioning identification method Pending CN114998269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210643581.5A CN114998269A (en) 2022-06-09 2022-06-09 Reinforcing steel bar binding control system and binding positioning identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210643581.5A CN114998269A (en) 2022-06-09 2022-06-09 Reinforcing steel bar binding control system and binding positioning identification method

Publications (1)

Publication Number Publication Date
CN114998269A (en) 2022-09-02

Family

ID=83032539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210643581.5A Pending CN114998269A (en) 2022-06-09 2022-06-09 Reinforcing steel bar binding control system and binding positioning identification method

Country Status (1)

Country Link
CN (1) CN114998269A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116092012A (en) * 2023-03-06 2023-05-09 安徽数智建造研究院有限公司 Video stream-based steel bar binding procedure monitoring method and monitoring device


Similar Documents

Publication Publication Date Title
CN115601549B (en) River and lake remote sensing image segmentation method based on deformable convolution and self-attention model
CN114092780B (en) Three-dimensional target detection method based on fusion of point cloud and image data
CN112287940A (en) Semantic segmentation method of attention mechanism based on deep learning
CN112560980B (en) Training method and device of target detection model and terminal equipment
CN113936139B (en) Scene aerial view reconstruction method and system combining visual depth information and semantic segmentation
CN110807496B (en) Dense target detection method
CN113657388B (en) Image semantic segmentation method for super-resolution reconstruction of fused image
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
EP2923333B1 (en) Method for the automatic creation of two- or three-dimensional building models
CN111639663A (en) Method for multi-sensor data fusion
CN101221375A (en) Machine vision system used for step photo-etching machine alignment system and its calibration method
CN113239954A (en) Attention mechanism-based image semantic segmentation feature fusion method
CN112347970A (en) Remote sensing image ground object identification method based on graph convolution neural network
CN114998269A (en) Reinforcing steel bar binding control system and binding positioning identification method
CN115147488B (en) Workpiece pose estimation method and grabbing system based on dense prediction
CN111640116A (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN114898313A (en) Bird's-eye view image generation method, device, equipment and storage medium of driving scene
CN113313176A (en) Point cloud analysis method based on dynamic graph convolution neural network
CN117252928B (en) Visual image positioning system for modular intelligent assembly of electronic products
CN113361496A (en) City built-up area statistical method based on U-Net
CN116519106B (en) Method, device, storage medium and equipment for determining weight of live pigs
CN115661694B (en) Intelligent detection method and system for light-weight main transformer with focusing key characteristics, storage medium and electronic equipment
CN116543165A (en) Remote sensing image fruit tree segmentation method based on dual-channel composite depth network
CN116630828A (en) Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN110766732A (en) Robust single-camera depth map estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination