CN110334751B - Image processing method and device for binding nodes and terminal


Info

Publication number
CN110334751B
Authority
CN
China
Prior art keywords
image
neural network
network model
processed
deep neural
Prior art date
2019-06-24
Legal status
Active
Application number
CN201910553331.0A
Other languages
Chinese (zh)
Other versions
CN110334751A
Inventor
张伟民
郭子原
孙尧
梁震烁
黄强
Current Assignee
Beijing Haribit Intelligent Technology Co ltd
Original Assignee
Beijing Haribit Intelligent Technology Co ltd
Priority date
Filing date
2019-06-24
Publication date
2021-08-13
Application filed by Beijing Haribit Intelligent Technology Co ltd
Priority to CN201910553331.0A
Publication of CN110334751A
Application granted
Publication of CN110334751B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands


Abstract

The application discloses an image processing method, apparatus, and terminal for binding nodes. The method comprises: inputting an image to be processed; inputting the image to be processed into a preset deep neural network model and identifying feature points; and acquiring the pixel positions of the feature points. The method and apparatus address the technical problem of identifying binding nodes in a way that replaces manual operation. Accurate pixel positions of steel bar binding points are obtained through deep-learning-based feature point recognition. The application is suitable for rebar binding construction scenarios.

Description

Image processing method and device for binding nodes and terminal
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus for binding nodes, and a terminal.
Background
When the steel bars are bound, binding nodes need to be determined and confirmed on a construction site.
The inventors found that, in the process of binding reinforcing steel bars, how to replace the manual mode and thereby reduce the manual workload is the problem to be solved. In the related art there is no binding node identification method that replaces manual operation, and no effective solution has been proposed so far.
Disclosure of Invention
The application mainly aims to provide an image processing method, apparatus, and terminal for binding nodes, so as to solve the problem that a binding node identification method replacing manual operation is lacking.
In order to achieve the above object, according to one aspect of the present application, there is provided an image processing method for a binder node.
The image processing method for binding nodes according to the application comprises the following steps: inputting an image to be processed, wherein the image to be processed refers to image information acquired by an image acquisition device installed on a binding machine; inputting the image to be processed into a preset deep neural network model and identifying feature points, wherein the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points; and acquiring the pixel positions of the feature points, wherein each pixel position is derived from the probability of serving as a binding node.
Further, inputting the image to be processed into the preset deep neural network model and identifying feature points includes: training the preset deep neural network model, where the number of channels of the target image is set to 2, the first channel representing an unbound node and the second channel representing a bound node. The step of training the preset deep neural network model includes: adding a random angle transformation.
Further, the step of training the preset deep neural network model further includes: adding random HSV color distortion.
Further, acquiring the pixel position of the feature point includes a network prediction step: determining the probability of a feature point from the value output at each position in the heat map.
Further, acquiring the pixel position of the feature point further includes: binarizing the heat map according to a preset threshold to obtain the midpoint of each feature point region and thereby the pixel position of the corresponding feature point.
Further, the preset deep neural network model adopts a single-layer hourglass network.
In order to achieve the above object, according to another aspect of the present application, there is provided an image processing apparatus for binding nodes.
An image processing apparatus for binding nodes according to the present application includes: an input module for inputting an image to be processed, where the image to be processed refers to image information acquired by an image acquisition device installed on a binding machine; a recognition module for inputting the image to be processed into a preset deep neural network model and recognizing feature points, where the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points; and an acquisition module configured to acquire the pixel positions of the feature points, where each pixel position is derived from the probability of serving as a binding node.
Further, the recognition module includes: a training unit for adding a random angle transformation or random HSV color distortion.
Further, the acquisition module includes: a prediction unit for determining the probability of the feature points from the value output at each position in the heat map, binarizing the heat map according to a preset threshold to obtain the midpoint of each feature point region, and obtaining the pixel position of the corresponding feature point.
In order to achieve the above object, according to still another aspect of the present application, there is provided a terminal for identifying binding nodes, comprising the above image processing apparatus.
In the image processing method, apparatus, and terminal for binding nodes of the embodiments of the application, the image to be processed is input into a preset deep neural network model, feature points are identified, and the pixel positions of the feature points are acquired. Through image recognition based on machine learning with the preset deep neural network model, the recognition rate is improved and manual participation is reduced, achieving the technical effects of accurate image recognition within the computing capacity of an existing CPU and solving the technical problems of low efficiency and heavy workload of manual identification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, provide a further understanding of the application and make its other features, objects, and advantages more apparent. The drawings and their description illustrate embodiments of the application and do not limit it. In the drawings:
FIG. 1 is a schematic flow chart of an image processing method for binding nodes according to an embodiment of the application;
FIG. 2 is a schematic structural diagram of an image processing apparatus for binding nodes according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing apparatus for binding nodes according to a second embodiment of the present application;
FIG. 4 is a schematic diagram of an image processing apparatus for binding nodes according to a third embodiment of the present application;
FIG. 5 is a schematic view of an hourglass network structure;
FIG. 6 is a schematic diagram of an original input picture;
FIG. 7 is a schematic view of a target picture;
FIG. 8 is a schematic diagram of an output heat map;
FIG. 9 is a schematic diagram of pixel positions of feature points.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in FIG. 1, the method includes steps S102 to S106 as follows:
Step S102: inputting the image to be processed.
The image to be processed refers to image information acquired, as input, by an image acquisition device mounted on the binding machine.
Step S104: inputting the image to be processed into the preset deep neural network model and identifying feature points.
The preset deep neural network model adopts an image recognition network suited to the spacing between different binding points.
Specifically, a feature point recognition method based on deep learning is adopted to obtain the accurate pixel positions of binding points. To cope with the differences in spacing between different binding points, a deep neural network model is used to recognize the image.
Step S106: acquiring the pixel positions of the feature points.
The image output by the preset deep neural network model is a heat map, and each pixel position is derived from the probability of serving as a binding node. The value at each position in the output heat map represents the probability that the position is a particular feature point. Specifically, during result prediction, the heat map is binarized with a manually set threshold to obtain the midpoint of each feature point region and, finally, the pixel position of the corresponding feature point.
From the above description, it can be seen that the following technical effects are achieved by the present application:
in the embodiment of the application, the input of the image to be processed is adopted, the image to be processed is input into the preset deep neural network model, the characteristic points are identified, and the pixel positions of the characteristic points are obtained in a mode of machine learning through the preset deep neural network model, so that the purposes of improving the identification rate and reducing manual participation are achieved, the technical effects of meeting the calculation capacity of the existing CPU and accurately identifying the image are achieved, and the technical problems of low manual identification efficiency and high workload are solved.
According to the embodiment of the present application, as shown in FIG. 2, as a preferred option in this embodiment, inputting the image to be processed into the preset deep neural network model and identifying feature points includes: training the preset deep neural network model, where the number of channels of the target image is set to 2, the first channel representing an unbound node and the second channel representing a bound node.
Specifically, during network training the number of channels of the target heat map is set to 2: one channel represents unbound nodes and the other represents bound nodes. The learning target is constructed from manual labels using a 2D Gaussian function: the coordinates of a key point are taken as the center, so the center point has the highest score and the score decreases with distance from the center. The resulting target picture is shown in FIG. 7; FIG. 6 is the input picture.
Preferably, the step of training the preset deep neural network model includes: adding a random angle transformation.
Preferably, the step of training the preset deep neural network model further includes: adding random HSV color distortion.
Specifically, when the neural network is trained according to the above steps, a random angle transformation is added so that the network can cope with rotations of the captured image, and random HSV color distortion and random brightness/contrast changes are added so that the network is invariant to illumination.
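A sketch of such an augmentation step is given below (OpenCV and NumPy; the rotation range, HSV jitter magnitudes, and brightness/contrast ranges are assumptions for illustration, since the patent does not specify them). The same rotation is applied to the image and to its target heat map so that the labels stay aligned:

```python
import cv2
import numpy as np

def augment(image, target, max_angle=30.0):
    """Randomly rotate image (H x W x 3, uint8 BGR) and target heat map
    (H x W x 2, float32) together, then distort HSV color and jitter
    brightness/contrast on the image only."""
    h, w = image.shape[:2]
    # Random angle transformation, applied identically to image and target
    angle = np.random.uniform(-max_angle, max_angle)
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    image = cv2.warpAffine(image, m, (w, h))
    target = cv2.warpAffine(target, m, (w, h))
    # Random HSV color distortion (OpenCV uint8 hue range is 0..179)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + np.random.uniform(-10, 10)) % 180
    hsv[..., 1] *= np.random.uniform(0.7, 1.3)
    hsv[..., 2] *= np.random.uniform(0.7, 1.3)
    image = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
    # Random brightness/contrast change
    alpha = np.random.uniform(0.8, 1.2)  # contrast gain
    beta = np.random.uniform(-20, 20)    # brightness offset
    image = np.clip(alpha * image.astype(np.float32) + beta, 0, 255).astype(np.uint8)
    return image, target
```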
According to the embodiment of the present application, as a preferred option in this embodiment, acquiring the pixel position of the feature point includes a network prediction step: determining the probability of a feature point from the value output at each position in the heat map.
According to the embodiment of the present application, as a preferred option in this embodiment, acquiring the pixel position of the feature point further includes:
binarizing the heat map according to a preset threshold to obtain the midpoint of each feature point region and the pixel position of the corresponding feature point.
Specifically, the value at each position in the output heat map represents the probability that the position is a particular feature point. During prediction, the heat map is binarized with a manually set threshold to obtain the midpoint of each feature point region and, finally, the pixel position of the corresponding feature point. The output heat map is shown in FIG. 8, and the pixel positions of the corresponding feature points in FIG. 9.
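A minimal post-processing sketch (OpenCV; the threshold value of 0.5 is an assumed placeholder for the manually set threshold, and the channel indexing follows the two-channel layout described above):

```python
import cv2
import numpy as np

def heatmap_to_points(heatmap, threshold=0.5):
    """Binarize one heat map channel (H x W, float values in [0, 1]) and
    return the midpoint (centroid) of each feature point region as
    (x, y) pixel coordinates."""
    binary = (heatmap >= threshold).astype(np.uint8)
    n_labels, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; the remaining labels are feature point regions
    return [(float(cx), float(cy)) for cx, cy in centroids[1:]]

# Example: unbound = heatmap_to_points(output[0]); bound = heatmap_to_points(output[1])
```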
According to the embodiment of the present application, as a preferred option in this embodiment, the preset deep neural network model uses a single-layer hourglass network.
Specifically, in order to cope with the differences in spacing between different binding points, a single-layer hourglass network is used to recognize the image; the hourglass modules are deliberately not stacked, mainly for reasons of algorithm efficiency and target scale.
As shown in FIG. 5, hourglass networks are generally used to detect human body key points, and the modules are stacked to improve the scale invariance and detection accuracy of the network. Considering that the scale of the target feature points in the image data processed in this embodiment varies little and the semantic information of the target is relatively limited, a single-layer network suffices. Moreover, since processing is performed at a terminal without a GPU as the data processing center, deep learning inference can only use a CPU with weak parallel capability, and stacking too many modules would slow the algorithm down. Therefore, the preset deep neural network model adopts a single-layer hourglass network.
Specifically, the single-layer hourglass network structure is as shown in the figure: the receptive field of the network is adjusted by scaling, the final feature map has half the length and width of the original image, and it is up-sampled back to the original size, where it can be regarded as a heat map (heatmap). The receptive field is the size of the input-layer region corresponding to one element of the output of a given layer in a convolutional neural network (CNN); that is, one point on the feature map corresponds to an area of the input image.
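A minimal sketch of such a single-layer (unstacked) hourglass in PyTorch is shown below. All channel counts and kernel sizes are assumptions chosen for illustration; only the overall shape (downsample, bottleneck, upsample with a skip connection, and a two-channel heat map head) follows the description above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn(cin, cout):
    """3x3 convolution with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class SingleHourglass(nn.Module):
    """One hourglass: encode (2x downsample), bottleneck, decode (2x upsample)
    with a skip connection, then a 2-channel head (unbound/bound nodes)."""
    def __init__(self):
        super().__init__()
        self.stem = conv_bn(3, 32)        # full-resolution features
        self.down = conv_bn(32, 64)       # after 2x max pooling
        self.bottleneck = conv_bn(64, 64)
        self.up = conv_bn(64 + 32, 32)    # fuse upsampled and skip features
        self.head = nn.Conv2d(32, 2, kernel_size=1)

    def forward(self, x):
        s = self.stem(x)
        d = self.down(F.max_pool2d(s, 2))
        b = self.bottleneck(d)
        u = F.interpolate(b, scale_factor=2, mode="bilinear", align_corners=False)
        u = self.up(torch.cat([u, s], dim=1))
        return torch.sigmoid(self.head(u))  # per-pixel probabilities

# heat = SingleHourglass()(torch.randn(1, 3, 256, 256))  # -> (1, 2, 256, 256)
```

In keeping with the description, a lighter variant could instead emit the heat map at half resolution and up-sample it afterwards; the single-level design keeps the CPU cost low.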
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one presented here.
According to an embodiment of the present application, there is also provided an apparatus for implementing the above image processing method for binding nodes. As shown in FIG. 2, the apparatus includes: an input module 10 for inputting an image to be processed, where the image to be processed refers to image information acquired by an image acquisition device installed on a binding machine; a recognition module 20 configured to input the image to be processed into a preset deep neural network model and recognize feature points, where the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points; and an acquisition module 30 configured to acquire the pixel positions of the feature points, where each pixel position is derived from the probability of serving as a binding node.
The image to be processed in the input module 10 of this embodiment refers to image information acquired, as input, by an image acquisition device mounted on the binding machine.
In the recognition module 20 of this embodiment, the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points. Specifically, a feature point recognition method based on deep learning is adopted to obtain the accurate pixel positions of binding points; to cope with the differences in spacing between different binding points, a deep neural network model is used to recognize the image.
The image output by the preset deep neural network model in the acquisition module 30 of this embodiment is a heat map, and each pixel position is derived from the probability of serving as a binding node. The value at each position in the output heat map represents the probability that the position is a particular feature point. Specifically, during result prediction, the heat map is binarized with a manually set threshold to obtain the midpoint of each feature point region and, finally, the pixel position of the corresponding feature point.
According to the embodiment of the present application, as shown in FIG. 3, the recognition module 20 preferably includes: a training unit 201 for adding a random angle transformation or random HSV color distortion.
Specifically, during network training the number of channels of the target heat map is set to 2: one channel represents unbound nodes and the other represents bound nodes. The learning target is constructed from manual labels using a 2D Gaussian function: the coordinates of a key point are taken as the center, so the center point has the highest score and the score decreases with distance from the center. The resulting target picture is shown in FIG. 7; FIG. 6 is the input picture.
Preferably, the step of training the preset deep neural network model includes: adding a random angle transformation.
Preferably, the step of training the preset deep neural network model further includes: adding random HSV color distortion.
Specifically, when the neural network is trained according to the above steps, a random angle transformation is added so that the network can cope with rotations of the captured image, and random HSV color distortion and random brightness/contrast changes are added so that the network is invariant to illumination.
According to the embodiment of the present application, as shown in FIG. 4, the acquisition module 30 preferably includes: a prediction unit 301 configured to determine the probability of the feature points from the value output at each position in the heat map, binarize the heat map according to a preset threshold to obtain the midpoint of each feature point region, and obtain the pixel position of the corresponding feature point. The output heat map is shown in FIG. 8, and the pixel positions of the corresponding feature points in FIG. 9.
Specifically, the value at each position in the heat map output by the prediction unit 301 represents the probability that the position is a particular feature point; during prediction, the heat map is binarized with a manually set threshold to find the midpoint of each feature point region and, finally, the pixel position of the corresponding feature point.
Further, in another embodiment of the present application, there is provided a terminal for identifying binding nodes, comprising the above image processing apparatus. The implementation principle and beneficial effects of the image processing apparatus are as described above and are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; or they may be fabricated separately as individual integrated circuit modules; or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (3)

1. An image processing method for binding nodes, comprising:
inputting an image to be processed, wherein the image to be processed refers to image information acquired by an image acquisition device installed on a binding machine;
inputting the image to be processed into a preset deep neural network model and identifying feature points, wherein the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points; and
acquiring pixel positions of the feature points, wherein each pixel position is derived from the probability of serving as a binding node;
wherein the preset deep neural network model adopts a single-layer hourglass network;
wherein acquiring the pixel position of the feature point further comprises: binarizing the heat map according to a preset threshold to obtain the midpoint of each feature point region and the pixel position of the corresponding feature point;
wherein inputting the image to be processed into the preset deep neural network model and identifying the feature points comprises: training the preset deep neural network model, wherein the number of channels of the target image is set to 2, the first channel representing an unbound node and the second channel representing a bound node; and
wherein the step of training the preset deep neural network model comprises: adding a random angle transformation, random HSV color distortion, and random brightness/contrast variation.
2. An image processing apparatus for binding nodes, comprising:
an input module for inputting an image to be processed, wherein the image to be processed refers to image information acquired by an image acquisition device installed on a binding machine;
a recognition module for inputting the image to be processed into a preset deep neural network model and recognizing feature points, wherein the preset deep neural network model adopts an image recognition network suited to the spacing between different binding points; and
an acquisition module for acquiring the pixel positions of the feature points, wherein each pixel position is derived from the probability of serving as a binding node;
wherein the preset deep neural network model adopts a single-layer hourglass network;
wherein the acquisition module comprises a prediction unit for determining the probability of the feature points from the value output at each position in the heat map, binarizing the heat map according to a preset threshold to obtain the midpoint of each feature point region, and obtaining the pixel position of the corresponding feature point;
wherein inputting the image to be processed into the preset deep neural network model and identifying the feature points comprises: training the preset deep neural network model, wherein the number of channels of the target image is set to 2, the first channel representing an unbound node and the second channel representing a bound node; and
wherein the step of training the preset deep neural network model comprises: adding a random angle transformation, random HSV color distortion, and random brightness/contrast variation.
3. A terminal for identifying binding nodes, comprising: the image processing apparatus according to claim 2.
CN201910553331.0A 2019-06-24 2019-06-24 Image processing method and device for binding nodes and terminal Active CN110334751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910553331.0A CN110334751B (en) 2019-06-24 2019-06-24 Image processing method and device for binding nodes and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910553331.0A CN110334751B (en) 2019-06-24 2019-06-24 Image processing method and device for binding nodes and terminal

Publications (2)

Publication Number Publication Date
CN110334751A 2019-10-15
CN110334751B 2021-08-13

Family

ID=68142413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910553331.0A Active CN110334751B (en) 2019-06-24 2019-06-24 Image processing method and device for binding nodes and terminal

Country Status (1)

Country Link
CN (1) CN110334751B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985338A (en) * 2020-07-22 2020-11-24 中建科技集团有限公司深圳分公司 Binding point identification method, device, terminal and medium
CN112627538B (en) * 2020-11-24 2021-10-29 武汉大学 Intelligent acceptance method for binding quality of steel mesh binding wires based on computer vision
CN114638830B (en) * 2022-05-18 2022-08-12 安徽数智建造研究院有限公司 Training method of tunnel reinforcing steel bar recognition model and tunnel reinforcing steel bar recognition method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096558A * 2016-06-16 2016-11-09 长安大学 A kind of binding reinforcing bars method based on neural network
CN109094845A (en) * 2018-09-05 2018-12-28 四川知创知识产权运营有限公司 A kind of reinforced-bar binding alignment means
CN109250186A (en) * 2018-09-05 2019-01-22 四川知创知识产权运营有限公司 A kind of reinforcing bar intelligent identifying system
CN109383865A (en) * 2018-11-08 2019-02-26 中民筑友科技投资有限公司 A kind of binding mechanism and reinforced mesh binding device
CN109712180A (en) * 2019-01-19 2019-05-03 北京伟景智能科技有限公司 A kind of reinforcing bar method of counting
CN109815950A (en) * 2018-12-28 2019-05-28 汕头大学 A kind of reinforcing bar end face recognition methods based on depth convolutional neural networks

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664462A (en) * 1995-08-29 1997-09-09 Teleflex Incorporated Mid-length adjustment and method of assembly
US9167991B2 (en) * 2010-09-30 2015-10-27 Fitbit, Inc. Portable monitoring devices and methods of operating same
ES2552397B1 (en) * 2014-05-27 2016-09-14 Tecnología Marina Ximo, S.L. System and method for estimating tuna caught by species on board fishing vessels
CN105064690A (en) * 2015-06-11 2015-11-18 淮南智辉装饰工程有限公司 Automatic rebar tying machine
CN105303233A (en) * 2015-10-15 2016-02-03 陕西科技大学 Method for counting number of reinforced steel bars based on computer vision
CN105908975B (en) * 2016-04-29 2018-01-30 上海应用技术学院 A kind of blind operating area reinforcing bar binding device and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096558A * 2016-06-16 2016-11-09 长安大学 A kind of binding reinforcing bars method based on neural network
CN109094845A (en) * 2018-09-05 2018-12-28 四川知创知识产权运营有限公司 A kind of reinforced-bar binding alignment means
CN109250186A (en) * 2018-09-05 2019-01-22 四川知创知识产权运营有限公司 A kind of reinforcing bar intelligent identifying system
CN109383865A (en) * 2018-11-08 2019-02-26 中民筑友科技投资有限公司 A kind of binding mechanism and reinforced mesh binding device
CN109815950A (en) * 2018-12-28 2019-05-28 汕头大学 A kind of reinforcing bar end face recognition methods based on depth convolutional neural networks
CN109712180A (en) * 2019-01-19 2019-05-03 北京伟景智能科技有限公司 A kind of reinforcing bar method of counting

Also Published As

Publication number Publication date
CN110334751A (en) 2019-10-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant