CN112052696A - Bar finished product warehouse-out label identification method, device and equipment based on machine vision - Google Patents

Bar finished product warehouse-out label identification method, device and equipment based on machine vision

Info

Publication number
CN112052696A
CN112052696A
Authority
CN
China
Prior art keywords
label
bundled
bar
identification frame
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010897845.0A
Other languages
Chinese (zh)
Other versions
CN112052696B (en)
Inventor
刘斌
袁钰博
庞殊杨
李文铃
刘常坤
贾鸿盛
毛尚伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Chongqing Information Technology Co Ltd
Original Assignee
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Chongqing Information Technology Co Ltd filed Critical CISDI Chongqing Information Technology Co Ltd
Priority to CN202010897845.0A priority Critical patent/CN112052696B/en
Publication of CN112052696A publication Critical patent/CN112052696A/en
Application granted granted Critical
Publication of CN112052696B publication Critical patent/CN112052696B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G06K7/10861Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices sensing of data fields affixed to objects or articles, e.g. coded labels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses a machine-vision-based method for identifying labels on finished bar products leaving the warehouse, comprising the following steps: performing target detection on an image of a region of interest with a pre-trained bundled bar detection model, identifying the bundled bars in the image, and outputting the position information of each bundled bar; identifying the labels on the bundled bars with a pre-trained label detection model based on the position information of the bundled bars, and outputting the position information of each label; and reading each label based on its position information to obtain the label information. The invention can monitor a continuously running automatic truck delivery line in real time, identify material specification information and transmit it automatically to other systems, replacing manual inspection and improving the efficiency and accuracy with which material specification information is identified and checked.

Description

Bar finished product warehouse-out label identification method, device and equipment based on machine vision
Technical Field
The invention relates to the field of image recognition, and in particular to a machine-vision-based method, device, and apparatus for identifying labels on finished bar products leaving the warehouse.
Background
In the transportation of steel products, automatic truck loading of finished bars is a common practice. When bars in the automatic truck delivery area leave the warehouse, the material specification information must be identified and checked to ensure that the inbound and outbound material information is accurate; otherwise materials may be lost, information may fall out of sync, or material specifications may fail to meet customer requirements, causing economic losses for the steel mill. It is therefore necessary to identify the bundled bars and their label information when finished bars leave the warehouse, and to transmit that information automatically to other systems.
At present, identification of outbound labels on finished bars relies mainly on experienced workers. Because multiple production lines run for long periods, workers relying on manual identification alone cannot transmit label information to other systems in time, and missed or erroneous detections can occur.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a machine-vision-based method, device, and apparatus for identifying labels on finished bar products leaving the warehouse, which address the shortcomings of the prior art.
In order to achieve the above and other related objects, the present invention provides a machine-vision-based method for identifying outbound labels on finished bars, comprising:
performing target detection on the image of the region of interest through a pre-trained bundled bar detection model, identifying a plurality of bundled bars in the image, and outputting position information of each bundled bar;
on the basis of the position information of the plurality of bundled bars, the labels on the plurality of bundled bars are identified through a label detection model trained in advance, and the position information of each label is output;
and identifying the label based on the position information of the label to obtain label information.
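The three steps above can be sketched as a small pipeline. This is an illustrative sketch only: the detector and reader callables are stand-ins (assumptions, not the patent's implementation) for the pre-trained models, each returning identification frames as (xmin, ymin, xmax, ymax) tuples.

```python
def identify_outbound_labels(image, detect_bundles, detect_labels, read_label):
    """Sketch of the claimed pipeline: bundled bars -> labels -> label information.
    detect_bundles, detect_labels and read_label stand in for the trained models."""
    results = []
    for bundle_frame in detect_bundles(image):                   # step 1: bundled-bar frames
        for label_frame in detect_labels(image, bundle_frame):   # step 2: label frames per bundle
            results.append(read_label(image, label_frame))       # step 3: read label information
    return results

# Stub callables standing in for the trained models:
info = identify_outbound_labels(
    image=None,
    detect_bundles=lambda img: [(0, 0, 100, 100)],
    detect_labels=lambda img, b: [(10, 10, 40, 30)],
    read_label=lambda img, f: {"frame": f, "text": "demo"},
)
```

Swapping in real detection models and a real decoder preserves this control flow.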
Optionally, the position information of the plurality of bundled bars is:

$$\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}$$

where each row corresponds to one bundled bar identification frame: $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper-left corner of the first bundled bar identification frame, and $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of its lower-right corner; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
Optionally, the method further comprises:
sorting the plurality of bundled bars and calculating the priority of each bundled bar; and identifying the labels on the bundled bars according to the priorities and position information of the bundled bars.
Optionally, the priority of a bundled bar is calculated as follows: the larger the y coordinate of the upper-left corner of the bundled bar identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper-left corner, the lower the priority.
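The priority rule above can be sketched as a sort key. This is an assumption about the intended comparison (larger y first; for equal y, larger x first, since smaller x is stated to mean lower priority), not the patent's actual code.

```python
def sort_by_priority(frames):
    """Order identification frames (xmin, ymin, xmax, ymax) by the stated rule:
    larger upper-left y first; for equal y, larger upper-left x first."""
    return sorted(frames, key=lambda f: (-f[1], -f[0]))

frames = [(10, 50, 60, 90), (5, 80, 55, 120), (40, 80, 90, 120)]
ordered = sort_by_priority(frames)
```

The same key applies to both bundled bar and label identification frames, since both priority clauses use the same comparison.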
Optionally, in each bundle of bars, the position information of the labels is:

$$\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}$$

where each row corresponds to one label identification frame: $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper-left corner of the first label identification frame, and $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of its lower-right corner; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
Optionally, the method further comprises:
sorting the labels in each bundle of bars and calculating the priority of each label; and reading the labels on the bundled bars according to the priorities and position information of the labels.
Optionally, the priority of a label is calculated as follows: the larger the y coordinate of the upper-left corner of the label identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper-left corner, the lower the priority.
Optionally, the bundled bar detection model and/or the label detection model are trained using an SSD-MobileNet neural network, R-CNN, Faster R-CNN, or YOLO.
Optionally, the method for training the bundled bar detection model includes:
collecting an initial image of a bundle of rods;
labeling and frame selecting are carried out on bundled bars in the initial image to obtain position information of an initial bundled bar identification frame;
constructing a data set for training a bundled bar detection model according to the position information of the initial bundled bar identification frame;
and training based on the data set for training the bundled bar detection model to obtain the bundled bar detection model.
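The data-set construction step above can be sketched as follows. The record layout (one image file name plus one box and class name per record) is an illustrative assumption; the patent does not specify a storage format, and the file name and class string here are hypothetical.

```python
def build_dataset(annotations):
    """Flatten labeled boxes into detection-training records.
    annotations: {image_file: [(xmin, ymin, xmax, ymax), ...]}.
    Returns records of (image_file, xmin, ymin, xmax, ymax, class_name)."""
    records = []
    for image_file, boxes in annotations.items():
        for (xmin, ymin, xmax, ymax) in boxes:
            records.append((image_file, xmin, ymin, xmax, ymax, "bundled_bar"))
    return records

# Two bundled-bar identification frames labeled in one hypothetical image:
records = build_dataset({"img_0001.jpg": [(12, 30, 88, 95), (90, 28, 160, 99)]})
```

The same flattening, with a different class name, would serve the label detection data set.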
Optionally, the method for training the label detection model includes:
acquiring an initial image of the label;
labeling and framing labels in the initial image of the labels to obtain position information of an initial label identification frame;
constructing a data set for training a label detection model according to the position information of the initial label identification frame;
and training based on the data set of the training label detection model to obtain a label detection model.
In order to achieve the above and other related objects, the present invention provides a machine-vision-based device for identifying outbound labels on finished bars, comprising:
the bundled bar detection module is used for carrying out target detection on the image of the region of interest through a pre-trained bundled bar detection model, identifying a plurality of bundled bars in the image, and outputting position information of each bundled bar;
the label detection module is used for identifying labels on the bundled bars through a pre-trained label detection model based on the position information of the bundled bars and outputting the position information of each label;
and the tag information identification module is used for identifying the tag based on the position information of the tag to obtain tag information.
Optionally, in each bundle of bars, the position information of the labels is:

$$\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}$$

where each row corresponds to one label identification frame: $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper-left corner of the first label identification frame, and $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of its lower-right corner; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
Optionally, the position information of the plurality of bundled bars is:

$$\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}$$

where each row corresponds to one bundled bar identification frame: $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper-left corner of the first bundled bar identification frame, and $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of its lower-right corner; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method.
As described above, the method, device and equipment for identifying the ex-warehouse label of the bar finished product warehouse based on machine vision have the following beneficial effects:
the invention discloses a bar finished product warehouse-out label identification method based on machine vision, which comprises the following steps: performing target detection on the image of the region of interest through a pre-trained bundled bar detection model, identifying a plurality of bundled bars in the image, and outputting position information of each bundled bar; on the basis of the position information of the plurality of bundled bars, the labels on the plurality of bundled bars are identified through a label detection model trained in advance, and the position information of each label is output; and identifying the label based on the position information of the label to obtain label information. The invention can carry out real-time detection on the automatic delivery production line of the continuously running automobile, identify the material specification information and automatically transmit the material specification information to other systems, replaces manual detection and improves the efficiency and the accuracy of identifying and checking the material specification information.
Drawings
Fig. 1 is a flowchart of a bar stock warehouse ex-warehouse label identification method based on machine vision according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a camera position according to an embodiment of the present invention;
FIG. 3 is a schematic view of a camera installation site according to an embodiment of the present invention;
FIG. 4 is a flowchart of a training method of a bundled bar detection model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for training a label detection model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the sorting effect of bundled rods according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an identification frame after a tag is focused according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a bar product warehouse-out label recognition device based on machine vision according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The embodiment of the invention provides a machine-vision-based method for identifying labels on finished bars leaving the warehouse, used to identify material specification information when bars leave the automatic truck delivery area during steel product transport. Specifically, as shown in fig. 1, the method includes:
S11, performing target detection on the image of the region of interest with a pre-trained bundled bar detection model, identifying the bundled bars in the image, and outputting the position information of each bundled bar. The image of the region of interest generally contains bundled bars in their bundled state. Because the stacked metal bars cannot be counted from a direction parallel to their length, the end faces of each bundle are used as the recognition target.
S12, based on the position information of the bundled bars, identifying the labels on the bundled bars through a label detection model trained in advance, and outputting the position information of each label;
S13, identifying the label based on the position information of the label to obtain label information.
With this method, a continuously running automatic truck delivery line can be inspected in real time; the material specification information is identified and transmitted automatically to other systems, replacing manual inspection and improving the efficiency and accuracy of identifying and checking material specification information.
In one embodiment, as shown in fig. 2, to improve the efficiency of automatic truck delivery and to ensure worker safety, fences 1 are arranged on three sides of the finished-bar area within the automatic truck delivery area, preventing trucks from entering other, dangerous areas.

To collect images and improve their quality, a focusing camera and a light-supplementing device are arranged behind the fence on the side of the finished-product area nearest the rear of the truck. The light-supplementing device includes, but is not limited to, an auxiliary light source, a reflector, an LED, or other device with a fill-light effect.

The bundled bars 2 in the finished-bar area may be stacked on top of one another. With the focusing camera and light-supplementing device placed behind the fence near the rear of the truck and the camera facing the tail of the truck, the camera looks directly at the bundled bars in the truck bed to identify the labels on them, avoiding misidentification caused by overlapping bundles. The camera placement is shown in fig. 2.
In one embodiment, the region of interest (ROI) is a region surrounded by a barrier opposite to the camera lens, as shown in fig. 2 and 3.
In one embodiment, the target detection models, namely the bundled bar detection model and the label detection model, are trained by the following methods.
As shown in fig. 4, the training method of the bundled bar detection model includes:
s41, collecting an initial image of the bundled rods; when acquiring an initial image, the camera lens needs to be directly opposite to the region of interest.
S42, labeling and framing the bundled rods in the initial image to obtain the position information of the initial bundled rod identification frame; wherein, the identification frame can be a rectangular frame, and the bundled rods are contained in the rectangular frame.
S43, constructing a data set for training a bundled bar detection model according to the position information of the initial bundled bar identification frame;
S44, training based on the data set for training the bundled bar detection model to obtain the bundled bar detection model.
Specifically, the data of the bundled bar training data set are input into the bundled bar detection network for training; the network extracts and learns the features of the bundled bars in the initial images, yielding a better model for detecting bundled bars.
In this embodiment, the SSD-MobileNet neural network is used as the bundled bar detection network; other target recognition neural networks, such as R-CNN, Faster R-CNN, or the YOLO series, can achieve similar effects.
As shown in fig. 5, the training method of the label detection model includes:
S51, acquiring an initial image of the label; when acquiring the initial image, the camera lens needs to face the region of interest directly.
S52, labeling and framing the label in the initial image to obtain the position information of the initial label identification frame; wherein, the identification frame can be a rectangular frame, and the label is contained in the rectangular frame.
S53, constructing a data set for training a label detection model according to the position information of the initial label identification frame;
S54, training based on the data set for training the label detection model to obtain the label detection model.
Specifically, the data of the label training data set are input into the label detection network for training; the network extracts and learns the features of the labels in the initial images, yielding a better model for detecting labels.
In this embodiment, the SSD-MobileNet neural network is used as the label detection network; other target recognition neural networks, such as R-CNN, Faster R-CNN, or the YOLO series, can achieve similar effects.
When bundled bars need to be detected, the images of the region of interest acquired in real time are input into the bundled bar detection model, the bundled bars in them are identified, and the position information of each bundled bar is output. The format and content of the output position information are as follows:

$$\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}$$

where each row corresponds to one bundled bar identification frame: $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper-left corner of the first bundled bar identification frame, and $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of its lower-right corner; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
In practice, the position of the bundled rods refers to the position of the bundled rod identification frame, which is a smallest rectangular or square frame that can contain the bundled rods.
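The "smallest frame that can contain the bundled bars" can be sketched as the minimal axis-aligned rectangle over a set of points. This helper is hypothetical, for illustration only; the actual frame comes from the detection model.

```python
def enclosing_frame(points):
    """Smallest axis-aligned rectangle (xmin, ymin, xmax, ymax) containing all points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Three hypothetical end-face points of a bundle:
frame = enclosing_frame([(12, 40), (55, 18), (30, 77)])
```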
In an embodiment, after the detection of the plurality of bundled bars is completed, the method further includes:

sorting the plurality of bundled bars and calculating the priority of each bundled bar; the labels on the bundled bars are then identified according to the priorities and position information of the bundled bars. An intelligent sorting algorithm may be used to sort the bundled bars; the sorting effect is shown in fig. 6.

The priority is calculated as follows: the larger the y coordinate of the upper-left corner of the bundled bar identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper-left corner, the lower the priority.
The camera is focused on the position of the highest-priority bundled bar according to the priorities and position information of the bundled bars, and all the labels in that bundle are identified.
When the labels need to be identified, the images of the region of interest acquired in real time are input into the label detection model, the labels in them are identified, and the position information of each label is output. The format and content of the output position information are as follows:

$$\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}$$

where each row corresponds to one label identification frame: $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper-left corner of the first label identification frame, and $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of its lower-right corner; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
In an embodiment, after the detection of the plurality of labels is completed, the method further includes:

sorting the plurality of labels and calculating the priority of each label; the labels on the bundled bars are then read according to the priorities and position information of the labels.
The priority is calculated as follows: the larger the y coordinate of the upper-left corner of the label identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper-left corner, the lower the priority.
The label with the highest priority is selected according to the label priority order; the camera is focused on the label position according to the label position information, and the picture inside the label identification frame is cropped for information reading to obtain the label information. After the highest-priority label information is obtained, the label-information acquisition step is repeated in priority order until all label information in the region of interest has been obtained. The effect of focusing on a label position can be seen in fig. 7.
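The crop-then-read step can be sketched as an array slice. The image dimensions here are a stand-in for a camera frame, and `read_label` is a hypothetical placeholder for whatever decoder is used (two-dimensional code, bar code, or text), not a real API.

```python
import numpy as np

def crop_label(image, frame):
    """Crop the label identification frame (xmin, ymin, xmax, ymax) from an image array."""
    xmin, ymin, xmax, ymax = frame
    return image[ymin:ymax, xmin:xmax]

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for one camera frame
patch = crop_label(image, (100, 200, 220, 260))
# `patch` would then be handed to the chosen decoder, e.g. read_label(patch)
```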
The label information reading modes include, but are not limited to, scanning the two-dimensional code on the label; other reading modes, such as scanning a bar code or directly reading the label text, can achieve similar effects.
In one embodiment, after the label information is acquired, it is transmitted to other systems. The invention uses the RS485 communication protocol; other information transmission modes, such as RS232 or the CAN bus, can achieve similar effects.
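Packaging label information for serial transmission can be sketched as follows. The frame layout (STX/ETX delimiters plus an XOR checksum) is an illustrative assumption, not the patent's protocol, and the final write to the RS485 adapter would go through a serial library rather than the commented placeholder.

```python
def frame_message(label_info: str) -> bytes:
    """Wrap label text in an illustrative STX/ETX frame with an XOR checksum byte."""
    payload = label_info.encode("ascii")
    checksum = 0
    for b in payload:
        checksum ^= b
    return b"\x02" + payload + bytes([checksum]) + b"\x03"

msg = frame_message("BAR-20X-HRB400")  # hypothetical label text
# The frame would then be written out, e.g. serial_port.write(msg) with a serial library.
```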
As shown in fig. 8, a machine-vision-based device for identifying outbound labels on finished bars includes:
the bundled bar detecting module 81 is configured to perform target detection on the image of the region of interest through a pre-trained bundled bar detecting model, identify a plurality of bundled bars therein, and output position information of each bundled bar;
the label detection module 82 is configured to identify labels on the plurality of bundled bars through a label detection model trained in advance based on the position information of the plurality of bundled bars, and output position information of each label;
and the tag information identification module 83 is configured to identify the tag based on the location information of the tag to obtain tag information.
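The three modules above form a two-stage detection chain followed by a reading step. A hedged sketch of that flow is given below; the three callables stand in for the pre-trained bundled bar detection model, the pre-trained label detection model, and the code reader, and their names are illustrative rather than taken from the patent:

```python
def recognize_labels(roi_image, detect_bundles, detect_labels, read_label):
    """Sketch of the three-module flow: the bundled bar detection module
    finds bundle frames in the region-of-interest image, the label detection
    module finds label frames within each bundle, and the label information
    identification module reads each label from its position."""
    results = []
    for bundle_frame in detect_bundles(roi_image):          # module 81
        for label_frame in detect_labels(roi_image, bundle_frame):  # module 82
            results.append(read_label(roi_image, label_frame))      # module 83
    return results
```

In practice the two detectors would wrap the trained neural-network models and `read_label` would crop the frame and decode the two-dimensional code inside it.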
In one embodiment, in each bundle of bars, the position information of the labels is:
$$
\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one label identification frame; $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper left corner point of the first label identification frame; $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of the lower right corner point of the first label identification frame; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
In an embodiment, the position information of the plurality of bundled bars is:
$$
\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one bundled bar identification frame; $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper left corner point of the first bundled bar identification frame; $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of the lower right corner point of the first bundled bar identification frame; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
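These n-by-4 position matrices map directly onto a nested-list (or array) representation, from which the picture inside any identification frame can be cut out for reading. The coordinate values below are made up for illustration:

```python
# Illustrative n x 4 position matrices: one row per identification frame,
# columns xmin, ymin, xmax, ymax (upper left and lower right corner points).
band_positions = [
    [100,  60, 420, 200],   # Band1: first bundled bar identification frame
    [100, 220, 420, 360],   # Band2
]
label_positions = [
    [140,  90, 200, 150],   # Selc1: first label identification frame
    [260,  90, 320, 150],   # Selc2
]

def crop(image, frame):
    """Cut out the sub-image spanned by a frame; the image is a 2-D list of
    rows indexed as image[y][x]."""
    xmin, ymin, xmax, ymax = frame
    return [row[xmin:xmax] for row in image[ymin:ymax]]
```

With a real image array, `crop(image, label_positions[0])` would yield the patch handed to the two-dimensional-code reader.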
Since the device embodiments correspond to the method embodiments, reference is made to the description of the method embodiments for the details of the device embodiments, which are not repeated here.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods in the present embodiments.
The present invention also provides an apparatus comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the device executes any method in the embodiments.
The computer-readable storage medium in the present embodiment may be understood by those skilled in the art as follows: all or part of the steps of the above method embodiments may be completed by hardware associated with a computer program. The computer program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes any medium capable of storing program code, such as a ROM, a RAM, or a magnetic or optical disk.
The device provided by this embodiment comprises a processor, a memory, a transceiver, and a communication interface. The memory and the communication interface are connected with the processor and the transceiver to enable mutual communication; the memory is used for storing a computer program, the communication interface is used for communication, and the processor and the transceiver are used for running the computer program.
In this embodiment, the Memory may include a Random Access Memory (RAM), and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In the above-described embodiments, reference in the specification to "the present embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least some embodiments, but not necessarily all embodiments. The multiple occurrences of "the present embodiment" do not necessarily all refer to the same embodiment. When the description states that a component, feature, structure, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, or characteristic is not required to be included.
In the embodiments described above, although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory structures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments. The embodiments of the invention are intended to embrace all such alternatives, modifications, and variations that fall within the broad scope of the appended claims.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical concept disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (14)

1. A bar finished product warehouse-out label identification method based on machine vision is characterized by comprising the following steps:
performing target detection on the image of the region of interest through a pre-trained bundled bar detection model, identifying a plurality of bundled bars in the image, and outputting position information of each bundled bar;
on the basis of the position information of the plurality of bundled bars, the labels on the plurality of bundled bars are identified through a label detection model trained in advance, and the position information of each label is output;
and identifying the label based on the position information of the label to obtain label information.
2. The machine vision-based bar product warehouse ex-warehouse label recognition method according to claim 1, wherein the position information of a plurality of bundled bars is:
$$
\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one bundled bar identification frame; $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper left corner point of the first bundled bar identification frame; $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of the lower right corner point of the first bundled bar identification frame; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
3. The machine vision-based bar product warehouse ex-warehouse label identification method according to claim 2, further comprising:
sorting the plurality of bundled bars and calculating the priority of each bundled bar; and identifying the labels on the plurality of bundled bars according to the priorities and the position information of the plurality of bundled bars.
4. The machine vision-based bar finished product warehouse-out label identification method according to claim 3, wherein the priority of the bundled bars is calculated as follows: the larger the y coordinate of the upper left corner point of the bundled bar identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper left corner point of the bundled bar identification frame, the lower the priority.
5. The machine vision-based bar product warehouse ex-warehouse label identification method according to claim 1, wherein in each bundle of bars, the position information of the label is:
$$
\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one label identification frame; $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper left corner point of the first label identification frame; $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of the lower right corner point of the first label identification frame; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
6. The machine vision-based bar product warehouse ex-warehouse label identification method according to claim 5, further comprising:
sorting the plurality of labels in each bundle of bars and calculating the priority of each label; and identifying the labels on the plurality of bundled bars according to the priorities and the position information of the labels.
7. The machine vision-based bar finished product warehouse-out label identification method according to claim 6, wherein the priority of the labels is calculated as follows: the larger the y coordinate of the upper left corner point of the label identification frame, the higher the priority; if the y coordinates are equal, the smaller the x coordinate of the upper left corner point of the label identification frame, the lower the priority.
8. The machine vision-based bar finished product warehouse-out label identification method according to claim 1, wherein the bundled bar detection model and/or the label detection model are trained using an SSD-MobileNet neural network, R-CNN, Fast-RCNN, or YOLO.
9. The machine vision-based bar product warehouse ex-warehouse label recognition method of claim 1, wherein the method for training the bundled bar detection model comprises:
collecting an initial image of a bundle of rods;
labeling and frame selecting are carried out on bundled bars in the initial image to obtain position information of an initial bundled bar identification frame;
constructing a data set for training a bundled bar detection model according to the position information of the initial bundled bar identification frame;
and training based on the data set for training the bundled bar detection model to obtain the bundled bar detection model.
10. The machine vision-based bar product warehouse ex-warehouse label recognition method according to claim 1, wherein the method for training the label detection model comprises:
acquiring an initial image of the label;
labeling and framing labels in the initial image of the labels to obtain position information of an initial label identification frame;
constructing a data set for training a label detection model according to the position information of the initial label identification frame;
and training based on the data set of the training label detection model to obtain a label detection model.
11. A machine vision-based bar finished product warehouse-out label recognition device, characterized by comprising:
the bundled bar detection module is used for carrying out target detection on the image of the region of interest through a pre-trained bundled bar detection model, identifying a plurality of bundled bars in the image, and outputting position information of each bundled bar;
the label detection module is used for identifying labels on the bundled bars through a pre-trained label detection model based on the position information of the bundled bars and outputting the position information of each label;
and the tag information identification module is used for identifying the tag based on the position information of the tag to obtain tag information.
12. The machine vision-based bar finished product warehouse-out label recognition device according to claim 11, wherein in each bundle of bars, the position information of the label is:
$$
\begin{bmatrix}
Selc1_{xmin} & Selc1_{ymin} & Selc1_{xmax} & Selc1_{ymax} \\
Selc2_{xmin} & Selc2_{ymin} & Selc2_{xmax} & Selc2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Selcn_{xmin} & Selcn_{ymin} & Selcn_{xmax} & Selcn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one label identification frame; $Selc1_{xmin}$ and $Selc1_{ymin}$ are the x and y coordinates of the upper left corner point of the first label identification frame; $Selc1_{xmax}$ and $Selc1_{ymax}$ are the x and y coordinates of the lower right corner point of the first label identification frame; Selc2 denotes the second label identification frame, Selc3 the third, and Selcn the nth.
13. The machine vision-based bar finished product warehouse-out label recognition device according to claim 11, wherein the position information of the plurality of bundled bars is:
$$
\begin{bmatrix}
Band1_{xmin} & Band1_{ymin} & Band1_{xmax} & Band1_{ymax} \\
Band2_{xmin} & Band2_{ymin} & Band2_{xmax} & Band2_{ymax} \\
\vdots & \vdots & \vdots & \vdots \\
Bandn_{xmin} & Bandn_{ymin} & Bandn_{xmax} & Bandn_{ymax}
\end{bmatrix}
$$
wherein each row corresponds to one bundled bar identification frame; $Band1_{xmin}$ and $Band1_{ymin}$ are the x and y coordinates of the upper left corner point of the first bundled bar identification frame; $Band1_{xmax}$ and $Band1_{ymax}$ are the x and y coordinates of the lower right corner point of the first bundled bar identification frame; Band2 denotes the second bundled bar identification frame, Band3 the third, and Bandn the nth.
14. An apparatus, comprising: a processor and a memory;
the memory is for storing a computer program and the processor is for executing the computer program stored by the memory to cause the apparatus to perform the method of any of claims 1 to 10.
CN202010897845.0A 2020-08-31 2020-08-31 Bar product warehouse-out label identification method, device and equipment based on machine vision Active CN112052696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010897845.0A CN112052696B (en) 2020-08-31 2020-08-31 Bar product warehouse-out label identification method, device and equipment based on machine vision

Publications (2)

Publication Number Publication Date
CN112052696A true CN112052696A (en) 2020-12-08
CN112052696B CN112052696B (en) 2023-05-02

Family

ID=73606589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010897845.0A Active CN112052696B (en) 2020-08-31 2020-08-31 Bar product warehouse-out label identification method, device and equipment based on machine vision

Country Status (1)

Country Link
CN (1) CN112052696B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000249518A (en) * 1999-02-26 2000-09-14 Nitto Seiko Co Ltd Detection method for image of object
CN101486391A (en) * 2008-01-18 2009-07-22 北京光电技术研究所 Automatic separation method and apparatus for metallic rod
CN103577654A (en) * 2013-11-21 2014-02-12 上海电气集团股份有限公司 Finite element precise modeling method for stator bar of large turbine generator
CN104680157A (en) * 2015-03-26 2015-06-03 天津工业大学 Bundled bar material identification and counting method based on support vector machine
CN109175805A (en) * 2018-10-10 2019-01-11 上海易清智觉自动化科技有限公司 Metal wire bar drop robot welding system
CN109775055A (en) * 2019-01-08 2019-05-21 河北科技大学 The bundled rods end face label missing of view-based access control model detects and error measurement method
US10528812B1 (en) * 2019-01-29 2020-01-07 Accenture Global Solutions Limited Distributed and self-validating computer vision for dense object detection in digital images
CN110827247A (en) * 2019-10-28 2020-02-21 上海悦易网络信息技术有限公司 Method and equipment for identifying label
CN111401466A (en) * 2020-03-26 2020-07-10 广州紫为云科技有限公司 Traffic sign detection and identification marking method and device and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Chang et al.: "Counting Method for Bundled Round Bars Based on SVM", Packaging Engineering *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222977A (en) * 2021-05-31 2021-08-06 中冶赛迪重庆信息技术有限公司 Rod and wire label identification method, system, medium and terminal in driving operation process
CN114239632A (en) * 2021-12-09 2022-03-25 中冶赛迪重庆信息技术有限公司 Method and system for identifying wire and rod label, electronic device and medium
CN114239632B (en) * 2021-12-09 2023-08-25 中冶赛迪信息技术(重庆)有限公司 Wire and bar label identification method, system, electronic equipment and medium
CN114881184A (en) * 2022-04-26 2022-08-09 中冶赛迪信息技术(重庆)有限公司 Method, device, equipment and medium for identifying label of bar in metallurgical production process

Also Published As

Publication number Publication date
CN112052696B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN112052696A (en) Bar finished product warehouse-out label identification method, device and equipment based on machine vision
CN107617573B (en) Logistics code identification and sorting method based on multitask deep learning
CN108416412B (en) Logistics composite code identification method based on multitask deep learning
US20220084186A1 (en) Automated inspection system and associated method for assessing the condition of shipping containers
CN110705666A (en) Artificial intelligence cloud computing display rack goods and label monitoring and goods storage method
CN109034694B (en) Production raw material intelligent storage method and system based on intelligent manufacturing
CN111274934A (en) Implementation method and system for intelligently monitoring forklift operation track in warehousing management
EP3696135B1 (en) Forklift and system with forklift for the identification of goods
CN113516322B (en) Factory obstacle risk assessment method and system based on artificial intelligence
CN112053336B (en) Bar alignment detection method, system, equipment and medium
US11632499B2 (en) Multi-camera positioning and dispatching system, and method thereof
CN114693529A (en) Image splicing method, device, equipment and storage medium
CN113177941B (en) Steel coil edge crack identification method, system, medium and terminal
Naumann et al. Literature review: Computer vision applications in transportation logistics and warehousing
CN112053339B (en) Rod finished product warehouse driving safety monitoring method, device and equipment based on machine vision
CN115690085A (en) Mobile terminal-based scrap steel identification method and system
CN113978987A (en) Pallet object packaging and picking method, device, equipment and medium
CN114022070A (en) Disc library method, device, equipment and storage medium
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN113222977A (en) Rod and wire label identification method, system, medium and terminal in driving operation process
CN112053337A (en) Bar detection method, device and equipment based on deep learning
CN114239632B (en) Wire and bar label identification method, system, electronic equipment and medium
CN104392436A (en) Processing method and device for remote sensing image
CN115828963A (en) Automatic storage method, system, equipment and medium for wire bundle
CN112037195B (en) Method, system, equipment and medium for detecting abnormal length of bar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 401329 No. 5-6, building 2, No. 66, Nongke Avenue, Baishiyi Town, Jiulongpo District, Chongqing

Applicant after: MCC CCID information technology (Chongqing) Co.,Ltd.

Address before: 20-24 / F, No.7 Longjing Road, North New District, Yubei District, Chongqing

Applicant before: CISDI CHONGQING INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant