CN115138596A - Visual detection method and device - Google Patents

Visual detection method and device

Info

Publication number
CN115138596A
CN115138596A (application number CN202210638474.3A)
Authority
CN
China
Prior art keywords
target object
target
image
label
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210638474.3A
Other languages
Chinese (zh)
Inventor
宁垚云
王峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai 100me Network Technology Co ltd
Original Assignee
Shanghai 100me Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai 100me Network Technology Co ltd filed Critical Shanghai 100me Network Technology Co ltd
Priority to CN202210638474.3A priority Critical patent/CN115138596A/en
Publication of CN115138596A publication Critical patent/CN115138596A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B07: SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C: POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00: Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34: Sorting according to other particular properties
    • B07C5/342: Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3422: Sorting according to other particular properties according to optical properties, e.g. colour using video scanning devices, e.g. TV-cameras
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B07: SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C: POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00: Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36: Sorting apparatus characterised by the means used for distribution
    • B07C5/361: Processing or control devices therefor, e.g. escort memory

Abstract

The embodiment of the application provides a visual detection method and device, applied to the technical field of automatic detection, and comprising the following steps: an upper computer acquires a first image of a target object that has not been processed by a labeling machine, and performs quality inspection on the target object based on the first image to obtain a target quality inspection result of the target object; it acquires a second image of the target object that has been processed by the labeling machine, and performs identification processing on the second image to obtain a label identification result of the target object; and if the target quality inspection result and the label identification result of the target object satisfy a preset abnormal condition, it instructs a remover to remove the target object. Because the upper computer performs both the quality inspection and the label identification detection of the target object, the efficiency of quality inspection and label detection is improved and the labor cost of detection on the production line is greatly reduced. In addition, target objects whose quality inspection result and label identification result satisfy the preset abnormal condition are automatically removed by the remover, so manual sorting is not needed and the production line is automated.

Description

Visual detection method and device
Technical Field
The invention relates to the technical field of automatic detection, in particular to a visual detection method and device.
Background
With continued economic development, people's demands for material goods are ever higher, and accordingly the product-inspection requirements of various industries keep growing.
At present, in the technical field of meat production inspection, the detection of meat quality and of the labels on meat packages mostly depends on manual work: a quality inspector has to inspect the products visually and pick out those of unqualified quality or with wrong labels, so detection takes a long time and the labor cost is high.
In summary, how to enable intelligent equipment to detect meat quality and meat-package labeling is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides a visual detection method and device, which are used for carrying out automatic quality detection and label detection on a target object on a production line.
In a first aspect, an embodiment of the present application provides a method for visual inspection, which is applied to an upper computer, and includes:
acquiring a first image of a target object which is not processed by a labeling machine, and performing quality detection on the target object based on the first image to obtain a target quality detection result of the target object;
acquiring a second image of a target object processed by a labeling machine, and identifying the second image to obtain a label identification result of the target object;
and if the target quality inspection result and the tag identification result of the target object meet preset abnormal conditions, indicating a remover to remove the target object.
Automatic quality inspection of the target object is realized through the upper computer, so quality inspection by a human inspector is no longer needed, which reduces labor cost and improves quality-inspection efficiency. The upper computer can also detect the label automatically, which solves the problem that human eyes cannot identify the two-dimensional code on the label and therefore make errors, and improves label-detection efficiency. In addition, target objects whose quality inspection result and label identification result satisfy the preset abnormal condition are automatically removed by the remover, so manual sorting is not needed and the production line is automated.
Optionally, performing quality detection on the target object based on the first image to obtain a target quality detection result of the target object, including:
inputting the first image into a quality inspection model for identification processing to obtain an abnormal identification result of the first image, wherein the quality inspection model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of target objects which are qualified in quality inspection and images of target objects which are unqualified in quality inspection;
and determining a target quality inspection result of the target object based on the abnormality recognition result of the first image.
Optionally, the performing recognition processing on the second image to obtain a tag recognition result of the target object includes:
if the target label information of the target object is not obtained after the second image is identified, determining that the label identification result is that the label is not labeled;
and if the second image is identified, obtaining target label information of the target object, and comparing the target label information of the target object with preset reference label information to obtain a label identification result.
Optionally, the comparing the target tag information of the target object with preset reference tag information to obtain a tag identification result includes:
if the target label information of the target object is not matched with the reference label information, determining that the label identification result is a label error;
and if the target label information of the target object is matched with the reference label information, determining that the label identification result is that the label is correct.
Optionally, the reference label information is obtained from a server when the upper computer is started.
Optionally, if the target quality inspection result and the tag identification result of the target object satisfy a preset abnormal condition, instructing a remover to remove the target object, including:
and if at least one of the target quality inspection result is unqualified, the label identification result is not labeled and the label identification result is wrong, indicating a remover to remove the target object.
Optionally, if the target quality inspection result is qualified and the tag identification result is that the tag is correct, the framing processing of the target object is instructed.
Optionally, before acquiring the second image of the target object processed by the labeling machine, the method further includes:
performing class detection on the target object based on the first image to obtain a target class of the target object;
and if the target class is different from the historical class obtained by the last class detection, notifying the labeling machine to switch from the historical label type corresponding to the historical class to the target label type corresponding to the target class, so that the labeling machine labels the target object based on the target label type to obtain the target object processed by the labeling machine.
Optionally, performing class detection on the target object based on the first image to obtain a target class of the target object, including:
and inputting the first image into a classification model for classification processing to obtain a target class of the target object, wherein the classification model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of the target object of different classes.
In a second aspect, an embodiment of the present application provides a visual inspection device, which is applied to an upper computer and includes:
the system comprises an acquisition module, a quality detection module and a control module, wherein the acquisition module is used for acquiring a first image of a target object which is not processed by a labeling machine, and performing quality detection on the target object based on the first image to obtain a target quality detection result of the target object;
the acquisition module is further configured to acquire a second image of the target object that has been processed by the labeling machine, and perform recognition processing on the second image to obtain a tag recognition result of the target object;
and the processing module is used for indicating a remover to remove the target object if the target quality inspection result and the tag identification result of the target object meet preset abnormal conditions.
Optionally, the obtaining module is specifically configured to:
inputting the first image into a quality inspection model for identification processing to obtain an abnormal identification result of the first image, wherein the quality inspection model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of target objects which are qualified in quality inspection and images of target objects which are unqualified in quality inspection;
and determining a target quality inspection result of the target object based on the abnormality recognition result of the first image.
Optionally, the obtaining module is specifically configured to:
if the target label information of the target object is not obtained after the second image is identified, determining that the label identification result is that the label is not labeled;
and if the second image is identified, obtaining target label information of the target object, and comparing the target label information of the target object with preset reference label information to obtain a label identification result.
Optionally, the obtaining module is specifically configured to:
if the target label information of the target object is not matched with the reference label information, determining that the label identification result is a label error;
and if the target label information of the target object is matched with the reference label information, determining that the label identification result is that the label is correct.
Optionally, the reference label information is obtained from a server when the upper computer is started.
Optionally, the processing module is specifically configured to:
and if at least one of the target quality inspection result is unqualified, the label identification result is not labeled and the label identification result is wrong, the remover is instructed to remove the target object.
Optionally, the processing module is further configured to:
and if the target quality inspection result is qualified and the label identification result is correct, indicating to frame the target object.
Optionally, the system further comprises a category detection module;
the category detection module is specifically configured to:
before acquiring a second image of a target object processed by a labeling machine, performing class detection on the target object based on the first image to obtain a target class of the target object;
and if the target class is different from the historical class obtained by the last class detection, notifying the labeling machine to switch from the historical label type corresponding to the historical class to the target label type corresponding to the target class, so that the labeling machine labels the target object based on the target label type to obtain the target object processed by the labeling machine.
Optionally, the category detection module is specifically configured to:
and inputting the first image into a classification model for classification processing to obtain a target class of the target object, wherein the classification model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of the target object of different classes.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the visual inspection method according to any of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, wherein the program, when executed on the computer device, causes the computer device to perform the visual inspection method according to any of the first aspects.
In the embodiment of the application, the upper computer can perform quality inspection and label identification detection on the target object, so detection on the production line is fully automated and the labor cost of detection on the line is greatly reduced. The labeling machine can complete the label-switching action autonomously under the instruction of the upper computer, so production-line workers do not need to switch labels manually, which reduces the error rate of the labeling machine and the probability that target objects must be reworked and re-inspected, and greatly improves the production efficiency of the whole line. The rejector receives the rejection notification from the upper computer and is arranged before qualified products are framed, so objects that fail quality inspection or label detection are removed in a single pass, which improves the production efficiency of the line and saves the cost of manual removal.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a system architecture diagram according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a visual inspection method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a system architecture according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a visual inspection of a pork bin according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a visual inspection method for a single production line according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a vision inspection method suitable for multiple production lines according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a visual inspection apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, which is a system architecture diagram applicable to the embodiment of the present application, the system architecture may include an upper computer 101, a first image capturing device 102, a second image capturing device 103, a first trigger 104, a second trigger 105, a packaging machine 106, a labeling machine 107, and a rejector 108.
The first trigger 104 and the first image capturing device 102 may be integrated by means of electrical numerical control technology, and likewise the second trigger 105 and the second image capturing device 103. The upper computer 101 is connected with the first image acquisition device 102, the second image acquisition device 103, the labeling machine 107 and the rejector 108 in a wired or wireless manner, for example through a serial port, a network port, a wired network, a wireless network, a USB connection or Bluetooth. The first image capturing device 102 and the second image capturing device 103 may be cameras or the like.
In practical application, a target object is conveyed through the conveyor belt, and when the target object is conveyed to the first trigger 104, the first image acquisition device is triggered to acquire a first image of the target object, and the first image is sent to the upper computer 101. The upper computer 101 performs quality detection on the target object based on the first image to obtain a target quality detection result of the target object.
Then, the target object is conveyed to the packaging machine 106 by the conveyor belt, the packaging machine 106 packages the target object, and the packaged target object is conveyed to the labeling machine 107, and the labeling machine 107 labels the packaged target object.
The target object that has been processed by the labeling machine 107 is then transferred to the second trigger 105 by the conveyor belt, which triggers the second image capturing device to capture a second image of the target object processed by the labeling machine 107 and send the second image to the upper computer 101. The upper computer 101 identifies the second image to obtain a label identification result of the target object.
If the target quality inspection result and the tag identification result of the target object satisfy the preset abnormal condition, the upper computer 101 instructs the remover 108 to remove the target object. When the target object is transferred to the rejector 108, the target object is rejected.
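Viewed from the upper computer, the flow above amounts to an event-driven control loop: each trigger delivers an image, the corresponding check is run, and the rejection decision is issued before the object reaches the rejector. The sketch below is a minimal, non-authoritative illustration of that loop; the class and method names (capture(), inspect_quality(), recognize_label(), reject()) are placeholders introduced here, not interfaces defined by this application.

```python
from dataclasses import dataclass


@dataclass
class InspectionRecord:
    """Per-object record kept by the upper computer between the two triggers."""
    object_id: int
    quality_ok: bool = False
    label_ok: bool = False


def on_first_trigger(object_id, first_camera, quality_model, records):
    # First trigger: photograph the object before packaging/labeling and run quality inspection.
    first_image = first_camera.capture()
    records[object_id] = InspectionRecord(
        object_id=object_id,
        quality_ok=quality_model.inspect_quality(first_image),
    )


def on_second_trigger(object_id, second_camera, label_checker, records):
    # Second trigger: photograph the labeled package and run label identification.
    second_image = second_camera.capture()
    records[object_id].label_ok = label_checker.recognize_label(second_image)


def on_approach_rejector(object_id, rejector, records):
    # Issued before the object reaches the rejector: any abnormal item means removal.
    record = records[object_id]
    if not (record.quality_ok and record.label_ok):
        rejector.reject(object_id)  # preset abnormal condition satisfied
```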
Based on the above description, fig. 2 exemplarily shows a flow of a visual inspection method, where the flow of the method is executed by a computer device, and the computer device may be the upper computer shown in fig. 1, and includes the following steps:
step 201, acquiring a first image of the target object which is not processed by the labeling machine, and performing quality detection on the target object based on the first image to obtain a target quality detection result of the target object.
Specifically, the first image is input into a quality inspection model for identification processing, and an abnormal identification result of the first image is obtained, wherein the quality inspection model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of target objects which are qualified in quality inspection and images of target objects which are unqualified in quality inspection; and determining a target quality inspection result of the target object based on the abnormality recognition result of the first image.
The target object can be any of various pork products such as pork forelegs, spare ribs, tenderloin, shank meat, and pork chops. Before the upper computer acquires the first image of the target object, a production-line worker loads the target object into a packaging box, the boxed target object is conveyed to the position of the first trigger by the conveyor belt, and once the first trigger is triggered it notifies the first image acquisition device to photograph the target object to obtain the first image; at this point the target object is still in an unsealed (non-film-covered) state. The upper computer inputs the first image into the quality inspection model for identification processing, thereby detecting the quality of the target object and obtaining the target quality inspection result of the target object.
If the visual detection result in the first image indicates that the target object has impurities such as hair, black spots and the like, the target quality detection result is unqualified, and if the visual detection result in the first image indicates that the target object has no impurities, the target quality detection result is qualified.
After the quality inspection model is adopted to determine the first image of the target object with unqualified quality inspection, the first image is added into the sample image set to realize the continuous update of the sample image set, and the quality inspection model is retrained based on the updated sample image set to realize the update of the quality inspection model, so that the accuracy of the quality inspection model is continuously improved.
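The quality inspection model can be read as a binary image classifier trained on sample images of qualified and unqualified target objects, with its output mapped to a pass/fail result. The following is a minimal sketch of such a classifier, written in PyTorch purely for illustration; the network architecture, input size and framework are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn


class QualityInspectionModel(nn.Module):
    """Binary classifier: class 0 = qualified, class 1 = unqualified (e.g. hair, black spots)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def inspect_quality(model: QualityInspectionModel, first_image: torch.Tensor) -> str:
    """Map the abnormality recognition result of the first image to a quality inspection result."""
    model.eval()
    with torch.no_grad():
        abnormal = model(first_image.unsqueeze(0)).argmax(dim=1).item() == 1
    return "unqualified" if abnormal else "qualified"
```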
In some embodiments, the conveyor belt conveys the target object to the packaging machine after the first image capture device captures the first image of the target object. The packaging machine packages and seals films on the target object, and the conveyor belt continuously conveys the packaged target object to the labeling machine.
The first image acquisition device serves as an input device of the upper computer and includes, but is not limited to, a Universal Serial Bus (USB) camera, an IEEE 1394 camera, and a Camera Link camera. The image-processing operations the upper computer performs on the first image include, but are not limited to, mean filtering, median filtering, threshold transformation, morphological opening, Fourier transform, and histogram processing.
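These preprocessing operations correspond to standard image-processing primitives. The sketch below shows one possible chain using OpenCV; the application does not name a particular library, so the specific functions and parameters are illustrative assumptions.

```python
import cv2
import numpy as np


def preprocess_first_image(first_image: np.ndarray) -> dict:
    """Illustrative chain of the preprocessing operations listed above (BGR input)."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    mean_filtered = cv2.blur(gray, (5, 5))                          # mean filtering
    median_filtered = cv2.medianBlur(gray, 5)                       # median filtering
    _, binary = cv2.threshold(median_filtered, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # threshold transformation
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)       # morphological opening
    equalized = cv2.equalizeHist(gray)                              # histogram processing
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))           # Fourier transform
    return {"mean": mean_filtered, "median": median_filtered, "binary": binary,
            "opened": opened, "equalized": equalized, "spectrum": spectrum}
```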
Step 202, acquiring a second image of the target object processed by the labeling machine, and performing identification processing on the second image to obtain a label identification result of the target object.
Specifically, after the labeling machine has labeled the target object, when the target object is conveyed to the second trigger by the conveyor belt, the second trigger triggers the second image acquisition device to capture an image of the target object and its label, obtaining the second image of the target object. The upper computer then performs label identification processing on the label of the target object to obtain the label identification result of the target object, where the label identification result covers three cases: not labeled, label error, and label correct. The label of the target object comprises at least one of the following: the item name, the production date, the production batch, the production address, the manufacturer, the manufacturer's two-dimensional code, and the shelf life.
In some embodiments, the second image may be subjected to a tag recognition process by a tag recognition model to obtain a tag recognition result of the target object, where the tag recognition model includes, but is not limited to: ALBERT model, convolutional neural network model.
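Because the label carries a two-dimensional code in addition to printed text, the recognition step can combine code decoding with a text-recognition model. The fragment below sketches only the two-dimensional-code branch using OpenCV's QR detector; the payload layout of the code and the field names are assumptions made for illustration, and an empty decode result is treated here as the "not labeled" case.

```python
from typing import Optional

import cv2
import numpy as np


def read_label(second_image: np.ndarray) -> Optional[dict]:
    """Decode the two-dimensional code on the label; None means no label was found."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(second_image)
    if not data:
        return None  # treated downstream as the 'not labeled' case
    # Assumed payload layout: "item|production date|batch|manufacturer".
    fields = ["item", "production_date", "batch", "manufacturer"]
    return dict(zip(fields, data.split("|")))
```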
And 203, if the target quality inspection result and the tag identification result of the target object meet preset abnormal conditions, indicating the remover to remove the target object.
Specifically, the rejector receives a rejection instruction from the upper computer, and rejects the target objects with unqualified quality inspection and unqualified labels.
In the embodiment of the application, automatic quality inspection of the target object is realized through the upper computer, so quality inspection by a human inspector is no longer needed, which reduces labor cost and improves quality-inspection efficiency. Secondly, the upper computer can also detect the label automatically, which solves the problem that human eyes cannot identify the two-dimensional code on the label and therefore make errors, and also improves label-detection efficiency. In addition, target objects whose quality inspection result and label identification result satisfy the preset abnormal condition are automatically removed by the remover, so manual sorting is not needed and the production line is automated.
Optionally, in step 202, the embodiment of the present application performs recognition processing on the second image at least in the following manner to obtain a tag recognition result of the target object, which specifically includes:
and if the target label information of the target object is not obtained after the second image is subjected to the identification processing, determining that the label identification result is that the label is not labeled. And if the second image is identified, obtaining target label information of the target object, and comparing the target label information of the target object with preset reference label information to obtain a label identification result.
Specifically, the reference label information is requested by the upper computer from the server when the upper computer is started, and it includes at least one of the following contents: the item name, the production date, the production batch, the production address, the manufacturer, the manufacturer's two-dimensional code, and the like. The server stores in advance the reference label information of the target object of each category.
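A minimal sketch of this startup request is shown below. The endpoint path and the shape of the response are hypothetical; the application only states that the upper computer requests reference label information from the server when it starts, keyed by the category of the target object.

```python
import requests


def load_reference_labels(server_url: str) -> dict:
    """Request the reference label information once, when the upper computer starts."""
    # The '/reference-labels' path and the response layout are illustrative assumptions.
    response = requests.get(f"{server_url}/reference-labels", timeout=5)
    response.raise_for_status()
    # e.g. {"pork spare ribs": {"item": "...", "manufacturer": "...", ...}, ...}
    return response.json()
```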
In some embodiments, if the target tag information of the target object does not match the reference tag information, the tag identification result is determined to be a tag error. And if the target label information of the target object is matched with the reference label information, determining that the label identification result is that the label is correct.
Specifically, each content in the target tag information may be compared with each content in the reference tag information, and when at least one content is different, it is determined that the target tag information is not matched with the reference tag information, and then it is determined that the tag identification result is a tag error. And when the contents of the target label information and the contents of the reference label information are the same, determining that the target label information is matched with the reference label information, and further determining that the label identification result is that the label is correct.
For example, suppose the label includes both the item name and the manufacturer. The upper computer performs label detection on the target object and obtains the following target label information: pork rib, manufacturer B. The reference label information obtained by the upper computer is: pork rib, manufacturer A. By comparing the target label information with the reference label information, the upper computer finds that they do not match and determines that the label identification result is a label error.
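The comparison rule can be expressed as a field-by-field check in which a single differing field is enough to declare a label error. A minimal sketch, including the pork-rib example above, is shown below; the dictionary field names are illustrative.

```python
def compare_label(target_info: dict, reference_info: dict) -> str:
    """Field-by-field comparison; returns 'not labeled', 'label error' or 'label correct'."""
    if not target_info:
        return "not labeled"
    for field, expected in reference_info.items():
        if target_info.get(field) != expected:
            return "label error"   # at least one field differs
    return "label correct"         # every field matches


# The example from the text: the item matches but the manufacturer differs -> label error.
target = {"item": "pork rib", "manufacturer": "manufacturer B"}
reference = {"item": "pork rib", "manufacturer": "manufacturer A"}
assert compare_label(target, reference) == "label error"
```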
In the embodiment of the application, the upper computer compares the target label information with the reference label information, so that the cost of identifying the label information by human eyes is reduced, and the accuracy of label identification is improved; in addition, the label detection method is also suitable for labels which cannot be identified by human eyes, such as two-dimensional code labels, bar code labels and the like, so that the universality of label detection is improved, and the efficiency of label detection on a production line is greatly improved.
In some embodiments, if at least one of the target quality inspection result is that the quality inspection is not qualified, the tag identification result is that the tag is not labeled, and the tag identification result is that the tag is wrong exists, it indicates that the target quality inspection result and the tag identification result of the target object satisfy a preset abnormal condition, and instructs the remover to remove the target object.
In some embodiments, if the target quality inspection result is qualified and the tag identification result is correct, framing of the target object is instructed.
Specifically, when the target quality inspection result is that the quality inspection is qualified and the label identification result is that the label is correct, the target object is conveyed by the conveyor belt to the framing disc, i.e. the end point of product conveyance. The target objects that are qualified and correctly labeled are then framed there by the producer.
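Combining the two results, the disposition rule is: any single abnormal item leads to rejection, and only a fully normal object is framed. A minimal sketch of this decision, using the result strings introduced in this description, is given below.

```python
def decide_disposition(quality_result: str, label_result: str) -> str:
    """Map the two results to the action taken on the production line."""
    abnormal = (
        quality_result == "unqualified"
        or label_result == "not labeled"
        or label_result == "label error"
    )
    # Any single abnormal item satisfies the preset abnormal condition.
    return "reject" if abnormal else "frame"
```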
For example, referring to fig. 3, a system architecture physical diagram applicable to the embodiment of the present application includes an upper computer 301, a buffer conveyor 302, a first trigger 303, a first camera 304, a packaging machine 305, a labeling machine 306, a second trigger 307, a second camera 308, a remover 309, and a framing disk 310.
After the producer places the target object in the packing box, the box is placed on the buffer conveyor 302. When the buffer conveyor belt 302 conveys the target object to the position of the first trigger 303, the first trigger 303 triggers the first camera 304 to take a picture of the target object in the packaging box, obtaining a first image. The first camera 304 sends the first image to the upper computer 301, and the upper computer 301 performs visual quality inspection on the target object based on the first image. If the target object is not qualified, a rejection instruction for the target object is sent to the rejector 309, informing it to reject the target object when it moves to the position of the rejector 309.
When the buffer conveyor 302 conveys the target object to the packaging machine 305, the packaging machine 305 film-seals and packages the target object. The labeling machine 306 then labels the packaged target object. Next, the second trigger 307 triggers the second camera 308 to take a picture, obtaining a second image, which is sent to the upper computer 301. The upper computer 301 performs label recognition on the target object based on the second image. If the label identification result is not labeled or the label is wrong, a rejection instruction for the target object is sent, notifying the rejector 309 to reject it when the target object moves to the location of the rejector 309. Target objects with qualified quality and correct labels enter the framing disc 310.
In the embodiment of the application, the upper computer performs quality inspection and label identification detection on the target object, fully automating detection on the production line and greatly reducing the labor cost of detection. In addition, because the rejector receives the rejection notification from the upper computer before the target object reaches it, the handling strategy for each object is already decided in advance, which improves the rejection efficiency on the production line and saves the cost of manual removal. The rejector is arranged before qualified products are framed, so objects that fail quality inspection or label detection are removed in a single pass, improving the production efficiency of the line.
Optionally, before acquiring the second image of the target object processed by the labeling machine, the method further includes: performing class detection on the target object based on the first image to obtain a target class of the target object; and if the target class is different from the historical class obtained by the last class detection, notifying the labeling machine to switch from the historical label type corresponding to the historical class to the target label type corresponding to the target class, so that the labeling machine labels the target object based on the target label type, obtaining the target object processed by the labeling machine.
Specifically, the first image is input into a classification model for classification processing, and a target class of the target object is obtained, wherein the classification model is obtained based on a plurality of sample images through iterative training, and the plurality of sample images comprise images of the target object of different classes.
The classification model includes, but is not limited to, a neural network model, a lightweight feature extraction model. After the classification model is adopted to determine that the first image corresponds to the target class of the target object, the first image can be added into the sample image set to realize continuous updating of the sample image set, and the classification model is retrained based on the updated sample image set to realize updating of the classification model, so that the accuracy of the classification model is continuously improved.
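The class detection and the continual retraining on an updated sample set can be sketched as follows. The PyTorch-style classifier interface, the optimizer and the training hyperparameters are assumptions for illustration; the application only specifies a classification model obtained by iterative training on sample images of target objects of different classes.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def classify_category(classifier: nn.Module, first_image: torch.Tensor, classes: list) -> str:
    """Run class detection on the first image and return the target class name."""
    classifier.eval()
    with torch.no_grad():
        index = classifier(first_image.unsqueeze(0)).argmax(dim=1).item()
    return classes[index]


def retrain(classifier: nn.Module, sample_images: torch.Tensor, sample_labels: torch.Tensor,
            epochs: int = 5, lr: float = 1e-3) -> None:
    """Re-run iterative training after new first images are appended to the sample set."""
    loader = DataLoader(TensorDataset(sample_images, sample_labels), batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    classifier.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(classifier(images), labels)
            loss.backward()
            optimizer.step()
```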
In some implementations, multiple label categories are supported in the labeling machine, and manual addition or deletion of label categories is supported.
For example, suppose production line A has been handling the packaging, quality inspection, and label detection of pork forelegs, and the upper computer now performs class detection on the first image of a target object and obtains spare ribs as the target class. The upper computer compares the target class with the historical class obtained by the last class detection; since the historical class is pork foreleg, the target class differs from it, so the labeling machine is notified to switch from the historical label type corresponding to pork forelegs to the target label type corresponding to spare ribs. The labeling machine then labels the target object based on the label type corresponding to spare ribs, obtaining the target object processed by the labeling machine.
In the embodiment of the application, the upper computer detects the class of the target object and notifies the labeling machine to switch labels, and the labeling machine can complete the label switch autonomously under the instruction of the upper computer, so production-line workers do not need to switch labels manually. This avoids mismatches between the target object and the label content, reduces the error rate of the labeling machine, removes the need to rework and re-inspect target objects, and greatly improves the production efficiency of the whole production line.
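The label-switching behaviour amounts to remembering the class detected last time and notifying the labeling machine only when the class changes. A minimal sketch is shown below; the labeling-machine interface (switch_label()) and the class-to-label-type mapping are placeholders, not an API defined by this application.

```python
class LabelSwitcher:
    """Remembers the class seen last and tells the labeling machine when to switch."""

    def __init__(self, labeling_machine, label_type_by_class: dict):
        self.labeling_machine = labeling_machine
        self.label_types = label_type_by_class  # e.g. {"pork foreleg": "type A", "spare ribs": "type B"}
        self.current_class = None

    def on_class_detected(self, target_class: str) -> None:
        if target_class != self.current_class:
            # The class changed since the last detection: switch the label type
            # before the labeling machine labels the next target object.
            self.labeling_machine.switch_label(self.label_types[target_class])
            self.current_class = target_class
```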
In order to better describe the method for visual inspection in the embodiment of the present application, the following describes a scheme in the embodiment of the present application with reference to a specific implementation scenario and a time sequence, as shown in fig. 4, which is a schematic flow chart of the visual inspection of a pork bin according to the embodiment of the present invention. The method comprises the following steps:
the manufacturer starts the upper computer first, and the upper computer is initialized. The upper computer requests the server to obtain the production information of the pork, and the upper computer uses the production information of the pork as the reference label information of the subsequent label identification detection.
The producer puts the pork into a packing box and places it on the conveyor belt, which conveys the pork to the first trigger. The first trigger notifies the first camera to take a photo, the first image is obtained and passed to the upper computer, and the upper computer performs quality inspection on the pork based on the first image. If the quality inspection is unqualified, for example because of hair or black spots, the upper computer sends a rejection instruction to the rejector to instruct it to remove the pork; if the quality inspection is qualified, the rejector is not instructed. Meanwhile, the upper computer detects the pork type based on the first image to obtain a target pork type, and if the target pork type is different from the historical pork type obtained by the last detection, the pork type has changed, so the upper computer notifies the labeling machine to switch from the historical label type corresponding to the historical pork type to the target label type corresponding to the target pork type.
The conveyor belt then carries the pork to the packaging machine, which packages it and produces the film-sealed pork. The film-sealed pork is conveyed by the conveyor belt to the labeling machine, which labels it.
The pork processed by the labeling machine is conveyed to a second trigger through the conveyor belt, the second trigger informs a second camera to take a picture, a second image is obtained, and the second image is transmitted to the upper computer. And the upper computer identifies the tag of the pork based on the second image. And if the upper computer does not recognize the label information, the label recognition result of the pork is not labeled. And if the upper computer identifies the label information, comparing the label information with the pork production information. And if the label information is consistent with the pork production information, the label identification result of the pork is correct. And if the label information is inconsistent with the pork production information, the label identification result of the pork is a label error.
If the label identification result is that the label is not labeled or the label is wrong, a removing instruction is sent to the remover to instruct the remover to remove the pork. And if the label identification result is that the label is correct, the rejector is not indicated.
The pork is then conveyed to the remover by the conveyor belt; if the remover has previously received a rejection instruction for this pork, the pork is removed. If the remover has not received such an instruction, the pork continues on the conveyor belt to the framing disc, and the finished products on the framing disc are framed by the producer.
In some embodiments, the technical solution in the embodiments of the present application may be applied to one production line or to several. The following describes, by way of example, the visual inspection method of the embodiments of the present application applied first to a single production line and then to multiple production lines.
Referring to fig. 5, a schematic flow chart of a visual inspection method suitable for a single production line is provided for the embodiment of the present application, and includes an upper computer 501, a buffer conveyor 502, a first trigger 503, a first camera 504, a packaging machine 505, a labeling machine 506, a second trigger 507, a second camera 508, a remover 509, and a framing disc 510.
First, after the producer places a target object into a packing box, the box is placed on the buffer conveyor belt 502. When the target object passes the first trigger 503, the first camera 504 is triggered to photograph the target object in the box and sends the photo to the upper computer 501, which performs visual quality inspection on the target object; if the quality inspection is unqualified, the rejector 509 is notified to remove the target object when it moves to the position of the rejector 509. When the target object passes the packaging machine 505, the packaging machine 505 film-seals and packages it, and the labeling machine 506 then labels the packaged target object. Next, the second trigger 507 triggers the second camera 508 to take a photo, which is sent to the upper computer 501 for label identification detection; if the label identification detection is unqualified, the rejector 509 is notified to remove the object. Finally, only target objects that pass both the quality inspection and the label identification detection enter the framing disc 510.
Referring to fig. 6, a schematic flow chart of a visual inspection method suitable for multiple production lines is provided for the embodiment of the present application, and includes the following steps:
many produce a line and share a host computer, the host computer can carry out quality control and label identification to the pork on every production line promptly, and the pork type on every production line is inequality. A plurality of production lines can be arranged in the pork bin to work simultaneously, the work flow of each production line is shown in fig. 3, pork is divided by a producer, a conveyor belt is arranged after the pork is loaded into a packing box, the conveyor belt conveys the packing box with the pork, the packing box sequentially passes through a first trigger, a packing machine, a labeling machine, a second trigger, a remover and a framing disc, and finally framing is carried out by the producer, so that the detailed flow and operation are not repeated in detail again.
In the embodiment of the application, fully automated quality inspection and label identification detection across multiple production lines is realized through the upper computer. Compared with identifying labels by human eyes this is more accurate, which reduces the probability of errors in quality inspection and label identification and improves the detection efficiency of the production lines. At the same time, the labor cost of visual inspection and of manual label switching on the production lines is reduced.
Based on the same technical concept, the embodiment of the present application provides a schematic structural diagram of a visual inspection apparatus, as shown in fig. 7, the apparatus 700 includes:
an obtaining module 701, configured to obtain a first image of a target object that is not processed by a labeling machine, and perform quality detection on the target object based on the first image to obtain a target quality detection result of the target object;
the acquiring module 701 is further configured to acquire a second image of the target object that has been processed by the labeling machine, and perform recognition processing on the second image to obtain a tag recognition result of the target object;
the processing module 703 is configured to instruct a remover to remove the target object if the target quality inspection result of the target object and the tag identification result meet a preset abnormal condition.
Optionally, the obtaining module 701 is specifically configured to:
inputting the first image into a quality inspection model for identification processing to obtain an abnormal identification result of the first image, wherein the quality inspection model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of target objects which are qualified in quality inspection and images of target objects which are unqualified in quality inspection;
and determining a target quality inspection result of the target object based on the abnormality recognition result of the first image.
Optionally, the obtaining module 701 is specifically configured to:
if the target label information of the target object is not obtained after the second image is identified, determining that the label identification result is that the label is not labeled;
and if the second image is identified, obtaining target label information of the target object, and comparing the target label information of the target object with preset reference label information to obtain a label identification result.
Optionally, the obtaining module 701 is specifically configured to:
if the target label information of the target object is not matched with the reference label information, determining that the label identification result is a label error;
and if the target label information of the target object is matched with the reference label information, determining that the label identification result is that the label is correct.
Optionally, the reference label information is obtained from a server when the upper computer is started.
Optionally, the processing module 703 is specifically configured to:
and if at least one of the target quality inspection result is unqualified, the label identification result is not labeled and the label identification result is wrong, the remover is instructed to remove the target object.
Optionally, the processing module 703 is further configured to:
and if the target quality inspection result is qualified and the label identification result is correct, indicating to frame the target object.
Optionally, a category detection module 702 is further included;
the category detection module 702 is specifically configured to:
before acquiring a second image of a target object processed by a labeling machine, performing class detection on the target object based on the first image to obtain a target class of the target object;
and if the target class is different from the historical class obtained by the last class detection, notifying the labeling machine to switch from the historical label type corresponding to the historical class to the target label type corresponding to the target class, so that the labeling machine labels the target object based on the target label type to obtain the target object processed by the labeling machine.
Optionally, the category detection module 702 is specifically configured to:
and inputting the first image into a classification model for classification processing to obtain a target category of the target object, wherein the classification model is obtained based on iterative training of a plurality of sample images, and the plurality of sample images comprise images of the target object of different categories.
In the embodiment of the application, the upper computer can perform quality inspection and label identification detection on the target object, so detection on the production line is fully automated and the labor cost of detection on the line is greatly reduced. The labeling machine can complete the label-switching action autonomously under the instruction of the upper computer, so production-line workers do not need to switch labels manually, which reduces the error rate of the labeling machine and the probability that target objects must be reworked and re-inspected, and greatly improves the production efficiency of the whole line. The rejector receives the rejection notification from the upper computer and is arranged before qualified products are framed, so objects that fail quality inspection or label detection are removed in a single pass, which improves the production efficiency of the line and saves the cost of manual removal.
Based on the same technical concept, the embodiment of the present application provides a computer device, which may be the upper computer shown in fig. 1, as shown in fig. 8, including at least one processor 801 and a memory 802 connected to the at least one processor, where a specific connection medium between the processor 801 and the memory 802 is not limited in the embodiment of the present application, and the processor 801 and the memory 802 are connected through a bus in fig. 8 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 802 stores instructions executable by the at least one processor 801, and the at least one processor 801 may execute the steps of the visual inspection method by executing the instructions stored in the memory 802.
The processor 801 is the control center of the computer device; it may connect the various parts of the computer device by means of various interfaces and lines, and it implements the quality inspection and label identification detection of the target object by running or executing instructions stored in the memory 802 and calling data stored in the memory 802. Optionally, the processor 801 may include one or more processing units, and the processor 801 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 801. In some embodiments, the processor 801 and the memory 802 may be implemented on the same chip, or in some embodiments they may be implemented separately on their own chips.
The processor 801 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, configured to implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 802, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules. The memory 802 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 802 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer device, but is not limited thereto. The memory 802 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which, when the program is run on the computer device, causes the computer device to perform the steps of the visual inspection method described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A visual detection method, applied to an upper computer, characterized by comprising:
acquiring a first image of a target object that has not been processed by a labeling machine, and performing quality detection on the target object based on the first image to obtain a target quality inspection result of the target object;
acquiring a second image of the target object that has been processed by the labeling machine, and performing recognition processing on the second image to obtain a label identification result of the target object;
and if the target quality inspection result and the label identification result of the target object meet a preset abnormal condition, instructing a remover to remove the target object.
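For illustration only, the following is a minimal Python sketch of how an upper computer might carry out the flow of claim 1. The function and parameter names (run_quality_inspection, recognize_label, reject) and the placeholder stubs are assumptions introduced for readability, not terms used by the application.

# Illustrative sketch of the claim-1 flow; the stubs below are hypothetical placeholders.

def run_quality_inspection(first_image):
    """Placeholder: return 'qualified' or 'unqualified' for the pre-labeling image."""
    return "qualified"

def recognize_label(second_image):
    """Placeholder: return 'correct', 'incorrect' or 'unlabeled' for the post-labeling image."""
    return "correct"

def inspect_one_object(first_image, second_image, reject):
    quality_result = run_quality_inspection(first_image)   # image taken before the labeling machine
    label_result = recognize_label(second_image)           # image taken after the labeling machine
    # Preset abnormal condition: unqualified quality result or any label fault.
    if quality_result == "unqualified" or label_result in ("unlabeled", "incorrect"):
        reject()                                            # instruct the remover
    return quality_result, label_result

# Example with dummy inputs:
inspect_one_object(first_image=None, second_image=None, reject=lambda: print("reject"))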
2. The method of claim 1, wherein the performing quality detection on the target object based on the first image to obtain a target quality inspection result of the target object comprises:
inputting the first image into a quality inspection model for recognition processing to obtain an abnormal recognition result of the first image, wherein the quality inspection model is obtained based on iterative training on a plurality of sample images, and the plurality of sample images comprise images of target objects that are qualified in quality inspection and images of target objects that are unqualified in quality inspection;
and determining a target quality inspection result of the target object based on the abnormal recognition result of the first image.
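For illustration only, a sketch of how such a quality inspection model could be applied at inference time, here using a tiny PyTorch network as a stand-in; the architecture, the class indices, and the 0.5 threshold are assumptions, since the claim only specifies a model trained on qualified and unqualified sample images.

import torch
import torch.nn as nn

class QualityInspectionNet(nn.Module):
    # Stand-in network; a real model would be obtained by iterative training on
    # images of qualified and unqualified target objects.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 2)   # index 0: qualified, index 1: abnormal

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def quality_inspect(model, first_image_tensor):
    model.eval()
    with torch.no_grad():
        logits = model(first_image_tensor.unsqueeze(0))   # add batch dimension
        probs = torch.softmax(logits, dim=1)
    # Abnormal recognition result -> target quality inspection result.
    return "unqualified" if probs[0, 1].item() > 0.5 else "qualified"

# Example with a random tensor standing in for a 224x224 RGB first image:
print(quality_inspect(QualityInspectionNet(), torch.rand(3, 224, 224)))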
3. The method of claim 1, wherein the performing recognition processing on the second image to obtain the label identification result of the target object comprises:
if target label information of the target object is not obtained after the second image is recognized, determining that the label identification result is that the target object is not labeled;
and if target label information of the target object is obtained after the second image is recognized, comparing the target label information of the target object with preset reference label information to obtain the label identification result.
4. The method of claim 3, wherein the comparing the target label information of the target object with the preset reference label information to obtain the label identification result comprises:
if the target label information of the target object does not match the reference label information, determining that the label identification result is that the label is incorrect;
and if the target label information of the target object matches the reference label information, determining that the label identification result is that the label is correct.
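For illustration only, the decision logic of claims 3 and 4 can be summarized as below; how the target label information is read from the second image (for example by OCR or barcode decoding) is left abstract, and the sample label strings are invented for the example.

def label_identification_result(target_label_info, reference_label_info):
    # target_label_info is None when no label information is recognized in the second image.
    if target_label_info is None:
        return "unlabeled"      # claim 3: no label information obtained
    if target_label_info != reference_label_info:
        return "incorrect"      # claim 4: mismatch with the preset reference label information
    return "correct"            # claim 4: match with the preset reference label information

# Example usage with invented label strings:
print(label_identification_result(None, "SKU-A"))       # unlabeled
print(label_identification_result("SKU-B", "SKU-A"))    # incorrect
print(label_identification_result("SKU-A", "SKU-A"))    # correct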
5. The method of claim 4, wherein the reference label information is obtained by the upper computer from a server at startup.
6. The method of claim 4, wherein the instructing the remover to remove the target object if the target quality inspection result and the label identification result of the target object meet the preset abnormal condition comprises:
if at least one of the following holds: the target quality inspection result is unqualified, the label identification result is that the target object is not labeled, or the label identification result is that the label is incorrect, instructing the remover to remove the target object.
7. The method of claim 4, further comprising:
if the target quality inspection result is qualified and the label identification result is that the label is correct, instructing that the target object be framed.
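For illustration only, claims 6 and 7 together amount to a simple dispatch rule; "box" is an assumed name for the framing instruction of claim 7.

def dispatch(quality_result, label_result):
    # Claim 6: reject when at least one abnormal condition holds.
    abnormal = quality_result == "unqualified" or label_result in ("unlabeled", "incorrect")
    # Claim 7: otherwise both results are normal, so the object is framed/boxed.
    return "reject" if abnormal else "box"

assert dispatch("unqualified", "correct") == "reject"
assert dispatch("qualified", "unlabeled") == "reject"
assert dispatch("qualified", "correct") == "box"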
8. The method of any one of claims 1 to 7, wherein before the acquiring a second image of the target object that has been processed by the labeling machine, the method further comprises:
performing category detection on the target object based on the first image to obtain a target category of the target object;
and if the target category is different from the historical category obtained by the previous category detection, notifying the labeling machine to switch from the historical label type corresponding to the historical category to the target label type corresponding to the target category, so that the labeling machine labels the target object based on the target label type to obtain the target object processed by the labeling machine.
9. The method of claim 8, wherein the performing category detection on the target object based on the first image to obtain the target category of the target object comprises:
inputting the first image into a classification model for classification processing to obtain the target category of the target object, wherein the classification model is obtained based on iterative training on a plurality of sample images, and the plurality of sample images comprise images of target objects of different categories.
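For illustration only, a sketch of the category tracking in claims 8 and 9; classify, label_type_for, and notify_labeling_machine are hypothetical hooks standing in for the classification model and the labeling-machine interface.

class LabelSwitcher:
    def __init__(self, classify, label_type_for, notify_labeling_machine):
        self.classify = classify                 # classification model inference (claim 9)
        self.label_type_for = label_type_for     # mapping from category to label type
        self.notify = notify_labeling_machine    # sends the switch command to the labeling machine
        self.history_category = None             # category from the previous detection

    def on_first_image(self, first_image):
        target_category = self.classify(first_image)
        if target_category != self.history_category:       # claim 8: category changed
            self.notify(self.label_type_for(target_category))
        self.history_category = target_category
        return target_category

# Example usage with dummy hooks:
switcher = LabelSwitcher(
    classify=lambda image: "category_a",
    label_type_for=lambda category: "label_for_" + category,
    notify_labeling_machine=lambda label_type: print("switch to", label_type),
)
switcher.on_first_image(first_image=None)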
10. A visual detection apparatus, applied to an upper computer, characterized by comprising:
an acquisition module, configured to acquire a first image of a target object that has not been processed by a labeling machine, and perform quality detection on the target object based on the first image to obtain a target quality inspection result of the target object;
the acquisition module is further configured to acquire a second image of the target object that has been processed by the labeling machine, and perform recognition processing on the second image to obtain a label identification result of the target object;
and a processing module, configured to instruct a remover to remove the target object if the target quality inspection result and the label identification result of the target object meet a preset abnormal condition.
11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 9.
12. A computer-readable storage medium, in which a computer program is stored which is executable by a computer device, and which, when run on the computer device, causes the computer device to carry out the steps of the method according to any one of claims 1 to 9.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210638474.3A CN115138596A (en) 2022-06-07 2022-06-07 Visual detection method and device

Publications (1)

Publication Number Publication Date
CN115138596A (en) 2022-10-04

Family

ID=83407147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210638474.3A Pending CN115138596A (en) 2022-06-07 2022-06-07 Visual detection method and device

Country Status (1)

Country Link
CN (1) CN115138596A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886500A (en) * 2017-10-13 2018-04-06 北京邮电大学 A kind of production monitoring method and system based on machine vision and machine learning
CN113748007A (en) * 2019-03-13 2021-12-03 数字标记公司 Digital marking of recycled articles
CN210708289U (en) * 2019-08-28 2020-06-09 中信戴卡股份有限公司 Automatic labeling equipment for hub and production line
CN211443013U (en) * 2019-10-12 2020-09-08 东莞市国瓷新材料科技有限公司 Full-automatic packaging equipment for ceramic substrate
CN111070590A (en) * 2019-12-12 2020-04-28 广州明诚通机器人科技有限公司 Production method and production system of full-automatic special-shaped plastic toy
CN111833324A (en) * 2020-07-09 2020-10-27 中国计量大学 Optical fiber ferrule defect detection method based on deep learning
CN111907827A (en) * 2020-07-21 2020-11-10 广州佳帆计算机有限公司 Product package detection method and system
CN113879655A (en) * 2021-09-29 2022-01-04 浙江工贸职业技术学院 Intelligent label labeling device for logistics packaging
CN114022797A (en) * 2021-10-29 2022-02-08 深圳供电局有限公司 Protection pressing plate state detection method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117358622A (en) * 2023-12-08 2024-01-09 格力大松(宿迁)生活电器有限公司 Product detection method, device and system
CN117358622B (en) * 2023-12-08 2024-04-16 格力大松(宿迁)生活电器有限公司 Method, device and system for detecting indoor and outdoor units of air conditioner

Similar Documents

Publication Publication Date Title
EP3259068B1 (en) Detection of barcode tag conditions on sample tubes
CN109092696B (en) Sorting system and sorting method
CN113877836B (en) Intelligent identification sorting system based on visual detection system
CN112150439A (en) Automatic sorting equipment and sorting method for injection molding parts
CN110136044B (en) Article sorting method, system and equipment and intelligent terminal
US11657599B2 (en) Method for detecting appearance of six sides of chip multi-layer ceramic capacitor based on artificial intelligence
CN111597857B (en) Logistics package detection method, device, equipment and readable storage medium
CN110548698A (en) Sewing equipment and cut piece sorting method, sorting device and sorting system applied to sewing equipment
CN112884718A (en) Method, device and system for detecting code spraying characters of package and storage medium
CN115138596A (en) Visual detection method and device
CN112070000A (en) Intelligent recognition algorithm training method and device, terminal server and storage medium
CN114419038A (en) Method and device for identifying surface defects of hub, storage medium and electronic equipment
CN110927167A (en) Egg detection method and device, electronic equipment and storage medium
JP2002214153A (en) Foreign matter inspecting device and method in liquid filled vessel
CN113469137A (en) Abnormal behavior recognition method and device, storage medium and electronic device
CN114332622A (en) Label detection method based on machine vision
CN111210412A (en) Package detection method and device, electronic equipment and storage medium
US20230096532A1 (en) Machine learning system, learning data collection method and storage medium
CN115809843A (en) Goods tracking and identifying method, device, medium and electronic equipment of logistics sorting channel
CN109344799B (en) Article identification method, article identification device, article identification equipment, storage medium and electronic device
CN114187596A (en) Chip surface character detection system
CN213182797U (en) Smoke box number checking device and tobacco production line system
KR102649500B1 (en) Image identification apparatus, and product manufacturing apparatus provided with image identification apparatus
CN110197143B (en) Settlement station article identification method and device and electronic equipment
CN111351754A (en) Bottle bottom defect detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination