CN115780299A - Plastic bottle defect detection system and method


Info

Publication number: CN115780299A
Application number: CN202211527787.8A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: plastic bottle, image, unit, plastic, identification module
Inventors: 苗增良, 张亚慧, 秦琴, 李文辰, 屠子美, 郑博源, 连良斌
Current Assignee: Mirle Automation Technology Shanghai Co ltd
Original Assignee: Mirle Automation Technology Shanghai Co ltd
Application filed by: Mirle Automation Technology Shanghai Co ltd

Abstract

The present application provides a plastic bottle defect detection system and method. The system comprises: a side detection device for detecting defects on the side of the plastic bottle; a front detection device for detecting defects on the front of the plastic bottle; a rear detection device for detecting defects on the rear of the plastic bottle; a conveyance control device for conveying and controlling the plastic bottles; and a cloud storage device for storing the detection data of the side, front and rear detection devices in the cloud. With the system and method of the present application, plastic bottle defects can be detected comprehensively through simple and efficient hardware and software algorithms.

Description

Plastic bottle defect detection system and method
Technical Field
The invention relates to the technical field of image recognition, in particular to a plastic bottle defect detection system and a plastic bottle defect detection method.
Background
Currently, devices and software for automatically inspecting product quality are available in the field of industrial automation. Whether a product is properly packaged and free of contamination is an important aspect of product quality inspection. Existing appearance defect detection for product packaging, in particular for plastic bottles, still relies heavily on manual labor; because packaging defects are varied, comprehensive detection is difficult to achieve with image processing techniques or an AI model alone (especially a single AI model). There is therefore a need in the art for a defect detection technique that can comprehensively detect appearance defects of plastic bottles while remaining simple to operate and efficient to run.
Disclosure of Invention
Accordingly, the present application provides a plastic bottle defect detection system and method that achieve comprehensive detection of packaging defects through simple and efficient hardware and software algorithms.
In one aspect, the present application provides a plastic bottle defect detection system comprising: a side detection device for detecting defects on the side of the plastic bottle; a front detection device for detecting defects on the front of the plastic bottle; a rear detection device for detecting defects on the rear of the plastic bottle; a conveyance control device for conveying and controlling the plastic bottles; and a cloud storage device for storing the detection data of the side, front and rear detection devices in the cloud. The side detection device includes: a side image acquisition device for acquiring a side image of the plastic bottle; and a side recognition and analysis device for recognizing and analyzing the side image. The front detection device includes: a front image acquisition device for acquiring a front image of the plastic bottle; and a front recognition and analysis device for recognizing and analyzing the front image. The rear detection device includes: a rear image acquisition device for acquiring a rear image of the plastic bottle; and a rear recognition and analysis device for recognizing and analyzing the rear image. The conveyance control device includes: a conveying device for conveying the plastic bottle past the side image acquisition device, the front image acquisition device and the rear image acquisition device; and a sorting device for sorting defective plastic bottles off the conveying device.
According to a particular embodiment of the present application, the side recognition and analysis device comprises: a missing-label identification module for identifying whether the plastic bottle lacks a label; a content missing identification module for identifying whether the label lacks the content it should carry; a stain identification module for identifying whether stains are present on the plastic bottle; a multi-label identification module for identifying whether multiple labels are stuck on the plastic bottle; a wrinkle identification module for identifying whether the label has wrinkles; a label-too-close identification module for identifying whether the label is attached too close to the edge; and a scratch identification module for identifying whether scratches are present on the plastic bottle.
According to a particular embodiment of the present application, the missing-label identification module comprises: a first convolutional neural network unit for processing the side image through a convolutional neural network to obtain a convolutional feature map; a region proposal network unit for processing the convolutional feature map through a region proposal network to obtain coordinate values of proposed regions; a pooling unit for performing region-of-interest pooling on the proposed regions to obtain fixed-size feature maps; a classification unit for classifying the feature vectors of the proposed regions through a softmax classifier; and a regression unit for performing regression on the bounding boxes of the proposed regions. The content missing identification module comprises: a first cropping unit configured to crop the side image according to the position information of the target region; a first global thresholding unit for performing global thresholding on the cropped side image; a first color screening unit for converting the side image into a first binary image; and a character recognition unit for recognizing the character content in the first binary image. The stain identification module includes: a preprocessing unit for preprocessing the side image; a second cropping unit for cropping the side image according to the position information of the target region; a second global thresholding unit for performing global thresholding on the cropped side image; a second color screening unit for converting the side image into a second binary image; and a traversal unit for traversing the second binary image.
According to a particular embodiment of the present application, the multi-label identification module comprises: a second convolutional neural network unit for extracting features of the side image; and a residual neural network unit for performing convolution operations on the side image. The wrinkle identification module comprises: a third convolutional neural network unit for extracting features of the side image; an upsampling unit for upsampling the side image; a probability calculation unit for calculating, through a softmax function, the probability that each pixel of the side image belongs to a certain category; and a segmentation unit for performing target segmentation according to the probability.
According to a particular embodiment of the present application, the label-too-close identification module comprises: an encoder network unit for downsampling the side image; a cascaded dilated convolutional neural network unit for performing dilated convolution on the downsampled side image; and a decoder network unit for upsampling the output of the dilated convolution and performing convolution processing on the upsampled side image. The scratch identification module includes: a first feature extraction unit for extracting candidate regions of the side image through a region proposal network; and a classification and positioning unit for generating prediction boxes and judging whether a scratch exists according to the scores of the prediction boxes.
According to a particular embodiment of the present application, the front recognition and analysis device comprises: a position-line bending identification module for identifying whether the position line of the plastic bottle is bent. The rear recognition and analysis device comprises: a parting-line multi-material identification module for identifying whether there is excess material on the parting line of the plastic bottle.
According to a particular embodiment of the present application, the position-line bending identification module comprises: a position-line region segmentation module for segmenting the position-line region from the front image; a position-line centerline extraction module for extracting the centerline of the position line; and a straightness judgment module for judging the straightness of the centerline. The parting-line multi-material identification module includes: a second feature extraction unit for extracting features of the rear image; and a support vector machine unit for recognizing and classifying the extracted features.
According to a particular embodiment of the present application, the side image acquisition device comprises: a side camera for photographing the side of the plastic bottle; and side light sources arranged on the left and right of the side camera for illuminating the side of the plastic bottle when the side camera shoots. The front image acquisition device includes: a front camera for photographing the front of the plastic bottle; and front light sources arranged on the left and right of the front camera for illuminating the front of the plastic bottle when the front camera shoots. The rear image acquisition device includes: a rear camera for photographing the rear of the plastic bottle; and rear light sources arranged on the left and right of the rear camera for illuminating the rear of the plastic bottle when the rear camera shoots.
According to a particular embodiment of the present application, the sorting device comprises: a deflection wheel for deflecting the conveying direction of defective plastic bottles; a deflection conveyor belt for conveying the deflected plastic bottles out of the plastic bottle defect detection system; and a single-chip microcomputer for controlling the steering of the deflection wheel according to the detection results of the side, front and rear detection devices. The conveying device includes: a main conveyor belt for conveying the plastic bottles past the side, front and rear image acquisition devices and for conveying defect-free plastic bottles out of the plastic bottle defect detection system; and a conveyor belt control box for controlling the movement of the main conveyor belt and the deflection wheel.
In another aspect, the present application provides a plastic bottle defect detection method, comprising: conveying and controlling the plastic bottles; detecting defects on the side of the plastic bottle; detecting defects on the front of the plastic bottle; detecting defects on the rear of the plastic bottle; and storing the detection data of the side, front and rear of the plastic bottle in the cloud. Conveying and controlling the plastic bottles comprises: conveying the plastic bottles through side, front and rear detection; and sorting out defective plastic bottles. Detecting defects on the side of the plastic bottle comprises: acquiring a side image of the plastic bottle; and recognizing and analyzing the side image. Detecting defects on the front of the plastic bottle comprises: acquiring a front image of the plastic bottle; and recognizing and analyzing the front image. Detecting defects on the rear of the plastic bottle comprises: acquiring a rear image of the plastic bottle; and recognizing and analyzing the rear image.
With the plastic bottle defect detection system and method of the present application, images of the plastic bottle can be acquired from multiple angles, and a dedicated algorithm model is designed for the images acquired at each angle, which ensures both comprehensive detection and model simplicity, so that plastic bottle defects can be detected and identified quickly, efficiently and comprehensively. In addition, storing the data of each detection module in the cloud storage device allows the causes of defects to be analyzed comprehensively and accurately, which facilitates subsequent adjustment of the production process and improves product quality.
Drawings
Embodiments of the present application are described in detail below with reference to the attached drawing figures, wherein:
FIG. 1 shows a schematic structural view of a plastic bottle defect detection system according to an embodiment of the present application;
FIG. 2 shows a schematic structural diagram of a plastic bottle defect detection system according to another embodiment of the present application;
FIG. 3 shows a schematic structural diagram of a plastic bottle defect detection system according to another embodiment of the present application;
FIG. 4 shows a schematic structural diagram of the missing-label identification module according to the embodiment of FIG. 3;
FIG. 5 shows a schematic structural diagram of the content missing identification module according to the embodiment of FIG. 3;
FIG. 6 shows a schematic structural diagram of the stain identification module according to the embodiment of FIG. 3;
FIG. 7 shows a schematic structural diagram of the multi-label identification module according to the embodiment of FIG. 3;
FIG. 8 shows an exemplary side image with a multi-label defect according to the embodiment of FIG. 3;
FIG. 9 shows a schematic structural diagram of the wrinkle identification module according to the embodiment of FIG. 3;
FIG. 10 shows a schematic structural diagram of the label-too-close identification module according to the embodiment of FIG. 3;
FIG. 11 shows an exemplary side image with a label-too-close defect according to the embodiment of FIG. 3;
FIG. 12 shows a schematic structural diagram of the scratch identification module according to the embodiment of FIG. 3;
FIG. 13 shows a schematic structural diagram of the position-line bending identification module according to the embodiment of FIG. 3;
FIG. 14 shows an exemplary front image used to identify whether the position line is bent according to the embodiment of FIG. 3;
FIG. 15 shows a schematic structural diagram of the parting-line multi-material identification module according to the embodiment of FIG. 3;
FIG. 16 shows an exemplary rear image used to identify whether the parting line has excess material according to the embodiment of FIG. 3;
FIG. 17 shows a schematic flow diagram of a plastic bottle defect detection method according to an embodiment of the present application.
Detailed Description
The present application is described in detail below with reference to specific embodiments in order to make the concept and idea of the present application more clearly understood by those skilled in the art. It is to be understood that the embodiments presented herein are only a few of all the embodiments that the present application may have. Those skilled in the art who review this disclosure will readily appreciate that many modifications, variations, or alterations to the described embodiments, in whole or in part, are possible and are intended to be within the scope of the present disclosure.
As used herein, the terms "a," "an," and similar words do not mean that only one of the described items exists, but merely that the description refers to one of the described items, of which there may be one or more. As used herein, the terms "comprises," "comprising," and similar words denote logical relationships, not spatial or structural ones. For example, "A comprises B" means that B logically belongs to A, not that B is spatially located inside A. Furthermore, the terms "comprising," "including," and similar words are to be construed as open-ended rather than closed-ended; for example, "A comprises B" means that B belongs to A, but B does not necessarily constitute all of A, and A may also include other elements such as C, D, or E.
As used herein, the terms "first," "second," and the like do not imply any order, quantity, or importance, but are merely used to distinguish one element from another. The terms "embodiment," "the present embodiment," "an embodiment," and "one embodiment" herein do not mean that the relevant description applies to only one particular embodiment; the description may also apply to one or more other embodiments. Those skilled in the art will understand that any description given herein for one embodiment may be substituted for, combined with, or otherwise joined to the description given for one or more other embodiments; the new embodiments so created by those skilled in the art are intended to fall within the scope of the present application.
In this context, plastic may refer to an organic material that can be shaped by molding or injection molding. For example, it may be a polymer compound obtained by addition polymerization or polycondensation of monomers as raw materials, composed of a synthetic resin together with additives such as fillers, plasticizers, stabilizers, lubricants, and colorants. For example, plastic can be a synthetic or natural polymer material, or a product of such a material, that can be kneaded into an arbitrary shape and then retain that shape.
Herein, the plastic bottle may refer to a bottle made of plastic. For example, the plastic bottle may be a bottle for packaging a product, and may be a bottle for packaging a solid product such as powder, granules, capsules, etc., a bottle for packaging a liquid product such as engine oil, lubricating oil, gasoline, edible oil, etc., or a bottle for packaging a gaseous product such as oxygen, nitrogen, etc. The plastic bottle may have a variety of shapes, such as cylindrical, elliptical cylindrical, etc., or other irregular shapes. Some plastic bottles have multiple faces, where the face with the label may be the side face, the face without the label may be the front face, and the face opposite the front face may be the back face. In some flat plastic bottles, the side with the larger area can be used as the side for labeling, and the two sides with the smaller area can be used as the front and the back (for example, the side provided with the handle can be used as the back, and the side opposite to the back can be used as the front).
In this context, a defect may refer to any blemish on the plastic bottle or any flaw in its packaging, such as a misplaced package, a wrong package, or an excessive package. Defect detection may refer to detecting, through a manual or automated system, the various packaging defects on plastic bottles, identifying the plastic bottles with packaging defects, and distinguishing them from other, properly packaged plastic bottles.
For example, the plastic bottle may be a plastic bottle for containing engine oil, i.e., an engine oil bottle. In the production process of engine oil bottles, whether the bottle body is scratched, carries multiple labels, has a wrinkled label, lacks a label, has stains, whether the label is attached too close to the edge, whether label content is missing, whether the liquid level line is bent, and whether there is excess material on the parting line are important factors in evaluating whether the product is qualified, so defect detection of the engine oil bottle body is very important. Traditional defect detection of engine oil bottle bodies relies mainly on manual inspection, which is subjective, inefficient, fatiguing and slow. With the development of image processing and machine vision technologies, conventional image processing has gradually replaced manual inspection in label detection, but conventional image processing has low detection accuracy and depends on engineers' judgment and lengthy debugging to handle errors. With the development of AI technology, defect detection applications based on artificial intelligence have also developed rapidly, and artificial intelligence methods can significantly improve detection accuracy. Therefore, to improve the efficiency of engine oil bottle body defect detection, an intelligent bottle body defect detection system has been developed. The system combines classical image processing with artificial intelligence, which overcomes the low accuracy of classical image processing alone and significantly improves the efficiency of bottle body defect detection. AI is introduced because some inspection is still manual and some relies on conventional machine-vision image processing, yet the problems encountered on a production line are endless; workers must keep adjusting the system whenever new problems arise, and robustness is poor. A processing scheme that combines AI techniques with traditional image processing is therefore introduced to analyze the images.
Fig. 1 shows a schematic structural view of a plastic bottle defect detection system according to an embodiment of the present application.
According to the present embodiment, the plastic bottle defect detection system 100 includes a side detection device 110, a front detection device 120, a rear detection device 130, a conveyance control device 140, and a cloud storage device 150. The side detection device 110 is used to detect defects on the side of the plastic bottle. The front detection device 120 is used to detect defects on the front of the plastic bottle. The rear detection device 130 is used to detect defects on the rear of the plastic bottle. The conveyance control device 140 is used to convey and control the plastic bottles. The cloud storage device 150 is configured to store the detection data of the side detection device 110, the front detection device 120, and the rear detection device 130 in the cloud.
The side surface detecting apparatus 110 includes a side surface image capturing device 111 and a side surface recognition analyzing device 112. The side image acquiring device 111 is used for acquiring a side image of the plastic bottle. The side recognition and analysis device 112 is used to recognize and analyze the side image.
The front detection device 120 includes a front image capturing means 121 and a front recognition analyzing means 122. The front image capturing device 121 is used to capture a front image of the plastic bottle. The front recognition and analysis means 122 is used to recognize and analyze the front image.
The rear detection device 130 includes a rear image acquisition device 131 and a rear recognition and analysis device 132. The rear image acquisition device 131 is used to acquire a rear image of the plastic bottle. The rear recognition and analysis device 132 is used to recognize and analyze the rear image.
The conveyance control apparatus 140 includes a conveyance device 141 and a sorting device 142. The conveyor 141 is used to transport the plastic bottles past the lateral image acquisition device 111, the front image acquisition device 121 and the rear image acquisition device 131. The sorting device 142 is used to sort out defective plastic bottles from the conveyor.
The side, front and rear of the plastic bottle may be defined relative to one another. Once it is determined which face is the front, the side faces (the faces on the left and right) and the rear face (the face opposite the front) are determined accordingly. For example, for a flat plastic bottle, the sides may be the two opposing faces of larger area that can carry the labels.
The side recognition and analysis device 112, the front recognition and analysis device 122 and the rear recognition and analysis device 132 can be implemented as three independent electronic devices with computing and recognition functions, or integrated into a single electronic device providing the side, front and rear recognition and analysis functions. These devices are distinguished and defined primarily by their operating logic and functions and are therefore not limited to any particular physical form.
The cloud storage device 150 may be a device for uploading the detection data of the detection system, such as defect images or defect rates, to the cloud. For example, the cloud storage device may be a cloud server. The cloud server is mainly used for cloud storage of data: local bottle body defect detection and recognition data are uploaded to the cloud, which facilitates later analysis and visualization, helps improve the production process and increases the product yield.
For example, the plastic bottle body defect detection system consists of a plastic bottle side defect detection subsystem, a front defect detection subsystem and a rear defect detection subsystem. Carried by the conveyor belt, the plastic bottle passes through the three subsystems in sequence, and each acquires and transmits images. When any one subsystem identifies the bottle as defective from its image, the product is marked (a product serial number can be recorded here to ensure that the pictures taken by the different cameras belong to the same product), so that the other subsystems do not acquire images of that product again, which improves efficiency and avoids wasted work.
Fig. 2 shows a schematic structural view of a plastic bottle defect detection system according to another embodiment of the present application.
According to the present embodiment, the plastic bottle defect detecting system 200 includes a side detecting device, a front detecting device, a rear detecting device, a transfer controlling device, and a cloud storage device. The side detection device comprises a side image acquisition device 210 and a side recognition analysis device. The front detection device comprises a front image acquisition device 220 and a front identification and analysis device. The rear detection device comprises a rear image acquisition device 230 and a rear identification and analysis device. The conveying control equipment comprises a conveying device and a sorting device.
The side image capture device 210 includes a side camera 211 and a side light source 212. The side camera 211 is used to photograph the sides of the plastic bottle. The side light sources 212 are disposed on both left and right sides of the side camera, and illuminate the sides of the plastic bottle when the side camera photographs.
The front image capture device 220 includes a front camera 221 and a front light source 222. The front camera 221 is used to photograph the front of the plastic bottle. The front light sources 222 are disposed on the left and right sides of the front camera, and illuminate the front of the plastic bottle when the front camera takes a picture.
The rear image capture device 230 includes a rear camera 231 and a rear light source 232. The back camera 231 is used to photograph the back of the plastic bottle. The rear light sources 232 are disposed on the left and right sides of the rear camera for illuminating the rear of the plastic bottle when the rear camera takes a picture.
For each of the side, front and rear image acquisition devices, there are two light sources, arranged on the left and right sides of the camera. Of course, other arrangements are possible, for example on the upper and lower sides of the camera.
The camera and the light source are responsible for image acquisition: when the product reaches the specified position, the camera acquires an image and transmits it to the PC through the network port for the next recognition step. The light sources are positioned at 45 and 135 degrees relative to the camera. A light source is turned on only while an image is being acquired and is off at other times, which saves energy. The camera can remain on at all times; when a plastic bottle comes over, if the record the single-chip microcomputer has sent to the conveyor belt control box contains an interrupt (i.e., the product has already been flagged), the single-chip microcomputer does not send the instruction for the camera to acquire an image.
The conveying device 240 includes: a main conveyor belt 241 for conveying the plastic bottles past the side image acquisition device 210, the front image acquisition device 220 and the rear image acquisition device 230, and for conveying defect-free plastic bottles out of the plastic bottle defect detection system; and a conveyor belt control box 242 for controlling the movement of the main conveyor belt and the deflection wheel 251.
The main conveyor belt 241 is a main conveying unit for conveying both the non-inspected plastic bottles and the plastic bottles that have been inspected and have no defects. The main conveyor belt 241 is arranged in an L-shape, in the horizontal section, passing through the side image capturing device 210 for photographing and defect recognition of both sides of the plastic bottle, and in the vertical section, passing through the front image capturing device 220 and the rear image capturing device 230 for photographing and recognition of the front and rear of the plastic bottle. A conveyor control box 242 is provided beside the conveyor for controlling the movement of the conveyor.
The sorting device 250 includes: a deflection wheel 251 for deflecting the conveying direction of defective plastic bottles off the main conveyor belt 241; a deflection conveyor belt 252 for conveying the deflected plastic bottles out of the plastic bottle defect detection system; and a single-chip microcomputer 253 for controlling the steering of the deflection wheel according to the detection results of the side, front and rear detection devices.
The deflection wheel 251 consists of a plurality of small wheels that can be deflected; on receiving a command from the conveyor belt control box 242, it immediately deflects defective plastic bottles so that they are conveyed onto the deflection conveyor belt 252. The deflection conveyor belt 252 runs in a direction different from that of the main conveyor belt 241 and thus serves to sort out defective plastic bottles.
According to one example of this embodiment, the plastic bottle side defect detection system comprises a main conveyor belt, a sorting conveyor belt, a deflection wheel, a single chip microcomputer, a PC, a camera, a light source controller, a conveyor belt control box and the like.
The main conveyor belt, the sorting conveyor belt and the deflection wheel form the conveying module, which is mainly responsible for conveying the plastic bottle products to be inspected. The products are conveyed by the main conveyor belt through the image acquisition area and then reach the deflection wheel area; the deflection wheel conveys each item to the corresponding position on the sorting conveyor belt according to the instruction received from the conveyor belt control box, thereby sorting the products. For sorting, this conveying module abandons traditional means such as a robot arm and instead uses the deflection wheel to control the sorting direction of the conveyed products. Compared with a traditional robot arm, which occupies a large area and is time-consuming, the conveying module has low investment cost, a small footprint and high stability, and its gentle action during sorting reduces damage to the products and facilitates secondary recycling.
The photoelectric switch, the single-chip microcomputer, the light source controller and the conveyor belt control box form the control module, which is responsible for controlling the hardware. When the main conveyor belt conveys a product into the image acquisition area, the photoelectric switch sends a signal to the single-chip microcomputer; the single-chip microcomputer relays the signal to the light source controller and the camera, the light source controller switches the light source on and off accordingly, and the camera takes a picture. After the image has been recognized by the algorithm, the recognition result is sent to the single-chip microcomputer, which issues an instruction to the conveyor belt control box to control the steering of the deflection wheels, thereby sorting the products. Using the single-chip microcomputer as the control center saves cost to a certain extent and reduces operational complexity.
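As a purely illustrative sketch of the signal flow just described (not part of the original disclosure), the control logic can be summarized as the following Python pseudocode; every hardware interface used here (photo_switch, light, camera, classify, conveyor_box and their methods) is a hypothetical placeholder rather than an actual driver API:

```python
# Hypothetical control-loop sketch of the trigger / acquire / classify / sort flow.
# All hardware objects and their methods are placeholders, not real driver calls.
def inspection_cycle(photo_switch, light, camera, classify, conveyor_box):
    while True:
        if not photo_switch.triggered():       # bottle reaches the acquisition area
            continue
        light.on()                             # MCU signals the light source controller
        image = camera.capture()               # camera takes the picture
        light.off()                            # light stays off between shots to save energy
        result = classify(image)               # PC-side defect detection algorithm
        # The MCU forwards the result; the control box steers the deflection wheels
        # only for defective bottles, diverting them onto the sorting conveyor belt.
        conveyor_box.steer_deflection_wheels(divert=(result == "defective"))
```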
The cloud server forms the data storage module, which is mainly used for cloud storage of data: local bottle body defect detection and recognition data are uploaded to the cloud, facilitating later analysis and visualization, improving the production process and increasing the product yield. The PC and the defect detection algorithm form the picture defect recognition module: when a picture is transmitted to the PC, the plastic bottle body defect detection algorithm is invoked for recognition; the recognition result is displayed on the PC and is also transmitted to the single-chip microcomputer.
Fig. 3 shows a schematic structural view of a plastic bottle defect detection system according to another embodiment of the present application.
According to the present embodiment, the plastic bottle defect detecting system 301 includes a side detecting device, a front detecting device, a rear detecting device, a transfer control device, and a cloud storage device. The side detection device comprises a side image acquisition device and a side recognition and analysis device 300. The front detection device comprises a front image acquisition device and a front identification and analysis device 400. The rear detection device comprises a rear image acquisition device and a rear identification and analysis device 500. The conveying control equipment comprises a conveying device and a sorting device.
The side recognition and analysis device 300 includes a missing-label identification module 310, a content missing identification module 320, a stain identification module 330, a multi-label identification module 340, a wrinkle identification module 350, a label-too-close identification module 360, and a scratch identification module 370. The missing-label identification module 310 is used to identify whether the plastic bottle lacks a label. The content missing identification module 320 is used to identify whether the label lacks the content it should carry. The stain identification module 330 is used to identify whether stains are present on the plastic bottle. The multi-label identification module 340 is used to identify whether multiple labels are stuck on the plastic bottle. The wrinkle identification module 350 is used to identify whether the label has wrinkles. The label-too-close identification module 360 is used to identify whether the label is attached too close to the edge. The scratch identification module 370 is used to identify whether scratches are present on the plastic bottle.
As shown in FIG. 3, the missing-label identification module 310 is connected in series with the other modules in the side recognition and analysis device 300, while the content missing identification module 320, the stain identification module 330, the multi-label identification module 340, the wrinkle identification module 350, the label-too-close identification module 360 and the scratch identification module 370 are connected in parallel with one another. Whether a label is present at all is the basis for all subsequent detection, so the missing-label identification module 310 is placed at the very front as a precondition for the subsequent detection and recognition; once a missing label is found, no further detection is required.
The front recognition and analysis device 400 includes a position-line bending identification module 410, which is used to identify whether the position line (e.g., the liquid level line) of the plastic bottle is bent.
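A minimal sketch of the straightness judgment described earlier for the position-line bending identification module (segment the position-line region, extract the centerline, judge its straightness) might look as follows, assuming the centerline has already been extracted as a set of pixel coordinates; the least-squares line fit and the pixel tolerance are illustrative assumptions, not the patent's prescribed method:

```python
# Hypothetical straightness check on the extracted position-line centerline.
import numpy as np

def position_line_is_bent(centerline_xy, tolerance_px=3.0):
    """centerline_xy: (N, 2) array of (x, y) points of the position-line centerline."""
    x, y = centerline_xy[:, 0], centerline_xy[:, 1]
    slope, intercept = np.polyfit(x, y, deg=1)        # least-squares straight line
    residual = np.abs(y - (slope * x + intercept))    # point-wise deviation from the fit
    # The position line is judged bent when any point deviates more than the tolerance.
    return float(residual.max()) > tolerance_px
```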
The rear recognition and analysis device 500 includes a parting-line multi-material identification module 510, which is used to identify whether there is excess material on the parting line of the plastic bottle.
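For the feature extraction and support vector machine classification mentioned above for the parting-line multi-material identification module 510, a hedged sketch using HOG features (scikit-image) and an SVM (scikit-learn) could look like the following; the choice of HOG features, the RBF kernel and the label convention are assumptions for illustration only:

```python
# Illustrative feature-extraction + SVM classification chain for the parting-line check.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def extract_features(rear_images_gray):
    # HOG descriptors of the rear (parting-line) images serve as feature vectors.
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in rear_images_gray])

def train_parting_line_classifier(images, labels):
    # labels: 0 = normal parting line, 1 = excess material on the parting line (assumed).
    clf = SVC(kernel="rbf")
    clf.fit(extract_features(images), labels)
    return clf

# Usage sketch:
#   clf = train_parting_line_classifier(train_imgs, train_labels)
#   has_excess = clf.predict(extract_features([rear_img]))[0] == 1
```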
FIG. 4 shows a schematic structural diagram of the missing-label identification module according to the embodiment of FIG. 3.
According to the present embodiment, the missing-label identification module 310 includes a first convolutional neural network unit 311, a region proposal network unit 312, a pooling unit 313, a classification unit 314, and a regression unit 315. The first convolutional neural network unit 311 is configured to process the side image through a convolutional neural network to obtain a convolutional feature map. The region proposal network unit 312 is configured to process the convolutional feature map through the region proposal network to obtain the coordinate values of the proposed regions. The pooling unit 313 is used to perform region-of-interest pooling on the proposed regions to obtain fixed-size feature maps. The classification unit 314 is configured to classify the feature vectors of the proposed regions with a softmax classifier. The regression unit 315 is configured to perform regression on the bounding boxes of the proposed regions.
A convolutional neural network (CNN) may refer to a feed-forward neural network with a deep structure that includes convolution operations and is modeled on the visual perception mechanism of living organisms; it can be trained with supervised or unsupervised learning. Parameter sharing of the convolution kernels in the hidden layers and the sparsity of inter-layer connections allow a CNN to learn grid-like features such as pixels and audio with a small amount of computation, giving it stable performance without additional feature engineering on the data.
Processing the side image with the convolutional neural network to obtain a convolutional feature map may be implemented as follows: the side image is input into a convolutional neural network with multiple layers of neurons, and features in the side image are extracted through the convolution operations of the network, yielding the convolutional feature map.
A region proposal network (RPN) may refer to a convolutional network that takes an image of any size as input and outputs a set of rectangular object proposal boxes, each with a score. For example, the region proposal network is the part of a Faster R-CNN network used to extract pre-selection boxes; it incorporates a convolutional neural network and generates the positions of the pre-selection boxes by means of feature extraction.
A specific implementation is that the convolutional feature map is taken as input and divided into a plurality of regions, with the center of each region represented by the coordinates of a pixel on the feature map.
Pooling may refer to down-sampling, which mimics the human visual system in reducing the dimensionality of the data. It is commonly applied after a convolutional layer when building a convolutional neural network; by reducing the dimensionality of the features output by the convolutional layer, pooling effectively reduces the network parameters and helps prevent over-fitting.
Regression may refer to training a linear regression model to generate more accurate bounding boxes for each recognized object.
The detection principle of the missing-label identification module 310 may be as follows: a plastic bottle missing-label detection model is trained, with the label type and its specified position as the annotation data, to obtain a model that can identify the type and specified position of the label in the image. An appropriate score is chosen as the decision criterion based on the scores the model returns for predictions on the training set; whether the corresponding position is detected is then determined from the result returned by the model, and if it is not detected the bottle is judged defective, otherwise the detection process continues. The plastic bottle missing-label detection model is mainly used to judge whether a label is present: if no label is present, the product is directly filtered out as defective; if a label is present, the process continues to the next step.
The plastic bottle missing-label detection model is mainly divided into three parts: a convolutional neural network, the RPN, and Fast R-CNN (Regions with CNN features). The label-side image of the plastic bottle is processed with a pre-trained convolutional neural network model to obtain a convolutional feature map. The convolutional feature map enters the RPN, which computes the coordinate values of the proposed regions and judges whether each belongs to the foreground or the background; Region-of-Interest (ROI) pooling is then performed on the proposed regions to obtain fixed-size feature maps (max pooling over inputs of non-uniform size yields feature maps of a fixed size). The feature vectors of the proposed regions are fed through a fully connected layer into a softmax classifier to classify the targets, and the bounding boxes are regressed, so that the label model and specification are accurately identified.
When foreground and background are judged, the foreground contains the detection target, i.e., the model and specification of the label, and the background contains all other content in the image. For example, if a screw on a conveyor belt is being inspected for defects, the screw is the foreground of the image and the conveyor belt is the background.
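As an illustrative, non-authoritative sketch of the detection step described above, the missing-label check could be implemented on top of torchvision's reference Faster R-CNN as follows; the two-class setup (background vs. label), the untrained weights and the 0.7 score threshold are assumptions, not the patent's trained model:

```python
# Hypothetical label-presence check with torchvision's Faster R-CNN.
import torch
import torchvision
from torchvision.transforms import functional as F

# Two classes assumed: background and "label". In practice the trained
# missing-label detector weights would be loaded into this model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

def has_label(side_image_rgb, score_threshold=0.7):
    """Return True if at least one label region is detected on the side image."""
    tensor = F.to_tensor(side_image_rgb)          # HWC uint8 -> CHW float in [0, 1]
    with torch.no_grad():
        output = model([tensor])[0]               # dict with boxes, labels, scores
    # Keep only detections above the chosen confidence score.
    keep = output["scores"] >= score_threshold
    return bool(keep.any())

# A bottle with has_label(...) == False would be filtered out as defective.
```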
Fig. 5 shows a schematic structural diagram of a content missing identification module according to the embodiment in fig. 3.
According to the present embodiment, the content missing identification module 320 includes a first cropping unit 321, a first global thresholding unit 322, a first color screening unit 323, and a character recognition unit 324. The first cropping unit 321 is configured to crop the side image according to the position information of the target region. The first global thresholding unit 322 is configured to perform global thresholding on the cropped side image. The first color screening unit 323 is used to convert the side image into a first binary image. The character recognition unit 324 is used to recognize the character content in the first binary image.
The content missing identification module 320 may crop the image at the corresponding position according to the position information of the target region output by the plastic bottle missing-label detection model (the original image is cropped according to the coordinate information returned by the missing-label detection model), and then perform global thresholding on the cropped image: different outputs are set with a threshold value as the boundary, the value of each channel of the image is compared with the threshold independently, and thresholding is performed channel by channel. The image colors are then screened: a histogram is built over the image to obtain the distribution of pixel values and a corresponding interval is set; if the pixel value at a position lies within the set interval [lower, upper], the output pixel at that position is 255, otherwise it is 0, yielding a binary image. OCR character recognition is then performed on the image to recognize the character content of the target region; if the character content cannot be recognized, i.e., the label content is missing, the bottle is judged defective.
The reason for this design is that the label content area of the plastic bottle is fixed: the missing-label detection model can be used to judge whether the label exists, the image can be cropped using the information returned by the model, and it can then be judged whether the text content is missing or erroneous.
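A minimal sketch of the crop / global-threshold / color-screening / OCR chain described above, using OpenCV and pytesseract, might look as follows; the box coordinates, the threshold value and the [lower, upper] interval are placeholder assumptions:

```python
# Hypothetical crop -> threshold -> color screen -> OCR chain for label content.
import cv2
import numpy as np
import pytesseract

def label_text(side_image_bgr, box, lower=(0, 0, 0), upper=(80, 80, 80)):
    x, y, w, h = box                              # region returned by the detector
    roi = side_image_bgr[y:y + h, x:x + w]
    # Global threshold applied per channel, as in the description.
    _, roi = cv2.threshold(roi, 127, 255, cv2.THRESH_TOZERO)
    # Color screening: pixels inside [lower, upper] become 255, all others 0.
    binary = cv2.inRange(roi, np.array(lower), np.array(upper))
    # OCR on the binary image; an empty result indicates missing label content.
    return pytesseract.image_to_string(binary).strip()

# Usage sketch: a bottle is flagged defective when no text is recognized.
#   if not label_text(img, (120, 40, 300, 90)): mark_defective()
```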
Fig. 6 shows a schematic structural diagram of a stain identification module according to the embodiment of fig. 3.
According to the present embodiment, the stain identification module 330 includes a preprocessing unit 331, a second cropping unit 332, a second global thresholding unit 333, a second color screening unit 334, and a traversal unit 335. The preprocessing unit 331 is used to preprocess the side image. The second cropping unit 332 is used to crop the side image according to the position information of the target region. The second global thresholding unit 333 is used to perform global thresholding on the cropped side image. The second color screening unit 334 is configured to convert the side image into a second binary image. The traversal unit 335 is configured to traverse the second binary image.
The detection principle of the stain identification module 330 may be as follows. Because plastic bottles come in different colors, appropriate preprocessing is required before stain detection; for example, darker bottles require brightness enhancement and contrast adjustment. The preprocessed picture is then processed in a manner similar to the content missing identification module 320, with the range of thresholded pixels widened when choosing the global thresholding parameters, so that the processed picture contains only black background pixels and white stain pixels. The pixels of the whole picture are then traversed, and if any white pixels are present, the bottle is judged defective.
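A rough sketch of this stain check under the same assumptions (OpenCV available, illustrative parameter values rather than the patent's calibrated settings) is given below:

```python
# Hypothetical stain check: brightness/contrast preprocessing, crop, threshold,
# color screening to a binary image, then count white (stain) pixels.
import cv2
import numpy as np

def has_stain(side_image_bgr, box, lower, upper, alpha=1.4, beta=30):
    # Preprocessing for dark bottles: simple brightness/contrast adjustment.
    img = cv2.convertScaleAbs(side_image_bgr, alpha=alpha, beta=beta)
    x, y, w, h = box
    roi = img[y:y + h, x:x + w]
    _, roi = cv2.threshold(roi, 127, 255, cv2.THRESH_TOZERO)
    # Stain-colored pixels fall inside [lower, upper] and become white (255).
    binary = cv2.inRange(roi, np.array(lower), np.array(upper))
    # Traversing the binary image reduces to counting non-zero (white) pixels.
    return cv2.countNonZero(binary) > 0
```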
FIG. 7 shows a schematic structural diagram of the multi-label identification module according to the embodiment of FIG. 3. FIG. 8 shows a side image with a multi-label defect.
According to the present embodiment, the multi-label identification module 340 includes a second convolutional neural network unit 341 and a residual neural network unit 342. The second convolutional neural network unit 341 is configured to perform feature extraction on the side image. The residual neural network unit 342 is used to perform convolution operations on the side image.
A residual neural network is very effective at alleviating the problems of vanishing and exploding gradients and greatly increases the depth of network that can be trained effectively. A residual unit can be implemented with a skip connection: the input of the unit is added directly to its output and then activated. Residual networks can therefore be implemented easily with mainstream automatic-differentiation deep learning frameworks and trained directly with gradient updates via the back-propagation algorithm. Residual networks are easy to optimize and can improve accuracy by adding considerable depth; the skip connections inside the residual blocks alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
The detection principle of the multi-label identification module 340 may be as follows: the picture is fed into a plastic bottle label classification model, which judges whether the product carries redundant labels. Based on the category and prediction score returned by the model, an appropriate score is chosen as the decision criterion, the presence of the defect is judged, and pictures identified as multi-label are removed. The model is trained on a large number of good-product pictures and multi-label defective-product pictures.
The deep learning based plastic bottle classification model mainly comprises two parts: a convolutional neural network and a residual neural network. When sample data enters the model, features are first extracted by the convolutional neural network: 64 convolution kernels of size 3x3 perform the convolution operation to extract preliminary features. After a 3x3 max pooling operation, the data enters the residual neural network, which comprises four groups of convolutional layers: 3x3x64, 3x3x128, 3x3x256 and 3x3x512. Finally, after another 3x3 max pooling operation, a convolution with a single 3x3 kernel produces the final output feature map. After global average pooling of the feature map, the trained neural network model is obtained, i.e., the deep learning model for detecting whether the plastic bottle carries multiple labels.
The reason for choosing a classification model is that the acquired images do not differ greatly, so when multiple labels are present the extra label is an obvious feature, and a classification model suffices to distinguish good products from defective ones.
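As an illustrative stand-in for the residual classification network described above, a stock ResNet-18 with a two-class head could be used as follows; the trained weights, the class order and the 0.5 threshold are assumptions:

```python
# Hypothetical good/multi-label classifier using a stock residual network.
import torch
import torchvision
from torchvision import transforms

model = torchvision.models.resnet18(num_classes=2)   # assumed classes: 0 good, 1 multi-label
model.eval()                                          # trained weights would be loaded here

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_multi_label(side_image_pil, score_threshold=0.5):
    tensor = preprocess(side_image_pil).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(tensor), dim=1)[0]
    # Flag a defect when the "multi-label" class score exceeds the threshold.
    return probs[1].item() >= score_threshold
```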
Fig. 9 shows a schematic structural diagram of the wrinkle identification module according to the embodiment of fig. 3.
According to the present embodiment, the wrinkle identification module 350 includes a third convolutional neural network unit 351, an upsampling unit 352, a probability calculation unit 353, and a segmentation unit 354. The third convolutional neural network unit 351 is used for feature extraction of the side image. The up-sampling unit 352 is used to up-sample the side image. The probability calculation unit 353 is configured to calculate a probability that a pixel point of the side image belongs to a certain category through a softmax function. The segmentation unit 354 is configured to perform object segmentation according to the probability.
Sampling may refer to the process of converting a signal that is continuous in time and amplitude into one that is discrete in time and amplitude under the action of sampling pulses, so sampling is also called discretization of a waveform. Upsampling may refer to re-sampling a signal at a rate greater than the rate at which it was originally obtained; it is essentially interpolation, i.e., inserting new values between existing samples.
The softmax function may refer to a function that assigns a probability value to the result of each output classification, indicating the likelihood of belonging to each class.
The detection principle of the wrinkle identification module 350 may be as follows: the picture is fed into a plastic bottle label segmentation model, which segments the outline region of the label, and whether the label has wrinkles is judged from the result returned by the model, namely the area. The model is trained using a large number of good-product labels as annotations. Whether wrinkles exist is judged from the area returned by the model; if wrinkles exist, the product is judged defective.
The plastic bottle label segmentation model is built from a convolutional neural network: the input image passes through several convolutional and pooling layers for feature extraction, an upsampling operation is performed to ensure end-to-end input and output of the same size, and finally a softmax function computes the probability that each pixel belongs to a given category, and target segmentation is performed according to these probabilities.
Because the label is attached flat against the surface of the plastic bottle and its size is fixed, the semantic segmentation model is used to segment the label outline and obtain the area within the whole outline (the information returned by the semantic segmentation model is this area), and the label area information returned by the model is used to judge whether the label has wrinkles.
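A hedged sketch of this wrinkle check, using torchvision's FCN as a stand-in for the patent's segmentation model, might compare the segmented label area with a nominal area as follows; the class index, the nominal area and the tolerance are assumptions:

```python
# Hypothetical wrinkle check: segment the label, compare mask area with a nominal area.
import torch
import torchvision
from torchvision.transforms import functional as F

seg_model = torchvision.models.segmentation.fcn_resnet50(num_classes=2)
seg_model.eval()            # the trained label-segmentation weights would be loaded here
LABEL_CLASS = 1             # assumed index of the "label" class
NOMINAL_AREA = 50_000       # assumed pixel area of a flat, wrinkle-free label

def has_wrinkle(side_image_rgb, tolerance=0.05):
    tensor = F.to_tensor(side_image_rgb).unsqueeze(0)
    with torch.no_grad():
        logits = seg_model(tensor)["out"][0]          # shape (classes, H, W)
    # Per-pixel class probabilities via softmax, then a hard class assignment.
    mask = torch.softmax(logits, dim=0).argmax(dim=0) == LABEL_CLASS
    area = int(mask.sum())
    # A wrinkled label projects a smaller area than the nominal flat label.
    return area < NOMINAL_AREA * (1 - tolerance)
```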
FIG. 10 shows a schematic structural diagram of the label-too-close identification module according to the embodiment of FIG. 3. FIG. 11 shows a side image with a label-too-close defect.
According to the present embodiment, the label-too-close identification module 360 includes an encoder network unit 361, a cascaded dilated convolutional neural network unit 362 and a decoder network unit 363. The encoder network unit 361 is configured to downsample the side image. The cascaded dilated convolutional neural network unit 362 is used to perform dilated convolution on the downsampled side image. The decoder network unit 363 is configured to upsample the output of the dilated convolution and to perform convolution processing on the upsampled side image.
The down-sampling may refer to re-sampling the signal, and the re-sampling rate is compared with the original sampling rate of the signal, and the down-sampling is called as the down-sampling if the re-sampling rate is smaller than the original sampling rate. For example, a sample sequence is sampled once every several samples, and the new sequence thus obtained is a down-sample of the original sequence.
A cascaded convolutional neural network may refer to a convolutional neural network that takes low-resolution candidate windows as input so that a shallow network can quickly extract candidate windows; the windows from the previous stage are then resized and fed into the corresponding network layers of the next stage.
Expansion convolution (also called hole, dilated or atrous convolution) injects holes into the standard convolution kernel to enlarge the receptive field of the model. Compared with an ordinary convolution operation, a dilated convolution has one extra parameter, the dilation rate, which specifies the spacing between the points of the convolution kernel.
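A brief sketch of the dilation-rate parameter, assuming PyTorch is available (the framework is not specified in this application), illustrates how a dilated 3x3 kernel covers a 5x5 neighbourhood without changing the output size:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)                                      # (batch, channels, height, width)

standard = nn.Conv2d(1, 1, kernel_size=3, padding=1)               # dilation = 1 (ordinary convolution)
dilated = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2)    # holes between kernel taps

# Both keep the 32 x 32 spatial size, but the dilated kernel spans a 5 x 5
# neighbourhood with only 9 weights, enlarging the receptive field.
print(standard(x).shape, dilated(x).shape)                         # torch.Size([1, 1, 32, 32]) twice
```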
The detection principle of the labeled-too-close identification module 360 may be as follows: a picture is fed into a plastic bottle label region segmentation model whose annotations are the regions between the label contour and the contour of the plastic bottle body (such as the rectangular-frame regions shown in Fig. 11); whether the label on the side of the plastic bottle is attached too close to the body contour is judged from the area of the region between the label and the body contour returned by the model. The model is trained with defective pictures in which the label is too close to the body contour as well as good-product pictures; if the distance is too close, the product is judged to be a defective product.
The plastic bottle label region segmentation model mainly comprises an encoder network, a cascade expansion convolutional neural network and a decoder network. The encoder down-samples the image from the input layer, the cascade expansion convolutional network performs convolution expansion on the encoder output, and the decoder up-samples the output of the cascade expansion convolution module and applies convolution processing to obtain the segmented image.
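The encoder / cascaded dilated convolution / decoder structure can be sketched as a minimal PyTorch module; the layer counts, channel widths and dilation rates below are illustrative assumptions, not the trained model of this application:

```python
import torch
import torch.nn as nn

class LabelRegionSegNet(nn.Module):
    """Minimal encoder -> cascaded dilated convolutions -> decoder segmentation sketch."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Encoder: down-sample the input image and extract features
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Cascaded dilated convolutions: enlarge the receptive field without further down-sampling
        self.dilated = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        # Decoder: up-sample back to the input resolution and predict per-pixel classes
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.dilated(self.encoder(x)))

logits = LabelRegionSegNet()(torch.randn(1, 3, 256, 256))   # shape (1, 2, 256, 256)
probs = logits.softmax(dim=1)                                # per-pixel class probabilities
```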
Fig. 12 is a schematic structural view illustrating a scratch recognition module according to the embodiment of fig. 3.
According to the present embodiment, the scratch recognition module 370 includes a first feature extraction unit 371 and a classification positioning unit 372. The first feature extraction unit 371 is used to extract candidate regions of the side image through a region suggestion network. The classification positioning unit 372 is used for generating a prediction frame and judging whether the scratch exists according to the score of the prediction frame.
Scratches may appear on the plastic bottle body during production, so a detection model is selected to detect them. During training and annotation, the scratches of defective products are marked. A picture is fed into the detection model, which returns a score for the prediction result; when the score of the prediction result exceeds a set threshold, a scratch is judged to be present and the plastic bottle is identified as a defective product.
The detection principle of the scratch recognition module 370 may be as follows: a picture is fed into a plastic bottle scratch detection model trained on a large number of body-scratch pictures, and a suitable prediction score threshold, i.e. a probability value for the presence of a scratch, is selected to decide whether the product has a scratch. When the model returns a defective result, the product is judged to be a defective product; otherwise it is judged to be a good product.
The plastic bottle scratch detection model integrates feature extraction and classification-positioning: candidate regions are extracted with a region proposal network (RPN), a sliding window is set on the last convolutional feature map and fully connected to the fully connected layers so that better detection prediction boxes are generated, and whether a scratch is present is judged from the score of the output prediction box.
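The score-threshold decision can be sketched as follows; the threshold value and the (x1, y1, x2, y2, score) box layout are hypothetical assumptions about the detection model's output:

```python
def classify_scratch(detections, score_threshold=0.7):
    """detections: list of (x1, y1, x2, y2, score) prediction boxes returned by the
    scratch detection model. A box scoring above the threshold is treated as a scratch."""
    scratches = [box for box in detections if box[4] >= score_threshold]
    is_defective = len(scratches) > 0
    return is_defective, scratches

# Example model output for one side image
boxes = [(120, 40, 180, 60, 0.91), (300, 200, 320, 230, 0.42)]
defective, hits = classify_scratch(boxes)
print(defective, hits)   # True, with one confident scratch box
```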
Fig. 13 is a schematic structural diagram of a position line bending identification module according to the embodiment of fig. 3. Fig. 14 shows a front image for identifying whether the position line is curved.
According to the present embodiment, the position line bending recognition module 410 includes a position line region segmentation module 411, a position line center line extraction module 412 and a straightness determination module 413. The position line region segmentation module 411 is used to segment the position line region from the front image. The position line center line extraction module 412 is configured to extract the center line of the position line. The straightness determination module 413 is configured to determine the straightness of the center line.
The position line may refer to a viewing line used to determine the volume or upper-surface position of the product inside the plastic bottle. For example, the position line may be a fully transparent or translucent line running in the vertical direction of the plastic bottle. When the product in the plastic bottle is a liquid product such as engine oil, the position line may refer to a liquid level line for observing the position of the liquid level. Position line bending is a common plastic bottle production defect: the position line of the produced plastic bottle is not straight enough and bends to the left or to the right.
A specific implementation of segmenting the position line region from the front image may be to extract the coordinate frame of the position line region with a basic algorithm model and to crop the position line region according to the coordinate frame information.
A specific implementation of extracting the center line of the position line may be to binarize the cropped position line region image to obtain the left and right contours of the position line, and then to process the contours to obtain the center line between the two contours.
The straightness of the center line may be determined by connecting the head and tail pixel points of the center line into a straight line, measuring how far each pixel point of the center line deviates from that line, and from this determining the straightness of the center line.
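The straightness measure described here and detailed further below can be sketched with NumPy; the good-product limits in the last line are hypothetical:

```python
import numpy as np

def centerline_straightness(points):
    """points: (N, 2) array of (x, y) center-line pixels, ordered from top to bottom.
    Returns the summed point-to-line distance and the variance of the x coordinates."""
    (x0, y0), (x1, y1) = points[0], points[-1]        # head and tail pixels define the reference line
    dx, dy = x1 - x0, y1 - y0
    # Perpendicular distance of every pixel to the line through the head and tail points
    dist = np.abs(dx * (points[:, 1] - y0) - dy * (points[:, 0] - x0)) / np.hypot(dx, dy)
    return dist.sum(), points[:, 0].var()

# Toy center line: nearly vertical, with small noise in x
pts = np.stack([50.0 + np.random.normal(0, 0.5, 100), np.arange(100.0)], axis=1)
total_dist, x_var = centerline_straightness(pts)
is_defective = total_dist > 80.0 or x_var > 4.0       # hypothetical good-product limits
```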
The detection principle of the position line bending identification module 410 may be as follows: a plastic bottle position line detection model is established first; the coordinate information returned by the model is used to obtain the position line region; the region is processed with classical image processing methods; and whether the position line is bent is finally judged with a mathematical criterion.
The plastic bottle position line detection model selects YOLO v3 as the basic algorithm model, with Darknet53 as the backbone network. The backbone consists of 52 convolutional layers; the fully connected layer and the pooling layer are removed, and the convolution and residual modules are retained as the image feature extraction network. The total loss function is the sum of three loss terms: the objectness (target) loss, the bounding-box position loss and the classification loss. If several prediction boxes correspond to the same object in the final prediction result, only the prediction box with the highest score (such as the rectangular frame shown in Fig. 14) is kept.
According to the coordinate frame information returned by the model, the position line region is cropped. Global threshold processing is applied to the cropped region, with the threshold chosen from the two-dimensional histogram of the image. Since some fine noise remains after global threshold segmentation, an opening operation is used to remove small particle noise and break adhesions between objects, giving a cleaner position line region. A Sobel edge detection algorithm then detects the edges of the position line, yielding its two contour lines, and color screening produces a black background with two white position line contours. The pixel points of the two white contours are obtained and processed to give a center line between them. On one hand, a straight line is obtained from the head and tail pixel points of the center line and the sum of the distances from all pixel points to this line is computed; on the other hand, the variance of the abscissa of all pixel points of the center line is computed to judge their dispersion. Based on the variance value and the summed point-to-line distance, it is judged whether the result lies within the good-product range; if not, the position line is bent and the product is judged to be a defective product.
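A simplified sketch of the classical image processing steps above, using OpenCV; Otsu thresholding stands in for the two-dimensional-histogram threshold, and the kernel sizes are illustrative assumptions:

```python
import cv2
import numpy as np

def extract_position_line_centerline(crop):
    """crop: 8-bit grayscale position-line region cut out with the detector's box.
    Simplified stand-in for the pipeline in the text: global threshold, opening to
    remove small noise, Sobel edges, then a center line between the two contours."""
    _, binary = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)        # remove small particle noise
    edges = cv2.Sobel(opened, cv2.CV_8U, 1, 0, ksize=3)              # vertical contours of the line

    centers = []
    for y in range(edges.shape[0]):
        xs = np.flatnonzero(edges[y])          # left and right contour pixels on this row
        if xs.size >= 2:
            centers.append((0.5 * (xs[0] + xs[-1]), float(y)))
    return np.array(centers)                   # feed into the straightness check sketched above
```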
The position line bending identification module 410 combines AI with traditional image processing. Because of the illumination intensity on the front of the plastic bottle, directly using a traditional image processing method cannot eliminate the influence of the illumination near the position line region; using AI alone, there are too many influencing factors in the image and it cannot be directly judged whether the position line is bent. Therefore a position line detection model is introduced to eliminate the influence of the nearby illumination intensity, the position line is cropped from the image using the coordinate information returned by the model, and the cropped region is processed with traditional image processing methods.
Fig. 15 is a schematic structural diagram of a joint line multi-material identification module according to the embodiment of Fig. 3. Fig. 16 shows a rear image for identifying whether the joint line has excess material.
According to the present embodiment, the joint line multi-material identification module 510 includes a second feature extraction unit 511 and a support vector machine unit 512. The second feature extraction unit 511 is configured to perform feature extraction on the rear image. The support vector machine unit 512 is used to identify and classify the extracted features.
The joint line (mold clamping line) may refer to the line of excess material formed along the boundary between the two injection molds when the molds are clamped; when the excess material exceeds a certain amount, the joint line is judged to have excess material.
A Support Vector Machine (SVM) may refer to a class of generalized linear classifiers that perform binary classification of data in a supervised-learning manner; its decision boundary is the maximum-margin hyperplane solved from the training samples. For example, a support vector machine computes the empirical risk with a hinge loss function and adds a regularization term to the optimization objective to control the structural risk, making it a classifier with sparsity and robustness.
The detection principle of the joint line multi-material identification module 510 may be that a plastic bottle joint line multi-material classification model is established first and used to classify whether the joint line has excess material.
The plastic bottle joint line multi-material classification model is built around a classification algorithm based on a support vector machine. Image features are extracted first, and the extracted features are then identified and classified with the support vector machine. For image feature extraction, the joint line features are extracted with a HOG (Histogram of Oriented Gradients) feature extraction algorithm: the joint line region of the plastic bottle is divided into cell units, the HOG of each pixel point in each unit is extracted, a HOG feature descriptor is established, and the joint line image features are obtained. A subsequent classification step then decides whether excess material is present.
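A minimal HOG + SVM sketch, assuming scikit-image and scikit-learn are available; the crop size, HOG parameters and training data below are placeholders rather than the model of this application:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """Extract HOG descriptors from grayscale joint-line crops (all the same size)."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Placeholder training data: joint-line crops and labels (0 = good, 1 = excess material)
rng = np.random.default_rng(0)
train_images = rng.random((20, 64, 64))
train_labels = np.array([0, 1] * 10)

clf = SVC(kernel="linear")                                   # linear maximum-margin classifier
clf.fit(hog_features(train_images), train_labels)
prediction = clf.predict(hog_features(train_images[:1]))     # 1 -> defective (excess material)
```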
A classification model is adopted because the image acquisition region is relatively fixed and excess material on the joint line is a fairly obvious feature, so the classification model can be used directly for discrimination.
Fig. 17 shows a schematic flow diagram of a plastic bottle defect detection method according to an embodiment of the present application.
According to this embodiment, the plastic bottle defect detection method 1700 includes:
S1710, conveying and controlling plastic bottles;
S1720, detecting defects on the side face of the plastic bottle;
S1730, detecting defects on the front face of the plastic bottle;
S1740, detecting defects on the rear face of the plastic bottle;
S1750, storing the detection data of the side face, the front face and the rear face of the plastic bottle in a cloud.
In one embodiment, a method of conveying and controlling plastic bottles, comprising:
conveying the plastic bottles through side, front and back inspection;
sorting out the plastic bottles with defects.
In one embodiment, detecting defects in the sides of a plastic bottle comprises:
collecting a side image of the plastic bottle;
identifying and analyzing the side image.
In one embodiment, detecting a defect in the front face of a plastic bottle comprises:
collecting a front image of the plastic bottle;
identifying and analyzing the front image.
In one embodiment, detecting a defect in the rear of a plastic bottle comprises:
collecting a rear image of the plastic bottle;
identifying and analyzing the rear image.
In one embodiment, identifying and analyzing the side images includes:
identifying whether the plastic bottle lacks a label;
identifying whether the label lacks content that should be present;
identifying whether stains exist on the plastic bottle;
identifying whether the plastic bottle is labeled with a plurality of labels;
identifying whether the label has wrinkles;
identifying whether the label is affixed too close to the edge;
identifying whether there is a scratch on the plastic bottle.
In one embodiment, identifying whether a plastic bottle is devoid of labels comprises:
processing the side images through a convolution neural network to obtain a convolution characteristic diagram;
processing the convolution characteristic graph through the area suggestion network to obtain coordinate values of a suggested region;
pooling the region of interest of the suggested region to obtain a feature map with a fixed size;
classifying the feature vectors of the suggested region through a softmax classifier;
regression is performed on the bounding box of the proposed region.
In one embodiment, identifying whether the label lacks content comprises:
cutting the side image according to the position information of the target area;
carrying out global threshold processing on the cut side image;
converting the side image into a first binary image;
and identifying the text content in the first binary image.
In one embodiment, identifying the presence of a stain on a plastic bottle comprises:
preprocessing the side images;
cutting the side image according to the position information of the target area;
carrying out global threshold processing on the cut side image;
converting the side image into a second binary image;
and traversing the second binary image.
In one embodiment, identifying whether a plastic bottle is labeled with a plurality of labels comprises:
extracting features of the side images;
and performing convolution operation on the side image.
In one embodiment, identifying whether a label has a wrinkle comprises:
extracting features of the side image through a third convolutional neural network;
up-sampling the side image;
calculating, through a softmax function, the probability that a pixel point of the side image belongs to a certain category;
and performing target segmentation according to the probability.
In one embodiment, identifying whether the label is affixed too close to the edge comprises:
inputting the side image;
down-sampling the side image through an encoder network;
performing convolution expansion on the down-sampled side image through a cascade expansion convolutional neural network;
and up-sampling the convolution-expanded side image and performing convolution processing on the up-sampled side image through a decoder network.
in one embodiment, identifying whether a scratch is present on a plastic bottle comprises:
extracting a candidate region of the side image through a region suggestion network;
and generating a prediction frame and judging whether a scratch exists according to the score of the prediction frame.
In one embodiment, identifying and analyzing the front image comprises:
identifying whether the position line of the plastic bottle is bent.
In one embodiment, identifying whether a position line of a plastic bottle is bent comprises:
segmenting the position line region from the front image;
extracting a center line of the position line;
and judging the straightness of the central line.
In one embodiment, identifying and analyzing the rear image includes:
identifying whether the joint line of the plastic bottle has excess material.
In one embodiment, identifying whether the joint line of a plastic bottle has excess material comprises:
extracting features of the rear image;
and identifying and classifying the extracted features.
The concepts, principles and ideas of the present application have been described above in detail in connection with specific embodiments (including examples and illustrations). Those skilled in the art will appreciate that the embodiments of the present application are not limited to the forms described above, and that any possible modifications, substitutions and equivalents of the steps, methods, apparatuses and components of the above embodiments may be made after reading the present specification; such modifications, substitutions and equivalents are to be considered as falling within the scope of the present application. The scope of protection of this application is governed only by the claims.

Claims (10)

1. A plastic bottle defect detection system comprising:
a side detection device for detecting defects of the side of the plastic bottle, the side detection device comprising:
a side image acquisition device for acquiring a side image of the plastic bottle;
side face recognition analysis means for recognizing and analyzing the side face image;
a front inspection apparatus for inspecting a front of the plastic bottle for defects, the front inspection apparatus comprising:
a front image acquisition device for acquiring a front image of the plastic bottle;
front recognition and analysis means for recognizing and analyzing the front image;
a rear inspection apparatus for inspecting defects of the rear of the plastic bottle, the rear inspection apparatus comprising:
a rear image acquisition device for acquiring a rear image of the plastic bottle;
a rear face recognition analysis means for recognizing and analyzing the rear face image;
a transfer control device for transferring and controlling the plastic bottles, the transfer control device comprising:
a conveying device for conveying the plastic bottles past the side image acquisition device, the front image acquisition device and the rear image acquisition device;
sorting means for sorting out defective plastic bottles from said conveying means;
and the cloud storage device is used for storing the detection data of the side detection device, the front detection device and the rear detection device at a cloud end.
2. The plastic bottle defect detection system of claim 1, wherein said side recognition analysis device comprises:
a label missing identification module for identifying whether the plastic bottle is missing a label;
the content missing identification module is used for identifying whether the label lacks the corresponding content;
the stain recognition module is used for recognizing whether stains exist on the plastic bottles or not;
a multi-label identification module for identifying whether the plastic bottle is labeled with a plurality of labels;
the wrinkle identification module is used for identifying whether the label has wrinkles or not;
a labeled-too-close identification module for identifying whether the label is attached too close to the edge;
and the scratch identification module is used for identifying whether scratches exist on the plastic bottle or not.
3. The plastic bottle defect detection system of claim 2, wherein said label missing identification module comprises:
the first convolution neural network unit is used for processing the side image through a convolution neural network to obtain a convolution characteristic diagram;
the area suggestion network unit is used for processing the convolution characteristic graph through an area suggestion network to obtain coordinate values of a suggestion area;
the pooling unit is used for pooling the region of interest of the suggested region to obtain a feature map with a fixed size;
a classification unit, configured to classify the feature vector of the suggested region by a softmax classifier;
the regression unit is used for regressing the boundary frame of the suggestion area;
wherein the missing content identification module comprises:
the first clipping unit is used for clipping the side image according to the position information of the target area;
the first global threshold processing unit is used for carrying out global threshold processing on the cut side images;
a first color screening unit for converting the side image into a first binary image;
the character recognition unit is used for recognizing the character content in the first binary image;
wherein the stain recognition module comprises:
the preprocessing unit is used for preprocessing the side images;
the second clipping unit is used for clipping the side image according to the position information of the target area;
the second global threshold processing unit is used for carrying out global threshold processing on the cut side images;
a second color filtering unit for converting the side image into a second binary image;
and the traversing unit is used for traversing the second binary image.
4. The plastic bottle defect detection system of claim 2, wherein said multi-label identification module comprises:
the second convolutional neural network unit is used for extracting the characteristics of the side image;
the residual error neural network unit is used for performing convolution operation on the side image;
wherein the wrinkle identification module comprises:
the third convolution neural network unit is used for extracting the characteristics of the side image;
an up-sampling unit for up-sampling the side image;
the probability calculation unit is used for calculating the probability that pixel points of the side images belong to a certain category through a softmax function;
and the segmentation unit is used for carrying out target segmentation according to the probability.
5. The plastic bottle defect detection system of claim 2, wherein said labeled too-close identification module comprises:
an encoder network unit for downsampling the side image;
the cascade expansion convolution neural network unit is used for performing convolution expansion on the downsampled side image;
the decoder network unit is used for performing up-sampling on the side image subjected to convolution expansion and performing convolution processing on the side image subjected to up-sampling;
wherein the scratch recognition module includes:
a first feature extraction unit, configured to extract a candidate region of the side image through a region suggestion network;
and the classification positioning unit is used for generating a prediction frame and judging whether the scratch exists according to the score of the prediction frame.
6. The plastic bottle defect detection system of claim 1,
wherein the front recognition and analysis means comprises:
a position line bending identification module for identifying whether the position line of the plastic bottle is bent;
wherein the rear recognition and analysis means comprises:
a joint line multi-material identification module for identifying whether the joint line of the plastic bottle has excess material.
7. The plastic bottle defect detection system of claim 6,
wherein the position line bending recognition module includes:
the position line region segmentation module is used for segmenting the position line region from the front image;
the position line central line extracting module is used for extracting a central line of the position line;
the straightness judging module is used for judging the straightness of the central line;
wherein the joint line multi-material identification module comprises:
the second characteristic extraction unit is used for extracting the characteristics of the rear image;
and the support vector machine unit is used for identifying and classifying the extracted features.
8. The plastic bottle defect detection system of any one of claims 1 to 7,
wherein the side image acquisition device comprises:
a side camera for photographing the side of the plastic bottle;
side light sources provided on the left and right sides of the side camera, for irradiating the side of the plastic bottle when the side camera takes a picture;
wherein the front image capturing device comprises:
a front camera for photographing the front of the plastic bottle;
front light sources provided on both left and right sides of the front camera for irradiating the front of the plastic bottle when the front camera takes a picture;
wherein the rear image capturing device comprises:
a rear camera for photographing the rear of the plastic bottle;
and rear light sources arranged on the left and right sides of the rear camera and used for irradiating the rear of the plastic bottle when the rear camera shoots.
9. The plastic bottle defect detection system of any one of claims 1 to 7,
wherein the sorting device comprises:
a deflection wheel for deflecting the conveying direction of the defective plastic bottles;
a deflection conveyor for conveying diverted plastic bottles away from said plastic bottle defect detection system;
a single-chip microcomputer for controlling the steering of the deflection wheel according to the detection results of the side detection device, the front detection device and the rear detection device;
wherein the transfer device comprises:
a main conveyor belt for conveying the plastic bottles past the side image acquisition device, the front image acquisition device and the rear image acquisition device, and for conveying plastic bottles without defects away from the plastic bottle defect detection system;
a conveyor control box for controlling the movement of the main conveyor and the deflection wheel.
10. A method of detecting defects in plastic bottles, comprising:
conveying and controlling the plastic bottles;
detecting defects on the side of the plastic bottle;
detecting a defect in the front face of the plastic bottle;
detecting a defect in the back of the plastic bottle;
storing detection data of the side, the front and the back of the plastic bottle in a cloud end;
wherein said conveying and controlling said plastic bottles comprises:
conveying said plastic bottles through side, front and rear inspection;
sorting out plastic bottles with defects;
wherein the detecting of defects in the side of the plastic bottle comprises:
collecting a side image of the plastic bottle;
identifying and analyzing the side images;
wherein said detecting a defect in the front face of said plastic bottle comprises:
collecting a front image of the plastic bottle;
identifying and analyzing the front image;
wherein said detecting a defect in the back of said plastic bottle comprises:
collecting a rear image of the plastic bottle;
identifying and analyzing the rear image.