CN113406092B - Digital production detection system, method, device, equipment and storage medium - Google Patents

Digital production detection system, method, device, equipment and storage medium

Info

Publication number
CN113406092B
Authority
CN
China
Prior art keywords
image
detected
feature
map
product
Prior art date
Legal status
Active
Application number
CN202110947050.0A
Other languages
Chinese (zh)
Other versions
CN113406092A (en)
Inventor
方无迪
陈怡姿
何梁博
孙凯
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202110947050.0A
Publication of CN113406092A
Application granted
Publication of CN113406092B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34Sorting according to other particular properties
    • B07C5/342Sorting according to other particular properties according to optical properties, e.g. colour
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • G01N21/95607Inspecting patterns on the surface of objects using a comparative method
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

Embodiments of the present application provide a digital production detection system, method, apparatus, device and storage medium. In these embodiments, image blocks are cut out based on image features and flaw detection is performed at image-block granularity, so automatic flaw detection of products produced by a digital factory can be achieved and flaw detection efficiency improved; because flaw detection is performed only on image blocks containing suspected flaw sub-regions, flaws can be located more accurately. Furthermore, the production efficiency of the digital factory can be improved.

Description

Digital production detection system, method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent manufacturing technologies, and in particular, to a digital production detection system, method, apparatus, device, and storage medium.
Background
With the continuous development of technologies such as cloud computing, the Internet of Things and artificial intelligence, more and more digital factories are emerging. A digital factory can digitize the entire production chain of a product, from raw-material purchasing and product design to production and processing; it can also manufacture in a flexible manufacturing mode. In the flexible manufacturing mode, the production system adapts quickly to changes in market demand by improving its structure, personnel organization, operation mode, marketing and so on, while eliminating redundant and useless losses so that the enterprise can obtain greater benefits. In this mode, a digital factory takes consumer demand as its core, restructures the traditional production-then-marketing mode, and realizes on-demand intelligent manufacturing.
At present, to ensure product quality and improve customer satisfaction, the finished or semi-finished products of some production processes must undergo quality detection during digital production. In the prior art, in most production industries, and especially in the printing process of the garment industry, it is common to manually check whether the cut pieces printed with a print have defects such as color dragging, color splashing, blurred patterns, stop marks and local embrittlement. However, manual detection is not only inefficient but also prone to false detection or missed detection.
Disclosure of Invention
Aspects of the present disclosure provide a system, method, apparatus, device and storage medium for digital production inspection, which are used to improve defect inspection efficiency.
An embodiment of the present application provides a digital production detection system, including: a central scheduling node, production equipment deployed in a production environment and responsible for a target process, and a quality detection system providing quality detection services for the production equipment, the quality detection system including an image acquisition device and an edge computing device. The production equipment is configured to produce a corresponding product to be detected according to the target process; the product to be detected is conveyed into the working area of the image acquisition device. The image acquisition device is configured to acquire an image of the product to be detected located in its working area and send the acquired image to the edge computing device as an image to be detected, the image to be detected containing a target region on the product to be detected. The edge computing device is configured to identify a suspected flaw sub-region in the target region according to feature information in the image to be detected and a template image; cut the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected flaw sub-region; perform flaw detection on the image block to be detected according to feature information in the image block to be detected and the template image block to obtain a flaw detection result; and report the flaw detection result to the central scheduling node. The template image is an image containing the target region on a reference product. The central scheduling node is configured to generate a sorting instruction according to the flaw detection result and send it to a manipulator in the production environment so as to control the manipulator to sort the product to be detected, the flaw detection result including whether a flaw exists in the product to be detected and, when a flaw exists, the flaw category.
An embodiment of the present application further provides a digital production detection method, including: acquiring an image to be detected, the image to be detected containing a target region on a product to be detected, the product to be detected being a product produced by production equipment according to a target process; identifying a suspected flaw sub-region in the target region according to feature information in the image to be detected and a template image, the template image being an image containing the target region on a reference product; cutting the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected flaw sub-region; and performing flaw detection on the image block to be detected according to feature information in the image block to be detected and the template image block to obtain a flaw detection result, the flaw detection result including whether a flaw exists in the product to be detected and, when a flaw exists, the flaw category.
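As a rough illustration of these steps, the sketch below identifies suspected flaw sub-regions by per-pixel absolute difference against the template image and then cuts out matching block pairs for block-level detection. The difference-threshold features, connected-component grouping, function names and all parameters are illustrative assumptions; the patent does not fix a specific feature-comparison method.

```python
import numpy as np

def find_suspect_regions(test_img, template_img, diff_thresh=30, min_area=4):
    """Locate suspected flaw sub-regions by comparing the image under
    test with the template image (simple absolute-difference features)."""
    diff = np.abs(test_img.astype(np.int16) - template_img.astype(np.int16))
    mask = diff > diff_thresh
    # 4-connected flood fill groups differing pixels into sub-regions
    visited = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                stack, pixels = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:  # ignore isolated noise pixels
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    regions.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return regions  # list of (y0, x0, y1, x1) bounding boxes

def cut_blocks(test_img, template_img, regions, margin=2):
    """Cut a matching (test block, template block) pair around each
    suspected sub-region, with a small context margin."""
    h, w = test_img.shape
    pairs = []
    for y0, x0, y1, x1 in regions:
        y0, x0 = max(0, y0 - margin), max(0, x0 - margin)
        y1, x1 = min(h, y1 + margin), min(w, x1 + margin)
        pairs.append((test_img[y0:y1, x0:x1], template_img[y0:y1, x0:x1]))
    return pairs
```

Each block pair can then be passed to a flaw classifier; restricting classification to these small blocks is what the embodiments describe as detection at image-block granularity.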
An embodiment of the present application further provides a digital production detection apparatus, including: an acquisition module configured to acquire an image to be detected, the image to be detected containing a target region on a product to be detected, the product to be detected being produced by production equipment according to a target process; a processing module configured to identify a suspected flaw sub-region in the target region according to feature information in the image to be detected and a template image, the template image being an image containing the target region on a reference product; and a cutting module configured to cut the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected flaw sub-region. The processing module is further configured to perform flaw detection on the image block to be detected according to feature information in the image block to be detected and the template image block to obtain a flaw detection result, the flaw detection result including whether a flaw exists in the product to be detected and, when a flaw exists, the flaw category.
An embodiment of the present application further provides a quality detection device, including a memory and a processor, wherein the memory is configured to store a computer program, and the processor, coupled to the memory, is configured to execute the computer program to perform the steps of the methods provided by the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps in the method provided by the embodiments of the present application.
In the embodiments of the present application, image blocks are cut out based on image features and flaw detection is performed at image-block granularity, so automatic flaw detection of products produced by a digital factory can be achieved and flaw detection efficiency improved; because flaw detection is performed only on image blocks containing suspected flaw sub-regions, flaws can be located more accurately. Furthermore, the production efficiency of the digital factory can be improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic structural diagram of a digital production detection system according to an exemplary embodiment of the present application;
FIG. 2 is a process diagram of a digital production detection method in a practical application according to an embodiment of the present application;
FIG. 3a is a schematic flow chart of a digital production detection method according to an exemplary embodiment of the present application;
FIG. 3b is a schematic flow chart of an intelligent sorting method according to an exemplary embodiment of the present application;
FIG. 4 is a schematic structural diagram of a digital production detection apparatus according to an exemplary embodiment of the present application;
FIG. 5 is a schematic structural diagram of a quality detection device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, manual detection is inefficient and prone to false or missed detection. Therefore, an embodiment of the present application provides a digital production detection system that includes a central scheduling node, production equipment deployed in a production environment and responsible for a target process, and a quality detection system that provides quality detection services for the production equipment, the quality detection system including an image acquisition device and an edge computing device. The production equipment produces a corresponding product to be detected according to the target process, and the product is conveyed to the working area of the image acquisition device; the image acquisition device acquires an image of the product and sends it to the edge computing device as the image to be detected. The edge computing device performs flaw detection on the image to be detected with the help of a template image containing the target region on a reference product: based on image-feature recognition, an image block to be detected and a template image block corresponding to a suspected flaw sub-region are cut out of the two images; flaw detection is then performed at image-block granularity, and the flaw detection result is reported to the central scheduling node. The central scheduling node controls a manipulator in the production environment to sort the product to be detected according to the flaw detection result.
Therefore, in the embodiments of the present application, by cutting out image blocks based on features and performing flaw detection at image-block granularity, automatic flaw detection of products produced by a digital factory can be achieved and detection efficiency improved. Because flaw detection is performed only on image blocks containing suspected flaw sub-regions, the background information in each block is reduced and the suspected flaw sub-region is relatively enlarged, so micro flaws in a product can be located more effectively, the accuracy of the flaw detection result is improved, and the probability of false or missed detection is reduced. Furthermore, the production efficiency of the digital factory can be improved.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a digital production detection system according to an exemplary embodiment of the present application. As shown in fig. 1, the system includes: production equipment 11, a quality detection system 12 and a central scheduling node 13.
Production equipment 11 is equipment deployed in a production environment to produce products. The production environment is a product production site, such as a production plant. Generally, a production plant is deployed with a plurality of production lines, on each of which a plurality of workstations are deployed, as shown in fig. 1; each workstation is equipped with production equipment 11 and production personnel. The production equipment 11 may take different forms depending on the production process it is responsible for, and similarly its products may be semi-finished or finished. A semi-finished product is one that still needs to be processed by the remaining production processes of the whole flow, whereas a finished product has been processed by all of them. For example, a garment usually goes from fabric to finished garment through several production processes such as cloth inspection, cutting, printing, sewing and ironing; accordingly, the production equipment 11 includes a cloth inspecting machine responsible for the cloth inspection process, a cutting machine for cutting, a printing machine for printing, a sewing machine for sewing and an ironing machine for ironing. Taking ironing as the last process of the whole flow, the clothes ironed by the ironing machine are ready-made garments, i.e. finished products, while the products output by the cloth inspecting machine, cutting machine, printing machine and sewing machine are semi-finished products.
In the embodiments of the present application, the production equipment 11 produces corresponding products according to its associated target process. For ease of understanding and distinction, a product produced by the production equipment 11 that has not yet undergone flaw detection is referred to as a product to be detected. Typically, the production equipment 11 produces products to be detected in batches. The production equipment 11 may convey each product to the quality detection system 12 for flaw detection in sequence as it is produced; of course, it may also convey a whole batch of products to the quality detection system 12 for flaw detection after the batch has been produced.
Further optionally, when receiving a batch production task, the production equipment 11 may first trial-produce the product required by the task, and a quality inspector performs flaw detection on the trial-produced products. If the trial-produced products pass flaw detection, the production equipment 11 mass-produces the remaining products required by the task and conveys them to the quality detection system 12 for flaw detection. If the trial-produced products fail flaw detection, the production equipment 11 suspends batch production of the remaining products and resumes it only after a new trial production succeeds. Trial production can expose quality problems of the products produced by the production equipment 11 in advance, so that the equipment or the production line can be debugged and maintained in time, ensuring product quality during batch production.
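The trial-production gate described above amounts to a simple decision rule, sketched below under the assumption that each trial product yields a pass/fail flaw result; the function name and return values are illustrative, not from the patent.

```python
def batch_production_gate(trial_results):
    """Decide whether to start mass production based on flaw detection
    of trial-produced products: proceed only if every trial passes."""
    if all(r == "pass" for r in trial_results):
        return "start_mass_production"
    # Otherwise the batch is suspended; the equipment or line is
    # debugged, and mass production resumes after a new trial succeeds.
    return "suspend_and_retry_trial"
```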
After the production equipment 11 produces a product to be detected, the product is conveyed to the quality detection system 12, which provides the quality detection service, for flaw detection. Further optionally, as shown in fig. 1, the quality detection system 12 includes an image acquisition device 121 and an edge computing device 122.
The image acquisition device 121 may interact with the edge computing device 122 via a wired or wireless network. For example, the wired network may use coaxial cable, twisted pair, optical fiber and the like, and the wireless network may be a 2G, 3G, 4G or 5G network, a Wireless Fidelity (Wi-Fi) network, and the like. The present application does not limit the specific type or form of the interaction, as long as the image acquisition device 121 and the edge computing device 122 can interact.
In the embodiments of the present application, the image acquisition device 121 may be any device with an image acquisition function. For example, by the structural characteristics of the sensor, image acquisition devices can be classified into area-scan cameras and line-scan cameras; by picture resolution, into standard-definition and high-definition cameras; and by signal type, into analog and digital cameras.
Considering that a line-scan camera tends to produce uneven imaging illumination, so that the acquired image is bright in the middle and dark at the two sides, brightness adjustment would be needed on the acquired image, and that adjustment may aggravate image noise. Therefore, in the above or following embodiments of the present application, the image acquisition device 121 may be an area-scan camera, whose imaging illumination is relatively even.
Considering that image sharpness affects the accuracy of the flaw detection result, in the above or following embodiments of the present application the image acquisition device 121 may be a 720P high-definition camera with a resolution of 1280 × 720 or a 960P high-definition camera with a resolution of 1280 × 960, but is not limited thereto.
After a product to be detected produced by the production equipment 11 is conveyed into the working area of the image acquisition device 121 (i.e. the image acquisition area in fig. 1), the image acquisition device 121 acquires an image of the product to be detected located in that working area.
Notably, the image to be detected contains a target region on the product to be detected. The target region is the region of the product that is related to the production process producing it, i.e. the region that must be checked for flaws; it is the region of interest in flaw detection. To some extent, the target region can be understood as the product region processed by the corresponding production process. Since production processes differ, the target region on the product to be detected differs as well and is defined according to the actual application requirements. For example, the cut pieces produced by the cutting machine are, in their entirety, the target region related to the cutting process, and those cut pieces are the products to be detected of the cutting process. For another example, after a cut piece is printed by the printing machine it becomes a printed cut piece; the printing region of the printed cut piece can be the target region related to the printing process, and the printed cut piece is the product to be detected of the printing process. For another example, after the printed cut piece is processed by the sewing machine it becomes a sewn product; the seam region of the sewn product can be the target region related to the sewing process, and the sewn product is the product to be detected of the sewing process.
In the embodiments of the present application, for ease of understanding and distinction, the image of the product to be detected acquired by the image acquisition device 121 is referred to as the first original image containing the product to be detected. The image acquisition device 121 may send the first original image directly to the edge computing device 122 as the image to be detected; alternatively, it may crop the first original image, removing the image regions other than the target region, and send the cropped image to the edge computing device 122 as the image to be detected. Thus, in an alternative embodiment, one implementation of acquiring the image to be detected is: acquire a first original image containing the product to be detected, and crop the first original image according to the product position of the target region recorded in the production configuration file used by the target process to obtain an image to be detected that contains the target region.
It should be noted that the production configuration file used by the target process records configuration information related to the target process. This configuration information includes the product position of the target region and may also include, but is not limited to, the size, pattern and color of the target region. The product position of the target region is the position of the target region in the product produced by the production equipment 11 according to the target process. For example, for a printing process, the position of the printing region in a printed cut piece is the product position of the printing region.
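Cropping the first original image by the product position recorded in the production configuration file can be sketched as follows; the configuration keys (`x`, `y`, `width`, `height`) are assumed for illustration, since the patent does not specify a file format.

```python
import numpy as np

def crop_target_region(original, config_entry):
    """Crop the target region out of the original image using the
    product position recorded in the production configuration file.
    config_entry holds assumed keys: x, y (top-left), width, height."""
    x, y = config_entry["x"], config_entry["y"]
    w, h = config_entry["width"], config_entry["height"]
    return original[y:y + h, x:x + w]
```

The same routine applies to both the first original image (image to be detected) and the second original image (template), since both are cropped by the same recorded product position.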
It should be understood that the image to be detected obtained by cropping the first original image by the product position of the target region is smaller than the first original image, but the target region in it is the same size as in the first original image. Notably, because some redundant background regions are cut away, the smaller image to be detected displays the target region on the product more prominently, which helps improve the accuracy of flaw detection on that region. In addition, cropping the original image has little effect on the sharpness of the cropped image.
In the embodiments of the present application, the image acquisition device 121 also needs to acquire images of a reference product produced by the target process, in addition to the product to be detected. A reference product is a product that has passed quality detection; it can be selected by a quality inspector, according to a quality-detection standard, from the products trial-produced by the production equipment 11. The standard is defined according to the actual application requirements; for example, it may require that the product be flawless, or that it have only a small number of negligible flaws. That is, a reference product is a product without flaws, or with a small number of flaws that are negligible.
For ease of understanding and distinction, the image of the reference product acquired by the image acquisition device 121 is referred to as the second original image containing the reference product. After the second original image is acquired, it needs to be cropped to obtain a template image containing the target region on the reference product. Notably, because some redundant background regions are cut away, the template image, which is smaller than the second original image, displays the target region on the reference product more prominently; when this target region serves as the reference for flaw detection on the product to be detected, the accuracy of the detection is improved. The template image can be obtained in, but is not limited to, the following ways:
mode 1: one implementation of the image capturing device 121 to obtain the template image is: selecting a reference product, collecting a second original image containing the reference product, and cutting the second original image according to the product position of the target area recorded in the production configuration file used in the target process to obtain a template image containing the target area.
Mode 2: in order to obtain a high quality template image, one implementation of the image capturing device 121 to obtain the template image is: collecting a plurality of second original images containing a reference product, and respectively cutting the plurality of second original images according to the product position of a target area recorded in a production configuration file used by a target process to obtain a plurality of candidate images containing the target area; and generating a template image according to the plurality of candidate images.
For example, a quality inspector can select a plurality of standard products with qualified quality from the trial-produced products, and the image acquisition equipment 121 performs image acquisition on each of them to obtain a plurality of second original images containing the standard products; each second original image is cut based on the product position of the target area, and the cut image containing the target area is taken as a candidate image; the multiple candidate images are then fused to obtain a high-quality template image.
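The fusion step can be sketched as follows; pixel-wise averaging of the candidate crops is assumed here, since the patent does not fix a specific fusion method, and all shapes and values are illustrative:

```python
import numpy as np

def make_template(candidates):
    """Fuse candidate crops into one template image.

    candidates: list of equally sized H x W (or H x W x C) arrays.
    Averaging suppresses sensor noise that differs between shots while
    keeping the structure shared by all qualified reference products.
    """
    stack = np.stack([c.astype(np.float64) for c in candidates], axis=0)
    return stack.mean(axis=0)

# Three noisy crops of the same (synthetic) target area.
rng = np.random.default_rng(0)
base = np.full((4, 4), 100.0)
cands = [base + rng.normal(0, 1, size=base.shape) for _ in range(3)]
template = make_template(cands)
```

The averaged template is steadier than any single crop, which is why Mode 2 above collects several second original images instead of one.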
In the embodiment of the present application, the edge computing device 122 refers to a device capable of performing edge computing; examples include, but are not limited to, a Programmable Logic Controller (PLC), a Programmable Automation Controller (PAC), and an edge gateway. Edge computing places an open platform integrating network, computing, storage, and application capabilities on the side close to the object or data source, so that intelligent analysis and processing services are provided nearby.
In the embodiment of the present application, after receiving the image to be detected sent by the image acquisition device 121, the edge computing device 122 executes a defect detection task, and reports a defect detection result to the central scheduling node 13, as shown in fig. 1. Further details regarding the performance of the flaw detection task by the edge computing device 122 are provided below. The central scheduling node 13, the edge computing devices 122 and the production devices 11 may form a cloud-edge-end cooperative network system, and the central scheduling node 13 performs global scheduling and management on the production devices 11, personnel, production lines and other resources in the entire production environment, and fully utilizes the computing power of the edge computing devices 122 to meet the real-time requirement of defect detection.
The defect detection result output by the edge computing device 122 may include, but is not limited to, whether a defect exists in the product to be detected and a defect type when the defect exists. It should be noted that the defect category can be flexibly set according to the actual application requirement. For example, the defect categories include defective, non-defective, repairable defects. Also for example, the defect categories include repairable defects, irreparable defects, and no defects. Also for example, the defect categories include non-defects, print slip defects, splash defects, pattern blur defects, parking mark defects, and partial embrittlement defects, among others.
The central scheduling node 13 is deployed in the cloud, for example in a central cloud or a traditional data center, and may take the form of a cloud server, a server array, or a virtual machine. In addition, the central scheduling node 13 may interact with the edge computing device 122, the production device 11, and the manipulator 14, respectively, via a wired or wireless network.
Further optionally, as shown in fig. 1, in this embodiment of the application, in order to implement automatic and fast sorting, the central scheduling node 13 may further generate a sorting instruction according to the flaw detection result, and send the sorting instruction to the manipulator 14 in the production environment, so as to control the manipulator 14 to perform a sorting operation on the product to be detected.
Specifically, the manipulator 14 may sort the products to be inspected into different areas according to the defect detection result indicated by the sorting instruction. For example, a good area and a bad area are shown in fig. 1. Further, the defective area may be further subdivided according to the defect classification. The manipulator 14 sorts the products to be detected with qualified quality detection to good product areas, and sorts the products to be detected with unqualified quality detection to corresponding defective product areas.
Further optionally, as shown in fig. 1, in the embodiment of the present application, the production device 11 is provided with a display screen, and the central scheduling node 13 is further configured to send the flaw detection result to the production device 11; the production device 11 is further configured to display the flaw detection result on the display screen.
In practical applications, production personnel on the production equipment 11 side may check the defect detection result shown on the display screen of the terminal equipment located on the workstation in fig. 1, and determine based on it whether to improve the production operation or to look for the cause of the defect, for example, whether the parameter setting of the production equipment 11 is unreasonable or the production equipment 11 has a fault. If the defect detection result shows that the defects are serious, production personnel can improve the production operation in time, or adjust parameters of the production equipment 11 or troubleshoot faults in time, so that the product quality of the whole production line is improved.
In the present embodiment, the edge computing device 122 is primarily responsible for, but not limited to, flaw detection tasks. One implementation of the flaw detection task performed by the edge computing device 122 is: identifying a suspected defect sub-area in the target area according to the feature information in the image to be detected and the template image; cutting the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect sub-area; and then performing flaw detection at the granularity of image blocks, specifically: performing flaw detection on the image block to be detected according to the feature information in the image block to be detected and the template image block, to obtain a flaw detection result.
In the embodiment of the application, the feature information of the image to be detected and the template image can be flexibly extracted according to the actual application requirement. For example, the characteristic information includes, but is not limited to: luminance features, edge features, texture features, color features, histogram features, and principal component features, among others. In addition to these features, a neural network model may also be used to extract higher dimensional feature information from the image to be detected and the template image.
It should be understood that, for the same image area in both the image to be detected and the template image, the larger the difference between the feature information of that area in the two images, the higher the probability that the area in the image to be detected is a suspected defect sub-area; conversely, the smaller the difference, the lower that probability.
Thus, in the embodiment of the present application, the edge calculation device 122 may first calculate, for the same image region between the image to be detected and the template image, an image similarity between the image region in the image to be detected and the image region in the template image based on the feature information of the image region in the image to be detected and the feature information of the image region in the template image; and then, selecting an image area with the image similarity smaller than a preset similarity threshold value to identify the suspected defect sub-area existing in the target area. The preset similarity threshold is set according to practical application requirements, the image area with the image similarity smaller than the preset similarity threshold is identified as a suspected defect sub-area existing in the target area, and the image area with the image similarity larger than or equal to the preset similarity threshold is not identified as a suspected defect sub-area existing in the target area.
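A minimal sketch of this thresholding step, with illustrative region identifiers, similarity scores, and threshold value (none of which come from the patent):

```python
def suspected_regions(similarities, threshold):
    """similarities: dict mapping region_id -> similarity in [0, 1].

    Regions whose similarity to the template falls below the preset
    threshold are flagged as suspected defect sub-areas; regions at or
    above the threshold are not flagged.
    """
    return sorted(r for r, s in similarities.items() if s < threshold)

scores = {"r1": 0.97, "r2": 0.42, "r3": 0.88, "r4": 0.15}
flagged = suspected_regions(scores, threshold=0.8)
```

With the illustrative threshold of 0.8, only the two dissimilar regions are carried forward to the next detection stage.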
In the above or below embodiments of the present application, the distance value between the images may be used to represent the similarity between the images, and the suspected defect sub-area existing in the target area may be more accurately identified based on the distance value between the images. Then, one implementation process for identifying the sub-regions suspected of being defective in the target region according to the feature information in the image to be detected and the template image is as follows: respectively extracting features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to the feature values of the plurality of channels in the first feature map and the second feature map, wherein the distance map comprises the distance values between the same-position points in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution condition of the distance values in the distance map.
The embodiment of the application does not limit the way of respectively extracting the features of the image to be detected and the template image. As a possible implementation manner, a trained feature extraction network may be used to perform feature extraction on the image to be detected and the template image respectively. Optionally, the model structure of the feature extraction network may include, but is not limited to: convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long-Short Term Memory Networks (LSTM). It should be understood that the feature extraction network includes at least an input layer, an intermediate layer, and an output layer. Several ways of respectively extracting features of the image to be detected and the template image by using the feature extraction network are described below.
Mode 1: carrying out feature extraction on an image to be detected by using a feature extraction network, and taking high-level features output by an output layer of the feature extraction network as a first feature map; and performing feature extraction on the template image by using a feature extraction network, and taking the high-level features output by an output layer of the feature extraction network as a second feature map.
Mode 2: performing feature extraction on an image to be detected by using a feature extraction network, and performing feature fusion on a middle-layer feature output by a middle layer of the feature extraction network and a high-layer feature output by an output layer to obtain a first feature map; and performing feature extraction on the template image by using a feature extraction network, and performing feature fusion on the middle-layer features output by the middle layer of the feature extraction network and the high-layer features output by the output layer to obtain a second feature map.
It is worth noting that in Mode 2, the middle-level and high-level features are fused during feature extraction, so the extracted first feature map or second feature map carries more feature information, which can improve defect detection accuracy.
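A hedged sketch of the Mode 2 fusion, assuming nearest-neighbour upsampling of the smaller high-level map followed by channel concatenation (one common fusion choice; the patent does not prescribe a specific one, and all shapes here are illustrative):

```python
import numpy as np

def fuse_features(mid, high):
    """Fuse a middle-layer map with a smaller output-layer map.

    mid:  H x W x C1 feature map from an intermediate layer.
    high: (H/s) x (W/s) x C2 feature map from the output layer.
    """
    scale = mid.shape[0] // high.shape[0]
    # Nearest-neighbour upsample of the high-level map to the
    # mid-level spatial size, done by repeating each value in a block.
    high_up = np.kron(high, np.ones((scale, scale, 1)))
    # Concatenate along the channel axis -> H x W x (C1 + C2).
    return np.concatenate([mid, high_up], axis=2)

mid = np.ones((8, 8, 16))
high = np.ones((4, 4, 32))
fused = fuse_features(mid, high)
```

The fused map keeps the spatial resolution of the middle layer while carrying the semantic channels of the output layer, which is the extra information Mode 2 exploits.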
In the above or following embodiments of the present application, one implementation of generating the distance map from the feature values on the plurality of channels in the first feature map and the second feature map is: for any same position point in the first feature map and the second feature map, calculating the sub-distance of the position point on each channel according to the feature values of the position point on that channel; performing a numerical calculation over the sub-distances of the position point on the plurality of channels to obtain the distance value corresponding to the position point; and generating the distance map from the distance values corresponding to the same position points in the first feature map and the second feature map.
When calculating the sub-distance of any position point on each channel, a distance calculation algorithm is applied to the feature value of the position point on that channel in the first feature map and the feature value of the same position point on that channel in the second feature map. The distance calculation algorithm is not limited; examples include, but are not limited to: the Euclidean Distance algorithm, the Cosine similarity algorithm, the Jaccard Distance algorithm, and the Mahalanobis Distance algorithm.
Because the first feature map and the second feature map include a plurality of channels, once the sub-distance of a position point on each channel is obtained, a numerical calculation over the sub-distances on the plurality of channels is still required to obtain the distance value corresponding to the position point. The numerical calculation may be averaging the sub-distances over the channels, or a weighted summation of them, which is not limited in this embodiment of the present application.
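The per-channel sub-distance and channel reduction can be sketched as follows, assuming squared differences as sub-distances and averaging over channels (one of the numerical options named above; the shapes and values are illustrative):

```python
import numpy as np

def distance_map(f1, f2):
    """Reduce two H x W x C feature maps to an H x W distance map.

    f1, f2: feature maps of identical shape.
    """
    sub = (f1 - f2) ** 2      # sub-distance of each point on each channel
    return sub.mean(axis=2)   # average over C channels -> one value per point

f1 = np.zeros((2, 2, 3))
f2 = np.zeros((2, 2, 3))
f2[0, 0, :] = 2.0             # one position point differs on every channel
dmap = distance_map(f1, f2)
```

Only the differing position point receives a non-zero distance value, which is exactly the signal the later distribution statistics look for.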
In the embodiment of the present application, the first feature map and the second feature map have the same size. Let the dimension of the two feature maps be H × W × C, where H is the height, W is the width, and C is the number of channels. According to the principle of generating the distance map, the size of the distance map is H × W × 1.
It should be understood that the distance values in the distance map characterize the similarity between the same position points in the image to be detected and the template image. The smaller the distance value in the distance map, the more similar the image features of the corresponding position points in the image to be detected and the template image; the larger the distance value, the more dissimilar they are. Based on this, after obtaining the distance map, distribution statistics may be performed on the distance values, abnormal and normal distance values may be identified based on the distribution, and the image areas corresponding to abnormal distance values identify the suspected defect sub-areas existing in the target area.
The distribution statistical method is not limited in the embodiment of the present application, and for example, the exponential distribution of the distance values in the distance map may be counted. As another example, a Poisson distribution of distance values in a distance map may be counted. As another example, a Gaussian distribution of distance values in the distance map may be counted.
In practical application, the numerical value interval of the abnormal distance value and the numerical value interval of the normal distance value can be flexibly set based on the distribution statistical result. It can be understood that the value interval of the abnormal distance value and the value interval of the normal distance value which are flexibly set can better adapt to different production scenes, that is, different production scenes are set with different value intervals of the abnormal distance value and different value intervals of the normal distance value.
Taking the Gaussian distribution as an example, the mean μ and standard deviation δ of the distance values in the distance map are computed, and the interval of normal distance values is set to [μ − 3δ, μ + 3δ]; the intervals of abnormal distance values are (−∞, μ − 3δ) and (μ + 3δ, +∞). Thus, for any distance value in the distance map, if it falls within [μ − 3δ, μ + 3δ], it is a normal distance value; otherwise it is an abnormal distance value.
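The 3δ rule above can be sketched as follows; the synthetic distance values and the injected outlier are illustrative only:

```python
import numpy as np

def abnormal_mask(dmap):
    """Flag distance values outside [mu - 3*delta, mu + 3*delta].

    mu and delta are the mean and standard deviation of the distance
    values in the distance map, as in the Gaussian example above.
    """
    mu = dmap.mean()
    delta = dmap.std()
    return (dmap < mu - 3 * delta) | (dmap > mu + 3 * delta)

rng = np.random.default_rng(1)
dmap = rng.normal(10.0, 0.1, size=(16, 16))
dmap[3, 7] = 50.0              # inject one clearly abnormal distance value
mask = abnormal_mask(dmap)
```

The single injected outlier is the only flagged point; its image area would be reported as a suspected defect sub-area.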
In the foregoing or following embodiments of the present application, in order to accurately identify the sub-regions suspected of being defective in the target region, according to the distribution of the distance values in the distance map, an implementation process of identifying the sub-regions suspected of being defective in the target region is as follows: generating a middle mask map with the same size as the distance map according to the distribution condition of the distance values in the distance map, wherein each position point in the middle mask map represents that the distance value on the corresponding position point in the distance map is a normal distance value or an abnormal distance value; the intermediate mask image is up-sampled to obtain a target mask image with the same size as the template image or the image to be detected, the target mask image comprises at least one connected region, and the connected region is formed by adjacent position points representing abnormal distance values; and taking the sub-area corresponding to the connected area in the image to be detected as the suspected defect sub-area existing in the target area.
The embodiment of the present application does not limit the manner of generating the intermediate mask map having the same size as the distance map. For example, one implementation of generating the intermediate mask map according to the distribution of the distance values in the distance map is: performing distribution statistics on the distance values to distinguish normal distance values from abnormal distance values; creating an initial mask map, with no pixel values assigned yet, whose size matches the distance map, so that each position point in the initial mask map corresponds to the distance value of the corresponding position point in the distance map; then, for each position point in the initial mask map, if the corresponding distance value is abnormal, setting the pixel value of the position point to a first pixel value, and if it is normal, setting it to a second pixel value. For example, the first pixel value is 255 and the second pixel value is 0, or the first pixel value is 0 and the second pixel value is 255. Of course, the first and second pixel values may take other distinct values, which is not limited.
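A minimal sketch of building the intermediate mask map from a distance map and a normal-value interval, using the 255/0 pixel values of the example above (the interval bounds are illustrative):

```python
import numpy as np

def intermediate_mask(dmap, lo, hi, first=255, second=0):
    """Build a mask map the same size as the distance map.

    lo, hi: bounds of the normal-distance-value interval; position
    points whose distance value falls outside it get the first pixel
    value, the rest get the second.
    """
    mask = np.full(dmap.shape, second, dtype=np.uint8)
    mask[(dmap < lo) | (dmap > hi)] = first
    return mask

dmap = np.array([[1.0, 1.2],
                 [9.0, 1.1]])    # 9.0 lies outside the normal interval
mask = intermediate_mask(dmap, lo=0.0, hi=3.0)
```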
In the embodiment of the present application, the intermediate mask map may be regarded as a mask map of the distance map. Because feature extraction reduces the size of the first feature map relative to the image to be detected and the size of the second feature map relative to the template image, the distance map and the intermediate mask map are also smaller than the image to be detected. Therefore, in order to obtain a mask map that accurately reflects the image to be detected, the intermediate mask map needs to be upsampled to the same size as the template image or the image to be detected; for ease of understanding and distinction, the upsampled mask map is referred to as the target mask map. After the target mask map is obtained, at least one connected region formed by adjacent position points representing abnormal distance values can be determined by analyzing the pixel values of the position points in the target mask map. Based on the correspondence between the image to be detected and the target mask map, the suspected defect sub-areas in the target area of the image to be detected can be accurately located.
For example, in the case that the first pixel value is 255 and the second pixel value is 0, after the target mask map is generated, a connected region with pixel value 255 is determined, a circumscribed rectangle is created for the connected region, and the image area delimited by the four vertex coordinates of the circumscribed rectangle is a suspected defect sub-area existing in the target area of the image to be detected. It should be understood that the target mask map may contain one or more connected regions with pixel value 255, that is, there may be one or more suspected defect sub-areas in the target area of the image to be detected.
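The upsampling and connected-region steps can be sketched as follows; a simple 4-connected flood fill stands in here for a library routine such as OpenCV's connectedComponents, which the patent does not mandate, and the mask contents are illustrative:

```python
import numpy as np

def upsample(mask, scale):
    """Nearest-neighbour upsample: repeat each pixel in a scale x scale block."""
    return np.kron(mask, np.ones((scale, scale), dtype=mask.dtype))

def bounding_rects(mask):
    """Return (top, left, bottom, right) for each connected 255-region."""
    seen = np.zeros(mask.shape, dtype=bool)
    rects = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 255 and not seen[i, j]:
                stack, ys, xs = [(i, j)], [], []
                seen[i, j] = True
                while stack:                      # 4-connected flood fill
                    y, x = stack.pop()
                    ys.append(y); xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny, nx] == 255 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Circumscribed rectangle of the connected region.
                rects.append((min(ys), min(xs), max(ys), max(xs)))
    return rects

inter = np.zeros((4, 4), dtype=np.uint8)
inter[1, 1] = 255                    # one abnormal position point
target = upsample(inter, scale=2)    # 8 x 8 target mask map
rects = bounding_rects(target)
```

Each returned rectangle marks one suspected defect sub-area and gives the coordinates used to cut the image block to be detected.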
In the above or below embodiments of the present application, in order to improve the identification accuracy of the suspected defect sub-regions, a neural network model may be deployed in the edge computing device 122 to identify suspected defect sub-regions existing in the target region of the image to be detected, and for ease of understanding and distinguishing, the neural network model is referred to as a first neural network model. The feature extraction layer in the first neural network model may be the feature extraction network mentioned in the foregoing. Alternatively, the model structure of the first neural network model may include, but is not limited to: convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long-Short Term Memory Networks (LSTM).
Alternatively, in training the first neural network model, first, a large number of sample image pairs each including a sample detection image and a sample template image may be prepared. Then, labeling the sample image pair, wherein the labeling result comprises whether a suspected defect sub-area exists and the position coordinates of the suspected defect sub-area when the suspected defect sub-area exists. And finally, training based on a large number of sample image pairs and corresponding labeling results to obtain a first neural network model.
In the above or following embodiments of the present application, after the suspected defect sub-region existing in the target region is identified by using the first neural network model, the edge computing device 122 cuts the image to be detected and the template image to obtain the image block to be detected and the template image block corresponding to the suspected defect sub-region. And then, performing the next stage of flaw detection, namely performing flaw detection on the image block to be detected according to the characteristic information in the image block to be detected and the image block of the template, thereby realizing a two-stage flaw detection scheme.
It should be understood that the cut image block to be detected can enlarge the area ratio of the defect in the image block to be detected, that is, the defect in the image block to be detected is more obvious and easier to observe. Therefore, the flaw detection is carried out based on the cut image block to be detected and the template image block, and the flaw detection accuracy can be further improved.
The embodiment of the present application does not limit the method by which flaw detection is performed on the image block to be detected according to the feature information in the image block to be detected and the template image block.
In some optional embodiments of the present application, in order to perform defect detection more accurately, a neural network model may be trained to perform defect detection on an image block to be detected according to feature information in the image block to be detected and a template image block. For ease of understanding, the neural network model for performing flaw detection on the image block to be detected is referred to as a second neural network model. Alternatively, the model structure of the second neural network model may include, but is not limited to: convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long-Short Term Memory Networks (LSTM).
Optionally, the trained second neural network model is a dual-input model, that is, the image block to be detected and the template image block are simultaneously used as input parameters of the model; the second neural network model also fuses the feature maps of the image block to be detected and the template image block, so that it can perform flaw detection with richer information, which improves its recognition accuracy. Optionally, the model structure of the second neural network model includes a feature extraction layer, a feature fusion layer, and an MLP (Multi-Layer Perceptron) classification layer.
Optionally, the trained second neural network model can perform flaw classification and can also locate the position coordinates of a flaw in the image block to be detected. As an example, when the second neural network model locates the position coordinates of a flaw, the circumscribed figure of the flaw in the target mask map is first determined, and the position coordinates of the flaw in the image to be detected are located according to the position coordinates of that figure. Optionally, the circumscribed figure may be a circumscribed rectangle, circle, or square, which is not limited. Taking the circumscribed rectangle as an example, the position coordinates of the flaw in the image to be detected can be located from the position coordinates of the four vertices of the rectangle.
Optionally, when training the second neural network model, first, a large number of sample image block pairs may be prepared, where each sample image block pair includes a sample image block to be detected and a sample template image block. And then, labeling the sample image block pair, wherein the labeling result comprises whether a flaw exists, a flaw type and a flaw position. And finally, training based on a large number of sample image block pairs and corresponding labeling results to obtain a second neural network model. In the model training process, the characteristics of the sample detection image blocks in the sample image block pair and the characteristics of the sample template image blocks can be fused, model training is performed based on the fused characteristics, and finally a neural network model supporting double input, namely a second neural network model, is obtained.
In the embodiment of the present application, the second neural network model may perform flaw detection in, but is not limited to, the following ways:
Mode 1: respectively extracting features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels; comparing and fusing the third feature map and the fourth feature map to obtain a target fusion feature; and performing flaw classification on the target fusion feature with a classification neural network layer to obtain the flaw detection result. The classification neural network layer may be, but is not limited to, an MLP classification layer.
In mode 1, the second neural network model can be regarded as a classification convolutional neural network model, and the model structure of the classification convolutional neural network model comprises a feature extraction layer, a feature fusion layer and a classification neural network layer. Wherein the feature extraction layer may include a plurality of neural network layers. The feature extraction layer respectively extracts feature maps of the image block to be detected and the template image block, inputs the feature maps of the two image blocks into the feature fusion layer for fusion processing, and inputs target fusion features obtained through fusion processing into the classification neural network layer for flaw classification. In addition, the classification neural network layer can also position the position coordinates of the flaws in the image to be detected.
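A toy sketch of the Mode 1 pipeline, in which the feature extraction and classification layers are replaced by fixed random projections (placeholders for trained layers), and fusion concatenates the two feature vectors with their difference — an assumed fusion scheme, since the patent leaves the fusion method open:

```python
import numpy as np

rng = np.random.default_rng(2)
W_feat = rng.normal(size=(64, 8))   # stand-in for the feature extraction layer
W_cls = rng.normal(size=(24, 3))    # stand-in classification layer, 3 classes

def extract(img_block):
    """Placeholder feature extraction: flatten and project an 8 x 8 block."""
    return np.tanh(img_block.reshape(-1) @ W_feat)

def classify(block_to_detect, template_block):
    f_det = extract(block_to_detect)
    f_tpl = extract(template_block)
    # Comparison-and-fusion: both feature vectors plus their difference.
    fused = np.concatenate([f_det, f_tpl, f_det - f_tpl])
    logits = fused @ W_cls
    return int(np.argmax(logits))   # index of the predicted defect class

det = rng.normal(size=(8, 8))
tpl = rng.normal(size=(8, 8))
pred = classify(det, tpl)
```

A trained model would learn W_feat and W_cls from the labelled sample image block pairs described above; the sketch only shows the dual-input data flow.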
Mode 2: respectively extracting the features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels; comparing and fusing the third characteristic diagram and the fourth characteristic diagram to obtain target fusion characteristics; and matching the target fusion features with the existing fusion features in the feature library, and taking the flaw information corresponding to the existing fusion features matched with the target fusion features as a flaw detection result.
In mode 2, the second neural network model can be regarded as a search convolutional neural network model, and the model structure of the search convolutional neural network model includes a feature extraction layer, a feature fusion layer, and a search neural network layer. Wherein the feature extraction layer may include a plurality of neural network layers. The feature extraction layer respectively extracts feature maps of the image block to be detected and the template image block, the feature maps of the two image blocks are input to the feature fusion layer for fusion processing, target fusion features obtained through fusion processing are input to the retrieval neural network layer, the retrieval neural network layer retrieves matched existing fusion features from the feature library, and defect information corresponding to the retrieved existing fusion features is used as a defect detection result.
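The retrieval step of Mode 2 can be sketched as a nearest-neighbour match by cosine similarity against a small feature library; the library entries and defect labels below are illustrative only:

```python
import numpy as np

def retrieve(target_feat, library):
    """Return the defect info of the library entry most similar to target_feat.

    library: list of (feature_vector, defect_info) pairs.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(library, key=lambda entry: cos(target_feat, entry[0]))
    return best[1]

library = [
    (np.array([1.0, 0.0, 0.0]), "no defect"),
    (np.array([0.0, 1.0, 0.0]), "splash defect"),
    (np.array([0.0, 0.0, 1.0]), "pattern blur defect"),
]
result = retrieve(np.array([0.1, 0.9, 0.2]), library)
```

The target fusion feature is closest to the second library entry, so that entry's defect information is returned as the flaw detection result.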
Mode 3: respectively extracting the features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels; comparing and fusing the third characteristic diagram and the fourth characteristic diagram to obtain target fusion characteristics; utilizing a classification neural network layer to classify flaws of the target fusion characteristics; and if the image has the defects, positioning the position information of the defects in the image to be detected by utilizing the segmentation neural network layer.
In mode 3, the second neural network model may be regarded as a segmented convolutional neural network model whose model structure includes a feature extraction layer, a feature fusion layer, a classification neural network layer, and a segmented neural network layer. Wherein the feature extraction layer may include a plurality of neural network layers. The feature extraction layer respectively extracts feature maps of the image block to be detected and the template image block, the feature maps of the two image blocks are input to the feature fusion layer for fusion processing, target fusion features obtained through fusion processing are input to the classification neural network layer for flaw classification, and if flaws exist, the segmentation neural network layer is used for positioning position information of the flaws in the image to be detected.
To facilitate better understanding by those skilled in the art, a digital production detection method in practical application is described below. Fig. 2 is a process diagram of a digital production detection method in practical application according to an embodiment of the present application. Referring to fig. 2, the digital production detection method mainly includes two detection stages. The first detection stage can be regarded as a defect initial detection stage, whose main task is to locate suspected defect sub-regions in the image to be detected and cut them out of the image to be detected. The second detection stage can be regarded as a defect review stage, whose main task is to perform defect detection on each suspected defect sub-region again, so as to identify whether a defect is present in the sub-region, the defect type, defect location information, and so on.
Referring to fig. 2, the defect detection process of the first detection stage includes the following steps: respectively extracting features of the input image to be detected and the template image to obtain a first feature map and a second feature map; generating a distance map between the first feature map and the second feature map, performing distribution statistics on the distance values in the distance map, and finding n abnormal distance values in the distance map; and locating the position coordinates of the suspected defect sub-regions corresponding to the n abnormal distance values, and cutting the image to be detected and the template image based on these position coordinates to obtain the image block to be detected and the template image block. In addition, the position coordinates and sizes of the suspected defect sub-regions corresponding to the n abnormal distance values are recorded, namely the position and size of the cutting area in fig. 2.
Referring to fig. 2, the defect detection process of the second detection stage includes the following steps: extracting features of the image block to be detected and the template image block by using the feature extraction layer, fusing the extracted features by using the feature fusion layer, performing flaw classification on the fused features by using the MLP classification layer, and, if a defect exists, performing original-image coordinate conversion based on the position and size of the cutting area to calculate the position and size of the defect area in the original image (namely the image to be detected).
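The original-image coordinate conversion at the end of this stage is simple offset arithmetic. The sketch below assumes (x, y, w, h) bounding boxes, which is a hypothetical representation of the positions and sizes involved:

```python
def to_original_coords(defect_box, crop_region):
    """Map a defect bounding box found inside a cropped image block back
    to the coordinate system of the original image to be detected.

    defect_box:  (x, y, w, h) in the cropped block's local coordinates.
    crop_region: (cx, cy, cw, ch) position and size of the cutting area
                 in the original image, as recorded in the first stage.
    """
    x, y, w, h = defect_box
    cx, cy, cw, ch = crop_region
    # Clamp so the box cannot extend past the recorded cutting area.
    w = min(w, cw - x)
    h = min(h, ch - y)
    return (cx + x, cy + y, w, h)

# A defect at (5, 8) inside a block cut from (100, 200) maps to (105, 208).
print(to_original_coords((5, 8, 20, 20), (100, 200, 64, 64)))  # (105, 208, 20, 20)
```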
In addition, the flaw detection task of the first detection stage may be implemented by the first neural network model in the foregoing, and the flaw detection task of the second detection stage may be implemented by the second neural network model in the foregoing.
When the digital production detection method provided by the embodiment of the present application is applied to defect detection of the printing process in the clothing production industry, it can effectively solve the problem that printing defect detection is not accurate enough because the size of a printing defect is far smaller than that of the printing area. Specifically, for a printed cut piece to be detected and a reference printed cut piece, firstly, an image of the printed cut piece to be detected and an image of the reference printed cut piece are collected. Secondly, the printed cut piece image to be detected is cut to obtain a cut piece image to be detected including the printing area, and the reference printed cut piece image is cut to obtain a template cut piece image including the printing area. Then, the flaw detection process of the first detection stage is executed on the cut piece image to be detected and the template cut piece image to identify suspected defect sub-areas existing in the printing area. Next, the cut piece image to be detected and the template cut piece image are cut again based on the positions of the suspected defect sub-areas, so as to obtain a block to be detected and a template block each including a suspected defect sub-area. Finally, the flaw detection process of the second detection stage is executed on the block to be detected and the template block. Analysis of this printing defect detection flow shows that, before the first detection stage is executed, the area ratio of the printing area in the cut piece image to be detected and the template cut piece image is enlarged through image cutting; and before the flaw detection process of the second detection stage is executed, the area ratio of the suspected defect sub-area in the block to be detected and the template block is enlarged through image cutting.
It should be understood that the larger the area ratio occupied by the defective region, the more prominent and easily observed the defective region is in the image, which is beneficial to improving the accuracy and efficiency of printing defect detection.
The digital production detection system provided by the above embodiments of the present application can not only perform flaw detection on products output by production equipment, but can also intelligently sort these products according to the flaw detection results. From the perspective of intelligent product sorting, the embodiment of the present application further provides an intelligent sorting system whose structure is the same as that of the digital production detection system shown in fig. 1. Specifically, the intelligent sorting system comprises a production facility 11, a central scheduling node 13, a quality detection system 12, and a manipulator 14.
The production equipment 11 is used for producing corresponding products to be detected according to a target procedure, and the products to be detected are conveyed to the quality detection system 12. The quality detection system 12 is used for acquiring an image of the product to be detected as the image to be detected, performing flaw detection on the product to be detected according to the feature information in the image to be detected and the template image to obtain a flaw detection result, and reporting the flaw detection result to the central scheduling node 13. The template image is an image containing a target area on a reference product; the flaw detection result comprises whether a flaw exists in the product to be detected and, when a flaw exists, the flaw type.
The central scheduling node 13 is used for generating a sorting instruction according to the flaw detection result and sending the sorting instruction to the manipulator 14 in the production environment. The manipulator 14 is used for carrying out a sorting operation on the products to be detected according to the sorting instruction, namely sorting products with different defect categories to different areas. It should be noted that the sorting instruction may differ according to the flaw detection result. In an optional embodiment, the sorting instruction includes a type field. On one hand, the type field corresponds to the flaw detection result: different flaw detection results yield different values of the field. On the other hand, the type field corresponds to the sorting action of the manipulator: different values of the field indicate that the manipulator needs to execute different sorting actions.
For example, if the flaw detection result indicates that the product to be detected is a good product, the type field in the sorting instruction may take the value 01; when executing the sorting instruction, the manipulator 14 conveys the product to the good product area according to the type field value 01, so that the product enters the next production process. If the flaw detection result indicates that the product belongs to an unrepairable defective product, the type field may take the value 02; the manipulator 14 then conveys the product to the defective product area, where it awaits recovery processing. If the flaw detection result indicates that the product belongs to a repairable defective product, the type field may take the value 03; the manipulator 14 then conveys the product to the area to be repaired, where it waits to be repaired. After the product to be detected is repaired, it can continue to the next production procedure.
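The described mapping from type-field values to sorting actions can be sketched as a lookup table. The field values follow the example above, while the action names are hypothetical:

```python
# Hypothetical mapping from the sorting-instruction type field to the
# manipulator action; the field values 01/02/03 follow the example in the text.
SORT_ACTIONS = {
    "01": "move_to_good_product_area",       # good product -> next process
    "02": "move_to_defective_product_area",  # unrepairable -> recovery processing
    "03": "move_to_repair_area",             # repairable   -> wait to be repaired
}

def dispatch(sorting_instruction):
    """Resolve the manipulator action for a sorting instruction dict."""
    return SORT_ACTIONS[sorting_instruction["type"]]

print(dispatch({"type": "03"}))  # move_to_repair_area
```

Keeping the field-to-action mapping in one table means new defect categories only require a new entry, not changes to the manipulator control logic.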
Further optionally, a manipulator 14 is disposed on each production line, and the manipulator 14 on each production line is responsible for performing a sorting operation on products output by each production device on the production line according to the sorting instruction sent by the central scheduling node 13.
Further optionally, the manipulator 14 includes a manipulator frame, a plurality of grabbing components fixed to the manipulator frame, each capable of grabbing a product individually or jointly with the others, and a control cable for switching and controlling the plurality of grabbing components, the plurality of grabbing components being respectively connected to the control cable. The manipulator 14 can grab different products produced by different production facilities by activating different grabbing components. Taking the intelligent clothing manufacturing field as an example, the manipulator 14 can use grabbing component A to grab the cut pieces output by the cutting equipment, grabbing component B to grab the printed cut pieces produced by the printing equipment, and grabbing component C to grab the finished garments produced by the final ironing equipment.
Further optionally, as shown in fig. 1, the quality detection system 12 includes an image capture device 121 and an edge calculation device 122.
The product to be inspected produced by the production apparatus 11 is conveyed into the work area of the image capture device 121. The image capture device 121 is configured to capture an image of the product to be detected located in its work area and send it to the edge computing device as the image to be detected, where the image to be detected includes the target area on the product to be detected.
And the edge computing device 122 is configured to perform flaw detection on the product to be detected according to the feature information in the image to be detected and the template image to obtain a flaw detection result, and report the flaw detection result to the central scheduling node 13.
The specific manner in which the components of the intelligent sorting system in the above-described embodiments perform operations has been described in detail in relation to the embodiments of the digital production inspection system and will not be described in detail herein.
It is worth noting that, before products produced by the production equipment are sorted, the intelligent sorting system provided by the embodiment of the present application uses the quality detection system 12 to perform flaw detection on the products to be sorted, and, through the central scheduling node 13, generates sorting instructions matched with the flaw detection results and sends them to the manipulator, so that the manipulator can perform targeted automatic sorting, which improves sorting efficiency and reduces sorting errors.
Fig. 3a is a schematic flowchart of a digital production detection method according to an exemplary embodiment of the present application. The method may be performed by a quality detection system in the digital production detection system shown in fig. 1. As shown in fig. 3a, the method may comprise the steps of:
301. Collect an image to be detected.
The image to be detected comprises a target area on a product to be detected, and the product to be detected is a product produced by production equipment according to a target process.
302. Identify a suspected defect sub-region in the target area according to the feature information in the image to be detected and the template image, wherein the template image is an image containing the target area on a reference product.
303. Cut the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect sub-region.
304. Perform flaw detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain a flaw detection result.
The defect detection result comprises whether a defect exists in the product to be detected and the defect type when the defect exists.
Further optionally, acquiring the image to be detected includes: collecting a first original image containing a product to be detected; and cutting the first original image according to the product position of the target area recorded in the production configuration file used in the target process to obtain an image to be detected containing the target area.
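The cropping step can be sketched as array slicing. The (x, y, w, h) format for the recorded product position is an assumed representation of the production configuration file entry:

```python
import numpy as np

def crop_target_area(original_image, product_position):
    """Cut the target area out of the first original image.

    product_position: (x, y, w, h) of the target area, as recorded in the
    production configuration file used by the target process (hypothetical
    format for illustration).
    """
    x, y, w, h = product_position
    return original_image[y:y + h, x:x + w]

raw = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in camera frame
patch = crop_target_area(raw, (100, 50, 200, 300))
print(patch.shape)   # (300, 200, 3)
```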
Further optionally, the method further includes: collecting a plurality of second original images containing a reference product, and respectively cutting the plurality of second original images according to the product position of a target area recorded in a production configuration file used by a target process to obtain a plurality of candidate images containing the target area; and generating a template image according to the plurality of candidate images.
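The text does not fix how the template image is generated from the plurality of candidate images; one plausible aggregation is a pixel-wise median over the aligned candidates, which suppresses any defect present in only a few of them:

```python
import numpy as np

def build_template(candidate_images):
    """One plausible way to generate the template image: a pixel-wise
    median over aligned candidate images of the target area. The exact
    aggregation is an assumption -- the text leaves it unspecified."""
    stack = np.stack([img.astype(np.float32) for img in candidate_images])
    return np.median(stack, axis=0)

# Three 4x4 candidates; the third carries an outlier intensity.
candidates = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 12, 200)]
template = build_template(candidates)
print(template[0, 0])   # 12.0 -- the outlier value 200 is rejected
```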
Further optionally, identifying a suspected defect sub-region existing in the target region according to the feature information in the image to be detected and the template image, including: respectively extracting features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to the feature values of the plurality of channels in the first feature map and the second feature map, wherein the distance map comprises the distance values between the same-position points in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution condition of the distance values in the distance map.
Further optionally, the performing feature extraction on the image to be detected and the template image respectively to obtain a first feature map and a second feature map with a plurality of channels includes: performing feature extraction on an image to be detected by using a feature extraction network, and performing feature fusion on a middle-layer feature output by a middle layer of the feature extraction network and a high-layer feature output by an output layer to obtain a first feature map; and performing feature extraction on the template image by using a feature extraction network, and performing feature fusion on the middle-layer features output by the middle layer of the feature extraction network and the high-layer features output by the output layer to obtain a second feature map.
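The mid-layer/high-layer fusion can be sketched as upsampling the high-layer feature map to the mid-layer resolution and concatenating along the channel axis. The 2x spatial ratio, nearest-neighbour upsampling, and concatenation are illustrative assumptions about how the fusion is realized:

```python
import numpy as np

def fuse_mid_high(mid_feat, high_feat):
    """Sketch of the described fusion: nearest-neighbour upsample the
    high-layer feature map to the mid-layer spatial size, then concatenate
    along the channel axis. Feature maps are (C, H, W)."""
    scale = mid_feat.shape[1] // high_feat.shape[1]
    up = high_feat.repeat(scale, axis=1).repeat(scale, axis=2)
    return np.concatenate([mid_feat, up], axis=0)

mid = np.random.rand(64, 32, 32)    # mid-layer output of the feature extraction network
high = np.random.rand(128, 16, 16)  # output-layer (high-layer) feature
print(fuse_mid_high(mid, high).shape)   # (192, 32, 32)
```

Combining the two levels this way keeps the mid layer's spatial detail while adding the output layer's more semantic features, which is why the fused map is used for comparison against the template.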
Further optionally, generating a distance map according to the feature values on the plurality of channels in the first feature map and the second feature map includes: for any same-position point in the first feature map and the second feature map, calculating a sub-distance of the position point on each channel according to the feature values of the position point on that channel; performing numerical calculation on the sub-distances of the position point over the plurality of channels to obtain a distance value corresponding to the position point; and generating the distance map according to the distance values corresponding to the same-position points in the first feature map and the second feature map.
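With per-channel squared differences as the sub-distances and a square-rooted sum as the numerical calculation (i.e., a Euclidean distance over channels, one plausible choice the text leaves open), the distance map computation can be sketched as:

```python
import numpy as np

def distance_map(feat_a, feat_b):
    """Distance map between two (C, H, W) feature maps: for each position
    point, the per-channel squared differences (sub-distances) are summed
    over channels and square-rooted, giving one distance value per point."""
    sub = (feat_a - feat_b) ** 2           # sub-distance on each channel
    return np.sqrt(sub.sum(axis=0))        # (H, W) distance map

a = np.zeros((8, 4, 4))
b = np.zeros((8, 4, 4))
b[:, 2, 3] = 0.5                           # the two maps differ at one position
d = distance_map(a, b)
print(d[2, 3], d[0, 0])   # 1.4142135623730951 0.0
```

Positions where the two feature maps agree get a distance near zero, so a defect shows up as a localized peak in the distance map.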
Further optionally, identifying a suspected defect sub-area existing in the target area according to the distribution of distance values in the distance map includes: generating an intermediate mask map with the same size as the distance map according to the distribution of the distance values in the distance map, wherein each position point in the intermediate mask map indicates whether the distance value of the corresponding position point in the distance map is a normal distance value or an abnormal distance value; up-sampling the intermediate mask map to obtain a target mask map with the same size as the template image or the image to be detected, the target mask map comprising at least one connected region formed by adjacent position points representing abnormal distance values; and taking the sub-area in the image to be detected corresponding to the connected region as a suspected defect sub-area existing in the target area.
Further optionally, generating an intermediate mask map with the same size as the distance map according to the distribution of the distance values in the distance map includes: performing distribution statistics on the distance values in the distance map to distinguish normal distance values from abnormal distance values; creating an initial mask map without pixel values according to the size of the distance map, wherein each position point in the initial mask map corresponds to the distance value of the corresponding position point in the distance map; and, for each position point in the initial mask map, setting its pixel value to a first pixel value if the corresponding distance value is an abnormal distance value, and to a second pixel value if the corresponding distance value is a normal distance value.
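The statistics-mask-connected-region pipeline of the two paragraphs above can be sketched as follows. The mean-plus-k-standard-deviations rule for flagging abnormal distance values and the 4x upsampling factor are illustrative assumptions; the text does not fix either:

```python
import numpy as np
from scipy import ndimage

def suspect_regions(dist_map, upsample=4, k=3.0):
    """Flag distance values more than k standard deviations above the mean
    as abnormal (one plausible statistic), build a binary intermediate mask
    map, upsample it to image resolution, and extract connected regions as
    suspected defect sub-areas, returned as (y0, y1, x0, x1) boxes."""
    mean, std = dist_map.mean(), dist_map.std()
    mask = (dist_map > mean + k * std).astype(np.uint8)            # intermediate mask map
    mask = mask.repeat(upsample, axis=0).repeat(upsample, axis=1)  # target mask map
    labeled, _ = ndimage.label(mask)                               # 4-connected regions
    boxes = []
    for region in ndimage.find_objects(labeled):
        ys, xs = region
        boxes.append((ys.start, ys.stop, xs.start, xs.stop))
    return boxes

d = np.zeros((16, 16))
d[5, 7] = 10.0                       # one abnormal distance value
print(suspect_regions(d))            # [(20, 24, 28, 32)]
```

The returned boxes are exactly the cutting areas whose position and size the method records for the second detection stage.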
Further optionally, performing flaw detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain a flaw detection result includes: respectively extracting features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels; comparing and fusing the third feature map and the fourth feature map to obtain a target fusion feature; and performing flaw detection on the product to be detected according to the target fusion feature to obtain the flaw detection result.
Further optionally, performing flaw detection on the product to be detected according to the target fusion feature to obtain a flaw detection result includes: performing flaw classification on the target fusion feature by using a classification neural network layer to obtain the flaw detection result; or matching the target fusion feature with existing fusion features in a feature library, and taking the flaw information corresponding to the existing fusion feature matched with the target fusion feature as the flaw detection result.
The specific implementation of the digital production detection method performed by the quality detection system has been described in detail in the embodiments related to the digital production detection system, and will not be described in detail herein.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 301 to 304 may be device a; for another example, the execution subject of steps 301 and 302 may be device a, and the execution subject of step 303 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 301, 302, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 3b is a schematic flowchart of an intelligent sorting method according to an exemplary embodiment of the present application. As shown in fig. 3b, the method may comprise the steps of:
31. collecting an image to be detected; the image to be detected comprises a target area on a product to be detected, and the product to be detected is a product produced by production equipment according to a target process.
32. Carrying out flaw detection on a product to be detected according to the characteristic information in the image to be detected and the template image to obtain a flaw detection result; the template image is an image including a target area on a reference product.
In this embodiment, the defect detection result includes whether a defect exists in the product to be detected and a defect type when the defect exists.
33. And generating a sorting instruction according to the flaw detection result, and sending the sorting instruction to a manipulator in the production environment so that the manipulator can execute sorting operation on the product to be detected according to the sorting instruction.
In an optional embodiment, performing flaw detection on a product to be detected according to the feature information in the image to be detected and the template image to obtain a flaw detection result includes: identifying a suspected defect sub-area existing in the target area according to the feature information in the image to be detected and the template image, wherein the template image is an image containing the target area on a reference product; cutting the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect sub-area; and performing flaw detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain the flaw detection result.
It should be noted that the above method embodiment may be implemented by the cooperation of the image capture device 121, the edge computing device 122, and the central scheduling node 13 in the intelligent sorting system provided in the foregoing embodiment, or by the cooperation of the image capture device 121 and the central scheduling node 13 alone. In the latter case, the image capture device 121 captures the image to be detected and uploads it to the central scheduling node 13; the central scheduling node 13 performs flaw detection on the product to be detected according to the feature information in the image to be detected and the template image to obtain a flaw detection result, generates a sorting instruction according to the flaw detection result, and sends the sorting instruction to the manipulator in the production environment so that the manipulator can execute the sorting operation on the product to be detected according to the sorting instruction.
The detailed implementation of each step in the intelligent sorting method has been described in detail in the foregoing embodiments, and will not be elaborated herein.
Fig. 4 is a schematic structural diagram of a digital production detection apparatus according to an exemplary embodiment of the present application. As shown in fig. 4, the apparatus includes:
the acquisition module 41 is configured to acquire an image to be detected, where the image to be detected includes a target area on a product to be detected, and the product to be detected is a product produced by production equipment according to a target procedure;
the processing module 42 is configured to identify a suspected defect sub-area existing in the target area according to the feature information in the image to be detected and the template image, where the template image is an image including the target area on the reference product;
the cutting module 43 is configured to cut the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect sub-region;
the processing module 42 is further configured to perform defect detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain a defect detection result, where the defect detection result includes whether a defect exists in the product to be detected and a defect type when the defect exists.
Further optionally, when the acquisition module 41 acquires an image to be detected, it is specifically configured to: collecting a first original image containing a product to be detected; and cutting the first original image according to the product position of the target area recorded in the production configuration file used in the target process to obtain an image to be detected containing the target area.
Further optionally, the acquisition module 41 is further configured to: collecting a plurality of second original images containing a reference product, and respectively cutting the plurality of second original images according to the product position of a target area recorded in a production configuration file used by a target process to obtain a plurality of candidate images containing the target area; and generating a template image according to the plurality of candidate images.
Further optionally, when the processing module 42 identifies a suspected defect sub-area existing in the target area, it is specifically configured to: respectively extracting features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to the feature values of the plurality of channels in the first feature map and the second feature map, wherein the distance map comprises the distance values between the same-position points in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution condition of the distance values in the distance map.
Further optionally, when the processing module 42 respectively performs feature extraction on the image to be detected and the template image, the processing module is specifically configured to: performing feature extraction on an image to be detected by using a feature extraction network, and performing feature fusion on a middle-layer feature output by a middle layer of the feature extraction network and a high-layer feature output by an output layer to obtain a first feature map; and performing feature extraction on the template image by using a feature extraction network, and performing feature fusion on the middle-layer features output by the middle layer of the feature extraction network and the high-layer features output by the output layer to obtain a second feature map.
Further optionally, when the processing module 42 generates the distance map, it is specifically configured to: for any same-position point in the first feature map and the second feature map, calculate a sub-distance of the position point on each channel according to the feature values of the position point on that channel; perform numerical calculation on the sub-distances of the position point over the plurality of channels to obtain a distance value corresponding to the position point; and generate the distance map according to the distance values corresponding to the same-position points in the first feature map and the second feature map.
Further optionally, when the processing module 42 identifies a suspected defect sub-area existing in the target area, it is specifically configured to:
generating an intermediate mask map with the same size as the distance map according to the distribution of the distance values in the distance map, wherein each position point in the intermediate mask map indicates whether the distance value of the corresponding position point in the distance map is a normal distance value or an abnormal distance value; up-sampling the intermediate mask map to obtain a target mask map with the same size as the template image or the image to be detected, the target mask map comprising at least one connected region formed by adjacent position points representing abnormal distance values; and taking the sub-area in the image to be detected corresponding to the connected region as a suspected defect sub-area existing in the target area.
Further optionally, when the processing module 42 generates the intermediate mask map, it is specifically configured to: perform distribution statistics on the distance values in the distance map to distinguish normal distance values from abnormal distance values; create an initial mask map without pixel values according to the size of the distance map, wherein each position point in the initial mask map corresponds to the distance value of the corresponding position point in the distance map; and, for each position point in the initial mask map, set its pixel value to a first pixel value if the corresponding distance value is an abnormal distance value, and to a second pixel value if the corresponding distance value is a normal distance value.
Further optionally, when the processing module 42 performs flaw detection on the image block to be detected, it is specifically configured to: respectively extract features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels; compare and fuse the third feature map and the fourth feature map to obtain a target fusion feature; and perform flaw detection on the product to be detected according to the target fusion feature to obtain a flaw detection result.
Further optionally, when the processing module 42 performs defect detection on the product to be detected according to the target fusion feature, the processing module is specifically configured to: utilizing a classification neural network layer to classify the flaws of the target fusion characteristics so as to obtain flaw detection results; or matching the target fusion feature with the existing fusion feature in the feature library, and taking the flaw information corresponding to the existing fusion feature matched with the target fusion feature as a flaw detection result.
The digital production detection apparatus of fig. 4 can execute the digital production detection method of the embodiment shown in fig. 3; its implementation principle and technical effects are similar and are not repeated here. The detailed manner in which each module and unit of the digital production detection apparatus performs its operations has been described in the embodiments related to the digital production detection system, and is likewise not repeated here.
Fig. 5 is a schematic structural diagram of a quality detection apparatus according to an exemplary embodiment of the present application. As shown in fig. 5, the quality detection apparatus may include: memory 51, processor 52.
The memory 51 is used to store computer programs and may be configured to store other various data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 51 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 52, coupled to the memory 51, is configured to execute the computer program in the memory 51 to:
acquire an image to be detected, where the image to be detected contains a target area on a product to be detected, and the product to be detected is a product produced by production equipment according to a target process; identify a suspected defect sub-area in the target area according to the feature information in the image to be detected and a template image, where the template image is an image containing the target area on a reference product; crop the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect sub-area; and perform flaw detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain a flaw detection result, where the flaw detection result includes whether a flaw exists in the product to be detected and, when a flaw exists, its category.
Further optionally, when the processor 52 acquires the image to be detected, it is specifically configured to: collecting a first original image containing a product to be detected; and cutting the first original image according to the product position of the target area recorded in the production configuration file used in the target process to obtain an image to be detected containing the target area.
Further optionally, the processor 52 is further configured to: collecting a plurality of second original images containing a reference product, and respectively cutting the plurality of second original images according to the product position of a target area recorded in a production configuration file used by a target process to obtain a plurality of candidate images containing the target area; and generating a template image according to the plurality of candidate images.
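The text does not fix how the template image is generated from the candidate images; a pixel-wise mean over aligned candidate crops is one minimal sketch (the 4×4 arrays and noise level below are purely illustrative):

```python
import numpy as np

def build_template(candidates):
    """Fuse several aligned crops of the reference product into one
    template image. The patent does not fix the fusion rule; a
    pixel-wise mean is assumed here to suppress per-capture noise."""
    stack = np.stack([c.astype(np.float64) for c in candidates], axis=0)
    return stack.mean(axis=0)

# Hypothetical 4x4 grayscale candidates with small noise around value 100.
rng = np.random.default_rng(0)
cands = [100 + rng.normal(0, 1, size=(4, 4)) for _ in range(8)]
template = build_template(cands)
```

Averaging is only one choice; a median would be more robust if some candidate crops themselves contain defects.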
Further optionally, when the processor 52 identifies a suspected defect sub-area existing in the target area, it is specifically configured to: respectively extracting features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to the feature values of the plurality of channels in the first feature map and the second feature map, wherein the distance map comprises the distance values between the same-position points in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution condition of the distance values in the distance map.
Further optionally, when respectively performing feature extraction on the image to be detected and the template image, the processor 52 is specifically configured to: perform feature extraction on the image to be detected by using a feature extraction network, and fuse the middle-layer features output by a middle layer of the feature extraction network with the high-layer features output by its output layer to obtain the first feature map; and perform feature extraction on the template image by using the feature extraction network, and fuse the middle-layer features output by the middle layer with the high-layer features output by the output layer to obtain the second feature map.
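The middle-/high-layer fusion above can be sketched as follows; the patent does not specify the fusion operator, so nearest-neighbour upsampling of the high-layer map to the middle-layer resolution followed by channel concatenation is assumed here, with toy array shapes:

```python
import numpy as np

def fuse_features(mid, high):
    """Fuse a middle-layer feature map of shape (C1, H, W) with a
    high-layer map of shape (C2, H/2, W/2). The high-layer map is
    upsampled (nearest neighbour) to (C2, H, W) and concatenated with
    the middle-layer map along the channel axis."""
    c2, h2, w2 = high.shape
    _, h1, w1 = mid.shape
    ys = np.arange(h1) * h2 // h1        # source row for each target row
    xs = np.arange(w1) * w2 // w1        # source column for each target column
    up = high[:, ys][:, :, xs]           # (C2, H, W)
    return np.concatenate([mid, up], axis=0)  # (C1 + C2, H, W)

mid = np.zeros((8, 16, 16))
high = np.ones((32, 8, 8))
fused = fuse_features(mid, high)         # shape (40, 16, 16)
```

In practice the same network (shared weights) is run on both the image to be detected and the template image, so their fused maps are directly comparable position by position.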
Further optionally, when generating the distance map, the processor 52 is specifically configured to: for each same-position point in the first feature map and the second feature map, calculate a sub-distance for the position point on each channel according to the feature values of the position point on that channel; aggregate the sub-distances of the position point over the plurality of channels to obtain the distance value corresponding to the position point; and generate the distance map from the distance values corresponding to the same-position points in the first feature map and the second feature map.
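A minimal reading of the distance-map step, assuming the per-channel sub-distance is a squared difference and the aggregation across channels is a Euclidean norm (neither is fixed by the text):

```python
import numpy as np

def distance_map(f1, f2):
    """Per-position distance between two feature maps of shape (C, H, W).
    Sub-distance on each channel: squared difference of feature values;
    aggregation over the C channels: square root of the sum, i.e. the
    Euclidean distance between the two C-dimensional feature vectors."""
    sub = (f1 - f2) ** 2                 # sub-distance on every channel
    return np.sqrt(sub.sum(axis=0))      # (H, W) distance map

f1 = np.zeros((3, 2, 2))
f2 = np.zeros((3, 2, 2))
f2[:, 0, 0] = [3.0, 4.0, 0.0]            # only one position differs
dmap = distance_map(f1, f2)
```

A large distance value marks a position where the image to be detected deviates from the template, i.e. a candidate defect position.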
Further optionally, when the processor 52 identifies a suspected defect sub-area existing in the target area, it is specifically configured to:
generate an intermediate mask map of the same size as the distance map according to the distribution of the distance values in the distance map, where each position point in the intermediate mask map indicates whether the distance value of the corresponding position point in the distance map is a normal distance value or an abnormal distance value; up-sample the intermediate mask map to obtain a target mask map of the same size as the template image or the image to be detected, where the target mask map includes at least one connected region formed by adjacent position points that indicate abnormal distance values; and take the sub-area in the image to be detected corresponding to each connected region as a suspected defect sub-area in the target area.
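The upsampling and connected-region step can be sketched as follows; the 2× nearest-neighbour upsampling factor, the 4-connectivity rule, and returning bounding boxes for the suspected defect sub-areas are all assumptions:

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """Label 4-connected regions of a binary mask via BFS and return one
    bounding box (y0, x0, y1, x1) per region; each box maps back to a
    suspected defect sub-area in the image to be detected."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Intermediate 4x4 mask upsampled 2x (nearest neighbour) to the target size.
inter = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
target = inter.repeat(2, axis=0).repeat(2, axis=1)
boxes = connected_regions(target.astype(bool))
```

Cropping the image to be detected and the template image with these boxes yields the image block / template image block pairs passed to the fine-grained detection stage.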
Further optionally, when generating the intermediate mask map, the processor 52 is specifically configured to: perform distribution statistics on the distance values in the distance map to distinguish normal distance values from abnormal distance values; create an initial mask map without pixel values according to the size of the distance map, where each position point in the initial mask map corresponds to the distance value at the same position point in the distance map; and, for each position point in the initial mask map, set its pixel value to a first pixel value if the corresponding distance value is abnormal, or to a second pixel value if the corresponding distance value is normal.
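The "distribution statistics" rule is not specified in the text; a mean-plus-k-standard-deviations threshold is one plausible sketch, with 255 and 0 standing in for the first and second pixel values:

```python
import numpy as np

def intermediate_mask(dmap, k=3.0):
    """Split distance values into normal / abnormal by a simple
    mean + k*std rule (both the rule and k are assumptions). Abnormal
    points get pixel value 255 (first pixel value), normal points get
    0 (second pixel value)."""
    thresh = dmap.mean() + k * dmap.std()
    return np.where(dmap > thresh, 255, 0).astype(np.uint8)

dmap = np.zeros((8, 8))
dmap[2, 3] = 10.0                        # one outlying distance value
mask = intermediate_mask(dmap)
```

A quantile-based cut (e.g. flagging the top 0.1 % of distance values) would serve the same purpose if the distance distribution is heavy-tailed.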
Further optionally, when performing flaw detection on the image block to be detected, the processor 52 is specifically configured to: extract features from the image block to be detected and the template image block respectively to obtain a third feature map and a fourth feature map, each having a plurality of channels; compare and fuse the third feature map and the fourth feature map to obtain a target fusion feature; and perform flaw detection on the product to be detected according to the target fusion feature to obtain a flaw detection result.
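One plausible compare-and-fuse operator (an assumption, not the patent's definition) is to stack the two patch feature maps with their absolute difference and pool the result to a fixed-length fusion vector:

```python
import numpy as np

def compare_and_fuse(f3, f4):
    """Compare-and-fuse sketch: stack the patch feature map of the image
    block to be detected (f3) and of the template image block (f4)
    together with their absolute difference along the channel axis, then
    global-average-pool to a (3C,) target fusion feature."""
    diff = np.abs(f3 - f4)
    fused = np.concatenate([f3, f4, diff], axis=0)  # (3C, H, W)
    return fused.mean(axis=(1, 2))                  # (3C,) fusion vector

f3 = np.full((4, 8, 8), 2.0)
f4 = np.full((4, 8, 8), 0.5)
vec = compare_and_fuse(f3, f4)
```

The difference channels carry the discrepancy signal, while the raw channels preserve appearance context, so a downstream classifier can tell a true flaw from benign misalignment.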
Further optionally, when performing flaw detection on the product to be detected according to the target fusion feature, the processor 52 is specifically configured to: classify flaws from the target fusion feature by using a classification neural network layer to obtain the flaw detection result; or match the target fusion feature against the existing fusion features in a feature library, and take the flaw information corresponding to the existing fusion feature that matches the target fusion feature as the flaw detection result.
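The feature-library branch can be sketched as a nearest-neighbour lookup; the cosine-similarity metric, the library entries, and the flaw labels below are all hypothetical:

```python
import numpy as np

def match_flaw(target_feat, library):
    """Match the target fusion feature against stored fusion features
    and return the flaw label of the most similar entry (cosine
    similarity). Library entries and labels are illustrative only."""
    best_label, best_sim = None, -1.0
    t = target_feat / np.linalg.norm(target_feat)
    for label, feat in library.items():
        sim = float(t @ (feat / np.linalg.norm(feat)))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim

# Hypothetical feature library keyed by flaw category.
library = {
    "broken_print": np.array([1.0, 0.0, 0.0]),
    "color_stain":  np.array([0.0, 1.0, 0.0]),
}
label, sim = match_flaw(np.array([0.9, 0.1, 0.0]), library)
```

A similarity floor would normally be added so that features far from every library entry fall back to "no flaw" or to the classification-layer branch.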
Further, as shown in fig. 5, the quality detection apparatus may also include: a communication component 53, a display 54, a power component 55, an audio component 56, and the like. Only some components are shown schematically in fig. 5, which does not mean that the quality detection apparatus includes only those components. The components within the dashed box in fig. 5 are optional rather than mandatory, depending on the product form of the quality detection apparatus. The quality detection apparatus of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smartphone, or an IoT device, or as a server-side device such as a conventional server, a cloud server, or a server array. When implemented as a terminal device, it may include the components within the dashed box in fig. 5; when implemented as a server-side device, it may omit them.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the quality detection device in the foregoing method embodiments when executed.
The communication component of fig. 5 is configured to facilitate wired or wireless communication between its host device and other devices. The host device can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE, or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication; the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
The display in fig. 5 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly of fig. 5 described above provides power to the various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component of fig. 5 described above may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A digital production inspection system, comprising: the system comprises a central scheduling node, production equipment which is deployed in a production environment and is in charge of a target process, and a quality detection system which provides quality detection service for the production equipment, wherein the quality detection system comprises image acquisition equipment and edge computing equipment;
the production equipment is used for producing corresponding products to be detected according to the target process, and the products to be detected are conveyed into the operation area of the image acquisition equipment;
the image acquisition equipment is used for acquiring an image of the product to be detected in the operation area of the image acquisition equipment, and sending the image to be detected to the edge computing equipment as the image to be detected, wherein the image to be detected comprises a target area on the product to be detected;
the edge computing device is used for identifying a suspected defect sub-area existing in the target area according to the characteristic information in the image to be detected and the template image; cutting the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect subregion; according to the characteristic information in the image block to be detected and the template image block, performing flaw detection on the image block to be detected to obtain a flaw detection result, and reporting the flaw detection result to the central scheduling node; the template image is an image containing a target area on a reference product;
the central scheduling node is located at a cloud end and used for generating a sorting instruction according to the flaw detection result and sending the sorting instruction to a manipulator in the production environment so as to control the manipulator to sort the product to be detected, wherein the flaw detection result comprises whether flaws exist in the product to be detected and flaw categories when flaws exist;
the edge computing device is specifically configured to, when identifying a suspected defect sub-region existing in the target region according to the feature information in the image to be detected and the template image: respectively extracting the features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to feature values on a plurality of channels in a first feature map and a second feature map, wherein the distance map comprises distance values between points at the same position in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution situation of the distance values in the distance map.
2. The system of claim 1, wherein the production facility has a display screen, and wherein the central scheduling node is further configured to: sending the flaw detection result to the production equipment; the production facility is also for: and displaying the flaw detection result on the display screen.
3. The system according to claim 1 or 2, wherein the target process is a printing process in the field of garment production, the product to be detected is a printed cut piece, and the target area is a printed area in the printed cut piece.
4. A digital production detection method is characterized by comprising the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a target area on a product to be detected, and the product to be detected is produced by production equipment according to a target procedure;
identifying suspected defect sub-areas existing in the target area according to the characteristic information in the image to be detected and the template image, wherein the template image is an image containing the target area on the reference product;
cutting the image to be detected and the template image to obtain an image block to be detected and a template image block corresponding to the suspected defect subregion;
performing flaw detection on the image block to be detected according to the characteristic information in the image block to be detected and the image block of the template to obtain a flaw detection result, wherein the flaw detection result comprises whether a flaw exists in the product to be detected and the category of the flaw when the flaw exists;
identifying a suspected defect sub-area existing in the target area according to the characteristic information in the image to be detected and the template image, wherein the method comprises the following steps: respectively extracting the features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels; generating a distance map according to feature values on a plurality of channels in a first feature map and a second feature map, wherein the distance map comprises distance values between points at the same position in the first feature map and the second feature map; and identifying suspected defect sub-areas existing in the target area according to the distribution situation of the distance values in the distance map.
5. The method of claim 4, wherein acquiring an image to be detected comprises:
collecting a first original image containing the product to be detected; and cutting the first original image according to the product position of the target area recorded in the production configuration file used in the target process so as to obtain an image to be detected containing the target area.
6. The method of claim 5, further comprising:
collecting a plurality of second original images containing the reference product, and respectively cutting the plurality of second original images according to the product positions of the target area recorded in the production configuration file used by the target process to obtain a plurality of candidate images containing the target area; and generating the template image according to the candidate images.
7. The method of claim 4, wherein the extracting features of the image to be detected and the template image to obtain a first feature map and a second feature map with a plurality of channels comprises:
carrying out feature extraction on the image to be detected by using a feature extraction network, and carrying out feature fusion on a middle layer feature output by a middle layer of the feature extraction network and a high layer feature output by an output layer to obtain a first feature map;
and performing feature extraction on the template image by using a feature extraction network, and performing feature fusion on the middle-layer features output by the middle layer of the feature extraction network and the high-layer features output by the output layer to obtain the second feature map.
8. The method of claim 4, wherein generating a distance map from the feature values on the plurality of channels in the first feature map and the second feature map comprises:
aiming at any identical position point in the first feature map and the second feature map, calculating a sub-distance of the position point on each channel according to the feature value of the position point on each channel; performing numerical calculation on the sub-distances of the position point on the plurality of channels to obtain a distance value corresponding to the position point;
and generating the distance map according to the distance value corresponding to each identical position point in the first feature map and the second feature map.
9. The method of claim 4, wherein identifying the sub-regions of suspected defects existing in the target region according to the distribution of the distance values in the distance map comprises:
generating a middle mask map with the same size as the distance map according to the distribution situation of the distance values in the distance map, wherein each position point in the middle mask map represents that the distance value of the corresponding position point in the distance map is a normal distance value or an abnormal distance value;
the intermediate mask image is up-sampled to obtain a target mask image with the same size as the template image or the image to be detected, the target mask image comprises at least one connected region, and the connected region is formed by adjacent position points representing abnormal distance values;
and taking the sub-area corresponding to the connected area in the image to be detected as a suspected defect sub-area existing in the target area.
10. The method according to any one of claims 4 to 9, wherein performing defect detection on the image block to be detected according to the feature information in the image block to be detected and the template image block to obtain a defect detection result comprises:
respectively extracting the features of the image block to be detected and the template image block to obtain a third feature map and a fourth feature map with a plurality of channels;
comparing and fusing the third characteristic diagram and the fourth characteristic diagram to obtain target fusion characteristics; and carrying out flaw detection on the product to be detected according to the target fusion characteristics to obtain a flaw detection result.
11. The method of claim 10, wherein performing flaw detection on the product to be detected according to the target fusion feature to obtain a flaw detection result comprises:
utilizing a classification neural network layer to classify the flaws of the target fusion characteristics to obtain flaw detection results;
or,
and matching the target fusion feature with the existing fusion feature in a feature library, and taking the flaw information corresponding to the existing fusion feature matched with the target fusion feature as the flaw detection result.
12. A quality detection apparatus, comprising: a memory and a processor;
the memory for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps of the method of any of claims 4-11.
13. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 4 to 11.
CN202110947050.0A 2021-08-18 2021-08-18 Digital production detection system, method, device, equipment and storage medium Active CN113406092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110947050.0A CN113406092B (en) 2021-08-18 2021-08-18 Digital production detection system, method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113406092A CN113406092A (en) 2021-09-17
CN113406092B true CN113406092B (en) 2022-01-11

Family

ID=77688587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110947050.0A Active CN113406092B (en) 2021-08-18 2021-08-18 Digital production detection system, method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113406092B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070829B (en) * 2021-10-22 2024-01-09 南通软云智能科技有限公司 Abnormal data acquisition method and system based on MQTT
CN113696641B (en) * 2021-10-28 2022-04-15 阿里巴巴(中国)有限公司 Digital printing system, method, equipment and storage medium
CN114466183A (en) * 2022-02-21 2022-05-10 江东电子材料有限公司 Copper foil flaw detection method and device based on characteristic spectrum and electronic equipment
CN114841915A (en) * 2022-03-14 2022-08-02 阿里巴巴(中国)有限公司 Tile flaw detection method and system based on artificial intelligence and storage medium
CN114742791A (en) * 2022-04-02 2022-07-12 深圳市国电科技通信有限公司 Auxiliary defect detection method and device for printed circuit board assembly and computer equipment
CN116067964B (en) * 2023-03-06 2023-06-09 广东省农业科学院动物科学研究所 Method and system for promoting fish muscle embrittlement by utilizing condensed tannin

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106409711A (en) * 2016-09-12 2017-02-15 佛山市南海区广工大数控装备协同创新研究院 Solar silicon wafer defect detecting system and method
CN107301637A (en) * 2017-05-22 2017-10-27 南京理工大学 Nearly rectangle plane shape industrial products surface flaw detecting method
CN111445459A (en) * 2020-03-27 2020-07-24 广东工业大学 Image defect detection method and system based on depth twin network
CN112288723A (en) * 2020-10-30 2021-01-29 北京市商汤科技开发有限公司 Defect detection method, defect detection device, computer equipment and storage medium
CN112508846A (en) * 2020-10-30 2021-03-16 北京市商汤科技开发有限公司 Defect detection method and device, electronic equipment and storage medium
CN112686869A (en) * 2020-12-31 2021-04-20 上海智臻智能网络科技股份有限公司 Cloth flaw detection method and device
CN112784900A (en) * 2021-01-22 2021-05-11 深圳壹账通智能科技有限公司 Image target comparison method and device, computer equipment and readable storage medium

Also Published As

Publication number Publication date
CN113406092A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN113406092B (en) Digital production detection system, method, device, equipment and storage medium
EP3785021B1 (en) System and method for performing automated analysis of air samples
JP4699873B2 (en) Defect data processing and review equipment
JP4616864B2 (en) Appearance inspection method and apparatus, and image processing evaluation system
US20190164270A1 (en) System and method for combined automatic and manual inspection
US20050075801A1 (en) Apparatus and method for automated web inspection
CN103502801A (en) Defect classification method, and defect classification system
KR20060128979A (en) Maximization of yield for web-based articles
US11657599B2 (en) Method for detecting appearance of six sides of chip multi-layer ceramic capacitor based on artificial intelligence
CN112966772A (en) Multi-person online image semi-automatic labeling method and system
JP2000162135A (en) Inspecting method, inspecting system and production of electronic device
CN115184359A (en) Surface defect detection system and method capable of automatically adjusting parameters
CN111598863A (en) Defect detection method, device, equipment and readable storage medium
CN113610848A (en) Digital cloth processing system, cloth flaw detection method, device and medium
CN111951210A (en) Data processing method, device and equipment
JP4652917B2 (en) DEFECT DATA PROCESSING METHOD AND DATA PROCESSING DEVICE
JP6049052B2 (en) Wafer visual inspection apparatus and sensitivity threshold setting method in wafer visual inspection apparatus
CN111028250A (en) Real-time intelligent cloth inspecting method and system
CN116681677A (en) Lithium battery defect detection method, device and system
US20230169642A1 (en) Inspecting Sheet Goods Using Deep Learning
CN114596243A (en) Defect detection method, device, equipment and computer readable storage medium
Niskanen et al. Experiments with SOM based inspection of wood
CN113642473A (en) Mining coal machine state identification method based on computer vision
CN110989422A (en) Management system and management method for AOI (automated optical inspection) over-inspection parameters based on serial number code spraying
CN112893186B (en) Rapid visual detection method and system for electrifying LED lamp filament

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant