CN111951210A - Data processing method, device and equipment - Google Patents

Data processing method, device and equipment

Info

Publication number
CN111951210A
Authority
CN
China
Prior art keywords
image
sub
detection
feature
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910401719.9A
Other languages
Chinese (zh)
Inventor
陈岩
金智勇
李海洋
邹远鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910401719.9A
Publication of CN111951210A
Legal status: Pending

Classifications

    • G06T 7/001 Industrial image inspection using an image reference approach
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30141 Printed circuit board [PCB]

Abstract

Embodiments of the invention provide a data processing method, device and equipment. The method comprises: acquiring an image to be detected and a template image corresponding to an object to be detected; cutting the image to be detected to obtain a plurality of sub-detection images; obtaining, from the template image, a plurality of sub-template images corresponding to the sub-detection images; obtaining a sub-detection result for each sub-detection image based on the image features of the corresponding sub-template image and of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature; and determining a quality detection result corresponding to the object to be detected according to the plurality of sub-detection results. Because each sub-detection result is obtained from the image features of a sub-template image and the matching sub-detection image, and the quality detection result is then determined from those sub-detection results, the quality detection process is automated, the accuracy and efficiency of detection are guaranteed, and the detection cost is reduced.

Description

Data processing method, device and equipment
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a data processing method, apparatus, and device.
Background
With the advent of the Internet Plus era, new development possibilities have opened up for traditional manufacturing: cross-industry Internet thinking can greatly promote the extension and optimization of industrial and service chains and raise the technical content and added value of manufacturing as a whole.
For manufacturing, quality inspection is an important process, and existing quality inspection is generally manual: finished products (such as circuit boards, cloth, floors and desktop building materials) are inspected by hand to determine whether they are qualified. However, manual inspection has low efficiency and high labor cost, and its accuracy cannot be guaranteed.
Disclosure of Invention
The embodiment of the invention provides a data processing method, a data processing device and data processing equipment, which can automatically detect a product, ensure the detection efficiency, reduce the detection cost and improve the detection accuracy.
In a first aspect, an embodiment of the present invention provides a data processing method, including:
acquiring an image to be detected and a template image corresponding to an object to be detected;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a quality detection result corresponding to the object to be detected according to a plurality of sub-detection results corresponding to the sub-detection images.
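The five claimed steps can be sketched as a minimal pipeline. This is a hypothetical illustration only: the function names, the equal-grid cutting, the use of mean intensity as the compared feature, and the "all tiles must pass" aggregation rule are assumptions of this sketch, not the patent's implementation.

```python
# Hypothetical sketch of the five-step method; images are modeled as
# 2-D lists of grayscale pixel values.

def crop_into_tiles(image, tile):
    """Steps 2/3: cut an image into non-overlapping tile x tile sub-images."""
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            tiles.append([row[left:left + tile] for row in image[top:top + tile]])
    return tiles

def sub_detect(sub_template, sub_detection, threshold=10.0):
    """Step 4: compare one image feature (mean intensity, a stand-in for
    the color feature) and report whether this tile passes."""
    mean = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))
    return abs(mean(sub_template) - mean(sub_detection)) <= threshold

def quality_result(template, detected, tile=2):
    """Step 5: aggregate; here the object passes only if every tile passes."""
    pairs = zip(crop_into_tiles(template, tile), crop_into_tiles(detected, tile))
    return all(sub_detect(t, d) for t, d in pairs)
```

For instance, comparing a 4x4 detected image against a matching 4x4 template tile-by-tile yields True when all tiles agree within the threshold and False as soon as one tile deviates.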
In a second aspect, an embodiment of the present invention provides an apparatus for processing data, including:
the first acquisition module is used for acquiring an image to be detected and a template image corresponding to an object to be detected;
the first cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the first obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a first analysis module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the first processing module is used for determining a quality detection result corresponding to the object to be detected according to a plurality of sub-detection results corresponding to the sub-detection images.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement a method of processing data according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer implement the data processing method in the first aspect when executed.
By acquiring an image to be detected and a template image, cutting the image to be detected into a plurality of sub-detection images, obtaining from the template image a plurality of sub-template images corresponding to the sub-detection images, and then analyzing each sub-detection image against its sub-template image to obtain a sub-detection result, the quality detection result corresponding to the object to be detected can be determined from the sub-detection results. This automates quality detection of the object to be detected, guarantees the accuracy and efficiency of quality detection, helps improve the product quality of the object to be detected, and reduces detection and labor costs, which effectively improves the practicability of the method and favors its popularization and application in the market.
In a fifth aspect, an embodiment of the present invention provides a method for detecting a circuit board, including:
acquiring an image to be detected and a template image corresponding to a circuit board;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a defect detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
In a sixth aspect, an embodiment of the present invention provides a device for detecting a circuit board, including:
the second acquisition module is used for acquiring an image to be detected and a template image corresponding to the circuit board;
the second cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the second obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a second analysis module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the second processing module is used for determining a flaw detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
In a seventh aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, and wherein the one or more computer instructions, when executed by the processor, implement a method for inspecting a circuit board according to the fifth aspect.
In an eighth aspect, an embodiment of the present invention provides a computer storage medium, which is used for storing a computer program, and the computer program enables a computer to implement the method for detecting a circuit board in the fifth aspect when executed.
By acquiring an image to be detected and a template image, cutting the image to be detected into a plurality of sub-detection images, obtaining from the template image a plurality of sub-template images corresponding to the sub-detection images, and then analyzing each sub-detection image against its sub-template image to obtain a sub-detection result, the defect detection result corresponding to the circuit board can be determined from the sub-detection results. This automates defect detection of the circuit board, guarantees the accuracy and efficiency of defect detection, helps improve the product quality of the circuit board, and reduces detection and labor costs, which effectively improves the practicability of the method and favors its popularization and application in the market.
In a ninth aspect, an embodiment of the present invention provides a method for detecting a fabric, including:
acquiring an image to be detected and a template image corresponding to the cloth;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a flaw detection result corresponding to the cloth according to a plurality of sub-detection results corresponding to the sub-detection images.
In a tenth aspect, an embodiment of the present invention provides a device for detecting a fabric, including:
the third acquisition module is used for acquiring an image to be detected and a template image corresponding to the cloth;
the third cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the third obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a third analyzing module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the third processing module is used for determining a flaw detection result corresponding to the cloth according to a plurality of sub-detection results corresponding to the sub-detection images.
In an eleventh aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is used for storing one or more computer instructions, and the one or more computer instructions, when executed by the processor, implement the method for detecting a fabric in the ninth aspect.
In a twelfth aspect, an embodiment of the present invention provides a computer storage medium, which is used for storing a computer program, and the computer program enables a computer to implement the method for detecting a fabric in the ninth aspect when executed.
By acquiring an image to be detected and a template image, cutting the image to be detected into a plurality of sub-detection images, obtaining from the template image a plurality of sub-template images corresponding to the sub-detection images, and then analyzing each sub-detection image against its sub-template image to obtain a sub-detection result, the defect detection result corresponding to the cloth can be determined from the sub-detection results. This automates defect detection of the cloth, guarantees the accuracy and efficiency of defect detection, helps improve the product quality of the cloth, and reduces detection and labor costs, which effectively improves the practicability of the method and favors its popularization and application in the market.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a schematic diagram of a system architecture to which the data processing method according to an embodiment of the present invention is applied;
fig. 1 is a first flowchart of a data processing method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a data processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of acquiring a difference parameter between the image to be detected and the template image according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating cropping the image to be detected to obtain a plurality of sub-detection images according to an embodiment of the present invention;
fig. 5 is a flowchart of obtaining a plurality of sub-template images corresponding to the sub-inspection image for the template image according to the embodiment of the present invention;
fig. 6 is a flowchart of adjusting the intermediate image to obtain a plurality of sub-template images corresponding to the sub-detection images according to the embodiment of the present invention;
fig. 7 is a flowchart for obtaining sub-detection results corresponding to the sub-detection images based on the image features of the sub-template images and the image features of the sub-detection images according to the embodiment of the present invention;
fig. 8 is a flowchart of analyzing and processing the third image feature and the fourth image feature to obtain sub-detection results corresponding to the sub-detection images according to the embodiment of the present invention;
fig. 9 is a flowchart for analyzing and processing the fusion feature vector to obtain sub-detection results corresponding to the sub-detection images according to the embodiment of the present invention;
fig. 10 is a third flowchart of a data processing method according to an embodiment of the present invention;
fig. 11 is a fourth flowchart of a data processing method according to an embodiment of the present invention;
fig. 12 is a flowchart of a circuit board detection method according to an embodiment of the present invention;
fig. 13 is a flowchart of a cloth detecting method according to an embodiment of the present invention;
fig. 14 is a first schematic diagram illustrating a data processing method according to an embodiment of the present invention;
fig. 15 is a second schematic diagram illustrating a data processing method according to an embodiment of the present invention;
fig. 16 is a third schematic diagram illustrating a data processing method according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of an electronic device corresponding to the data processing apparatus provided in the embodiment shown in fig. 17;
fig. 19 is a schematic structural diagram of a circuit board detection apparatus according to an embodiment of the present invention;
fig. 20 is a schematic structural diagram of an electronic device corresponding to the detection device of the circuit board provided in the embodiment shown in fig. 19;
fig. 21 is a schematic structural diagram of a cloth detecting apparatus according to an embodiment of the present invention;
fig. 22 is a schematic structural diagram of an electronic device corresponding to the cloth detection apparatus provided in the embodiment shown in fig. 21.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a" and "an" generally include at least two, but do not exclude at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in a product or system that includes the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
To facilitate understanding of the technical solutions of the present application, the prior art is briefly described as follows. For product manufacturing, quality detection is an important process, and existing quality detection generally relies on manual inspection or on a combination of image-based and manual inspection. For example, when detecting flaws in the appearance of a printed circuit board (PCB), a detection image of the PCB and an image of a corresponding normal circuit board can be acquired by dual cameras, and a color-difference comparison is then performed between the two images to determine whether the PCB has flaws. However, this method cannot locate the specific flaw type, position, shape or area, so its detection capability is relatively limited. To determine such specifics, considerable manpower is still needed for manual inspection; in practice, the PCB image may be magnified several hundred times and displayed on a screen for an operator to confirm flaws. With such a detection method, however, the misjudgment rate can reach fifty percent or more, which reduces the accuracy and reliability of quality detection.
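The whole-image color-difference comparison described above can be illustrated with a minimal sketch (hypothetical: pixel-wise RGB distance against a fixed threshold is assumed here, not the actual dual-camera system). Note how it matches the limitation stated in the text: it can only say that a flaw is suspected, not where it is or what kind it is.

```python
def color_difference_flaw(reference, detected, threshold=30.0):
    """Prior-art style check: flag a flaw if any pixel's RGB distance from
    the reference image exceeds a threshold. Returns True when a flaw is
    suspected; it cannot locate or classify the flaw."""
    for ref_row, det_row in zip(reference, detected):
        for (r1, g1, b1), (r2, g2, b2) in zip(ref_row, det_row):
            dist = ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
            if dist > threshold:
                return True
    return False
```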
To solve the above technical problem, and referring to fig. 1a, this embodiment provides a data processing system capable of implementing the data processing method. The data processing system includes an image capturing device 01 and a processing device 02 communicatively connected to it. The image capturing device 01 may be any of various electronic devices capable of capturing images, such as a CCD image sensor, an X-ray detector or an infrared camera, and the processing device 02 may be implemented as software or as a combination of software and hardware. The image capturing device 01 and the processing device 02 are communicatively connected to enable image transmission. Specifically:
the image acquisition equipment 01 is used for detecting execution operations (clicking operations, key operations and the like) input by a user, acquiring images of an object to be detected according to the execution operations and acquiring the images to be detected corresponding to the object to be detected; and meanwhile, image acquisition can be carried out on the template object according to the execution operation to obtain a template image corresponding to the object to be detected, wherein the template object corresponds to the object to be detected and is used for carrying out quality detection on the object to be detected. After the image acquisition device 01 acquires the image to be detected and the template image corresponding to the object to be detected, the image to be detected and the template image may be sent to the processing device 02.
The processing device 02 is communicatively connected to the image capturing device 01 and is used for receiving the image to be detected and the template image sent by it. After receiving them, the processing device can perform quality detection on the image to be detected based on the template image. To improve the accuracy of quality detection, the image to be detected can be cut into a plurality of sub-detection images, the template image can be cut, based on the sub-detection images, into corresponding sub-template images, and each sub-detection image can then be analyzed against its sub-template image. Specifically, the analysis may comprehensively consider image features such as color features, texture features, shape features, position relation features and background features, yielding a sub-detection result for each sub-detection image. It can be understood that, since there may be multiple sub-detection images, there may correspondingly be multiple sub-detection results, and the quality detection result corresponding to the object to be detected may be determined from them. The quality detection result may include at least one of the following: whether the object to be detected meets a preset requirement, whether it has a flaw, and the position, type, shape and area of any flaw.
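The multi-feature comparison of a sub-detection image against its sub-template image can be sketched as follows. This is a hypothetical illustration: the coarse histogram (standing in for the color feature), the gradient-energy measure (standing in for the texture feature), and the two tolerances are assumptions of this sketch; the patent does not fix any particular feature extractor.

```python
def color_histogram(image, bins=4):
    """Coarse grayscale histogram, a stand-in for the color feature."""
    hist = [0] * bins
    for row in image:
        for px in row:
            hist[min(px * bins // 256, bins - 1)] += 1
    return hist

def gradient_energy(image):
    """Sum of horizontal intensity steps, a stand-in for the texture feature."""
    return sum(abs(row[i + 1] - row[i])
               for row in image for i in range(len(row) - 1))

def sub_detection_result(sub_template, sub_detection,
                         hist_tol=4, texture_tol=50):
    """A tile passes only if both feature dimensions agree within tolerance."""
    hist_diff = sum(abs(a - b) for a, b in zip(color_histogram(sub_template),
                                               color_histogram(sub_detection)))
    tex_diff = abs(gradient_energy(sub_template) - gradient_energy(sub_detection))
    return hist_diff <= hist_tol and tex_diff <= texture_tol
```

Combining several weak feature comparisons per tile, rather than a single whole-image color check, is what lets a per-tile result localize a defect to the tile where the features disagree.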
In this embodiment, after the image capturing device acquires the image to be detected and the template image, the processing device analyzes them: the two images are first cut, and each sub-detection image is then analyzed along image-feature dimensions such as color, texture, shape, position relation and background to obtain its sub-detection result. The quality detection result corresponding to the object to be detected is then determined from the plurality of sub-detection results, which automates product quality detection, ensures detection efficiency and reduces detection cost. Moreover, because the images are analyzed using multiple image features, the accuracy, reliability and level of detail of the quality detection result are effectively improved, which further enhances the practicability of the method and favors its popularization and application in the market.
Specifically, the following describes in detail the process of processing the image to be detected and the template image by the processing device, and referring to fig. 1, this embodiment provides a data processing method, where the execution subject of the data processing method may be the processing device, and the processing device may be implemented as software, or a combination of software and hardware. Specifically, the method may include:
s101: and acquiring an image to be detected and a template image corresponding to the object to be detected.
Wherein, the object to be detected comprises at least one of the following: circuit board, cloth, building material surface. Of course, the object to be detected is not limited to the above example, and those skilled in the art may also set the object to be detected according to the specific application requirement, which is not described herein again.
In addition, the template image is image information corresponding to the template object, and the template object is used for performing quality detection on the object to be detected. For example: when the object to be detected is a circuit board, the template object can be a standard circuit board, and the standard circuit board can be a circuit board meeting user requirements, design requirements and quality requirements. When the object to be detected is cloth, the template object can be standard cloth, and the standard cloth can be cloth meeting user requirements, design requirements and quality requirements. Specifically, when acquiring an image to be detected and a template image, the image to be detected and the template image may be acquired by using an image acquisition device, wherein the image acquisition device may be a CCD image sensor, an X-ray detector, an infrared camera, or other electronic devices capable of acquiring images.
In specific application, when the quality of the object to be detected needs to be detected, the object to be detected may be photographed by the image acquisition device to obtain the image to be detected. The template object corresponding to the object to be detected may then be photographed by the image acquisition device to obtain the template image corresponding to the template object, where the template image corresponds to the object to be detected. After the image to be detected and the template image are obtained, the image acquisition device may send them to the processing device, so that the processing device obtains the image to be detected and the template image corresponding to the object to be detected.
It can be understood that the image acquisition device and the processing device may be integrated into a whole or configured separately, and those skilled in the art may configure them according to specific application requirements and design requirements, which is not described in detail herein.
S102: Crop the image to be detected to obtain a plurality of sub-detection images.
After the image to be detected is acquired, it may be cropped into a plurality of sub-detection images. Specifically, referring to fig. 4, cropping the image to be detected to obtain a plurality of sub-detection images may include:
S1021: Acquire a first crop size corresponding to the image to be detected.
S1022: Crop the image to be detected according to the first crop size to obtain a plurality of sub-detection images.
The first crop size may be preset; this embodiment does not limit its specific value, and those skilled in the art may set it according to specific design requirements and application requirements. For example, the first crop size may be 127 pixels by 127 pixels, 255 pixels by 255 pixels, and so on. In view of the quality and efficiency of processing the image to be detected, it may be preferable to set the first crop size to 127 pixels by 127 pixels. The image to be detected may then be cropped according to the first crop size to obtain a plurality of sub-detection images, where the size of each obtained sub-detection image is 127 pixels by 127 pixels.
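To make S1021 and S1022 concrete, the tiling step can be sketched as follows. This is a minimal sketch assuming non-overlapping tiles and zero-padded edges; the function name, the NumPy array representation, and the padding policy are illustrative assumptions, not details given in the text.

```python
import numpy as np

def crop_into_tiles(image, tile_size=127):
    """Split an (H, W, C) image into non-overlapping square tiles.

    Edge tiles are zero-padded to the full tile size so that every
    sub-detection image has the same shape (tile_size x tile_size).
    """
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h, tile_size):
        for left in range(0, w, tile_size):
            tile = image[top:top + tile_size, left:left + tile_size]
            pad_h = tile_size - tile.shape[0]
            pad_w = tile_size - tile.shape[1]
            if pad_h or pad_w:
                # Zero-pad tiles that hang over the right/bottom edge.
                tile = np.pad(tile, ((0, pad_h), (0, pad_w), (0, 0)))
            tiles.append(tile)
    return tiles

# A 300 x 260 image yields a 3 x 3 grid of 127 x 127 sub-detection images.
tiles = crop_into_tiles(np.zeros((300, 260, 3), dtype=np.uint8))
print(len(tiles), tiles[0].shape)  # 9 (127, 127, 3)
```

The same routine, called with a different tile size, would serve for the second crop size used on the template image later in the method.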
S103: for the template image, a plurality of sub-template images corresponding to the sub-detection images are obtained.
The template image is used for quality detection of the image to be detected, so that the template image should include all image information in the image to be detected, and then after the sub-detection image is obtained, the template image can be processed based on the sub-detection image, so that a sub-template image corresponding to the sub-detection image is obtained in the template image, and the size of the sub-template image can be the same as that of the sub-detection image, so that the sub-detection image can be analyzed and processed based on the sub-template image.
For example: the object to be detected is a circuit board, the template object is a standard circuit board, the circuit board and the standard circuit board are respectively shot, an image to be detected and a template image can be obtained, wherein the image to be detected can include: circuit structure 1, circuit structure 2, circuit structure 3 and circuit structure 4, then cut the image of treating to detect according to first size of cutting, obtain three sub-detection image, the image information that includes in the three sub-detection image is respectively: circuit configuration 1, circuit configuration 2, circuit configuration 4, and circuit configuration 3. Based on the above-mentioned result of cutting, because be provided with on the standard circuit board with wait to detect the circuit structure in the image corresponding: standard structure 1, standard structure 2, standard structure 3 and standard structure 4. After the sub-detection image is obtained, corresponding sub-template images can be obtained in the template image based on the sub-detection image, that is, three sub-template images can be obtained, wherein image information included in the three sub-template images is respectively: the standard structure 1, the standard structure 2, the standard structure 4 and the standard structure 3 are respectively provided with a sub-template image corresponding to each sub-detection image, so that the sub-detection images can be subjected to quality detection operation based on the sub-template images.
S104: obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color features, texture features, shape features, position relation features, background features.
After the sub-template image and the sub-detection image are acquired, analyzing the corresponding sub-detection image based on the sub-template image, specifically, analyzing and comparing the sub-template image and the sub-detection image based on a preset image feature, where the image feature may include at least one of: color feature, texture feature, shape feature, position relation feature and background feature; sub-detection results corresponding to the sub-detection images may thus be obtained, and may include: and whether the area corresponding to the sub detection image has the defects, the types of the defects, the shapes and the sizes of the defects, the positions of the defects and the like is judged.
S105: and determining a quality detection result corresponding to the object to be detected according to the plurality of sub-detection results corresponding to the sub-detection images.
After acquiring a plurality of sub-detection results corresponding to the sub-detection images, the acquired plurality of sub-detection results may be subjected to fusion processing, so that a quality detection result corresponding to the object to be detected may be obtained, where the quality detection result may include: whether the object to be detected has defects, the types of the defects, the shapes and the sizes of the defects, the positions of the defects and the like.
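The fusion step of S105 can be sketched as follows: each sub-detection result reports flaw positions in tile-local coordinates, and fusion maps them back into the coordinates of the full image. The tile layout, the result dictionaries, and the function name are illustrative assumptions, not details given in the text.

```python
def fuse_results(sub_results, tiles_per_row, tile_size=127):
    """Fuse per-tile sub-detection results into one quality detection
    result, mapping tile-local flaw positions to full-image coordinates."""
    flaws = []
    for idx, sub in enumerate(sub_results):
        row, col = divmod(idx, tiles_per_row)
        for flaw in sub:
            flaws.append({"class": flaw["class"],
                          "x": col * tile_size + flaw["x"],
                          "y": row * tile_size + flaw["y"]})
    return {"defective": bool(flaws), "flaws": flaws}

# A scratch at (5, 7) inside the second tile of the first row maps to
# global position (132, 7).
subs = [[], [{"class": "scratch", "x": 5, "y": 7}], []]
print(fuse_results(subs, tiles_per_row=3))
```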
According to the data processing method provided by this embodiment, the image to be detected and the template image are acquired, the image to be detected is cropped into a plurality of sub-detection images, a plurality of sub-template images corresponding to the sub-detection images are obtained from the template image, and sub-detection results corresponding to the sub-detection images are obtained based on the image features of the sub-template images and the image features of the sub-detection images, where the image features include at least one of color features, texture features, shape features, position relation features, and background features. This ensures that the sub-detection results are obtained accurately and reliably, and the quality detection result corresponding to the object to be detected can then be determined based on the sub-detection results. The quality of the object to be detected is thus detected automatically, which not only ensures the accuracy and efficiency of quality detection and helps to improve the product quality of the object to be detected, but also reduces the detection cost and labor cost, effectively improves the practicability of the method, and facilitates market popularization and application.
Fig. 2 is a second flowchart of a data processing method according to an embodiment of the present invention; fig. 3 is a flowchart of acquiring difference parameters between an image to be detected and a template image according to an embodiment of the present invention. On the basis of the foregoing embodiment, and with continued reference to fig. 2 to 3, in the process of acquiring the image to be detected and the template image, a difference may exist between the two images due to the influence of the image acquisition device. In this case, to ensure the accuracy and reliability of data processing, before the image to be detected is cropped to obtain a plurality of sub-detection images, the method in this embodiment may further include:
S001: Acquire the difference parameters between the image to be detected and the template image.
After the image to be detected and the template image are acquired, they may be analyzed and compared to determine the difference parameters between them. Specifically, acquiring the difference parameters between the image to be detected and the template image may include:
S0011: Acquire a first image feature corresponding to the image to be detected and a second image feature corresponding to the template image.
The first image feature includes at least one of the following: color features, texture features, shape features, position relation features, and background features; correspondingly, the second image feature includes at least one of the following: color features, texture features, shape features, position relation features, and background features.
This embodiment does not limit the specific implementation of acquiring the first image feature and the second image feature. For example, an image may be scanned by an RGBD sensor to obtain scan data, and the scan data may then be analyzed to obtain the corresponding image features. Of course, those skilled in the art may also adopt other methods according to specific design requirements and application requirements, as long as the accuracy and reliability of acquiring the first image feature and the second image feature can be ensured, which is not described in detail herein.
S0012: Analyze the first image feature and the second image feature by using a convolutional neural network to obtain the difference parameters between the image to be detected and the template image.
The convolutional neural network may be trained on historical images (historical detection images and historical template images) and the difference parameters corresponding to those historical images, and is used for identifying the difference information between two images. After the first image feature and the second image feature are obtained, they may be analyzed by the convolutional neural network to obtain the difference parameters between the image to be detected and the template image, where the difference parameters may include at least one of the following: rotation angle, horizontal translation vector, vertical translation vector, and scaling scale. It can be understood that the value range of each difference parameter is greater than or equal to 0, and a difference parameter of 0 indicates that the image to be detected and the template image are consistent in the dimension corresponding to that difference parameter.
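As a toy stand-in for the network in S0012 (the text specifies a trained convolutional neural network; a single linear head over concatenated features is used here purely for illustration), the mapping from an image-feature pair to the four difference parameters can be sketched as:

```python
import numpy as np

def predict_difference(f_detected, f_template, weights, bias):
    """Regress (rotation angle, horizontal shift, vertical shift, scale)
    from a pair of image-feature vectors with one linear layer."""
    x = np.concatenate([f_detected, f_template])
    theta, tx, ty, scale = weights @ x + bias
    return {"theta": float(theta), "tx": float(tx),
            "ty": float(ty), "scale": float(scale)}

# Identical features fed to an all-zero head give all-zero difference
# parameters, i.e. the images agree in every dimension, matching the
# convention stated in the text.
f = np.array([0.1, 0.4])
params = predict_difference(f, f, np.zeros((4, 4)), np.zeros(4))
print(params)  # {'theta': 0.0, 'tx': 0.0, 'ty': 0.0, 'scale': 0.0}
```

In practice the `weights` and `bias` would be replaced by the convolutional network's learned parameters; the shapes and names here are assumptions.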
S002: and adjusting the image to be detected according to the difference parameters so as to align the image to be detected with the template image.
After the difference parameters are acquired, the image to be detected can be adjusted based on the difference parameters, so that the image to be detected and the template image can be aligned. For example: the difference parameters between the image to be detected and the template image are obtained as follows: the rotation angle (theta), the translation vector (x, y) and the scaling scale(s), therefore, the image to be detected can be adjusted in rotation according to the rotation angle, the image to be detected can be subjected to translation operation (including horizontal translation operation and vertical translation operation) in a plane according to the translation vector, and the image to be detected can be scaled according to the scaling scale. When the difference parameter is 0, the image to be detected may not be subjected to any operation in the dimension of the difference parameter, for example: when the rotation angle is 0 °, rotation adjustment of the image to be detected is not required.
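The adjustment in S002 amounts to applying an affine transform built from the difference parameters. A sketch of composing such a transform is shown below; the parameter conventions (angle in degrees, scale factor of 1.0 meaning no scaling) are illustrative assumptions, and in practice the resulting 2x3 matrix could be applied to the image with a routine such as OpenCV's `cv2.warpAffine`.

```python
import numpy as np

def alignment_matrix(theta_deg, tx, ty, scale=1.0):
    """2x3 affine matrix that rotates by theta, scales, then translates."""
    t = np.deg2rad(theta_deg)
    c, s = np.cos(t) * scale, np.sin(t) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

# Zero rotation and translation with unit scale leaves the image
# untouched, matching the text's note that a zero difference parameter
# requires no operation in that dimension.
m = alignment_matrix(0.0, 0.0, 0.0)
print(np.allclose(m, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))  # True
```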
In this embodiment, before the image to be detected is cropped to obtain a plurality of sub-detection images, the difference parameters between the image to be detected and the template image are acquired, and the image to be detected is then adjusted according to the difference parameters so that the two images are aligned. This facilitates the analysis of the image to be detected and the template image, further improves the accuracy and reliability of that analysis, and ensures the accuracy of quality detection.
Fig. 5 is a flowchart of obtaining, for a template image, a plurality of sub-template images corresponding to sub-detection images according to an embodiment of the present invention; fig. 6 is a flowchart of adjusting an intermediate image to obtain a plurality of sub-template images corresponding to sub-detection images according to an embodiment of the present invention. On the basis of the foregoing embodiment, and referring to fig. 5 to 6, this embodiment does not limit the specific implementation of obtaining the plurality of sub-template images corresponding to the sub-detection images, and those skilled in the art may set it according to specific design requirements and application requirements. Preferably, obtaining, for the template image, the plurality of sub-template images corresponding to the sub-detection images in this embodiment may include:
S1031: Acquire a second crop size corresponding to the template image, where the second crop size is larger than the first crop size.
The second crop size may be preset; this embodiment does not limit its specific value, and those skilled in the art may set it according to specific design requirements and application requirements, as long as the second crop size is larger than the first crop size. For example, the second crop size may be 255 pixels by 255 pixels, 512 pixels by 512 pixels, and so on.
S1032: Crop the template image according to the second crop size to obtain a plurality of intermediate images corresponding to the sub-detection images.
In view of the quality and efficiency of processing the image to be detected, the second crop size may be set to 255 pixels by 255 pixels, and correspondingly the first crop size may be 127 pixels by 127 pixels. The template image may then be cropped according to the second crop size to obtain the intermediate images corresponding to the plurality of sub-detection images, where the size of each obtained intermediate image is 255 pixels by 255 pixels. Because the second crop size is larger than the first crop size, it is ensured that the intermediate image includes the content of the sub-detection image, so that the sub-detection image can be analyzed based on the intermediate image.
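One way to realize S1032 is to take, for each sub-detection tile, the larger template window centered on the same location, so that the intermediate image still contains the tile's content under small misalignments. The centering and border-clamping policy below are illustrative assumptions, not details given in the text.

```python
import numpy as np

def template_window(template, tile_top, tile_left,
                    tile_size=127, window_size=255):
    """Crop a window_size patch of the template around a tile position."""
    h, w = template.shape[:2]
    margin = (window_size - tile_size) // 2
    # Clamp so the window stays inside the template image.
    top = min(max(tile_top - margin, 0), max(h - window_size, 0))
    left = min(max(tile_left - margin, 0), max(w - window_size, 0))
    return template[top:top + window_size, left:left + window_size]

template = np.zeros((600, 600, 3), dtype=np.uint8)
window = template_window(template, tile_top=127, tile_left=0)
print(window.shape)  # (255, 255, 3)
```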
S1033: and adjusting the intermediate image to obtain a plurality of sub-template images corresponding to the sub-detection images, wherein the sizes of the sub-template images are consistent with the sizes of the sub-detection images.
After the intermediate image is acquired, since the size of the intermediate image is larger than that of the sub inspection image, in order to improve the efficiency of image analysis contrast, the intermediate image may be adjusted based on the sub inspection image, so that the sub template image corresponding to the sub inspection image may be obtained. Specifically, adjusting the intermediate image to obtain a plurality of sub-template images corresponding to the sub-detection images may include:
s10331: and acquiring the difference parameters between the image to be detected and the template image.
S10332: and adjusting the intermediate image according to the difference parameters to obtain a sub-template image corresponding to the sub-detection image.
The difference parameters between the image to be detected and the template image may include: the rotation angle (θ), the translation vector (x, y), and the scaling scale (s). The specific implementation of acquiring the difference parameters between the image to be detected and the template image in this embodiment may be the same as that in the foregoing embodiment; reference may be made to the above statements, which are not repeated herein.
After the difference parameters are obtained, the intermediate image is adjusted according to them, so that the sub-template image corresponding to the sub-detection image is obtained, and the size of the sub-template image is the same as that of the sub-detection image. For example, the difference parameters between the image to be detected and the template image are the rotation angle (θ), the translation vector (x, y), and the scaling scale (s); the intermediate image may then be rotated according to the rotation angle, translated in the plane according to the translation vector, and scaled according to the scaling scale. After this series of adjustments, the sub-template image corresponding to the sub-detection image is obtained. When a difference parameter is 0, no operation needs to be performed on the intermediate image in the dimension of that parameter; for example, when the rotation angle is 0°, no rotation adjustment of the intermediate image is required.
In this embodiment, the template image is cropped according to the second crop size to obtain a plurality of intermediate images corresponding to the sub-detection images, and the intermediate images are then adjusted to obtain a plurality of sub-template images corresponding to the sub-detection images, where the size of each sub-template image is consistent with the size of the sub-detection image. This effectively ensures that the sub-template images are acquired accurately and reliably, and further improves the stability and reliability of the method.
Fig. 7 is a flowchart of obtaining sub-detection results corresponding to sub-detection images based on image features of sub-template images and image features of sub-detection images according to an embodiment of the present invention. On the basis of the foregoing embodiment, and referring to fig. 7, this embodiment does not limit the specific implementation of obtaining the sub-detection results based on the image features of the sub-template images and the image features of the sub-detection images, and those skilled in the art may set it according to specific design requirements and application requirements. Preferably, obtaining the sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image in this embodiment may include:
s1041: and acquiring a third image characteristic corresponding to the sub-detection image and a fourth image characteristic corresponding to the sub-template image.
After acquiring the plurality of sub-detection images, analyzing the corresponding sub-detection images based on the sub-template images, specifically, acquiring the third image features corresponding to the sub-detection images and the fourth image features corresponding to the sub-template images may include:
s10411: acquiring a third image feature corresponding to the sub-detection image by using a convolutional neural network, wherein the third image feature comprises at least one of the following: color features, texture features, shape features, position relation features, background features.
S10412: acquiring a fourth image feature corresponding to the sub-template image by using a convolutional neural network, wherein the fourth image feature comprises at least one of the following: color features, texture features, shape features, position relation features, background features.
S1042: and analyzing the third image characteristic and the fourth image characteristic to obtain a sub-detection result corresponding to the sub-detection image.
It is to be noted that the convolutional neural network in the present embodiment is used to identify image features in an image; after the third image feature and the fourth image feature are acquired, the third image feature and the fourth image feature may be subjected to analysis processing to obtain sub-detection results corresponding to the sub-detection images. Specifically, referring to fig. 8, in the present embodiment, the analyzing the third image feature and the fourth image feature to obtain the sub-detection result corresponding to the sub-detection image may include:
s10421: a first feature vector corresponding to the third image feature and a second feature vector corresponding to the fourth image feature are determined.
S10422: and performing fusion processing on the first feature vector and the second feature vector to obtain a fusion feature vector.
After the first feature vector and the second feature vector are obtained, the first feature vector and the second feature vector can be directly merged, so that a fusion feature vector integrating information of the first feature vector and information of the second feature vector can be obtained.
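The direct merging in S10422 can be sketched as a plain concatenation of the two feature vectors; a real system might add further branches (for example a feature difference), which the text does not specify.

```python
import numpy as np

def fuse(f_detection, f_template):
    """Directly merge the two feature vectors into one fusion vector."""
    return np.concatenate([f_detection, f_template])

fused = fuse(np.array([0.2, 0.8, 0.1]), np.array([0.3, 0.7, 0.0]))
print(fused.shape)  # (6,)
```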
S10423: and analyzing and processing the fusion characteristic vector to obtain a sub-detection result corresponding to the sub-detection image.
After the fusion feature vector is acquired, because the fusion feature vector includes the image feature information of the sub-detection image and the image feature information of the sub-template image, the sub-detection result corresponding to the sub-detection image can be directly acquired through the fusion feature vector. Specifically, referring to fig. 9, in this embodiment, analyzing the fusion feature vector to obtain sub-detection results corresponding to the sub-detection images may include:
s104231: and acquiring a detector for processing the fusion feature vector, wherein the detector is preset with a plurality of detection scales.
S104232: analyzing and processing the fusion feature vector by using a detector to obtain a sub-detection result corresponding to the sub-detection image, wherein the sub-detection result comprises at least one of the following components: flaw location, flaw classification, flaw size, flaw shape.
The detector is preset and used for analyzing and processing the fusion characteristic vector and obtaining a detection result corresponding to the fusion characteristic vector. It is noted that the detector has a plurality of detection scales. For example: after the fused feature vector is obtained, the fused feature vector may be analyzed by a detector, and the process includes: analyzing and processing the fusion characteristic vector by using a first analysis scale, analyzing and processing the fusion characteristic vector by using a second analysis scale, and analyzing and processing the fusion characteristic vector by using a third analysis scale, wherein the first analysis scale can be an analysis strategy corresponding to a normal display size, the second analysis scale can be an analysis strategy corresponding to a 2-time display size, the third analysis scale can be an analysis strategy corresponding to a 3-time display size, and the like; thus, scale detection results of three different analysis scales can be obtained for the same fusion feature vector; and then combining the different size detection results to determine a sub-detection result corresponding to the sub-detection image, wherein the sub-detection result may comprise at least one of the following: flaw location, flaw classification, flaw size, flaw shape.
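The multi-scale analysis of S104231 and S104232 can be sketched with a toy detector: the same fusion feature vector is analyzed at each preset scale, and the per-scale results are merged. The thresholding stand-in and the merge-by-union policy below are illustrative assumptions, not the patent's detector.

```python
def detect_at_scale(fused_vector, scale):
    """Toy stand-in for one analysis scale: flag feature indices whose
    response exceeds a scale-dependent threshold as flaw locations."""
    threshold = 0.5 / scale  # larger display scales catch finer flaws
    return {i for i, v in enumerate(fused_vector) if v > threshold}

def multi_scale_detect(fused_vector, scales=(1, 2, 3)):
    """Merge the per-scale detections into one sub-detection result."""
    flaws = set()
    for scale in scales:
        flaws |= detect_at_scale(fused_vector, scale)
    return sorted(flaws)

# The weak response at index 1 is only picked up at the finer scales.
print(multi_scale_detect([0.6, 0.3, 0.1]))  # [0, 1]
```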
In this embodiment, the fusion feature vector including the image features of the sub-detection image and the image features of the sub-template image is obtained, and the fusion feature vector is then analyzed by the detector with a plurality of scales. Flaws at different scales can thus be attended to effectively, and the sub-detection result corresponding to the sub-detection image can be obtained based on the detection results at the different scales, which effectively improves the recall rate and precision of flaw detection and further improves the accuracy and reliability of the method.
Fig. 10 is a flowchart of a data processing method according to an embodiment of the present invention; on the basis of the foregoing embodiment, with continuing reference to fig. 10, after determining the quality detection result corresponding to the object to be detected according to the plurality of sub-detection results corresponding to the sub-detection images, the method in this embodiment may further include:
s201: feedback information is obtained for the quality detection result.
S202: and updating and adjusting the detector according to the feedback information.
After analyzing and processing the fusion feature vector by using the detector, a quality detection result may be obtained, and the user may input feedback information for the obtained quality detection result, where the feedback information may be actual quality information corresponding to the object to be detected, for example: the feedback information can reflect the detection accuracy of the detector, and in order to improve the processing accuracy of the detector, the detection parameters corresponding to the detector can be updated based on the feedback information, so that a more accurate quality detection result can be obtained by using the detector next time.
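A minimal sketch of the feedback update in S201 and S202 follows, with the detector reduced to a single decision threshold; the update rule (nudge the threshold away from false alarms and misses) is an illustrative assumption, not the patent's training procedure.

```python
def update_threshold(threshold, predicted_flaw, actual_flaw, step=0.05):
    """Nudge the detector's decision threshold from one feedback item."""
    if predicted_flaw and not actual_flaw:   # false alarm: be stricter
        return threshold + step
    if actual_flaw and not predicted_flaw:   # missed flaw: be more sensitive
        return threshold - step
    return threshold                         # correct: leave unchanged

t = update_threshold(0.5, predicted_flaw=True, actual_flaw=False)
print(t)  # 0.55
```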
By continuously updating and adjusting the detector, the accuracy of the detector's data processing is improved, its adaptability is enhanced, and it generalizes well, which further improves the practicability of the method.
Fig. 11 is a fourth flowchart of a data processing method according to an embodiment of the present invention; on the basis of the foregoing embodiment, with continued reference to fig. 11, after determining the quality detection result corresponding to the object to be detected according to the plurality of sub-detection results corresponding to the sub-detection images, the method in this embodiment further includes:
s301: and acquiring an evaluation rule aiming at the quality detection result.
S302: and evaluating the quality detection result according to the evaluation rule to obtain evaluation information corresponding to the quality detection result.
The evaluation rule may be preset for the object to be detected, for example, when the object to be detected is a circuit board, the evaluation rule may be: for ink marks, scratches, etc. on the pads, traces, etc., the defects are considered to be defects regardless of their size, and the defects may correspond to a predetermined defect score. Generally speaking, the evaluation rule is used for directly evaluating the quality of the circuit board based on the quality detection result, so that a user can directly judge whether the circuit board meets the design requirement, the application requirement or the quality requirement according to the evaluation information, and can correct or adjust the circuit board in time when the design requirement or the application requirement is not met, and the like.
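The rule-based evaluation of S301 and S302 can be sketched as a per-class scoring table applied to the quality detection result; the specific flaw classes, score values, and pass/fail threshold below are illustrative assumptions.

```python
# Per-class flaw scores; the classes and values are illustrative assumptions.
FLAW_SCORES = {"ink_mark": 10, "scratch": 10, "missing_trace": 30}

def evaluate(quality_result, fail_threshold=25):
    """Score every detected flaw regardless of its size, as the
    circuit-board rule in the text prescribes, and compare the total
    against a preset pass/fail threshold."""
    total = sum(FLAW_SCORES.get(flaw["class"], 0)
                for flaw in quality_result["flaws"])
    return {"score": total, "passed": total < fail_threshold}

result = evaluate({"flaws": [{"class": "scratch"}, {"class": "ink_mark"}]})
print(result)  # {'score': 20, 'passed': True}
```

Because the table and threshold are plain data, the rule can be configured per customer, matching the individualized configuration described in the text.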
In this embodiment, the quality detection result is evaluated according to the evaluation rule to obtain the evaluation information corresponding to the quality detection result. The evaluation rule can be configured individually based on the different quality requirements put forward by customers, which realizes flexible evaluation of the object to be detected based on the quality detection result and further improves the convenience and reliability of the method.
Fig. 12 is a flowchart of a circuit board detection method according to an embodiment of the present invention. Referring to fig. 12, this embodiment provides a detection method for a circuit board. The execution body of the detection method may be a detection device, and the detection device may be implemented as software or as a combination of software and hardware. Specifically, the method may include:
S401: Acquire an image to be detected and a template image corresponding to the circuit board.
S402: Crop the image to be detected to obtain a plurality of sub-detection images.
S403: for the template image, a plurality of sub-template images corresponding to the sub-detection images are obtained.
S404: obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color features, texture features, shape features, position relation features, background features.
S405: and determining a defect detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
The implementation process and effect of the steps in this embodiment are similar to those of steps S101 to S105 in the foregoing embodiment; reference may be made to the above statements, which are not repeated herein.
It can be understood that the detection method in this embodiment may further include the methods in the embodiments shown in fig. 2 to 11; for the parts of this embodiment that are not described in detail, and for the implementation process and technical effect of the technical solution, reference may be made to the related descriptions of the embodiments shown in fig. 2 to 11, which are not repeated herein.
According to the circuit board detection method provided by this embodiment, the image to be detected and the template image are acquired, the image to be detected is cropped into a plurality of sub-detection images, a plurality of sub-template images corresponding to the sub-detection images are obtained from the template image, and sub-detection results corresponding to the sub-detection images are then obtained based on the image features of the sub-template images and the image features of the sub-detection images, where the image features include at least one of the following: color features, texture features, shape features, position relation features, and background features. This effectively ensures that the sub-detection results are obtained accurately and reliably, and the flaw detection result corresponding to the circuit board can then be determined based on the sub-detection results. Flaw detection of the circuit board is thus realized automatically, which not only ensures the accuracy and efficiency of flaw detection and improves the product quality of the circuit board, but also reduces the detection cost and labor cost, effectively improves the practicability of the method, and facilitates market popularization and application.
Fig. 13 is a flowchart of a cloth detecting method according to an embodiment of the present invention; referring to fig. 13, the embodiment provides a cloth detecting method, and an execution main body of the cloth detecting method may be a detecting device, and the detecting device may be implemented as software, or a combination of software and hardware. Specifically, the method may include:
s501: and acquiring an image to be detected and a template image corresponding to the cloth.
S502: and cutting the image to be detected to obtain a plurality of sub-detection images.
S503: for the template image, a plurality of sub-template images corresponding to the sub-detection images are obtained.
S504: obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color features, texture features, shape features, position relation features, background features.
S505: and determining a flaw detection result corresponding to the cloth according to the plurality of sub-detection results corresponding to the sub-detection images.
The implementation process and effect of the steps in this embodiment are similar to those of steps S101 to S105 in the above embodiment; reference may be made to the above statements, and details are not repeated here.
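Steps S501 to S505 (shared with the circuit-board variant) amount to a tile-and-compare loop, which can be sketched as follows. This is an illustrative skeleton only, not the patent's implementation: images are plain 2-D lists, and the learned feature comparison of S504 is replaced by a hypothetical mean-absolute-difference score.

```python
def crop_tiles(img, size):
    """S502/S503 stand-in: split a 2-D pixel grid into size x size tiles, row-major."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def tile_score(z, x):
    """S504 stand-in: mean absolute pixel difference between a sub-detection
    tile Z and its sub-template tile X (the patent compares learned features)."""
    n = len(z) * len(z[0])
    return sum(abs(pz - px) for rz, rx in zip(z, x)
               for pz, px in zip(rz, rx)) / n

def detect(image, template, size, threshold):
    """S501-S505 skeleton: score each tile pair, collect indices of defective tiles."""
    pairs = zip(crop_tiles(image, size), crop_tiles(template, size))
    return [i for i, (z, x) in enumerate(pairs) if tile_score(z, x) > threshold]
```

For a 4 × 4 image whose upper-right 2 × 2 quadrant differs from an all-zero template, `detect(image, template, size=2, threshold=0.5)` flags tile index 1 (tiles are numbered row-major).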
It is understood that the detection method in this embodiment may further include the methods in the embodiments shown in fig. 2 to fig. 11; for parts of this embodiment not described in detail, reference may be made to the related descriptions of those embodiments. For the implementation process and technical effect of this technical solution, refer to the descriptions in the embodiments shown in fig. 2 to fig. 11, which are not repeated here.
In the cloth detection method provided by this embodiment, an image to be detected and a template image are obtained, the image to be detected is cropped into a plurality of sub-detection images, a plurality of sub-template images corresponding to the sub-detection images are obtained from the template image, and sub-detection results corresponding to the sub-detection images are then obtained based on the image features of the sub-template images and the image features of the sub-detection images, where the image features include at least one of the following: color features, texture features, shape features, position relation features, and background features. This effectively ensures that the sub-detection results are obtained accurately and reliably. A flaw detection result corresponding to the cloth can then be determined from the sub-detection results, so that flaw detection of the cloth is performed automatically, the accuracy and efficiency of flaw detection are ensured, the product quality of the cloth is improved, and detection and labor costs are reduced, which effectively improves the practicality of the method and facilitates its popularization and application in the market.
In a specific application, an embodiment of the present application provides a data processing method; to explain its specific implementation process, a circuit board is taken as an example. During production, a circuit board is prone to various defects, such as copper leakage, ink marks, scratches, and missing characters, caused by equipment, process operation, and external factors. To improve product quality, enhance competitiveness, and increase added value, manufacturers and designers continually try to raise the quality of their products. Quality inspection is therefore an important step, and enterprises currently face serious challenges in quality control: manual inspection causes eye fatigue, suffers from high staff turnover, and is affected by human emotion, so the influence of manual intervention should be avoided as far as possible during production.
To avoid the above drawbacks, an embodiment of the present application provides a data processing method that can automatically detect flaws in the appearance of a PCB, perform operations such as locating flaw positions, classifying flaw types, and calculating flaw shapes and areas, and can achieve a flaw recall rate of 99.9% with a precision of 90%, thereby helping enterprises reduce headcount and production costs while improving inspection speed, production efficiency, detection rate, and product quality, and reducing customer complaints. The principle of the method is to compare the PCB for consistency against a pre-designed normal PCB, so that every flaw point inconsistent with the surface of the normal PCB can be identified and recorded. Specifically, referring to fig. 14, the method may include the following three processes:
(I) Matching process
First, an image to be detected corresponding to the PCB under inspection and a template image are acquired by a CCD acquisition device, where the template image corresponds to a template circuit board used for flaw detection of the PCB under inspection. Because the CCD acquisition device introduces certain errors while acquiring the image to be detected and the template image, the template image and the image to be detected differ somewhat in translation, scaling, rotation angle, and so on. Therefore, difference parameters between the template image and the image to be detected can first be acquired, and the image to be detected and the template image can then be aligned based on these difference parameters.
Specifically, referring to fig. 15, acquiring the difference parameters between the template image and the image to be detected may include the following steps: the image to be detected is cropped into a plurality of sub-detection images Z using a preset first cropping size (which may be 127 pixels by 127 pixels), so that each sub-detection image Z may be 127 pixels by 127 pixels; the template image is cropped into a plurality of sub-template images X using a preset second cropping size (which may be 255 pixels by 255 pixels), so that each sub-template image X may be 255 pixels by 255 pixels. The second cropping size is then roughly twice the first cropping size, which ensures that each cropped sub-template image X can contain the information of its sub-detection image Z. Next, feature extraction and normalization are performed on the sub-detection image Z using a deep neural network to obtain high-dimensional first image features (6 × 6 × 128), and on the sub-template image X to obtain high-dimensional second image features (22 × 22 × 128), where each of the first and second image features may include at least one of the following: color features, texture features, shape features, position relation features, and background features. The first and second image features are then analyzed by a convolutional neural network to obtain a 4 × 1 difference parameter, which may include: the rotation angle (θ), the translation vector (x, y), and the scale factor (s).
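In the patent's scheme the difference parameters are regressed by a convolutional network; as a classical stand-in that also shows why each sub-template image X is cropped at twice the size of its sub-detection image Z, the sketch below recovers just the translation component by sliding Z over the larger X and minimizing the sum of absolute differences. The rotation and scale components would require the learned regressor.

```python
import numpy as np

def best_offset(sub_z, sub_x):
    """Classical stand-in for the learned 4-dim regressor: slide the small
    detection tile Z over the larger template tile X and return the (dy, dx)
    translation with the minimal sum of absolute differences (SAD)."""
    zh, zw = sub_z.shape
    xh, xw = sub_x.shape
    best, best_sad = None, float("inf")
    for dy in range(xh - zh + 1):          # every vertical placement of Z in X
        for dx in range(xw - zw + 1):      # every horizontal placement
            sad = np.abs(sub_x[dy:dy + zh, dx:dx + zw] - sub_z).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```

Because X is twice the size of Z, the search window covers any misalignment of up to one full tile width in each direction.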
After the difference parameters are obtained, the sub-template image may be normalized and cropped according to them so that it matches the sub-detection image at 127 pixels by 127 pixels, and the flaw detection operation may then be performed on the processed sub-template image and sub-detection image.
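As an illustration of how the four estimated parameters define the alignment, the sketch below builds the corresponding 2 × 3 similarity transform and applies it to coordinate points. The formulation (rotation in radians, row-vector points) is an assumption for illustration; a real pipeline would hand such a matrix to an image-warping routine (e.g., OpenCV's warpAffine) to produce the normalized 127 × 127 crop.

```python
import numpy as np

def similarity_matrix(theta, tx, ty, s):
    """Build the 2x3 similarity transform encoded by the 4-dim difference
    parameter: rotation theta (radians), translation (tx, ty), scale s."""
    c, si = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -si, tx],
                     [si,  c, ty]])

def warp_points(pts, m):
    """Apply the transform to an (N, 2) array of (x, y) points."""
    pts = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    return pts @ m.T
```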
(II) Detection process
Referring to fig. 16, the detection process may include feature extraction and feature fusion on the sub-template image and the sub-detection image. Specifically, a convolutional neural network may be used to extract features from the sub-template image and the sub-detection image, yielding a first feature vector and a second feature vector respectively, and a feature fusion operation is then performed on the two vectors to obtain a fused feature vector. The fusion operation may be element-wise (dot product) multiplication, addition, or another fusion operation; this embodiment of the application does not limit the feature fusion operation, and those skilled in the art may select other fusion modes according to specific application requirements, which are not detailed here.
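The two fusion operators named above can be stated in a few lines. This is a generic sketch, not the patent's network code; `f1` and `f2` stand for the first and second feature vectors after extraction, and the choice of operator is an empirical one.

```python
import numpy as np

def fuse(f1, f2, mode="product"):
    """Fuse two equal-shape feature vectors, as in the detection stage:
    'product' is element-wise (Hadamard) multiplication, 'sum' is
    element-wise addition; other fusion modes could be substituted."""
    if mode == "product":
        return f1 * f2
    if mode == "sum":
        return f1 + f2
    raise ValueError(f"unknown fusion mode: {mode}")
```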
After the fused feature vector is obtained, multi-scale flaw detection may be performed on it using a multi-scale detector (for example, the detector in fig. 16 has three detection scales), and flaw positions may be labeled and displayed. This improves flaw recall and completes the operations of locating flaw positions, classifying flaw types, and determining areas and shapes.
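One simple way to picture multi-scale detection is to pool a per-cell defect-score map at several window sizes and threshold each pooled map, so that small sharp defects and large diffuse ones both trigger a response. The routine below is an illustrative stand-in for the three-scale detector of fig. 16, not its actual learned architecture.

```python
import numpy as np

def multiscale_detect(score_map, scales=(1, 2, 4), threshold=0.8):
    """Threshold the score map after mean-pooling at each scale; returns
    (scale, row, col) triples for every pooled cell above the threshold."""
    hits = []
    for s in scales:
        h, w = score_map.shape[0] // s, score_map.shape[1] // s
        # mean-pool into non-overlapping s x s blocks
        pooled = score_map[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        for r, c in zip(*np.where(pooled > threshold)):
            hits.append((s, int(r), int(c)))
    return hits
```

A single hot pixel only survives the finest scale, while a uniformly defective region responds at every scale.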
It can be understood that, when a detector is used to perform flaw detection on the fused feature vector, different analysis and identification strategies can yield multiple detection results for the same sub-detection image; that is, one image to be detected may correspond to multiple detection results. For example, several different detection results may be determined for the same object under inspection, covering information such as flaw type, flaw position, flaw size, and flaw shape. Taking the flaw type as an example, different thresholds are preset for different flaw types; when the score of a flaw type for a certain region of the object exceeds the corresponding threshold, the region can be assigned that flaw type. For instance, if the score of a flaw exceeds both the first threshold corresponding to type A and the second threshold corresponding to type B, the object can be determined to have two detection results: a type-A flaw and a type-B flaw.
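The per-type thresholding described above can be sketched as follows; the threshold values and type names are hypothetical placeholders, since the patent only states that each flaw type has its own preset threshold and that one region may exceed several of them.

```python
# Hypothetical per-type score thresholds (the patent does not fix values).
THRESHOLDS = {"A": 0.6, "B": 0.7}

def classify(scores, thresholds=THRESHOLDS):
    """Return every flaw type whose score exceeds its own threshold, so a
    single region may yield multiple detection results at once."""
    return sorted(t for t, th in thresholds.items() if scores.get(t, 0.0) > th)
```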
On this basis, some false detections may occur, for example: a flawed region or position is missed, a flaw-free region or position is reported, or a type-A flaw is detected as a type-B flaw. The detector may then be updated and adjusted to improve its detection accuracy. Specifically, feedback information input by a user based on the detection result can be obtained; this feedback may be the standard detection result for the object under inspection, including flaw type, flaw position, flaw size, flaw shape, and so on. Detection parameters of the detector can then be updated based on the feedback: the accuracy of each detection result is determined from the feedback, results whose accuracy falls below a preset threshold are marked as false detection data, and the detector is retrained on the false detection data together with the corresponding feedback, which improves recognition accuracy and reduces the probability of errors.
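A minimal sketch of the feedback step, under the assumption that detection results and user feedback are dictionaries of fields (type, position, ...) and that "accuracy" is the fraction of matching fields; the patent does not fix a particular accuracy metric, so this one is a placeholder.

```python
def accuracy(result, truth):
    """Stand-in metric: fraction of fields (type, position, size, shape, ...)
    on which the detector agrees with the user's feedback."""
    keys = set(result) | set(truth)
    return sum(result.get(k) == truth.get(k) for k in keys) / len(keys)

def false_detections(results, feedback, min_acc=0.5):
    """Pair each detection with its feedback; pairs scoring below min_acc
    become the retraining set used to update the detector."""
    return [(r, t) for r, t in zip(results, feedback) if accuracy(r, t) < min_acc]
```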
(III) Evaluation process
After the detection result is obtained, it may be evaluated and scored. The scoring rules can be set freely according to actual needs; for example, the detection result may be evaluated and scored according to the following rules to obtain evaluation information.
(a) Defects such as copper leakage, missing drill holes, pad deformation, and missing characters seriously affect the performance of the PCB; they are first-level defects, and the production line must be stopped immediately for troubleshooting to avoid waste in subsequent production;
(b) ink stains, scratches, and the like on pads and traces are treated as defects regardless of their size;
(c) elsewhere, ink marks, scratches, and similar flaws with an area smaller than 5 pixels by 5 pixels neither affect performance nor noticeably affect appearance, so the board can be regarded as normal; those with an area larger than 5 pixels by 5 pixels are still regarded as defects.
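The three evaluation rules above can be encoded as a small grading function. The type names, field names, and the treatment of the exact 5 × 5 boundary are assumptions for illustration; an actual deployment would configure these per customer, as the evaluation mechanism is meant to be flexible.

```python
# Hypothetical encoding of evaluation rules (a)-(c).
CRITICAL = {"copper_leakage", "missing_drill_hole", "pad_deformation",
            "missing_character"}

def grade(defect):
    """Grade one detected flaw: dict with 'type', 'on_pad_or_line' (bool),
    and 'area' in pixels. Boundary at exactly 5x5 = 25 px is treated as a
    defect here, an arbitrary choice the source leaves open."""
    if defect["type"] in CRITICAL:
        return "first-level"      # (a) stop the line immediately
    if defect["on_pad_or_line"]:
        return "defect"           # (b) size does not matter on pads/traces
    if defect["area"] < 25:       # (c) under 5x5 pixels elsewhere
        return "normal"
    return "defect"
```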
In the data processing method provided by this embodiment of the application, template images are used for comparison, matching regions of the image to be detected are searched and compared, and matching parameters are estimated, so no redesign is needed when the design drawing changes; the method is therefore highly adaptable and easy to popularize. In addition, multi-branch detection with a multi-scale detector can attend to defects of different scales, improving both the recall rate and the precision of flaw detection. The method can also determine detailed information about the detection result, including flaw type, position, area, and shape; it can be configured individually for different customer requirements, and its flexible evaluation mechanism can meet the quality requirements of different customers. Furthermore, it helps enterprises reduce headcount and production costs, improves inspection speed and quality, and raises production efficiency, running 5 to 10 times faster than manual inspection; it improves the detection rate and product quality and reduces customer complaints, further enhancing the practicality of the method and facilitating its popularization and application in the market.
Fig. 17 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention; referring to fig. 17, the present embodiment provides a data processing apparatus, and the processing apparatus can execute the data processing method corresponding to fig. 1. Specifically, the processing device may include:
the first acquisition module 11 is configured to acquire an image to be detected and a template image corresponding to an object to be detected;
the first cutting module 12 is configured to cut an image to be detected to obtain a plurality of sub-detection images;
the first acquisition module 11 is further configured to obtain, for the template image, a plurality of sub-template images corresponding to the sub-detection images;
the first analysis module 13 is configured to obtain a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, where the image features include at least one of the following: color features, texture features, shape features, position relation features, and background features;
the first processing module 14 is configured to determine a quality detection result corresponding to the object to be detected according to a plurality of sub detection results corresponding to the sub detection images.
Wherein, the object to be detected comprises at least one of the following: circuit board, cloth, building material surface.
Optionally, before the image to be detected is cropped to obtain a plurality of sub-detection images, the first acquisition module 11 and the first processing module 14 in this embodiment may be configured to perform the following steps:
the first acquisition module 11 is configured to acquire a difference parameter between an image to be detected and a template image;
and the first processing module 14 is configured to adjust the image to be detected according to the difference parameter, so that the image to be detected is aligned with the template image.
Optionally, when the first acquisition module 11 acquires the difference parameter between the image to be detected and the template image, the first acquisition module 11 may be configured to perform: acquiring a first image feature corresponding to the image to be detected and a second image feature corresponding to the template image; and analyzing the first image feature and the second image feature by using a convolutional neural network to obtain the difference parameter between the image to be detected and the template image.
Wherein the first image feature comprises at least one of: color feature, texture feature, shape feature, position relation feature and background feature; correspondingly, the second image feature includes at least one of: color features, texture features, shape features, position relation features, background features.
The difference parameter includes at least one of: rotation angle, horizontal translation vector, vertical translation vector, scaling.
Optionally, when the first cropping module 12 crops the image to be detected to obtain a plurality of sub-detection images, the first cropping module 12 may be configured to perform: acquiring a first cropping size corresponding to the image to be detected; and cropping the image to be detected according to the first cropping size to obtain a plurality of sub-detection images.
Optionally, when the first acquisition module 11 obtains, for the template image, a plurality of sub-template images corresponding to the sub-detection images, the first acquisition module 11 may be configured to perform: acquiring a second cropping size corresponding to the template image, where the second cropping size is larger than the first cropping size; cropping the template image according to the second cropping size to obtain a plurality of intermediate images corresponding to the sub-detection images; and adjusting the intermediate images to obtain a plurality of sub-template images corresponding to the sub-detection images, where the sizes of the sub-template images are consistent with those of the sub-detection images.
Optionally, when the first acquisition module 11 adjusts the intermediate image to obtain a plurality of sub-template images corresponding to the sub-detection images, the first acquisition module 11 may be configured to perform: acquiring the difference parameter between the image to be detected and the template image; and adjusting the intermediate image according to the difference parameter to obtain the sub-template image corresponding to the sub-detection image.
Alternatively, when the first analysis module 13 obtains the sub-detection result corresponding to the sub-detection image based on the image feature of the sub-template image and the image feature of the sub-detection image, the first analysis module 13 may be configured to perform: acquiring a third image characteristic corresponding to the sub-detection image and a fourth image characteristic corresponding to the sub-template image; and analyzing the third image characteristic and the fourth image characteristic to obtain a sub-detection result corresponding to the sub-detection image.
Optionally, when the first analysis module 13 obtains the third image feature corresponding to the sub-detection image and the fourth image feature corresponding to the sub-template image, the first analysis module 13 may be configured to perform: acquiring a third image feature corresponding to the sub-detection image by using a convolutional neural network, wherein the third image feature comprises at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature; acquiring a fourth image feature corresponding to the sub-template image by using a convolutional neural network, wherein the fourth image feature comprises at least one of the following: color features, texture features, shape features, position relation features, background features.
Optionally, when the first analysis module 13 performs analysis processing on the third image feature and the fourth image feature to obtain a sub-detection result corresponding to the sub-detection image, the first analysis module 13 may be configured to perform: determining a first feature vector corresponding to the third image feature and a second feature vector corresponding to the fourth image feature; performing fusion processing on the first feature vector and the second feature vector to obtain a fusion feature vector; and analyzing and processing the fusion characteristic vector to obtain a sub-detection result corresponding to the sub-detection image.
Optionally, when the first analysis module 13 performs analysis processing on the fusion feature vector to obtain a sub-detection result corresponding to the sub-detection image, the first analysis module 13 may be configured to perform: acquiring a detector for processing the fusion characteristic vector, wherein the detector is preset with a plurality of detection scales; analyzing and processing the fusion feature vector by using a detector to obtain a sub-detection result corresponding to the sub-detection image, wherein the sub-detection result comprises at least one of the following components: flaw location, flaw classification, flaw size, flaw shape.
Optionally, after the quality detection result corresponding to the object to be detected is determined according to the plurality of sub-detection results corresponding to the sub-detection images, the first acquisition module 11 and the first processing module 14 in this embodiment may be configured to perform the following steps:
the first acquisition module 11 is configured to acquire feedback information on the quality detection result;
and the first processing module 14 is configured to perform update adjustment on the detector according to the feedback information.
Optionally, after the quality detection result corresponding to the object to be detected is determined according to the plurality of sub-detection results corresponding to the sub-detection images, the first acquisition module 11 and the first processing module 14 in this embodiment may be configured to perform the following steps:
the first acquisition module 11 is configured to acquire an evaluation rule for the quality detection result;
the first processing module 14 is configured to evaluate the quality detection result according to the evaluation rule, and obtain evaluation information corresponding to the quality detection result.
The apparatus shown in fig. 17 can perform the method of the embodiments shown in fig. 1-11 and fig. 14-16, and the detailed description of this embodiment can refer to the related descriptions of the embodiments shown in fig. 1-11 and fig. 14-16. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1 to 11 and fig. 14 to 16, and are not described again here.
In one possible design, the structure of the data processing apparatus shown in fig. 17 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or other devices. As shown in fig. 18, the electronic device may include: a first processor 21 and a first memory 22. Wherein the first memory 22 is used for storing programs that support the electronic device to execute the processing methods of the data provided in the embodiments shown in fig. 1-11 and fig. 14-16, and the first processor 21 is configured to execute the programs stored in the first memory 22.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the first processor 21, are capable of performing the steps of:
acquiring an image to be detected and a template image corresponding to an object to be detected;
cutting an image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a quality detection result corresponding to the object to be detected according to the plurality of sub-detection results corresponding to the sub-detection images.
Optionally, the first processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1-11 and 14-16.
The electronic device may further include a first communication interface 23 for communicating with other devices or a communication network.
In addition, the embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the processing method of the data in the method embodiments shown in fig. 1 to 11 and fig. 14 to 16.
Fig. 19 is a schematic structural diagram of a circuit board detection apparatus according to an embodiment of the present invention; referring to fig. 19, the present embodiment provides a circuit board detection apparatus, and the processing apparatus may perform the circuit board detection method corresponding to fig. 12. Specifically, the processing device may include:
the second obtaining module 31 is configured to obtain an image to be detected and a template image corresponding to the circuit board;
a second cropping module 32, configured to crop an image to be detected to obtain a plurality of sub-detection images;
a second obtaining module 31, configured to obtain, for the template image, a plurality of sub-template images corresponding to the sub-detection images;
a second analyzing module 33, configured to obtain a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, where the image features include at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the second processing module 34 is configured to determine a flaw detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
The apparatus shown in fig. 19 can perform the method of the embodiment shown in fig. 12 and 14-16, and the related description of the embodiment shown in fig. 12 and 14-16 can be referred to for the part not described in detail in this embodiment. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 12 and fig. 14 to fig. 16, and are not described herein again.
In one possible design, the structure of the detection apparatus of the circuit board shown in fig. 19 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or other devices. As shown in fig. 20, the electronic device may include: a second processor 41 and a second memory 42. Wherein the second memory 42 is used for storing programs that support the electronic device to execute the detection method of the circuit board provided in the embodiments shown in fig. 12 and fig. 14-16, and the second processor 41 is configured to execute the programs stored in the second memory 42.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the second processor 41, are capable of performing the steps of:
acquiring an image to be detected and a template image corresponding to a circuit board;
cutting an image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a defect detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
The electronic device may further include a second communication interface 43 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the method for detecting a circuit board in the method embodiments shown in fig. 12 and fig. 14 to fig. 16.
Fig. 21 is a schematic structural diagram of a cloth detecting apparatus according to an embodiment of the present invention; referring to fig. 21, the present embodiment provides a cloth detecting apparatus, and the processing apparatus can perform the cloth detecting method corresponding to fig. 13. Specifically, the processing device may include:
a third obtaining module 51, configured to obtain an image to be detected and a template image corresponding to the cloth;
a third cropping module 52, configured to crop an image to be detected to obtain a plurality of sub-detection images;
a third obtaining module 51, configured to obtain, for the template image, a plurality of sub-template images corresponding to the sub-detection images;
a third analyzing module 53, configured to obtain a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, where the image features include at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the third processing module 54 is configured to determine a flaw detection result corresponding to the cloth according to a plurality of sub-detection results corresponding to the sub-detection images.
The apparatus shown in fig. 21 can perform the methods of the embodiments shown in fig. 13 to 16; for parts of this embodiment not described in detail, reference may be made to the related descriptions of those embodiments. The implementation process and technical effect of this technical solution are described in the embodiments shown in fig. 13 to fig. 16 and are not repeated here.
In one possible design, the structure of the detecting device for cloth shown in fig. 21 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or other devices. As shown in fig. 22, the electronic device may include: a third processor 61 and a third memory 62. The third memory 62 is used for storing a program that supports the electronic device to execute the cloth detecting method provided in the embodiments shown in fig. 13 to 16, and the third processor 61 is configured to execute the program stored in the third memory 62.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the third processor 61, are capable of performing the steps of:
acquiring an image to be detected and a template image corresponding to the cloth;
cutting an image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a flaw detection result corresponding to the cloth according to the plurality of sub-detection results corresponding to the sub-detection images.
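As an illustrative sketch (not part of the claimed method), the steps above can be outlined in code: crop both the detected image and the already-aligned template into equal tiles, score each sub-detection image against its sub-template image, and take the union of flagged tiles as the flaw detection result. The mean absolute pixel difference below is a placeholder for the feature-based comparison; a real implementation would compare CNN features, and the tile size and threshold are assumed values.

```python
import numpy as np

def detect_defects(image, template, tile=256, threshold=10.0):
    """Split the detected image and the (already aligned) template into
    equal tiles, score each pair, and keep tiles whose score exceeds a
    threshold. Mean absolute pixel difference stands in for the
    feature-based comparison described in the embodiments."""
    h, w = image.shape[:2]
    results = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            sub_det = image[y:y + tile, x:x + tile].astype(float)
            sub_tpl = template[y:y + tile, x:x + tile].astype(float)
            score = float(np.mean(np.abs(sub_det - sub_tpl)))
            results.append({"x": x, "y": y, "score": score})
    # The overall flaw detection result is the union of flagged tiles.
    return [r for r in results if r["score"] > threshold]
```

A tile that differs markedly from its template region is reported with its position, mirroring how per-tile sub-detection results are aggregated into one result for the whole cloth.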
Optionally, the third processor 61 is further configured to perform all or part of the steps in the embodiments shown in fig. 13-16.
The electronic device may further include a third communication interface 63 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the cloth detecting method in the method embodiments shown in figs. 13 to 16.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or of course by a combination of hardware and software. Based on this understanding, the parts of the above technical solutions that in essence contribute to the prior art may be embodied in the form of a computer program product, which may be stored on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (23)

1. A method for processing data, comprising:
acquiring an image to be detected and a template image corresponding to an object to be detected;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a quality detection result corresponding to the object to be detected according to a plurality of sub-detection results corresponding to the sub-detection images.
2. The method of claim 1, wherein before cropping the image to be detected to obtain a plurality of sub-detection images, the method further comprises:
acquiring a difference parameter between the image to be detected and the template image;
and adjusting the image to be detected according to the difference parameters so as to align the image to be detected with the template image.
3. The method of claim 2, wherein obtaining a difference parameter between the image to be detected and the template image comprises:
acquiring a first image characteristic corresponding to the image to be detected and a second image characteristic corresponding to the template image;
and analyzing the first image characteristic and the second image characteristic by using a convolutional neural network to obtain a difference parameter between the image to be detected and the template image.
4. The method of claim 3, wherein the first image feature comprises at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
correspondingly, the second image feature includes at least one of: color features, texture features, shape features, position relation features, background features.
5. The method of claim 3, wherein the difference parameter comprises at least one of: rotation angle, horizontal translation vector, vertical translation vector, scaling.
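As an illustrative aside (not part of the claims), the difference parameters enumerated in claim 5 describe a 2-D similarity transform, and aligning the image to be detected with the template amounts to inverting that transform. A minimal numpy sketch, assuming the forward transform is rotation and scaling followed by translation; the helper name and parameter convention are hypothetical:

```python
import numpy as np

def alignment_matrix(angle_deg, tx, ty, scale):
    """Invert a similarity transform (rotate by angle_deg, scale, then
    translate by (tx, ty)) measured between the detected image and the
    template, returning the 2x3 affine matrix that undoes it."""
    theta = np.deg2rad(angle_deg)
    a = scale * np.cos(theta)
    b = scale * np.sin(theta)
    fwd = np.array([[a, -b, tx],
                    [b,  a, ty],
                    [0.0, 0.0, 1.0]])
    inv = np.linalg.inv(fwd)
    # Drop the homogeneous row: 2x3 matrices are the form accepted by
    # common warping routines (e.g. OpenCV's warpAffine).
    return inv[:2, :]
```

Applying the returned matrix to the image to be detected would bring it into registration with the template before cropping.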
6. The method of claim 1, wherein cropping the image to be detected to obtain a plurality of sub-detection images comprises:
acquiring a first cutting size corresponding to the image to be detected;
and cutting the image to be detected according to the first cutting size to obtain a plurality of sub-detection images.
7. The method of claim 6, wherein obtaining, for the template image, a plurality of sub-template images corresponding to the sub-inspection images comprises:
acquiring a second cut size corresponding to the template image, wherein the second cut size is larger than the first cut size;
cutting the template image according to the second cutting size to obtain a plurality of intermediate images corresponding to the sub detection images;
and adjusting the intermediate image to obtain a plurality of sub-template images corresponding to the sub-detection images, wherein the sizes of the sub-template images are consistent with the sizes of the sub-detection images.
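As a sketch of claim 7's two crop sizes (illustrative only, not the claimed implementation): the template is cropped with a second, larger size around the same location as each sub-detection tile, so that small residual alignment errors still fall inside the intermediate image before it is adjusted back to the tile size. The margin value and function name below are assumptions:

```python
import numpy as np

def crop_with_margin(template, x, y, tile, margin):
    """Crop the template with a second, larger size (tile + 2*margin)
    centered on the same location as the sub-detection tile at (x, y),
    clamped to the image border."""
    h, w = template.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(w, x + tile + margin), min(h, y + tile + margin)
    return template[y0:y1, x0:x1]
```

The resulting intermediate image would then be shifted by the difference parameters and re-cropped (or resized) so that, per claim 7, the sub-template image matches the sub-detection image in size.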
8. The method of claim 7, wherein adjusting the intermediate image to obtain a plurality of sub-template images corresponding to the sub-inspection images comprises:
acquiring a difference parameter between the image to be detected and the template image;
and adjusting the intermediate image according to the difference parameters to obtain a sub-template image corresponding to the sub-detection image.
9. The method of claim 1, wherein obtaining sub-detection results corresponding to the sub-detection images based on image features of the sub-template images and image features of the sub-detection images comprises:
acquiring a third image characteristic corresponding to the sub-detection image and a fourth image characteristic corresponding to the sub-template image;
and analyzing and processing the third image characteristic and the fourth image characteristic to obtain a sub-detection result corresponding to the sub-detection image.
10. The method of claim 9, wherein obtaining a third image feature corresponding to the sub-inspection image and a fourth image feature corresponding to the sub-template image comprises:
acquiring, with a convolutional neural network, a third image feature corresponding to the sub-detection image, the third image feature including at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
acquiring a fourth image feature corresponding to the sub-template image by using a convolutional neural network, wherein the fourth image feature comprises at least one of the following: color features, texture features, shape features, position relation features, background features.
11. The method of claim 9, wherein analyzing the third image feature and the fourth image feature to obtain sub-detection results corresponding to the sub-detection images comprises:
determining a first feature vector corresponding to the third image feature and a second feature vector corresponding to the fourth image feature;
performing fusion processing on the first feature vector and the second feature vector to obtain a fusion feature vector;
and analyzing and processing the fusion characteristic vector to obtain a sub-detection result corresponding to the sub-detection image.
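The fusion step of claim 11 can be sketched as follows, under the assumption (ours, not the patent's) that fusion is concatenation of the two feature vectors plus their difference, scored by a linear head whose weights would normally be learned:

```python
import numpy as np

def fuse_and_score(det_vec, tpl_vec, weights, bias=0.0):
    """Fuse the sub-detection and sub-template feature vectors by
    concatenating them with their elementwise difference, then score
    the fused vector with a linear head (weights are an input here;
    in practice they would be trained)."""
    fused = np.concatenate([det_vec, tpl_vec, det_vec - tpl_vec])
    return float(fused @ weights + bias)
```

A high score on the fused vector would indicate that the sub-detection image deviates from its sub-template image, i.e. a candidate flaw.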
12. The method according to claim 11, wherein analyzing the fused feature vector to obtain sub-detection results corresponding to the sub-detection images comprises:
acquiring a detector for processing the fusion feature vector, wherein the detector is preset with a plurality of detection scales;
analyzing and processing the fusion feature vector by using the detector to obtain a sub-detection result corresponding to the sub-detection image, wherein the sub-detection result comprises at least one of the following: flaw location, flaw classification, flaw size, flaw shape.
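The "plurality of detection scales" in claim 12 resembles the anchor scheme of common detectors: candidate boxes of several preset sizes are laid over every feature-map cell so that flaws of different sizes are covered. A hypothetical enumeration (the scale values are assumptions):

```python
def detection_windows(feat_h, feat_w, scales=(8, 16, 32)):
    """Enumerate one candidate box per preset detection scale at every
    feature-map cell, as (x0, y0, x1, y1) tuples centered on the cell."""
    boxes = []
    for s in scales:
        for cy in range(feat_h):
            for cx in range(feat_w):
                boxes.append((cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2))
    return boxes
```

Each box would then be classified and refined to yield the flaw position, classification, size, and shape listed in the claim.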
13. The method according to claim 12, characterized in that after determining a quality detection result corresponding to the object to be detected from a plurality of sub-detection results corresponding to the sub-detection images, the method further comprises:
acquiring feedback information aiming at the quality detection result;
and updating and adjusting the detector according to the feedback information.
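Claim 13 leaves the update-and-adjust step unspecified; one simple stand-in (purely our illustration) is to nudge the detector's decision threshold from feedback labels, raising it on reported false positives and lowering it on reported misses:

```python
def update_threshold(threshold, feedback, step=0.05):
    """Adjust a decision threshold from feedback on quality detection
    results: each "false_positive" raises it by step, each "missed"
    lowers it by step. A placeholder for the unspecified update step."""
    for verdict in feedback:
        if verdict == "false_positive":
            threshold += step
        elif verdict == "missed":
            threshold -= step
    return threshold
```

A production system would more likely retrain or fine-tune the detector on the corrected samples; this sketch only shows the feedback loop's direction.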
14. The method according to any one of claims 1 to 13, characterized in that after determining a quality detection result corresponding to the object to be detected from a plurality of sub-detection results corresponding to the sub-detection images, the method further comprises:
obtaining an evaluation rule aiming at the quality detection result;
and evaluating the quality detection result according to the evaluation rule to obtain evaluation information corresponding to the quality detection result.
15. The method according to any one of claims 1 to 13,
the object to be detected comprises at least one of the following objects: circuit board, cloth, building material surface.
16. A method for detecting a circuit board is characterized by comprising the following steps:
acquiring an image to be detected and a template image corresponding to a circuit board;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a defect detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
17. A cloth detection method is characterized by comprising the following steps:
acquiring an image to be detected and a template image corresponding to the cloth;
cutting the image to be detected to obtain a plurality of sub-detection images;
aiming at the template image, obtaining a plurality of sub-template images corresponding to the sub-detection images;
obtaining a sub-detection result corresponding to the sub-detection image based on the image features of the sub-template image and the image features of the sub-detection image, wherein the image features comprise at least one of the following: color feature, texture feature, shape feature, position relation feature and background feature;
and determining a flaw detection result corresponding to the cloth according to a plurality of sub-detection results corresponding to the sub-detection images.
18. An apparatus for processing data, comprising:
the first acquisition module is used for acquiring an image to be detected and a template image corresponding to an object to be detected;
the first cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the first obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a first analysis module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the first processing module is used for determining a quality detection result corresponding to the object to be detected according to a plurality of sub-detection results corresponding to the sub-detection images.
19. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement a method of processing data according to any one of claims 1 to 15.
20. A detection apparatus for a circuit board, comprising:
the second acquisition module is used for acquiring an image to be detected and a template image corresponding to the circuit board;
the second cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the second obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a second analysis module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the second processing module is used for determining a defect detection result corresponding to the circuit board according to a plurality of sub-detection results corresponding to the sub-detection images.
21. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of circuit board inspection of claim 16.
22. A cloth detection device is characterized by comprising:
the third acquisition module is used for acquiring an image to be detected and a template image corresponding to the cloth;
the third cutting module is used for cutting the image to be detected to obtain a plurality of sub-detection images;
the third obtaining module is used for obtaining a plurality of sub-template images corresponding to the sub-detection images aiming at the template images;
a third analyzing module, configured to obtain a sub-detection result corresponding to the sub-detection image based on an image feature of the sub-template image and an image feature of the sub-detection image, where the image feature includes at least one of: color feature, texture feature, shape feature, position relation feature and background feature;
and the third processing module is used for determining a flaw detection result corresponding to the cloth according to a plurality of sub-detection results corresponding to the sub-detection images.
23. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of detecting a fabric according to claim 17.
CN201910401719.9A 2019-05-14 2019-05-14 Data processing method, device and equipment Pending CN111951210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910401719.9A CN111951210A (en) 2019-05-14 2019-05-14 Data processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN111951210A true CN111951210A (en) 2020-11-17

Family

ID=73335794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910401719.9A Pending CN111951210A (en) 2019-05-14 2019-05-14 Data processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN111951210A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3031603B2 (en) * 1994-07-15 2000-04-10 株式会社リコー Image compression method
CN106778779A (en) * 2016-12-12 2017-05-31 广东省智能制造研究所 A kind of electric injection molding machine mould detection method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598627A (en) * 2020-12-10 2021-04-02 广东省大湾区集成电路与系统应用研究院 Method, system, electronic device and medium for detecting image defects
CN113293593A (en) * 2021-01-18 2021-08-24 阿里巴巴(中国)有限公司 Loop cutting equipment, control method and equipment of loop cutting equipment
CN113744268A (en) * 2021-11-04 2021-12-03 深圳市城市交通规划设计研究中心股份有限公司 Crack detection method, electronic device and readable storage medium
CN116580021A (en) * 2023-07-03 2023-08-11 湖南益友新材料有限公司 Environment-friendly concrete carbon reduction product production and quality detection method
CN116580021B (en) * 2023-07-03 2023-09-22 湖南益友新材料有限公司 Environment-friendly concrete carbon reduction product production and quality detection method
CN117011304A (en) * 2023-10-08 2023-11-07 深圳思谋信息科技有限公司 Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN117011304B (en) * 2023-10-08 2024-03-26 深圳思谋信息科技有限公司 Defect detection method, defect detection device, computer equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN111951210A (en) Data processing method, device and equipment
US11774735B2 (en) System and method for performing automated analysis of air samples
KR102058427B1 (en) Apparatus and method for inspection
WO2017092427A1 (en) Electronic element positioning method and apparatus
CN110726724A (en) Defect detection method, system and device
US11842482B2 (en) Defect detection of a component in an assembly
CN110415214A (en) Appearance detecting method, device, electronic equipment and the storage medium of camera module
TWI715051B (en) Machine learning method and automatic optical inspection device using the method thereof
CN112884743B (en) Detection method and device, detection equipment and storage medium
CN115937170A (en) Circuit board detection method and device, computer equipment and storage medium
CN112634227A (en) Detection and identification method and device for PCB jointed board, electronic equipment and storage medium
CN115937101A (en) Quality detection method, device, equipment and storage medium
WO2021046726A1 (en) Method and device for detecting mechanical equipment parts
CN109741295B (en) Product quality detection method and device
CN110672620B (en) Chip defect detection method and system
CN112418590B (en) Printed circuit board component detection method and system
CN111815552A (en) Workpiece detection method and device, readable storage medium and terminal equipment
CN111768439B (en) Method, device, electronic equipment and medium for determining experiment scores
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
US11508143B2 (en) Automated salience assessment of pixel anomalies
TW201522949A (en) Inspection method for image data
KR102022494B1 (en) System for automatic generating of documents using vision image detection and a method thereof
CN113469944A (en) Product quality inspection method and device and electronic equipment
CN110989422A (en) Management system and management method for AOI (automated optical inspection) over-inspection parameters based on serial number code spraying
WO2024044942A1 (en) Point inspection method and device for visual inspection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201117