CN117347368A - Appearance defect detection method and appearance defect detection equipment - Google Patents

Appearance defect detection method and appearance defect detection equipment

Info

Publication number
CN117347368A
CN117347368A (application CN202311175164.3A)
Authority
CN
China
Prior art keywords
defect
appearance
image
product
operator
Prior art date
Legal status
Pending
Application number
CN202311175164.3A
Other languages
Chinese (zh)
Inventor
李玉惠
靳习永
李会富
谢学智
张富强
闫华锋
Current Assignee
Goertek Inc
Original Assignee
Goertek Inc
Priority date
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN202311175164.3A
Publication of CN117347368A
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01Arrangements or apparatus for facilitating the optical investigation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Analytical Chemistry (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The present disclosure provides an appearance defect detection method and an appearance defect detection apparatus, the method including: acquiring a first target detection operator and a post-processing operator which are obtained through pre-training; constructing an appearance defect detection module according to the first target detection operator and the post-processing operator, and deploying the appearance defect detection module online; acquiring a first appearance image of a product to be detected; detecting whether the first appearance image contains a set defect based on the first target detection operator of the appearance defect detection module, and obtaining a defect image and a corresponding defect type when the first appearance image contains the set defect, wherein the defect image is the image area of the first appearance image that contains the set defect; and determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type based on the post-processing operator of the appearance defect detection module.

Description

Appearance defect detection method and appearance defect detection equipment
Technical Field
The present disclosure relates to the field of defect detection technology, and more particularly, to an appearance defect detection method and an appearance defect detection apparatus.
Background
In the manufacture of precision products, factors such as unstable processes, insufficient mechanical positioning accuracy, and environmental conditions in the plant often cause defects of various forms in the produced products. These defects not only affect the appearance of the products but also pose potential safety hazards, so defect detection has always been an indispensable link in industrial production.
In existing industrial settings, quality inspection is typically performed by manual visual inspection, with unqualified products removed by hand after defects are found. Because inspectors are subject to visual fatigue and mood fluctuations, this conventional approach inevitably suffers from poor objectivity of the quality standard and low speed.
Disclosure of Invention
It is an object of the present disclosure to provide a new solution that can solve at least one of the above problems.
According to a first aspect of the present disclosure, there is provided an appearance defect detection method including:
acquiring a first target detection operator and a post-processing operator which are obtained through pre-training;
constructing an appearance defect detection module according to the first target detection operator and the post-processing operator, and deploying the appearance defect detection module online;
acquiring a first appearance image of a product to be detected;
processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected;
the processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected includes:
detecting whether the first appearance image contains a set defect or not based on a first target detection operator of the appearance defect detection module, and obtaining a defect image and a corresponding defect type under the condition that the first appearance image contains the set defect, wherein the defect image is an image area containing the set defect in the first appearance image;
and determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type based on a post-processing operator of the appearance defect detection module.
Optionally, the method further comprises:
acquiring a second target detection operator obtained through pre-training, and constructing the appearance defect detection module according to the second target detection operator;
the processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected, further includes:
detecting whether a set pattern is contained in the first appearance image based on the first target detection operator, and obtaining a set pattern image when the set pattern is contained in the first appearance image, wherein the set pattern image is an image area of the first appearance image containing the set pattern;
and determining an appearance defect detection result of the product to be detected according to the set pattern image based on a second target detection operator of the appearance defect detection module.
Optionally, the re-parameterization module used during training in the first target detection operator has the same network layer structure as the re-parameterization module used during inference.
Optionally, the method further comprises:
obtaining a marked second appearance image, wherein the marked second appearance image comprises a marked defect area and a corresponding defect type, and the defect area is an image area containing a set defect in the second appearance image;
generating a first training sample according to the marked second appearance image;
and training a YOLOv7 network structure based on the first training sample to obtain the first target detection operator.
Optionally, before training the network structure of YOLOv7 based on the first training sample to obtain the first target detection operator, the method further includes:
determining the max-pooling parameters of the YOLOv7 network structure according to the defect type marked in the second appearance image in the first training sample and the size of the defect area.
Optionally, the method further comprises:
acquiring a third appearance image, wherein the third appearance image is a product appearance image without appearance defects;
obtaining a defect template according to the defect area marked in the second appearance image;
generating a second training sample according to the third appearance image, the defect template and the corresponding defect type;
and training the YOLOv7 network structure based on the second training sample to obtain the first target detection operator.
Optionally, the obtaining the noted second appearance image includes:
providing a data annotation interface, wherein the data annotation interface comprises a plurality of second appearance images uploaded in advance;
and marking the selected defect area and the corresponding defect type in the second appearance image in response to the marking operation of any second appearance image, so as to obtain a marked second appearance image.
Optionally, the constructing an appearance defect detection module according to the first target detection operator and the post-processing operator includes:
Providing a module construction interface, wherein the module construction interface comprises a canvas area and an operator list, and the operator list at least comprises the first target detection operator and the post-processing operator;
responsive to an operation of selecting an operator in the operator list, displaying the corresponding operator in a canvas area;
and generating connecting lines between corresponding operators in the canvas area in response to the operation of connecting the operators, so as to obtain the appearance defect detection module.
Optionally, the obtaining the first appearance image of the product to be tested includes:
controlling a first camera to shoot a first surface of a first product to be tested; controlling a second camera to shoot the first surface of a second product to be detected; and controlling a third camera to move to a first position corresponding to the first product to be detected, shooting the second surface of the first product to be detected, controlling the third camera to move to a second position corresponding to the second product to be detected, and shooting the second surface of the second product to be detected to obtain a first appearance image of the first product to be detected and a first appearance image of the second product to be detected.
Optionally, in the process of controlling the first camera to shoot the first surface of the first product to be tested, the method further includes: controlling the first product to be tested to rotate according to the set frequency and the set angle; in the process of controlling the second camera to shoot the first surface of the second product to be tested, the method further comprises: and controlling the second product to be tested to rotate according to the set frequency and the set angle.
According to a second aspect of the present disclosure, there is provided an appearance defect detection device comprising a processor and a memory, the memory storing a computer program for controlling the processor to perform the method of the first aspect of the present disclosure.
Optionally, the device further comprises a moving mechanism, a first camera, a second camera, and a third camera;
the first camera is used for shooting a first surface of a first product to be tested;
the second camera is used for shooting the first surface of a second product to be detected;
the moving mechanism is used for controlling the third camera to move between a first position and a second position;
the third camera is used for shooting the second surface of the first product to be tested when being positioned at the first position, or shooting the second surface of the second product to be tested when being positioned at the second position.
According to the embodiments of the present disclosure, an appearance defect detection module is constructed from a pre-trained first target detection operator and a pre-trained post-processing operator and is deployed online. Based on the first target detection operator of the appearance defect detection module, it is detected whether the first appearance image of a product to be detected contains a set defect; if so, a defect image and the corresponding defect type are obtained. Based on the post-processing operator, the appearance defect detection result of the product is then determined from the defect image and the corresponding defect type. In this way, appearance defects can be detected automatically and quickly according to defect type, improving detection efficiency and accuracy. In addition, this approach reduces the difficulty of building the appearance defect detection module, improves its construction efficiency, shortens its development cycle, and lowers its development cost.
Other features of the present disclosure and its advantages will become apparent from the following detailed description of exemplary embodiments of the disclosure, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of an appearance defect detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an operator training interface according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a module build interface according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of one example of an appearance defect detection method according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a vision-guided processing device in accordance with an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
The present disclosure provides an appearance defect detection method. Fig. 1 is a flowchart of an appearance defect detection method according to an embodiment of the present disclosure.
As shown in fig. 1, the appearance defect detection method may include steps S1000 to S4000 as follows:
step S1000, obtaining a first target detection operator and a post-processing operator obtained by pre-training.
In this embodiment, the first target detection operator may be a target detection operator obtained by training a network structure of YOLOv 7.
YOLOv7 is a real-time object detector designed for a wide range of CPUs and GPUs, from edge devices to the cloud, and has the following characteristics. It adopts various "bag-of-freebies" training techniques that greatly improve the detection accuracy of the real-time detector; these techniques act only during training and add no cost to the inference process. It solves two problems in model training: how to apply the re-parameterization method in a residual structure, and how to perform dynamic label assignment across multiple output layers. It also proposes a compound scaling method to use the parameters and computation of the real-time detector more effectively. Compared with the previous state-of-the-art models, YOLOv7 has fewer parameters, requires less computation, infers faster, and achieves higher detection accuracy.
The post-processing operator re-judges each defect image against the configured production-line standard to determine whether the defect in the defect image conforms to that standard. A defect image whose defect meets the production-line standard is eliminated; a defect image whose defect does not meet the standard is retained, and the re-judgment result is obtained from the defect images that are ultimately retained.
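As a concrete illustration, a post-processing operator of this kind can be sketched as a filter over the detector's output. The defect types, tolerance values, and field names below are illustrative assumptions, not the patent's actual production-line standard:

```python
# Hypothetical post-processing operator sketch: detections whose size falls
# within the production-line tolerance for their defect type are treated as
# acceptable and eliminated; the rest are retained as the re-judgment result.

# per-type maximum acceptable defect area in pixels (illustrative values)
LINE_STANDARD = {"scratch": 50, "dent": 30, "stain": 80}

def post_process(detections):
    """Keep only defects that violate the line standard.

    Each detection is a dict with 'type' and a bounding box (x1, y1, x2, y2).
    """
    kept = []
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        area = (x2 - x1) * (y2 - y1)
        limit = LINE_STANDARD.get(det["type"], 0)
        if area > limit:          # exceeds tolerance -> real defect, retain
            kept.append(det)
    return kept

dets = [
    {"type": "scratch", "bbox": (0, 0, 5, 5)},    # area 25, within tolerance
    {"type": "dent", "bbox": (10, 10, 20, 20)},   # area 100, exceeds tolerance
]
result = post_process(dets)
```

Here the small scratch is eliminated as acceptable, while the oversized dent is retained for the final result.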
In one embodiment of the present disclosure, the re-parameterization module used during training in the first target detection operator has the same network layer structure as the re-parameterization module used during inference.
In this embodiment, the training-time re-parameterization (rep) module and the inference-time rep module are unified into a single network layer structure during training, which ensures lossless parameter transfer within the first target detection operator and thereby guarantees its detection accuracy and stability.
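The benefit of a unified structure can be illustrated with the standard convolution–batch-norm folding identity that re-parameterization relies on. This is a minimal NumPy sketch of the general technique, not the patent's implementation; the toy per-channel "convolution" stands in for a real conv layer:

```python
import numpy as np

# Folding a batch-norm layer into the preceding convolution yields a single
# layer with identical outputs, so training-time and inference-time modules
# can share one network layer structure without losing parameters.

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN (scale gamma, shift beta, running mean/var) into conv (w, b)."""
    std = np.sqrt(var + eps)
    w_fused = w * (gamma / std)                 # scale the kernel per channel
    b_fused = beta + (b - mean) * (gamma / std)
    return w_fused, b_fused

# toy per-channel "1x1 conv": y = w*x + b, followed by batch-norm
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))
w, b = rng.normal(size=(4,)), rng.normal(size=(4,))
gamma, beta = rng.normal(size=(4,)), rng.normal(size=(4,))
mean, var = rng.normal(size=(4,)), rng.uniform(0.5, 2.0, size=(4,))

y_two_step = gamma * ((w * x + b) - mean) / np.sqrt(var + 1e-5) + beta
wf, bf = fuse_conv_bn(w, b, gamma, beta, mean, var)
y_fused = wf * x + bf                           # numerically identical output
```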
In one embodiment of the present disclosure, before performing step S1000, the method may further include steps S1100 to S1300 as follows:
step S1100, obtaining a marked second appearance image.
The marked second appearance image comprises marked defect areas and corresponding defect types, and the defect areas are image areas containing set defects in the second appearance image.
The second appearance image may be an image obtained by photographing a reference product having an appearance defect. And marking the defect area and the corresponding defect type contained in the second appearance image to obtain the marked second appearance image.
In one embodiment of the present disclosure, obtaining the annotated second appearance image may include steps S1110 to S1120 as follows:
in step S1110, a data annotation interface is provided, where the data annotation interface includes a plurality of second appearance images uploaded in advance.
A data import button may be included in the data annotation interface, and the user may click this button to upload the plurality of second appearance images.
In the case of uploading a plurality of second appearance images, the uploaded second appearance images may be displayed in the data annotation interface.
And step S1120, marking the selected defect area and the corresponding defect type in the second appearance image in response to the marking operation of any second appearance image, so as to obtain a marked second appearance image.
In this embodiment, the labeling operation on any second appearance image may be an operation of, after selecting that second appearance image, framing a defect area in it and setting the defect type corresponding to the defect area.
Marking the selected defect area and its defect type in the second appearance image may mean recording the region coordinates of the selected defect area together with the corresponding defect type, where the coordinates of the defect area may be the coordinates of two diagonally opposite vertices of the area.
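For illustration, an annotation given as two diagonally opposite vertices can be converted to the normalized center-based format commonly used to train YOLO-family detectors. The class id, coordinates, and image size below are made-up example values:

```python
# Convert a (two diagonal vertices, defect-type id) annotation into the
# normalized (class, cx, cy, w, h) format used by YOLO-style training labels.

def to_yolo(box, cls_id, img_w, img_h):
    (x1, y1), (x2, y2) = box
    # tolerate vertices given in either diagonal order
    x1, x2 = min(x1, x2), max(x1, x2)
    y1, y2 = min(y1, y2), max(y1, y2)
    cx = (x1 + x2) / 2 / img_w   # normalized box center
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w        # normalized box size
    h = (y2 - y1) / img_h
    return (cls_id, cx, cy, w, h)

label = to_yolo(((100, 40), (300, 120)), cls_id=2, img_w=512, img_h=512)
```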
Step S1200, generating a first training sample according to the annotated second appearance image.
In this embodiment, a second appearance image and corresponding annotation may be used as a first training sample.
Step S1300, training the network structure of YOLOv7 based on the first training sample to obtain a first target detection operator.
In one embodiment of the present disclosure, an operator training interface as shown in fig. 2 may be provided, in which the operator training parameters may be set. The operator training parameters may include at least one of: algorithm, image width, image height, batch size, learning rate, training period, model capacity, and the like.
The user may set the values of the operator training parameters in advance according to the application scenario or specific requirements. For example, the algorithm may be set to YOLOv7, the image width to 512, the image height to 512, the batch size to 8, the learning rate to 0.001, the training period to 300, and the model capacity to large.
Based on the above, the YOLOv7 network structure is trained on the first training samples according to the set parameter values to obtain the first target detection operator.
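Collected into one place, the parameter values of this example might look like the following configuration sketch; the key names and the `validate` helper are assumptions for illustration, while the values are the ones given above:

```python
# Training configuration mirroring the example parameter values above.
train_config = {
    "algorithm": "YOLOv7",
    "image_width": 512,
    "image_height": 512,
    "batch_size": 8,
    "learning_rate": 0.001,
    "epochs": 300,
    "model_capacity": "large",
}

def validate(cfg):
    """Basic sanity checks before launching training."""
    assert cfg["image_width"] > 0 and cfg["image_height"] > 0
    assert 0 < cfg["learning_rate"] < 1
    assert cfg["batch_size"] >= 1 and cfg["epochs"] >= 1
    return True

ok = validate(train_config)
```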
In the operator training interface shown in fig. 2, a start training button may be further provided, and in response to a triggering operation of the start training button, a step of training the network structure of YOLOv7 based on the first training sample to obtain a first target detection operator may be performed.
In one embodiment of the present disclosure, before performing step S1300, the method further includes: and determining the maximized pool parameter of the network structure of the YOLOv7 according to the defect type marked by the second appearance image in the first training sample and the size of the defect area.
In this embodiment, the relevant parameters of maxpool can be adaptively adjusted according to the defect types and defect-region sizes marked in the second appearance images of the first training samples, rather than being fixed at 5, 9, and 13 as in the prior art. As a result, the trained first target detection operator can identify defects of any size, improving its detection accuracy.
The maxpool-related parameters may include any one or more of a first parameter value, a second parameter ksize, a third parameter strides, and a fourth parameter padding. The first parameter, value, is the input to be pooled; since the pooling layer generally follows a convolution layer, the input is usually a feature map of shape [batch, height, width, channels]. The second parameter, ksize, specifies the size of the pooling window as a four-dimensional vector, typically [batch, height, width, channels]. The third parameter, strides, specifies the sliding step of the window in each dimension. The fourth parameter, padding, takes either 'VALID' or 'SAME', where VALID means the edges are not padded with zeros and SAME means they are.
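One hedged way to realize the adaptive adjustment described above is to derive the three pooling window sizes from the distribution of annotated defect sizes rather than fixing them at 5, 9, and 13. The quantile rule and the rounding to odd sizes below are illustrative assumptions, not the patent's exact formula:

```python
import statistics

# Derive three max-pooling window sizes from the annotated defect sizes:
# take quartiles of the defect side lengths and round to odd integers.

def adaptive_kernels(defect_sizes):
    """defect_sizes: list of (w, h) of annotated defect regions in pixels."""
    sides = sorted(max(w, h) for w, h in defect_sizes)
    qs = statistics.quantiles(sides, n=4)  # quartiles of the size distribution
    kernels = []
    for q in qs:
        k = max(3, int(round(q)))
        if k % 2 == 0:                     # pooling windows are usually odd
            k += 1
        kernels.append(k)
    return kernels

ks = adaptive_kernels([(4, 6), (8, 8), (10, 14), (20, 18), (30, 25)])
```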
Through this embodiment, the first target detection operator can stably detect the set defects in an image, reducing its rate of missed defects.
In one embodiment of the present disclosure, transfer learning techniques may also be used to achieve fast iteration and tuning of the first target detection operator.
In one embodiment of the present disclosure, in an industrial environment it is often impossible, after the defect detection device goes online, to collect a sufficient amount of effective data in a short time for model training and iterative optimization. To address this shortage of data, the method of this embodiment may further include a data augmentation step, specifically including steps S1400 to S1600 as follows:
step S1400, a third appearance image is obtained, wherein the third appearance image is a product appearance image without appearance defects.
In the present embodiment, the third appearance image may be an image obtained by photographing a reference product having no appearance defect.
Step S1500, obtaining a defect template according to the defect area marked in the second appearance image.
In this embodiment, the second appearance image is labeled with labelme, generating a JSON file that records the location of the defect region in the second appearance image and the corresponding defect type. A defect mask map is then generated from the second appearance image and the JSON file, either with labelme's built-in tools or with a script written by the developer. Finally, the position of the defect region is read from the JSON file, and the defect is cropped from the defect mask map at that position to obtain a defect template.
Further, for each defect in each second appearance image, a corresponding defect mask map may be obtained. Wherein the defect mask map may mask other areas of the second appearance image except for the corresponding defect.
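The cropping step can be sketched with plain NumPy arrays standing in for the real image, labelme mask, and JSON bounding box (the toy data below is fabricated for illustration):

```python
import numpy as np

# Given the appearance image, a binary defect mask, and the defect-region
# bbox read from the labelme JSON, crop the masked defect pixels to obtain
# a reusable defect template.

def extract_template(image, mask, bbox):
    """bbox = (x1, y1, x2, y2); returns (patch, patch_mask)."""
    x1, y1, x2, y2 = bbox
    patch = image[y1:y2, x1:x2].copy()
    patch_mask = mask[y1:y2, x1:x2].astype(bool)
    patch[~patch_mask] = 0       # zero out everything except the defect
    return patch, patch_mask

# toy 8x8 grayscale image with a bright 2x3 "defect" at rows 2..3, cols 4..6
img = np.full((8, 8), 50, dtype=np.uint8)
img[2:4, 4:7] = 200
msk = np.zeros((8, 8), dtype=np.uint8)
msk[2:4, 4:7] = 1
tpl, tpl_mask = extract_template(img, msk, (4, 2, 7, 4))
```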
Step S1600, generating a second training sample according to the third appearance image, the defect template and the corresponding defect type.
In this embodiment, a fourth appearance image containing a defect may be generated based on the third appearance image and the defect template; the fourth appearance image and the corresponding defect type are taken as a second training sample.
Further, the brightness of the defect template may be adjusted to be consistent with that of the third appearance image, and then at least one defect template may be added at an arbitrary position in the third appearance image to obtain the fourth appearance image.
In the fourth appearance image, the defect area is derived from the position at which the defect template was added to the third appearance image, and the defect type corresponding to the added defect template is used as the defect type of that defect area.
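A hedged sketch of this paste operation: the template's mean brightness is matched to the defect-free image (one possible reading of "adjusted to be consistent"), the masked pixels are pasted at a chosen position, and the new defect-area label is derived from that position. All values below are illustrative:

```python
import numpy as np

# Paste a brightness-matched defect template onto a defect-free image and
# derive the bounding-box label of the generated "fourth" image.

def paste_defect(clean, template, tmask, top_left):
    """Paste template (with boolean mask tmask) onto a copy of `clean`."""
    out = clean.astype(np.float64).copy()
    # shift template brightness so its mean matches the target image's mean
    shift = clean.mean() - template[tmask].mean()
    adjusted = np.clip(template.astype(np.float64) + shift, 0, 255)
    y, x = top_left
    h, w = template.shape
    region = out[y:y + h, x:x + w]   # view into `out`
    region[tmask] = adjusted[tmask]
    bbox = (x, y, x + w, y + h)      # label for the generated image
    return out.astype(np.uint8), bbox

clean = np.full((16, 16), 100, dtype=np.uint8)
tpl = np.array([[180, 220, 200], [160, 240, 200]], dtype=np.uint8)  # mean 200
tmask = np.ones((2, 3), dtype=bool)
aug, bbox = paste_defect(clean, tpl, tmask, top_left=(5, 4))
```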
Accordingly, training the YOLOv7 network structure to obtain the first target detection operator may include training it based on both the first training samples and the second training samples.
In this way, the number of training samples can be expanded, the iteration speed of the first target detection operator increased, the development cycle shortened, and the development cost reduced.
Still further, the YOLOv7 network structure may also be trained based on the third appearance images to obtain the first target detection operator.
Step S2000, constructing an appearance defect detection module according to the first target detection operator and the post-processing operator, and deploying the appearance defect detection module online.
In one embodiment of the present disclosure, the first object detection operator and the post-processing operator may be sequentially connected to obtain the appearance defect detection module.
In the case of obtaining the appearance defect detection module, it may be deployed on line, so that it is applied to a production line to perform appearance defect detection.
In one embodiment of the present disclosure, constructing the appearance defect detection module according to the first object detection operator and the post-processing operator may include steps S2100 to S2300 as follows:
in step S2100, a module building interface is provided, where the module building interface includes a canvas area and an operator list, and the operator list includes at least a first object detection operator and a post-processing operator.
In this embodiment, the module building interface may be as shown in fig. 3, where a plurality of operators obtained by training may be displayed in an operator list, and the operators displayed in the operator list at least include a first target detection operator and a post-processing operator.
In step S2200, responsive to selecting an operator from the list of operators, displaying the corresponding operator in the canvas area.
In this embodiment, the operation of selecting an operator in the operator list may be clicking the operator to be selected in the operator list, or dragging the operator to be selected into the canvas area.
When such a selection operation is performed, the selected operator is displayed in the canvas area. For example, the first target detection operator and the post-processing operator may be selected from the operator list, so that both are displayed in the canvas area.
Step S2300, in response to an operation of connecting operators, connection lines are generated between the corresponding operators in the canvas area, and the appearance defect detection module is obtained.
In this embodiment, the output end of the first target detection operator may be connected to the input end of the post-processing operator, so as to obtain the appearance defect detection module.
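Conceptually, connecting one operator's output end to the next operator's input end composes them into a single detection module. A minimal sketch (the function names are illustrative, not the patent's actual framework):

```python
# Compose a detection module from two operators by feeding the first
# operator's output into the post-processing operator's input.
def make_module(detect, post_process):
    def module(first_appearance_image):
        detections = detect(first_appearance_image)  # first target detection operator
        return post_process(detections)              # post-processing operator
    return module

# Toy operators: "detect" one linear defect, "post-process" counts defects.
module = make_module(lambda img: [("linear", img)], lambda dets: len(dets))
print(module("img"))  # 1
```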
By the embodiment, the construction difficulty of the appearance defect detection module can be reduced, the construction efficiency of the appearance defect detection module is improved, the development period of the appearance defect detection module is shortened, and the development cost of the appearance defect detection module is reduced.
Step S3000, obtaining a first appearance image of the product to be tested.
The product to be tested in this embodiment is of the same type as the reference product described above. For example, the product to be tested may be an electronic product such as a VR device, a mobile phone, a tablet computer, or an AR device.
In one embodiment of the disclosure, a camera may be controlled to shoot a product to be tested, so as to obtain a first appearance image.
Further, the surface to be detected of the product to be tested may be large, and the camera may not be able to capture the entire surface at once; therefore, the camera may be controlled to rotate while shooting the product to be tested, so that the product is photographed completely.
In another embodiment of the present disclosure, obtaining a first appearance image of a product to be tested may include: controlling a first camera to shoot a first surface of a first product to be tested; controlling a second camera to shoot the first surface of a second product to be detected; and controlling the third camera to move to a first position corresponding to the first product to be detected, shooting the second surface of the first product to be detected, controlling the third camera to move to a second position corresponding to the second product to be detected, shooting the second surface of the second product to be detected, and obtaining a first appearance image of the first product to be detected and a first appearance image of the second product to be detected.
In this embodiment, the setting positions of the first camera and the second camera may be fixed, and the position of the third camera may be movable. The first camera, the second camera, and the third camera may take pictures at the same time.
In one example, the first surface may be a side of the product to be tested and the second surface may be a top surface of the product to be tested.
For example, suppose the product to be tested has five surfaces to be detected, shooting one surface takes one unit of time u, the first surfaces of the first product to be tested are f1, f2, f3, f4 and its second surface is t1, and the first surfaces of the second product to be tested are ff1, ff2, ff3, ff4 and its second surface is tt1. The first camera shoots the side surfaces f1, f2, f3, f4 of the first product while the second camera shoots the side surfaces ff1, ff2, ff3, ff4 of the second product. Meanwhile, the third camera first shoots the top surface tt1 of the second product at the second position and, after that shot is finished, shoots the top surface t1 of the first product at the first position. When all three cameras have finished shooting, the first appearance images of the two products to be tested are obtained.
Specifically, in each unit time, the shooting surfaces of the first product to be tested and the second product to be tested may be as shown in the following table 1:
TABLE 1
                              1u    2u    3u    4u    5u
Second product to be tested   tt1   ff1   ff2   ff3   ff4
First product to be tested    f1    f2    f3    f4    t1
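The schedule of Table 1 can be written out as data (a sketch; the dictionary layout is illustrative, with surface names as in the example above):

```python
# Shooting schedule of Table 1: one surface per unit time u, three
# cameras working in parallel on two products.
schedule = {
    # time:            1u     2u     3u     4u     5u
    "first product":  ["f1",  "f2",  "f3",  "f4",  "t1"],
    "second product": ["tt1", "ff1", "ff2", "ff3", "ff4"],
}

# All ten surfaces of the two products are covered in 5 time units,
# instead of the 10 units a single camera would need.
total_units = max(len(surfaces) for surfaces in schedule.values())
print(total_units)  # 5
```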
With this embodiment, the three cameras shoot in coordination, which improves the shooting efficiency for the products to be tested, thereby increasing the appearance defect detection speed and the running speed of the production line and improving the output of the production line.
In one embodiment of the present disclosure, in controlling the first camera to take a photograph of the first surface of the first product under test, the method further comprises: controlling the first product to be tested to rotate according to the set frequency and the set angle; in the process of controlling the second camera to shoot the first surface of the second product to be tested, the method further comprises: and controlling the second product to be tested to rotate according to the set frequency and the set angle.
The set frequency and the set angle may be configured in advance according to the application scenario or specific requirements, so that the cameras can completely photograph the surfaces to be detected of the products to be tested.
Further, when the product to be detected is located at each shooting point, each camera may be controlled to shoot first appearance images under at least two light sources of different brightness respectively, so that the appearance detection result of the product to be detected is more accurate.
And S4000, processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected.
In an embodiment of the present disclosure, in the case of obtaining the first appearance image, the first appearance image may be first subjected to a masking process, and then step S4000 is performed.
In this embodiment, by performing the mask processing on the first appearance image, the appearance defect detection module may detect only the specified area in the first appearance image, but not the other areas, so that the appearance defect detection efficiency may be improved.
The processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected may include steps S4100 to S4200 as follows:
step S4100, detecting whether the first appearance image includes a set defect based on the first target detection operator of the appearance defect detection module, and obtaining a defect image and a corresponding defect type if the first appearance image includes the set defect.
The defect image is an image area containing a set defect in the first appearance image.
In this embodiment, the defect image may be a rectangular image area, and may be represented by corresponding position coordinates.
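For instance (assuming the operator reports each defect as (x1, y1, x2, y2) pixel coordinates, an illustrative format rather than the patent's exact one), the rectangular defect image can be cropped from the first appearance image as:

```python
import numpy as np

def crop_defect(appearance_image, box):
    """Crop the rectangular defect image from the first appearance image.
    box = (x1, y1, x2, y2) in pixel coordinates (illustrative format)."""
    x1, y1, x2, y2 = box
    return appearance_image[y1:y2, x1:x2]

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
patch = crop_defect(img, (2, 3, 6, 8))
print(patch.shape)  # (5, 4)
```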
The setting defect in the present embodiment may be set according to the application scenario or specific requirements. For example, the set defect may include at least one of a linear defect, a sheet defect, a smudge defect, and a gouge defect, the linear defect may include a hairline and a linear bright mark, and the sheet defect may include a sheet bright mark, etc.
In one embodiment of the present disclosure, in a case where it is detected that the first appearance image does not include the set defect based on the first target detection operator of the appearance defect detection module, it may be determined that the product to be tested is not defective.
Step S4200, determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type based on the post-processing operator of the appearance defect detection module.
In this embodiment, the post-processing operator may re-judge the product to be tested according to each defect image and its corresponding defect type: it detects whether the defect in the defect image meets the production line standard of the corresponding defect type, discards the defect images that meet the standard, and retains those that do not. If the defects in all the defect images meet the production line standards of their defect types, the product to be tested is determined to be qualified, and the appearance defect detection result may be first information indicating that the product is qualified. If the defect in at least one defect image does not meet the production line standard of its defect type, the product is determined to be unqualified, and the appearance defect detection result may be second information indicating that the product is unqualified, which may include the finally retained defect images that do not meet the production line standard and their corresponding defect types.
In this embodiment, a corresponding production line standard may be set in advance for each defect type, for example, as shown in the following table 2:
TABLE 2
Defect type               Production line standard
Dirt defect and hairline  defect area S ≤ 0.08 mm²
Knocks and gouges         not allowed
Linear bright mark        (length < 5 mm) & (width < 0.15 mm)
Sheet-like bright mark    (length < 2.5 mm) & (width < 2 mm)
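The production line standards of Table 2 can be encoded as a simple check (a sketch; the type names and keyword arguments are illustrative, with sizes in mm and areas in mm²):

```python
def meets_line_standard(defect_type, *, area=None, length=None, width=None):
    """Return True if the defect meets the production line standard of
    its defect type, per Table 2 (illustrative encoding)."""
    if defect_type in ("dirt", "hairline"):
        return area is not None and area <= 0.08      # S <= 0.08 mm^2
    if defect_type == "gouge":
        return False                                  # never allowed
    if defect_type == "linear_bright_mark":
        return length < 5 and width < 0.15
    if defect_type == "sheet_bright_mark":
        return length < 2.5 and width < 2
    raise ValueError(f"unknown defect type: {defect_type}")

print(meets_line_standard("gouge"))                                    # False
print(meets_line_standard("linear_bright_mark", length=3, width=0.1))  # True
```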
According to the embodiments of the present disclosure, an appearance defect detection module is constructed from a first target detection operator and a post-processing operator obtained through pre-training, and the module is deployed on line. Based on the first target detection operator, whether a first appearance image of a product to be detected contains a set defect is detected, and a defect image and a corresponding defect type are obtained when it does; based on the post-processing operator, an appearance defect detection result of the product is determined from the defect image and the corresponding defect type. In this way, whether the appearance of the product to be detected has defects can be detected automatically and rapidly according to the defect type, improving detection efficiency and accuracy; in addition, the construction difficulty of the appearance defect detection module is reduced, its construction efficiency improved, and its development period and development cost reduced.
In one embodiment of the present disclosure, before determining the appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type, the method further includes: graying treatment is carried out on the defect image to obtain a gray value of an image matrix of the defect image; determining a standard deviation of gray values of an image matrix of the defect image as a gray standard deviation; and eliminating the defect image with the gray standard deviation smaller than or equal to the set standard deviation threshold value.
In this embodiment, the gray standard deviation may be a standard deviation of gray values of each pixel point in the defect image.
A defect image whose gray standard deviation is smaller than or equal to the set standard deviation threshold is blurred and cannot support accurate defect re-judgment, so such defect images may be removed. Because at least two pictures of different brightness are shot at each point location, removing these blurred defect images does not cause missed defects.
In the embodiment, the gray standard deviation of the defect image is compared with the set standard deviation threshold, the defect image with the gray standard deviation smaller than or equal to the set standard deviation threshold is removed, and the defect image with the gray standard deviation larger than the standard deviation threshold is reserved, so that the accuracy of subsequent defect detection can be improved.
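A minimal sketch of this filtering step, assuming the defect images are uint8 grayscale arrays (the function name and the threshold value are illustrative):

```python
import numpy as np

def keep_sharp_defect_images(defect_images, std_threshold):
    """Discard defect images whose gray-value standard deviation is
    <= std_threshold (too blurry for accurate re-judgment); keep the rest."""
    kept = []
    for img in defect_images:
        gray_std = float(np.std(img.astype(np.float64)))
        if gray_std > std_threshold:
            kept.append(img)
    return kept

flat = np.full((8, 8), 128, dtype=np.uint8)               # std = 0 -> removed
textured = np.tile(np.array([0, 255], np.uint8), (8, 4))  # high std -> kept
kept = keep_sharp_defect_images([flat, textured], 5.0)
print(len(kept))  # 1
```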
In an embodiment of setting the defect to be a linear defect or a sheet defect, determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type includes:
determining the size of the defect image; eliminating the defect image with the size smaller than or equal to the set size threshold; and obtaining an appearance defect detection result of the product to be detected according to the defect image with the size larger than the size threshold value and the corresponding defect type.
In the case where the defect is set to be a linear defect, the size of the defect image may be the diagonal length of the defect image. In the case where the defect is set to be a sheet-like defect, the size of the defect image may be the area of the defect image.
Further, for each defect type, a corresponding size threshold may be set according to a corresponding production line standard.
When the size of a defect image is smaller than or equal to the corresponding size threshold, the defect in that defect image meets the corresponding production line standard, and no appearance defect of the product to be detected is determined from it.
Under the condition that the defect image with the size larger than the size threshold value and the defect type of linear defect or sheet defect exists, the appearance defect detection result of the product to be detected can be obtained according to the defect image with the size larger than the size threshold value and the corresponding defect type. And under the condition that the defect image with the size larger than the size threshold value and the defect type of linear defects or sheet defects does not exist, determining that the product to be detected does not exist the linear defects or the sheet defects.
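A sketch of the size rule above, assuming the defect image is described by its bounding box (the function name and box format are illustrative): the diagonal length is used for linear defects and the area for sheet defects.

```python
import math

def defect_size(defect_type, box):
    """Size of a defect image: diagonal length for linear defects,
    area for sheet defects. box = (x1, y1, x2, y2); units illustrative."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    if defect_type == "linear":
        return math.hypot(w, h)  # diagonal of the defect image
    if defect_type == "sheet":
        return w * h             # area of the defect image
    raise ValueError(f"unknown defect type: {defect_type}")

print(defect_size("linear", (0, 0, 3, 4)))  # 5.0
print(defect_size("sheet", (0, 0, 3, 4)))   # 12
```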
In an embodiment in which the defect of the product to be detected is a dirty defect, determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type includes: performing binarization processing on the defect image based on the set gray threshold value to obtain a binarized image; determining the number of dirty pixel points in the corresponding defect image according to the binarized image; removing defect images corresponding to binarized images with the number of dirty pixel points smaller than or equal to a set number threshold; and obtaining an appearance defect detection result of the product to be detected according to the residual defect image and the corresponding defect type.
In one embodiment of the present disclosure, the defect image may be binarized based on the set gray threshold using the adaptiveThreshold function in OpenCV.
specifically, for each pixel point in the defect image, an average value of gray values of all the pixel points in the preset area with the pixel point as a center, that is, a first average value, may be obtained. Specifically, the preset area may be an area taking n×n pixels with the pixel point as a core. The specific value of N can be flexibly set according to actual conditions.
After the first average value of the pixel point is obtained, the threshold value of the pixel point can be obtained by making a difference between the first average value and the first parameter. The gray value of the pixel is compared with a threshold value to determine whether the gray value of the pixel is set to 0 or 255. In one example, a pixel where the gray value becomes 255 may be determined as a dirty pixel.
In this embodiment, the gray standard deviation of the defect image may be substituted into a preset function to determine the first parameter of the defect image. The first parameter is used to distinguish whether a pixel point is a dirty pixel point.
After the binarization processing is performed on the defect image, the number of dirty pixel points in it can be obtained.
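The mean-based binarization described above can be sketched in pure NumPy, mirroring OpenCV's adaptiveThreshold with ADAPTIVE_THRESH_MEAN_C; inverted-binary semantics are assumed here so that dark dirt pixels are the ones that become 255 (the block size and first parameter c are illustrative values):

```python
import numpy as np

def count_dirty_pixels(gray, block=3, c=10):
    """Compare each pixel with the mean of its block x block neighborhood
    minus c; pixels darker than that local threshold become 255 and are
    counted as dirty pixel points (sketch of adaptive binarization)."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    dirty = 0
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            if gray[y, x] < local_mean - c:  # darker than local threshold
                dirty += 1
    return dirty

img = np.full((9, 9), 200, dtype=np.uint8)
img[4, 4] = 0                   # a single dark "dirt" pixel
print(count_dirty_pixels(img))  # 1
```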
Under the condition that the defect images with the number of the dirty pixel points being larger than the number threshold value exist, the defect that the product to be detected has the dirty defect can be determined, and then the appearance defect detection result of the product to be detected can be obtained according to the defect images with the number of the dirty pixel points being larger than the number threshold value and the corresponding defect types. And under the condition that no defect image with the number of the dirty pixel points being larger than the number threshold value exists, determining that no dirty defect exists in the product to be detected.
By the embodiment, the dirt defect in the product to be detected can be accurately detected.
In the embodiment in which the defect of the product to be detected is a gouge defect, if the first target detection operator detects a defect image of a gouge defect, it can be directly determined that a gouge defect exists in the appearance of the product to be detected, and the appearance defect detection result is obtained according to the defect images whose defect type is the gouge defect and the corresponding defect type. If the first target detection operator does not detect a defect image of a gouge defect, it can be determined that no gouge defect exists in the appearance of the product to be detected.
In one embodiment of the present disclosure, the method may further comprise: and obtaining a second target detection operator obtained through pre-training, and constructing an appearance defect detection module according to the second target detection operator.
In this embodiment, the second object detection operator may be obtained by training the network structure of YOLOv 7.
Specifically, the training manner of the second target detection operator may refer to the foregoing training manner of the first target detection operator, which is not described herein again.
Further, the network layer structure of the re-parameterization module used when training the second target detection operator is the same as that of the re-parameterization module used at inference time.
Still further, the appearance defect detection module is further constructed according to the second target detection operator, that is, the appearance defect detection module is constructed according to the first target detection operator, the second target detection operator and the post-processing operator, and the foregoing step S2000 may be referred to specifically, and will not be repeated here.
In this embodiment, the output end of the first target detection operator may be connected to the input end of the second target detection operator, and the output end of the first target detection operator may further be connected to the input end of the post-processing operator.
On the basis, the appearance defect detection module is used for processing the first appearance image to obtain an appearance detection result of the product to be detected, and the method further comprises the following steps S4300-S4400:
in step S4300, whether the first appearance image includes a set pattern is detected based on the first object detection operator, and if the first appearance image includes a set pattern, a set pattern image is obtained.
The set pattern image is an image area containing a set pattern in the first appearance image.
The set pattern in this embodiment may be a pattern set on the product to be tested, which is set in advance according to an application scenario or specific requirements. For example, the set pattern may be a logo or other identifying pattern.
The set pattern image in this embodiment may be a smallest rectangular area including the set pattern in the first appearance image.
Step S4400, further determining an appearance defect detection result of the product to be detected according to the set pattern image based on the second target detection operator of the appearance defect detection module.
In this embodiment, the second target detection operator may determine that the set pattern of the product to be detected has an appearance defect when it is determined that any set defect exists in the set pattern according to the set pattern image; and under the condition that no defect exists in the set pattern, determining that no appearance defect exists in the set pattern of the product to be tested.
In this embodiment, the first target detection operator may determine that the product to be detected is qualified when it is detected that the first appearance image does not include a defect and does not include a set pattern.
When the first object detection operator detects that the first appearance image contains a defect or contains a set pattern, the appearance defect detection result of the product to be detected can be obtained according to the detection result of the second object detection operator and the detection result of the post-processing operator.
And under the condition that the second target detection operator detects that the set pattern of the product to be detected has an appearance defect or the post-processing operator detects that the product to be detected has at least one defect of a linear defect, a sheet defect, a dirt defect and a gouge defect, the appearance defect of the product to be detected can be determined.
Further, in the case that it is determined that the product to be detected has an appearance defect, the appearance defect detection result may include a defect image and a corresponding defect type.
And under the condition that the second target detection operator detects that the set pattern of the product to be detected does not have an appearance defect and the post-processing operator detects that the product to be detected does not have a linear defect, a sheet defect, a dirt defect and a gouge defect, the condition that the product to be detected does not have the appearance defect can be determined.
Fig. 4 is a schematic diagram of one example of an appearance defect detection method according to an embodiment of the present disclosure.
As shown in fig. 4, the first target detection operator processes the first appearance image and determines whether it contains a set defect or a set pattern; if not, the product to be detected is determined to be qualified. If so, the set pattern image, the defect image and the corresponding defect type are determined: when the first appearance image contains a set defect, the defect image and the corresponding defect type are passed to the post-processing operator, and when it contains the set pattern, the set pattern image is passed to the second target detection operator.
The second target detection operator detects the set pattern image, and under the condition that the set pattern is detected to be defective, the set pattern defect of the product to be detected is determined; and under the condition that no defect of the set pattern is detected, determining that the product to be detected has no defect.
The post-processing operator determines that the product to be detected is defective when the defect type corresponding to a defect image is a gouge defect. When the defect type corresponding to a defect image is a linear defect, a sheet defect or a dirt defect, whether the defect in the defect image meets the corresponding production line standard can be determined according to the defect type; if the defect corresponding to any defect image does not meet its production line standard, the product to be detected is determined to be defective, and if the defects corresponding to all the defect images meet their production line standards, the product to be detected is determined to be qualified.
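The decision logic of fig. 4 can be summarized as follows (a sketch; names and data shapes are illustrative):

```python
def appearance_result(pattern_has_defect, defect_checks):
    """Combine per-operator outcomes as in fig. 4.

    pattern_has_defect: result of the second target detection operator.
    defect_checks: (defect_type, meets_line_standard) pairs from the
    post-processing operator; gouges never meet the standard.
    """
    if pattern_has_defect:
        return "defective"
    if any(not ok for _, ok in defect_checks):
        return "defective"
    return "qualified"

print(appearance_result(False, [("linear", True), ("dirt", True)]))  # qualified
print(appearance_result(False, [("gouge", False)]))                  # defective
```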
< appearance defect detecting device >
The present disclosure also provides an appearance defect detection apparatus, as shown in fig. 5, the appearance defect detection apparatus 6000 may include a processor 6100 and a memory 6200, where the memory 6200 is configured to store a computer program, and the computer program is configured to control the processor 6100 to execute the method described in the foregoing embodiment.
In one embodiment of the present disclosure, the appearance defect detection apparatus 6000 may further include a moving mechanism, a first camera, a second camera, and a third camera. The first camera is used for shooting a first surface of a first product to be tested; the second camera is used for shooting the first surface of a second product to be detected; the moving mechanism is used for controlling the third camera to move between a first position and a second position; the third camera is used for shooting the second surface of the first product to be tested when being positioned at the first position, or shooting the second surface of the second product to be tested when being positioned at the second position.
Further, the appearance defect detecting device 6000 may include a first light source and a second light source, where the brightness of the first light source is different from the brightness of the second light source, so as to shoot each point of the product to be detected at different brightness, and obtain a first appearance image.
The embodiments described above mainly focus on differences from other embodiments, but it should be clear to a person skilled in the art that the embodiments described above may be used alone or in combination with each other as desired.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are referred to each other, and each embodiment is mainly described as different from other embodiments, but it should be apparent to those skilled in the art that the above embodiments may be used alone or in combination with each other as required. In addition, for the device embodiment, since it corresponds to the method embodiment, description is relatively simple, and reference should be made to the description of the corresponding part of the method embodiment for relevant points. The system embodiments described above are merely illustrative, in that the modules illustrated as separate components may or may not be physically separate.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical coding devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Python, Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.

Claims (12)

1. An appearance defect detection method, comprising:
acquiring a first target detection operator and a post-processing operator which are obtained through pre-training;
constructing an appearance defect detection module according to the first target detection operator and the post-processing operator, and deploying the appearance defect detection module on line;
acquiring a first appearance image of a product to be detected;
processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected;
wherein the processing the first appearance image based on the appearance defect detection module to obtain the appearance detection result of the product to be detected comprises:
detecting whether the first appearance image contains a set defect or not based on a first target detection operator of the appearance defect detection module, and obtaining a defect image and a corresponding defect type under the condition that the first appearance image contains the set defect, wherein the defect image is an image area containing the set defect in the first appearance image;
and determining an appearance defect detection result of the product to be detected according to the defect image and the corresponding defect type based on a post-processing operator of the appearance defect detection module.
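The two-operator pipeline of claim 1 (a detection operator followed by a post-processing operator) can be sketched as follows. This is a minimal illustration only, not the claimed implementation: the stub detector, the defect-type names, and the pass/fail thresholds are all hypothetical stand-ins for a trained model and a real inspection specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    defect_type: str                 # e.g. "scratch", "dent" (hypothetical labels)
    box: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in image pixels
    score: float

# Hypothetical per-type thresholds: (min confidence, max allowed area in px^2).
# Real values would come from the product's inspection specification.
SPEC = {
    "scratch": (0.5, 400),
    "dent": (0.5, 100),
}

def first_target_detection_operator(image) -> List[Detection]:
    """Stand-in for the trained detector (e.g. a YOLOv7 model).
    Here it returns a canned detection purely for illustration."""
    return [Detection("scratch", (10, 10, 40, 25), 0.91)]

def post_processing_operator(dets: List[Detection]) -> str:
    """Map raw detections to an OK/NG appearance defect detection result."""
    for d in dets:
        min_conf, max_area = SPEC.get(d.defect_type, (0.5, 0))
        x1, y1, x2, y2 = d.box
        area = (x2 - x1) * (y2 - y1)
        if d.score >= min_conf and area > max_area:
            return "NG"              # defect exceeds the allowed size -> reject
    return "OK"

def appearance_defect_detection_module(image) -> str:
    """Chain the two operators, as in claim 1."""
    return post_processing_operator(first_target_detection_operator(image))

print(appearance_defect_detection_module(None))  # area 30*15 = 450 > 400 -> "NG"
```

The defect image (the cropped region inside `box`) would in practice be handed to the post-processing operator for measurement; here only its area is used.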
2. The method according to claim 1, wherein the method further comprises:
acquiring a second target detection operator obtained through pre-training, and constructing the appearance defect detection module according to the second target detection operator;
the processing the first appearance image based on the appearance defect detection module to obtain an appearance detection result of the product to be detected, further includes:
detecting whether a set pattern is contained in the first appearance image or not based on the first target detection operator, and obtaining a set pattern image under the condition that the set pattern is contained in the first appearance image, wherein the set pattern image is an image area containing the set pattern in the first appearance image;
And determining an appearance defect detection result of the product to be detected according to the set pattern image based on a second target detection operator of the appearance defect detection module.
3. The method of claim 1, wherein a network layer structure of a re-parameterization module in the first target detection operator is identical during training and during inference.
4. The method according to claim 1, wherein the method further comprises:
obtaining a marked second appearance image, wherein the marked second appearance image comprises a marked defect area and a corresponding defect type, and the defect area is an image area containing a set defect in the second appearance image;
generating a first training sample according to the marked second appearance image;
and training a network structure of the YOLOv7 based on the first training sample to obtain the first target detection operator.
5. The method of claim 4, wherein before training the network structure of YOLOv7 based on the first training sample to obtain the first target detection operator, the method further comprises:
determining max pooling parameters of the network structure of YOLOv7 according to the defect types annotated in the second appearance images in the first training sample and the sizes of the defect areas.
6. The method of claim 4, wherein the method further comprises:
acquiring a third appearance image, wherein the third appearance image is a product appearance image without appearance defects;
obtaining a defect template according to the defect area marked in the second appearance image;
generating a second training sample according to the third appearance image, the defect template and the corresponding defect type;
and training a network structure of the YOLOv7 based on the second training sample to obtain the first target detection operator.
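The template-based sample generation in claim 6 can be sketched as follows: a defect region cropped from an annotated image serves as a template and is pasted onto a defect-free image, yielding a new training sample with a known bounding-box label. The grayscale values, image sizes, and paste position below are toy assumptions for illustration only.

```python
def cut_defect_template(labeled_image, box):
    """Crop the annotated defect area from a labeled second appearance image.
    box = (x1, y1, x2, y2); images are lists of pixel rows (grayscale)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in labeled_image[y1:y2]]

def paste_template(clean_image, template, top_left):
    """Paste a defect template onto a defect-free third appearance image,
    returning a new synthetic sample plus its bounding-box label."""
    x, y = top_left
    out = [row[:] for row in clean_image]      # copy so the clean image is reusable
    for dy, trow in enumerate(template):
        out[y + dy][x:x + len(trow)] = trow
    h, w = len(template), len(template[0])
    return out, (x, y, x + w, y + h)

# Toy 6x6 grayscale images: 200 = bright background, 20 = dark defect pixels.
labeled = [[200] * 6 for _ in range(6)]
for yy in range(1, 3):
    for xx in range(1, 4):
        labeled[yy][xx] = 20                    # the annotated defect area

template = cut_defect_template(labeled, (1, 1, 4, 3))
clean = [[200] * 6 for _ in range(6)]
sample, label = paste_template(clean, template, (2, 3))
print(label)         # (2, 3, 5, 5) -> the label of the synthetic second sample
print(sample[3][2])  # 20 -> defect pixels now appear in the synthetic sample
```

The generated `(sample, label, defect_type)` triples would form the second training sample used to retrain the YOLOv7 network structure; a production version would additionally blend the template edges and vary its position, scale, and rotation.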
7. The method of claim 4, wherein the obtaining the annotated second appearance image comprises:
providing a data annotation interface, wherein the data annotation interface comprises a plurality of second appearance images uploaded in advance;
and marking the selected defect area and the corresponding defect type in the second appearance image in response to the marking operation of any second appearance image, so as to obtain a marked second appearance image.
8. The method of claim 1, wherein constructing an appearance defect detection module from the first object detection operator and the post-processing operator comprises:
Providing a module construction interface, wherein the module construction interface comprises a canvas area and an operator list, and the operator list at least comprises the first target detection operator and the post-processing operator;
responsive to an operation of selecting an operator in the operator list, displaying the corresponding operator in a canvas area;
and generating connecting lines between corresponding operators in the canvas area in response to the operation of connecting the operators, so as to obtain the appearance defect detection module.
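The canvas-and-connection construction in claim 8 amounts to assembling a small operator graph: selecting an operator places a node, and drawing a connecting line adds an edge. A minimal sketch, with hypothetical operator names and a recursive executor standing in for the deployed module:

```python
class Operator:
    """A node on the module-construction canvas; `run` is supplied per operator."""
    def __init__(self, name, run):
        self.name = name
        self.run = run
        self.next = []           # outgoing connecting lines to downstream operators

def connect(src, dst):
    """Corresponds to drawing a connecting line between two canvas operators."""
    src.next.append(dst)

def execute(entry, data):
    """Run the assembled module by following connecting lines from the entry node."""
    result = entry.run(data)
    for op in entry.next:
        result = execute(op, result)
    return result

# Hypothetical two-node module: detection operator -> post-processing operator.
detector = Operator("first_target_detection", lambda img: [("scratch", 450)])
postproc = Operator("post_processing",
                    lambda dets: "NG" if any(a > 400 for _, a in dets) else "OK")
connect(detector, postproc)
print(execute(detector, None))   # "NG"
```

Deploying the module online would then mean serializing this graph and serving `execute` behind the detection equipment's inference endpoint.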
9. The method of claim 1, wherein the acquiring the first appearance image of the product to be detected comprises:
controlling a first camera to shoot a first surface of a first product to be tested; controlling a second camera to shoot the first surface of a second product to be detected; and controlling a third camera to move to a first position corresponding to the first product to be detected, shooting the second surface of the first product to be detected, controlling the third camera to move to a second position corresponding to the second product to be detected, and shooting the second surface of the second product to be detected to obtain a first appearance image of the first product to be detected and a first appearance image of the second product to be detected.
10. The method of claim 9, wherein during controlling the first camera to capture a first surface of a first product under test, the method further comprises: controlling the first product to be tested to rotate according to the set frequency and the set angle; in the process of controlling the second camera to shoot the first surface of the second product to be tested, the method further comprises: and controlling the second product to be tested to rotate according to the set frequency and the set angle.
11. An appearance defect detection apparatus comprising a processor and a memory, the memory for storing a computer program for controlling the processor to perform the method of any one of claims 1 to 10.
12. The apparatus of claim 11, further comprising a movement mechanism, a first camera, a second camera, and a third camera;
the first camera is used for shooting a first surface of a first product to be tested;
the second camera is used for shooting the first surface of a second product to be detected;
the moving mechanism is used for controlling the third camera to move between a first position and a second position;
The third camera is used for shooting the second surface of the first product to be tested when being positioned at the first position, or shooting the second surface of the second product to be tested when being positioned at the second position.
CN202311175164.3A 2023-09-12 2023-09-12 Appearance defect detection method and appearance defect detection equipment Pending CN117347368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311175164.3A CN117347368A (en) 2023-09-12 2023-09-12 Appearance defect detection method and appearance defect detection equipment


Publications (1)

Publication Number Publication Date
CN117347368A 2024-01-05

Family

ID=89370093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311175164.3A Pending CN117347368A (en) 2023-09-12 2023-09-12 Appearance defect detection method and appearance defect detection equipment

Country Status (1)

Country Link
CN (1) CN117347368A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination