CN111598857A - Method and device for detecting surface defects of product, terminal equipment and medium

Method and device for detecting surface defects of product, terminal equipment and medium

Info

Publication number: CN111598857A
Application number: CN202010395324.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 黄耀, 陈光斌, 吴雨培
Original Assignee: Beijing Aqrose Robot Technology Co ltd
Current Assignee: Beijing Aqrose Robot Technology Co ltd
Application filed by Beijing Aqrose Robot Technology Co ltd
Priority to: CN202010395324.5A
Publication of: CN111598857A
Legal status: Pending
Prior art keywords: image, product, difference, original image, feature

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/0002 Inspection of images, e.g. flaw detection
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/20 Special algorithmic details
              • G06T2207/20081 Training; Learning
              • G06T2207/20084 Artificial neural networks [ANN]
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
              • G06F18/22 Matching criteria, e.g. proximity measures
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/20 Image preprocessing
              • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses a method and a device for detecting surface defects of a product, a terminal device and a storage medium. The method comprises: collecting an original image of the surface of the product; inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of a restored image corresponding to the original image; comparing the first image feature with the second image feature to obtain a difference image; and determining, among the image areas corresponding to the difference image, the target image areas whose difference values are greater than a preset threshold as defect areas on the surface of the product. Exploiting the fact that in industrial settings good-product images are plentiful and defect images are scarce, the method trains the deep learning model for defect detection with good-product images, avoiding the poor training results and inaccurate detection that conventional deep learning suffers from for lack of defect-image training data, and improving detection efficiency.

Description

Method and device for detecting surface defects of product, terminal equipment and medium
Technical Field
The present invention relates to the field of image detection technologies, and in particular, to a method and an apparatus for detecting surface defects of a product, a terminal device, and a computer-readable storage medium.
Background
With the rapid development of deep learning technology, detecting product surface defects in images by autonomously learning image features with convolutional neural networks has become a common practice in industry.
However, although deep learning can greatly lower the threshold for detecting product defects in images by machine vision, training a good deep learning neural network generally requires feeding a large number of images of products with surface defects into the network as training data, and each image must be annotated at the pixel level to accurately mark the surface defects it contains. Because the defect images must be labelled manually, the labelling time and labelling accuracy directly affect how soon the trained model can go online and how accurately the model detects defects. A more critical problem is that industrial sites generally have many good-product images, in which the product surface is defect-free, and few bad-product images (i.e. defect images), in which the product surface has defects; the shortage of defect images directly results in a poorly trained model and inaccurate detection of product surface defects in the images.
Disclosure of Invention
The main purpose of the present invention is to provide a method and a device for detecting surface defects of products, a terminal device and a computer-readable storage medium, so as to solve the technical problem that, because good-product images are plentiful and defect images are scarce in the industrial field, deep learning detects product surface defects in images poorly and cannot meet the requirements of product defect detection.
In order to achieve the above object, the present invention provides a method for detecting surface defects of a product, the method comprising:
collecting an original image of the surface of a product;
inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image;
comparing the first image characteristic with the second image characteristic to obtain a difference image;
and determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
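Purely as an illustration of how the four steps above could fit together, the following Python/PyTorch sketch assumes an already trained encoder and decoder; the function name, the region-wise averaging of the difference image and the fixed threshold are assumptions of the sketch, not features recited above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def detect_defects(original, encoder, decoder, threshold=0.1, region=32):
    """original: (1, C, H, W) tensor holding the collected product-surface image."""
    f1 = encoder(original)                            # first image feature of the original image
    restored = decoder(f1)                            # restored image corresponding to the original
    f2 = encoder(restored)                            # second image feature of the restored image
    diff = (f1 - f2).abs().mean(dim=1, keepdim=True)  # channel-wise feature difference
    diff = F.interpolate(diff, size=original.shape[-2:],  # difference image, same size as the original
                         mode="bilinear", align_corners=False)
    areas = F.avg_pool2d(diff, kernel_size=region, stride=region)  # difference value per image area
    return areas > threshold                          # target areas exceeding the preset threshold
```

Each True entry of the returned mask maps back to a region x region patch of the original image, which would be reported as a defect area on the product surface.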
Further, the method for detecting the surface defects of the product further comprises the following steps:
and inputting the product image without the defect on the surface of the product as training data into a preset neural network to train to obtain the preset deep learning model.
Further, the preset neural network includes: a full convolutional neural network and a transposed convolutional neural network,
the step of inputting the product image without the defect on the surface of the product as training data into a preset neural network to train and obtain the preset deep learning model comprises the following steps:
taking the full convolution neural network as an encoder and the transposed convolution neural network as a decoder, and forming an initial model by the encoder and the decoder;
collecting a product image without defects on the surface of the product as training data;
inputting the training data into the initial model for iterative training so as to train the initial model into the preset deep learning model.
Further, the step of inputting the training data into the initial model for iterative training includes:
inputting the training data to the encoder to obtain a first data characteristic of the training data;
inputting the first data characteristic to the decoder to obtain restored data of the training data, and inputting the restored data to the encoder to obtain a second data characteristic of the restored data;
and comparing the first data feature with the second data feature, using SSIM (the structural similarity index, a measure of the similarity of two images) as the loss function, to impose a feature constraint, and iteratively training the encoder and the decoder according to preset model training parameters until convergence.
Further, the step of inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of a restored image of the original image includes:
inputting the original image to the encoder to extract a first image feature of the original image by the encoder;
inputting the first image feature to the decoder to generate a restored image of the original image by the decoder;
inputting the restored image to the encoder to extract a second image feature of the restored image by the encoder.
Further, the step of comparing the first image feature and the second image feature to obtain a difference image includes:
performing channel-by-channel difference comparison on the first image feature and the second image feature to obtain difference features;
and integrating the difference features to obtain a difference feature image, and performing up-sampling on the difference feature image to obtain a difference image with the same size as the original image.
Further, the step of determining, as a defect area on the surface of the product, a target image area with a difference value greater than a preset threshold value in each image area corresponding to the difference image includes:
acquiring difference values of image areas corresponding to the difference image and the original image;
and if the target difference value larger than a preset threshold value exists in the difference values, determining a target image area corresponding to the target difference value as a defect area on the surface of the product.
In order to achieve the above object, the present invention further provides an apparatus for detecting surface defects of a product, including:
the acquisition module is used for acquiring an original image of the surface of the product;
the obtaining module is used for inputting the original image into a preset deep learning model so as to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image;
the comparison module is used for comparing the first image characteristic with the second image characteristic to obtain a difference image;
and the determining module is used for determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
The present invention further provides a terminal device, including: a memory, a processor, and a detection program for product surface defects stored on the memory, wherein the detection program, when executed by the processor, implements the steps of the method for detecting surface defects of a product described above.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for detecting surface defects of a product.
The invention provides a method, a device, terminal equipment and a computer readable storage medium for detecting surface defects of a product, which are characterized in that an original image of the surface of the product is collected; inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image; comparing the first image characteristic with the second image characteristic to obtain a difference image; and determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
When detecting surface defects of a product to be detected, the invention first uses a deep learning model, trained with product images whose surfaces are free of defects, to obtain the first image feature of the original image of the inspected product and the second image feature of the restored image of that original image; the defect areas on the product surface can then be determined by comparing the feature difference between the first and second image features and examining the difference values of the corresponding image areas.
In addition, exploiting the fact that in the industrial field good-product images (product images without surface defects) are plentiful while defect images (product images with surface defects) are scarce, the deep learning model is trained using good-product images as training data. This avoids the poor training results and inaccurate detection that arise in the conventional approach, in which defect images serve as training data but are too few, and it improves detection efficiency. Because the deep learning model is trained with good-product images, no staff are needed to annotate image features, so the model is trained in an unsupervised manner, the time required to bring the model online is greatly reduced, and the requirements of product surface defect detection in the industrial field are met.
Drawings
Fig. 1 is a schematic structural diagram of the hardware operation of a terminal device according to an embodiment of the present invention;
FIG. 2 is an architecture diagram of a communication network system in which the terminal device of FIG. 1 may operate;
FIG. 3 is a schematic flow chart of a first embodiment of the method for detecting surface defects of a product according to the present invention;
FIG. 4 is a schematic view of an application flow of an embodiment of the method for detecting surface defects of a product according to the present invention;
FIG. 5 is a functional block diagram of an embodiment of an apparatus for detecting surface defects of products according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the description of the present invention and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal device may be implemented in various forms. For example, the terminal device described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm top computer, a Personal Digital Assistant (PDA), and a fixed terminal such as a desktop computer, a large server, and the like.
While the following description will be made taking a mobile terminal device as an example, those skilled in the art will appreciate that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal device in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, the terminal device 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the terminal device in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and terminal equipment can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the terminal device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the terminal device 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs and various data; the memory 109 may be a computer storage medium, and it stores the detection program for product surface defects of the present invention. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. For example, the processor 110 executes the detecting program of the product surface defect in the memory 109 to implement the steps of the embodiments of the detecting method of the product surface defect of the present invention.
Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the terminal device 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the terminal device of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal device 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and charging functions Entity) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the hardware structure of the mobile terminal and the communication network system, the invention provides various embodiments of the detection method for the surface defects of the product.
The invention provides a method for detecting surface defects of a product, which comprises the following steps:
collecting an original image of the surface of a product; inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image; comparing the first image characteristic with the second image characteristic to obtain a difference image; and determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
Specifically, referring to fig. 3, fig. 3 is a flowchart illustrating a method for detecting surface defects of a product according to a first embodiment of the present invention.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein.
The method for detecting the surface defects of the product is applied to the terminal equipment, and comprises the following steps:
and step S100, acquiring an original image of the surface of the product.
The original image of the surface of the product is captured for the product to be detected, wherein the original image is transmitted from a collection device, and the collection device is connected in advance to the terminal device currently used to detect whether the surface of the product has defects.
It should be noted that, in this embodiment, the acquisition device connected to the current terminal device may specifically be a device including one or more camera terminals, and the device transmits an original image acquired in real time on the surface of the product to be detected to the current terminal device, or the device may add an identifier of the product to be detected to the acquired original image, and then periodically transmit the original image to the terminal device currently used for detecting whether the surface of the product has a defect.
Step S200, inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of a restored image corresponding to the original image;
It should be noted that, in this embodiment, the preset deep learning model is a converged deep learning model obtained in advance, based on deep learning, by taking a large number of product images without surface defects (for convenience of expression, referred to below simply as good-product images), collected and stored by the collection device, as training data and then performing iterative training with that training data.
After the current terminal device collects an original image of the surface of the product to be detected through the collection device, the original image is input into the deep learning model that has been trained in advance; the deep learning model processes the original image and outputs the image feature of the original image and the image feature of the restored image corresponding to the original image.
Further, the deep learning model may be embodied as an image autoencoder with feature-layer comparison, the image autoencoder comprising an encoder and a decoder.
The step S200 may include:
step S201, inputting the original image to the encoder so as to extract a first image characteristic of the original image through the encoder;
after an original image of the surface of a product to be detected is collected by a current terminal device through a collecting device, the original image is input into a deep learning model which is trained in advance, and a first image feature of the original image is extracted and output by an encoder in the deep learning model.
Step S202, inputting the first image feature to the decoder so as to generate a restored image of the original image through the decoder;
after the current terminal device obtains the first image feature of the input original image based on the deep learning model, the deep learning model further inputs the first image feature extracted by the encoder into a decoder in the deep learning model, so that the decoder regenerates a restored image of the original image by using the first image feature.
Specifically, for example, in the application process shown in fig. 4, after the current terminal device inputs the collected original image of the surface of the product to be detected into the trained deep learning model, namely the image autoencoder, the image autoencoder extracts the first image feature of the original image through the encoder and then directly inputs the first image feature into the decoder, so that the decoder uses the first image feature to regenerate a restored image, namely the restored image of the original image input by the current terminal device.
Step S203, inputting the restored image to the encoder, so as to extract a second image feature of the restored image through the encoder.
After the current terminal device, based on the deep learning model, has the decoder regenerate the restored image of the original image from the first image feature extracted by the encoder, the deep learning model further inputs the restored image into the encoder, so as to extract the second image feature of the restored image and return it to the current terminal device.
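Steps S201 to S203 amount to a single forward pass through the image autoencoder. The wrapper class below is a minimal sketch under that assumption; the class and method names are hypothetical, and the concrete encoder and decoder are assumed to be defined elsewhere.

```python
import torch.nn as nn

class FeatureComparingAutoencoder(nn.Module):
    """Hypothetical wrapper returning both feature extractions and the restored image."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, original):
        f1 = self.encoder(original)     # S201: first image feature of the original image
        restored = self.decoder(f1)     # S202: restored image generated from the first feature
        f2 = self.encoder(restored)     # S203: second image feature of the restored image
        return f1, f2, restored
```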
Step S300, comparing the first image characteristic with the second image characteristic to obtain a difference image;
the current terminal device inputs an original image into a deep learning model so as to obtain a first image feature of the original image and a second image feature of a restored image of the original image, compares feature differences between the first image feature of the original image and the second image feature of the restored image, and integrates the feature differences so as to obtain a difference image between the feature differences and the original image.
Further, step S300 may include:
step S301, performing channel-by-channel difference comparison on the first image feature and the second image feature to obtain difference features;
It should be noted that, in this embodiment, the first image feature of the original image and the second image feature of the restored image, obtained by inputting the original image of the surface of the product to be detected into the trained deep learning model, are high-dimensional features, and a high-dimensional feature generally has a plurality of channels. Therefore, when performing the difference comparison between the first image feature and the second image feature corresponding to the original image and the restored image, the two features need to be compared channel by channel, so as to obtain the difference feature of each channel.
Specifically, for example, after obtaining the first image feature of the original image of the product surface to be detected and the second image feature of the corresponding restored image through the encoder and decoder of the deep learning model (the image autoencoder), the current terminal device computes, channel by channel, the L2 distance (the per-pixel difference, squared) or the L1 distance (the absolute value of the per-pixel difference) between the first image feature of the original image and the second image feature of the restored image, thereby obtaining the difference feature of each channel between the two features.
Step S302, integrating each difference feature to obtain a difference feature image, and performing up-sampling on the difference feature image to obtain a difference image with the same size as the original image.
It should be noted that, in this embodiment, because the encoder of the deep learning model (the image autoencoder) performs down-sampling when extracting the first image feature of the original image, part of the information in the first image feature may be lost during down-sampling, so the restored image obtained by feeding the first image feature into the decoder may differ slightly from the original image. Such a difference would interfere with the final detection result, i.e. increase the pixel-level over-detection rate. For this reason, in this embodiment, after the channel-by-channel difference comparison of the first image feature and the second image feature corresponding to the original image and the restored image, the difference feature image obtained by integrating the difference features of the channels is up-sampled to obtain a difference image with the same size as the original image. This effectively suppresses the increase in the pixel-level over-detection rate and further ensures the accuracy of product surface defect detection.
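A minimal sketch of steps S301 and S302, assuming the features are (N, C, h, w) tensors; the choice between the L1 and squared L2 per-channel difference and the bilinear up-sampling mode are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def difference_image(f1, f2, out_size, use_l1=True):
    """f1, f2: (N, C, h, w) feature maps; out_size: (H, W) of the original image."""
    per_channel = (f1 - f2).abs() if use_l1 else (f1 - f2) ** 2   # S301: channel-by-channel difference
    fused = per_channel.mean(dim=1, keepdim=True)                 # integrate the difference features
    return F.interpolate(fused, size=out_size,                    # S302: up-sample to the original size
                         mode="bilinear", align_corners=False)
```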
And step S400, determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
Further, step S400 may include:
step S401, obtaining difference values of each image area corresponding to the difference image and the original image;
after obtaining a difference image between the current terminal device and the original image, detecting a difference value of each image area where the original image and the difference image correspond to each other.
Step S402, if a target difference value larger than a preset threshold value is detected in the difference values, determining a target image area corresponding to the target difference value as a defect area of the surface of the product.
It is detected whether a target difference value greater than the preset threshold exists among the difference values of the image areas; if so, the target image area in the original image corresponding to that target difference value is determined to be a defect area on the surface of the product to be detected.
It should be noted that, in this embodiment, since the acquired difference image has the same size as the original image, when the surface of the product to be detected has a defect, the difference values between the normal image areas of the original image and the corresponding areas of the difference image are small, whereas the difference values between the defective image areas of the original image and the corresponding areas of the difference image are large. A specific difference threshold is therefore set according to the actual application: image areas whose difference values are below the set threshold can be directly judged to be normal, and image areas whose difference values are above the set threshold are determined to be defect areas.
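One possible realisation of steps S401 and S402, assuming the difference image has already been brought to the size of the original image and converted to a 2-D array; grouping above-threshold pixels into connected regions with scipy is an illustrative choice, not something required by the text.

```python
import numpy as np
from scipy import ndimage

def defect_areas(diff_image: np.ndarray, threshold: float):
    """diff_image: (H, W) array of difference values; returns bounding boxes of defect areas."""
    mask = diff_image > threshold            # image areas whose difference exceeds the preset threshold
    labels, _ = ndimage.label(mask)          # group adjacent above-threshold pixels into regions
    boxes = ndimage.find_objects(labels)     # one (row-slice, col-slice) pair per region
    return [(b[0].start, b[0].stop, b[1].start, b[1].stop) for b in boxes if b is not None]
```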
In this embodiment, an original image of the product surface is captured for the product to be detected; the original image is transmitted from a collection device, and the collection device is connected to the terminal device currently used to detect whether the product surface has defects. After collecting the original image of the surface of the product to be detected through the collection device, the current terminal device inputs the original image into the deep learning model trained in advance; the deep learning model processes the original image and outputs the first image feature of the original image and the second image feature of the restored image corresponding to the original image. The terminal device compares the feature difference between the first image feature of the original image and the second image feature of the restored image and integrates the feature differences to obtain a difference image. Finally, it detects the difference values of the image areas in which the original image and the difference image correspond to each other, so that the target image areas whose difference values are greater than the preset threshold are determined to be defect areas on the surface of the product to be detected.
When detecting surface defects of the product to be detected, the method first uses a deep learning model, trained with product images whose surfaces are free of defects, to obtain the first image feature of the original image of the inspected product and the second image feature of the restored image of that original image; the defect areas on the product surface can then be determined by comparing the feature difference between the first image feature and the second image feature and examining the difference values of the corresponding image areas.
In addition, exploiting the fact that in the industrial field good-product images are plentiful while defect images are scarce, the deep learning model is trained using good-product images as training data. This avoids the poor training results and inaccurate detection that arise in the conventional approach of using the scarce defect images as training data, and it improves detection efficiency. Because the deep learning model is trained with good-product images, staff do not need to annotate image features, so the model is trained in an unsupervised manner, the time required to bring the model online is greatly reduced, and the requirements of product surface defect detection in the industrial field are met.
Further, based on the first embodiment of the method for detecting surface defects of products, a second embodiment of the method for detecting surface defects of products is provided.
In a second embodiment of the method for detecting surface defects of a product of the present invention, the method for detecting surface defects of a product of the present invention further includes:
and step A, inputting a product image without defects on the surface of the product as training data into a preset neural network to train to obtain the preset deep learning model.
Based on a deep learning neural network, the current terminal device takes the collected and stored product images without surface defects (i.e. good-product images) as training data and iteratively trains the neural network to obtain a deep learning model that can be used to detect defects on the product surface.
It should be noted that, in this embodiment, the preset neural network includes, but is not limited to, a full convolutional neural network and a transposed convolutional neural network. The deep learning model, namely the image autoencoder, is obtained by training a network with a fully convolutional structure, so the encoder of the image autoencoder abandons fully connected layers in favour of a convolutional structure; the image features extracted by the encoder can be fed directly into the decoder, which better preserves the spatial information of the original image and avoids the poor detection results that would be caused by losing defect information from the original image.
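The fully convolutional encoder and transposed-convolution decoder described above might be sketched as follows; the number of layers, channel widths and kernel sizes are arbitrary illustrative choices rather than values taken from the text.

```python
import torch.nn as nn

def build_encoder(in_channels=3):
    # Fully convolutional: no fully connected layers, so spatial information is preserved.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
    )

def build_decoder(out_channels=3):
    # Transposed convolutions up-sample the feature map back to the image resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
    )
```

Pairing these two networks, for example with the FeatureComparingAutoencoder wrapper sketched after step S203, yields the kind of initial model described in step A1 below.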
Further, step a may include:
step A1, using the full convolution neural network as an encoder, using the transposed convolution neural network as a decoder, and combining the encoder and the decoder into an initial model;
the current terminal device respectively uses the full convolution neural network as an encoder of the deep learning model-image self-encoder and uses the transposed convolution neural network as a decoder of the image self-encoder, so that the full convolution neural network and the transposed convolution neural network are combined to form an initial model of the deep learning model-image self-encoder.
A2, collecting a product image without defects on the surface of the product as training data;
step A3, inputting the training data into the initial model for iterative training, so as to train the initial model into the preset deep learning model.
Based on the collection device connected in advance, the current terminal device takes the good-product images without surface defects, collected and stored by the collection device, as training data, inputs the training data into the initial model, and controls the initial model to perform iterative training according to preset parameters such as the number of iterations and the learning rate.
Further, in one embodiment, before using the good-product images without surface defects, collected and stored by the pre-connected collection device, as training data, the current terminal device may also preprocess the good-product images, for example by translating and rotating them and enhancing their illumination; varying the good-product images in this way allows the initial model to learn more image features while being trained on them as training data. In addition, if the current terminal device detects that the size of a good-product image does not match the input size specified by the initial model, it may also resize the image to the input size specified by the initial model for training.
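The preprocessing mentioned above (translation, rotation, illumination enhancement, and resizing to the model's specified input size) could, for instance, be expressed with torchvision transforms; the specific parameter values below are illustrative assumptions.

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                                # adjust to the model's input size
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # small rotation and translation
    transforms.ColorJitter(brightness=0.2),                       # illumination enhancement
    transforms.ToTensor(),
])
```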
Further, in step a3, the step of inputting the training data into the initial model for iterative training may include:
step A301, inputting the training data to the encoder to obtain a first data feature of the training data;
after the current terminal device takes a good picture without defects on the surface, which is collected and stored by a pre-connected collecting device, as training data, the training data is input to an encoder in an initial model, so that a first data feature of the training data is extracted and output by the encoder.
Step A302, inputting the first data characteristic to the decoder to obtain restored data of the training data, and inputting the restored data to the encoder to obtain a second data characteristic of the restored data;
after the encoder based on the initial model extracts and outputs the first data feature of the training data, the current terminal device further inputs the first data feature to a decoder in the initial model, so that the decoder regenerates the restored data of the training data by using the first data feature, and then continuously inputs the restored data to the encoder, so that the encoder extracts the second data feature of the restored data.
Step A303, comparing the first data feature with the second data feature, using the structural similarity index SSIM as the loss function, to impose a feature constraint, and iteratively training the encoder and the decoder according to preset model training parameters until convergence.
It should be noted that, in this embodiment, the current terminal device further uses the structural similarity index (SSIM) as the loss function for training the initial model, so that the initial model can better learn the image features of the good-product images.
Existing autoencoders generally use the L1 distance or the L2 distance as their metric, which only measures differences between individual pixels. In this embodiment, the structural similarity index SSIM is additionally adopted as the metric of the image autoencoder. When computing the difference between two images, SSIM jointly considers luminance, contrast and structural similarity, three indices that are important for characterizing an image. Compared with the L1 or L2 distance, which simply compares numerical differences between pixels, using SSIM as the difference metric therefore helps the image autoencoder learn the features of an image better.
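For reference, the SSIM index between two image patches x and y is commonly defined in the literature as follows (this general formula is standard background and is not recited verbatim in the text):

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

where \mu_x and \mu_y are the means, \sigma_x^2 and \sigma_y^2 the variances, \sigma_{xy} the covariance, and C_1, C_2 small stabilising constants; the formula combines the luminance, contrast and structure comparisons referred to above.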
In addition, in this embodiment, the preset model training parameters are parameters such as the number of iterations, the learning rate, and the like that are preset by the current terminal device to control the initial model to perform model training, and it should be understood that the method for detecting surface defects of a product according to the present invention does not limit the specific types of the preset model training parameters.
After obtaining the first data feature of the training data and the second data feature of the restored data corresponding to the training data, the current terminal device imposes the feature constraint by comparing the SSIM (structural similarity) distance between the first data feature and the second data feature, and then controls the encoder and decoder of the initial model to train iteratively, according to parameters such as the preset number of iterations and the learning rate, until the initial model converges; the converged initial model is stored as the deep learning model used for product surface defect detection, so that it can be called directly later.
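A sketch of this iterative training, assuming the model is the FeatureComparingAutoencoder from the earlier sketch; the simplified, non-windowed SSIM below and the optimizer settings are assumptions of this sketch (practical SSIM implementations usually use a sliding Gaussian window), and only the feature-constraint term described in the text is shown.

```python
import torch

def simple_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global (non-windowed) SSIM between two tensors; an illustrative simplification.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def train(model, loader, epochs=50, lr=1e-3):
    """model: encoder + decoder pair; loader yields batches of defect-free product images."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # preset learning rate
    for _ in range(epochs):                                   # preset number of iterations
        for good_images in loader:
            f1, f2, _restored = model(good_images)
            loss = 1 - simple_ssim(f1, f2)                    # SSIM-based feature constraint
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```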
Further, in another embodiment, after obtaining the first data feature of the training data and the second data feature of the corresponding recovery data of the training data, the feature constraint may also be performed by comparing the L1 or L2 distance between the first data feature and the second data feature.
In this embodiment, the current terminal device uses the full convolutional neural network as the encoder of the image autoencoder and the transposed convolutional neural network as its decoder, combining the two networks into the initial model of the deep learning model, namely the image autoencoder, and further uses the structural similarity index SSIM as the loss function of the initial model so that the initial model learns the image features of the product images better. In addition, feature-layer comparison of the training data is introduced: the training data is input to the encoder to obtain the first data feature, the first data feature is input to the decoder to obtain the restored data of the training data, the restored data is input to the encoder to obtain the second data feature, and finally the first data feature and the second data feature are compared to impose the feature constraint, with the encoder and the decoder trained iteratively according to the preset model training parameters until convergence. By constraining the data features of the training data in this way, the features of the input training data and of the restored data regenerated by the decoder become more similar, the overall training of the model converges faster, the time needed to bring the model online is shortened, and the initial model learns better data features of the training data for subsequent defect detection, further ensuring detection accuracy.
In addition, referring to fig. 5, which is a functional module schematic diagram of the device for detecting surface defects of products according to the present invention, an embodiment of the present invention further provides a device for detecting surface defects of products, the device including:
the acquisition module is used for acquiring an original image of the surface of the product;
the obtaining module is used for inputting the original image into a preset deep learning model so as to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image;
the comparison module is used for comparing the first image characteristic with the second image characteristic to obtain a difference image;
and the determining module is used for determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
Optionally, the apparatus for detecting surface defects of a product of the present invention further includes:
and the training module is used for inputting the product image without the defect on the surface of the product as training data into a preset neural network so as to train to obtain the preset deep learning model.
Optionally, the preset neural network includes: full convolution neural network and transposition convolution neural network, training module includes:
the combination unit is used for taking the full convolution neural network as an encoder, taking the transposed convolution neural network as a decoder and enabling the encoder and the decoder to form an initial model;
the acquisition unit is used for acquiring a product image without defects on the surface of the product as training data;
and the training unit is used for inputting the training data into the initial model for iterative training so as to train the initial model into the preset deep learning model.
Optionally, the training unit includes:
the first input unit is used for inputting the training data to the encoder to obtain a first data characteristic of the training data;
the second input unit is used for inputting the first data characteristic to the decoder to obtain restored data of the training data and inputting the restored data to the encoder to obtain a second data characteristic of the restored data;
and the first comparison unit is used for comparing the first data feature with the second data feature, with the structural similarity (SSIM) serving as the loss function, so as to apply a feature constraint, and for iteratively training the encoder and the decoder according to preset model training parameters until convergence (a training-step sketch follows this list).
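Under the assumptions of the earlier ImageAutoencoder sketch, one possible training step is outlined below: the structural similarity is applied both to the restored data (as the reconstruction loss) and to the pair of data features (as the feature constraint). The simplified global SSIM (computed per sample without a sliding window), the feat_weight of 0.1 and the default constants c1 and c2 are illustrative assumptions, not values fixed by this embodiment.

import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM computed over each whole sample instead of local windows.
    mu_x = x.mean(dim=(1, 2, 3))
    mu_y = y.mean(dim=(1, 2, 3))
    var_x = x.var(dim=(1, 2, 3), unbiased=False)
    var_y = y.var(dim=(1, 2, 3), unbiased=False)
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def training_step(model, batch, optimizer, feat_weight=0.1):
    # One iteration over a mini-batch of defect-free product images.
    optimizer.zero_grad()
    z, x_rec, z_rec = model(batch)                         # first feature, restored data, second feature
    recon_loss = (1.0 - global_ssim(x_rec, batch)).mean()  # SSIM loss on the restored data
    feat_loss = (1.0 - global_ssim(z_rec, z)).mean()       # SSIM-based feature constraint
    loss = recon_loss + feat_weight * feat_loss
    loss.backward()
    optimizer.step()
    return loss.item()

In use, the step would be repeated over the training data with an optimizer such as torch.optim.Adam(model.parameters(), lr=1e-3) until the loss converges; the optimizer choice and learning rate are likewise assumptions.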
Optionally, the obtaining module includes:
a third input unit configured to input the original image to the encoder to extract a first image feature of the original image by the encoder;
a fourth input unit, configured to input the first image feature to the decoder, so as to generate a restored image of the original image through the decoder;
a fifth input unit, configured to input the restored image to the encoder, so as to extract a second image feature of the restored image by the encoder.
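A minimal sketch of this obtaining step at inspection time, again assuming the trained ImageAutoencoder from the earlier sketch; the function name and the single-image batch shape are illustrative.

import torch

@torch.no_grad()
def extract_features(model, original):
    # original: a (1, C, H, W) tensor holding the collected original image.
    model.eval()
    first_feature = model.encoder(original)   # first image feature of the original image
    restored = model.decoder(first_feature)   # restored image generated by the decoder
    second_feature = model.encoder(restored)  # second image feature of the restored image
    return first_feature, second_feature, restored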
Optionally, the comparison module comprises:
a second comparison unit, configured to perform channel-by-channel difference comparison on the first image feature and the second image feature to obtain difference features;
and the integration unit is used for integrating the difference features to obtain a difference feature image, and up-sampling the difference feature image to obtain a difference image with the same size as the original image.
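A hedged sketch of this comparison step under the same assumptions: the channel-by-channel differences are integrated into a single-channel difference feature image and up-sampled to the original image size. Averaging over channels is one possible way to integrate the difference features, not the only one this embodiment admits.

import torch.nn.functional as F

def difference_image(first_feature, second_feature, original_size):
    # Channel-by-channel difference between the two feature maps.
    diff = (first_feature - second_feature).abs()
    # Integrate the per-channel differences into one difference feature image.
    diff_map = diff.mean(dim=1, keepdim=True)
    # Up-sample to obtain a difference image the same size as the original image.
    return F.interpolate(diff_map, size=original_size, mode="bilinear", align_corners=False)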
Optionally, the determining module includes:
the acquiring unit is used for acquiring the difference value of each image area of the difference image corresponding to the original image;
and the determining unit is used for determining a target image area corresponding to the target difference value as a defective area of the product surface if the target difference value larger than a preset threshold value is detected in the difference values.
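A minimal sketch of this determination step: the difference image is split into a grid of image areas, each area is given a difference value, and areas whose value exceeds the preset threshold are returned as defect areas. The 32x32 cell size, the mean-value score and the 0.2 threshold are illustrative assumptions.

def defect_regions(diff_img, threshold=0.2, cell=32):
    # diff_img: a (1, 1, H, W) difference image aligned with the original image.
    _, _, h, w = diff_img.shape
    regions = []
    for top in range(0, h, cell):
        for left in range(0, w, cell):
            area = diff_img[0, 0, top:top + cell, left:left + cell]
            # An image area whose difference value exceeds the threshold is a defect area.
            if area.numel() > 0 and area.mean().item() > threshold:
                regions.append((top, left, min(top + cell, h), min(left + cell, w)))
    return regions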
For the steps implemented by the functional modules of the device for detecting surface defects of a product, reference may be made to the embodiments of the method for detecting surface defects of a product of the present invention, and details are not described herein again.
The present invention also provides a mobile terminal, comprising: a memory, a processor, a communication bus, and a program for detecting product surface defects stored on the memory, wherein:
the communication bus is used for realizing connection and communication between the processor and the memory;
the processor is used for executing the program for detecting product surface defects, so as to implement the steps of the method for detecting surface defects of a product described above.
In addition, an embodiment of the present invention further provides a computer-readable storage medium applied to a computer; the computer-readable storage medium may be a non-volatile computer-readable storage medium on which a program for detecting product surface defects is stored, and when the program is executed by a processor, the steps of the method for detecting surface defects of a product are implemented.
For the steps implemented when the program for detecting product surface defects is executed on the processor, reference may be made to the embodiments of the method for detecting surface defects of a product of the present invention, and details are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for detecting surface defects of a product, comprising:
collecting an original image of the surface of a product;
inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image;
comparing the first image characteristic with the second image characteristic to obtain a difference image;
and determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
2. The method for detecting surface defects of a product according to claim 1, further comprising:
and inputting the product image without the defect on the surface of the product as training data into a preset neural network to train to obtain the preset deep learning model.
3. The method of claim 2, wherein the predetermined neural network comprises: a full convolutional neural network and a transposed convolutional neural network,
the step of inputting the product image without the defect on the surface of the product as training data into a preset neural network to train and obtain the preset deep learning model comprises the following steps:
taking the full convolution neural network as an encoder and the transposed convolution neural network as a decoder, and forming an initial model by the encoder and the decoder;
collecting a product image without defects on the surface of the product as training data;
inputting the training data into the initial model for iterative training so as to train the initial model into the preset deep learning model.
4. The method of claim 3, wherein the step of inputting the training data into the initial model for iterative training comprises:
inputting the training data to the encoder to obtain a first data characteristic of the training data;
inputting the first data characteristic to the decoder to obtain restored data of the training data, and inputting the restored data to the encoder to obtain a second data characteristic of the restored data;
and comparing the first data characteristic and the second data characteristic by taking the structural similarity SSIM as a loss function to carry out characteristic constraint, and carrying out iterative training on the encoder and the decoder according to preset model training parameters until convergence.
5. The method for detecting surface defects of a product according to claim 3, wherein the step of inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of the restored image corresponding to the original image comprises:
inputting the original image to the encoder to extract a first image feature of the original image by the encoder;
inputting the first image feature to the decoder to generate a restored image of the original image by the decoder;
inputting the restored image to the encoder to extract a second image feature of the restored image by the encoder.
6. The method of claim 1, wherein the step of comparing the first image feature and the second image feature to obtain a difference image comprises:
performing channel-by-channel difference comparison on the first image feature and the second image feature to obtain difference features;
and integrating the difference features to obtain a difference feature image, and performing up-sampling on the difference feature image to obtain a difference image with the same size as the original image.
7. The method for detecting surface defects of a product according to claim 1, wherein the step of determining a target image area with a difference value greater than a preset threshold value in each image area corresponding to the difference image as a defect area of the surface of the product comprises:
acquiring difference values of image areas corresponding to the difference image and the original image;
and if the target difference value larger than a preset threshold value exists in the difference values, determining a target image area corresponding to the target difference value as a defect area on the surface of the product.
8. An apparatus for detecting surface defects of a product, comprising:
the collection module is used for collecting an original image of the surface of the product;
the obtaining module is used for inputting the original image into a preset deep learning model to obtain a first image feature of the original image and a second image feature of a restored image corresponding to the original image;
the comparison module is used for comparing the first image characteristic with the second image characteristic to obtain a difference image;
and the determining module is used for determining the target image area with the difference value larger than a preset threshold value in each image area corresponding to the difference image as a defect area on the surface of the product.
9. A terminal device, characterized in that the terminal device comprises: a memory, a processor and a program for detecting surface defects of a product stored on the memory and executable on the processor, the program for detecting surface defects of a product implementing the steps of the method for detecting surface defects of a product according to any one of claims 1 to 7 when executed by the processor.
10. A storage medium, characterized in that it has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for detecting surface defects of a product according to any one of claims 1 to 7.
CN202010395324.5A 2020-05-11 2020-05-11 Method and device for detecting surface defects of product, terminal equipment and medium Pending CN111598857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010395324.5A CN111598857A (en) 2020-05-11 2020-05-11 Method and device for detecting surface defects of product, terminal equipment and medium

Publications (1)

Publication Number Publication Date
CN111598857A true CN111598857A (en) 2020-08-28

Family

ID=72187081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010395324.5A Pending CN111598857A (en) 2020-05-11 2020-05-11 Method and device for detecting surface defects of product, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN111598857A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650770A (en) * 2016-09-29 2017-05-10 南京大学 Mura defect detection method based on sample learning and human visual characteristics
CN110619618A (en) * 2018-06-04 2019-12-27 杭州海康威视数字技术股份有限公司 Surface defect detection method and device and electronic equipment
CN109829903A (en) * 2019-01-28 2019-05-31 合肥工业大学 A kind of chip surface defect inspection method based on convolution denoising self-encoding encoder

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183342A (en) * 2020-09-28 2021-01-05 国网安徽省电力有限公司检修分公司 Comprehensive convertor station defect identification method with template
CN112183342B (en) * 2020-09-28 2022-07-12 国网安徽省电力有限公司检修分公司 Comprehensive convertor station defect identification method with template
CN113034432A (en) * 2021-01-08 2021-06-25 苏州真目人工智能科技有限公司 Product defect detection method, system, device and storage medium
CN113034432B (en) * 2021-01-08 2023-10-27 苏州真目人工智能科技有限公司 Product defect detection method, system, device and storage medium
CN114022442A (en) * 2021-11-03 2022-02-08 武汉智目智能技术合伙企业(有限合伙) Unsupervised learning-based fabric defect detection algorithm
CN114170227A (en) * 2022-02-11 2022-03-11 北京阿丘科技有限公司 Product surface defect detection method, device, equipment and storage medium
CN116609345A (en) * 2023-07-19 2023-08-18 北京阿丘机器人科技有限公司 Battery cover plate defect detection method, device, equipment and storage medium
CN116609345B (en) * 2023-07-19 2023-10-17 北京阿丘机器人科技有限公司 Battery cover plate defect detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200828)