CN114359158A - Object detection method, storage medium, processor and system - Google Patents

Object detection method, storage medium, processor and system

Info

Publication number
CN114359158A
Authority
CN
China
Prior art keywords
target
image
sample
detection result
detection
Prior art date
Legal status
Pending
Application number
CN202111495847.8A
Other languages
Chinese (zh)
Inventor
李晨阳
刘伟
陈想
罗斌
陈列
汪彪
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202111495847.8A
Publication of CN114359158A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer


Abstract

The invention discloses an object detection method, a storage medium, a processor and a system. The method includes: performing defect detection on the surface of a target object based on a first target image of the target object to obtain a first target detection result, where the first target image is obtained by imaging the target object with a first light source in a single lighting direction; determining a target area in a second target image of the target object based on the first target detection result, where the second target image is obtained by imaging the target object with corresponding second light sources in a plurality of lighting directions, respectively; and performing defect detection on the target area to obtain a second target detection result. The invention solves the technical problem of low accuracy in detecting surface defects of an object.

Description

Object detection method, storage medium, processor and system
Technical Field
The present invention relates to the field of computers, and in particular, to an object detection method, a storage medium, a processor, and a system.
Background
At present, detecting surface defects of an object is one of the most widely applied tasks in computer vision, and the detection is difficult because surface defects vary both in shape and in height (relief).
In the related art, the surface of an object is generally inspected with conventional imaging schemes such as visible light and infrared light. However, different types of surface defects appear differently in such images, and some defects are easily confused with the background under visible-light or infrared imaging, so the accuracy of detecting surface defects of an object is low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an object detection method, a storage medium, a processor and a system, which are used for at least solving the technical problem of low accuracy in detecting surface defects of an object.
According to an aspect of an embodiment of the present invention, there is provided an object detection method. The method may include: performing defect detection on the surface of a target object based on a first target image of the target object to obtain a first target detection result, where the first target image is obtained by imaging the target object with a first light source in a single lighting direction; determining a target area in a second target image of the target object based on the first target detection result, where the second target image is obtained by imaging the target object with corresponding second light sources in a plurality of lighting directions, respectively; and performing defect detection on the target area to obtain a second target detection result.
According to another aspect of the embodiment of the invention, an object detection method is also provided. The method can comprise the following steps: acquiring a first surface image and a second surface image of an industrial object to be detected, wherein the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions respectively; detecting the defects of the surface of the industrial object based on the first surface image to obtain a first surface detection result; determining a target area in the second surface image based on the first surface detection result; and carrying out defect detection on the target area to obtain a second surface detection result.
According to another aspect of the embodiment of the invention, an object detection method is also provided. The method can comprise the following steps: acquiring a first target image and a second target image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions; carrying out defect detection on the surface of the target object based on the first target image to obtain a first target detection result; determining a target area in the second target image based on the first target detection result; detecting defects of the target area to obtain a second target detection result; and outputting a second target detection result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the second target detection result.
According to another aspect of the embodiment of the invention, an object detection method is also provided. The method can comprise the following steps: responding to an input operation instruction acting on an operation interface, and displaying a first target image and a second target image of a target object on the operation interface, wherein the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and responding to a detection operation instruction acting on the operation interface, and displaying a second target detection result on the operation interface, wherein the second target detection result is obtained by performing defect detection on a target area in a second target image, the target area is determined in the second target image based on a first target detection result, and the first target detection result is obtained by performing defect detection on the surface of the target object based on the first target image.
According to another aspect of the embodiment of the invention, an object detection method is also provided. The method can comprise the following steps: responding to a selection operation instruction acting on an operation interface, and selecting a plurality of lighting directions for lighting a target object; displaying a second target image on the operation interface, wherein the second target image is obtained by imaging a target object through corresponding second light sources in a plurality of lighting directions respectively; and displaying a second target detection result of a target area in a second target image on the operation interface in response to a defect detection instruction acting on the operation interface, wherein the target area is determined by a first target detection result, the first target detection result is obtained by performing defect detection on the surface of the target object based on a first target image of the target object, and the first target image is obtained by imaging the target object through a first light source in one lighting direction.
According to another aspect of the embodiment of the invention, an object detection device is also provided. The apparatus may include: the first detection unit is used for carrying out defect detection on the surface of the target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in one lighting direction; a first determining unit, configured to determine a target region in a second target image of the target object based on the first target detection result, where the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions, respectively; and the second detection unit is used for detecting the defects of the target area to obtain a second target detection result.
According to another aspect of the embodiments of the present invention, an object detecting apparatus is also provided. The apparatus may include: a first acquisition unit, configured to acquire a first surface image and a second surface image of an industrial object to be detected, where the first surface image is obtained by imaging the surface of the industrial object with a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object with corresponding second light sources in a plurality of lighting directions, respectively; a third detection unit, configured to perform defect detection on the surface of the industrial object based on the first surface image to obtain a first surface detection result; a second determination unit, configured to determine a target region in the second surface image based on the first surface detection result; and a fourth detection unit, configured to perform defect detection on the target region to obtain a second surface detection result.
According to another aspect of the embodiments of the present invention, an object detecting apparatus is also provided. The apparatus may include: the second acquisition unit is used for acquiring a first target image and a second target image of a target object to be detected by calling the first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions; the fifth detection unit is used for carrying out defect detection on the surface of the target object based on the first target image to obtain a first target detection result; a third determination unit configured to determine a target region in the second target image based on the first target detection result; detecting defects of the target area to obtain a second target detection result; and the sixth detection unit is used for outputting a second target detection result by calling a second interface, wherein the second interface comprises a second parameter, and a parameter value of the second parameter is the second target detection result.
According to another aspect of the embodiments of the present invention, an object detecting apparatus is also provided. The apparatus may include: the first display unit is used for responding to an input operation instruction acting on the operation interface and displaying a first target image and a second target image of a target object on the operation interface, wherein the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and the second display unit is used for responding to a detection operation instruction acting on the operation interface and displaying a second target detection result on the operation interface, wherein the second target detection result is obtained by carrying out defect detection on a target area in a second target image, the target area is determined in the second target image based on a first target detection result, and the first target detection result is obtained by carrying out defect detection on the surface of the target object based on the first target image.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, which includes a stored program, wherein when the program runs, the apparatus on which the storage medium is located is controlled to execute the method for object detection in any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes to perform the method for object detection in any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided an object detection system, including: a processor; a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: the method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result.
In the embodiment of the invention, defect detection is carried out on the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result. That is to say, the present invention utilizes two stages to detect the defect on the surface of the target object, firstly performs defect detection on the surface of the target object based on the first target image (e.g., a conventional image) to obtain a first target detection result, then determines the target region in the second target image (e.g., a photometric stereo image) of the target object based on the first target detection result, and performs defect detection on the target region to obtain a second target detection result, thereby suppressing false alarm according to the second target detection result, further achieving the technical effect of improving the accuracy of detecting the surface defect of the object, and solving the technical problem of low accuracy of detecting the surface defect of the object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal (or mobile device) of an object detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of object detection according to an embodiment of the present invention;
FIG. 3 is a flow chart of another object detection method according to an embodiment of the invention;
FIG. 4 is a flow chart of another object detection method according to an embodiment of the invention;
FIG. 5 is a flow chart of another object detection method according to an embodiment of the invention;
FIG. 6 is a schematic flow chart of a model training phase according to an embodiment of the present invention;
FIG. 7 is a flow diagram of a model test phase according to an embodiment of the invention;
FIG. 8 is a schematic illustration of the imaging effect of four directional illumination according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of sample equalization sampling according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an object detection apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic view of another object detecting apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic view of another object detecting apparatus according to an embodiment of the present invention;
FIG. 13 is a schematic view of another object detecting apparatus according to an embodiment of the present invention;
fig. 14 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
There is also provided, in accordance with an embodiment of the present invention, an embodiment of an object detection method, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing an object detection method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the object detection method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implements the object detection method of the application program. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a particular specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the operational environment shown in fig. 1, the present application provides an object detection method as shown in fig. 2. It should be noted that the object detection method of this embodiment may be executed by the mobile terminal of the embodiment shown in fig. 1.
Fig. 2 is a flow chart of an object detection method according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
step S202, based on a first target image of the target object, performing defect detection on the surface of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction.
In the technical solution provided in step S202 of the present invention, the target object may be a product manufactured in any of the vertical industries under the two broad categories of heavy industry and light industry, for example, the steel industry, the 3C industry, the textile industry, and the like; the first target image may be a conventional image such as a visible-light or infrared image, and may be obtained by imaging the target object with the first light source in one lighting direction, where the first light source may be a visible-light or infrared light source, and the lighting direction may be up, down, left, or right, which is not limited herein.
Optionally, the surface of the target object is illuminated to obtain the first target image, defect detection is performed on the surface of the target object, and the detected defect regions are cropped to obtain a first target detection result carrying annotation marks.
Step S204, based on the first target detection result, determining a target area in a second target image of the target object, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively.
In the technical solution provided in step S204 of the present invention, the second target image is obtained by imaging the target object through the corresponding second light source in a plurality of lighting directions, for example, defining the plurality of lighting directions, obtaining visible light imaging images of the plurality of lighting directions, and using the visible light imaging images to generate the corresponding second target image, optionally, the second target image is a photometric stereo image. Photometric stereo imaging can be a method of reconstructing the surface of a target object, which can reconstruct the normal vector of the surface of the target object, as well as the reflectivity of different surface points of the target object.
Optionally, for the plurality of lighting directions, the target object is kept still at the same shooting station while different lighting directions are switched and multiple shots are taken. Multiple lighting directions are used because images from at least three lighting directions are required for the second target image to be a photometric stereo image; the number of lighting directions can be determined according to the morphological characteristics of the defects, and suitable lighting directions are designed according to those characteristics.
In this embodiment, the second target image is a photometric stereo image obtained by illuminating the target object with second light sources corresponding to different lighting directions and shooting the target object. The second light sources are respectively arranged in multiple lighting directions, that is, each lighting direction corresponds to one second light source (multiple directional light sources), and the target object is imaged by the respective second light sources in the multiple lighting directions to obtain the second target image, where the number of lighting directions may be appropriately increased to improve the accuracy of the surface recovery of the target object during reconstruction. Optionally, in this embodiment, there may also be a single second light source, with different lighting directions realized by blocking different positions of the light source; the target object is then imaged by this second light source in the multiple lighting directions to obtain the second target image.
In this embodiment, based on the first target detection result, a target region is determined in the second target image of the target object, where the target region may be the region corresponding to the first target detection result cropped from the second target image. For example, for a certain target object to be detected, after the prediction result of the first target detection is obtained, the corresponding second target image is cropped according to that prediction result.
And step S206, carrying out defect detection on the target area to obtain a second target detection result.
In the technical solution provided by step S206 of the present invention, the second target detection result may be used to indicate whether the target object is defective or non-defective. Optionally, the cropped target area is detected; if no defect is detected in the target area of the target object, the target object is finally judged to be normal, and if a defect is detected in the target area, the target object is finally judged to be defective.
Through the above steps S202 to S206 of the present application, defect detection is performed on the surface of the target object based on a first target image of the target object to obtain a first target detection result, where the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result. That is to say, the present invention utilizes two stages to detect the defect on the surface of the target object, firstly performs defect detection on the surface of the target object based on the first target image (e.g., a conventional image) to obtain a first target detection result, then determines the target region in the second target image (e.g., a photometric stereo image) of the target object based on the first target detection result, and performs defect detection on the target region to obtain a second target detection result, thereby suppressing false alarm according to the second target detection result, further achieving the technical effect of improving the accuracy of detecting the surface defect of the object, and solving the technical problem of low accuracy of detecting the surface defect of the object.
The above method of this embodiment is further described below.
As an optional implementation manner, in step S206, performing defect detection on the target area to obtain a second target detection result includes: performing defect detection on the concave-convex (relief) information in the target area to obtain the second target detection result, where the concave-convex information is used to represent defects with a three-dimensional form in the target area.
In this embodiment, the concave-convex information is used to represent defects with a three-dimensional morphology in the target region.
Optionally, the target area determined in the second target image is assessed by its concavity and convexity, so as to judge whether the first target detection result is a false alarm, and the second target detection result is then obtained.
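To make the use of concave-convex information concrete, below is a minimal sketch of how a cropped region's relief could be scored from a photometric-stereo normal map; the tilt-based score and the threshold value are illustrative assumptions, not the computation fixed by this embodiment.

```python
import numpy as np

def relief_score(normal_map: np.ndarray, box: tuple) -> float:
    """Score how 'three-dimensional' a cropped region looks.

    normal_map: H x W x 3 surface normals recovered by photometric stereo.
    box:        (x1, y1, x2, y2) region predicted by the first-stage detector.
    """
    x1, y1, x2, y2 = box
    patch = normal_map[y1:y2, x1:x2]
    # A flat background has normals close to (0, 0, 1); a dented or raised
    # defect tilts the normals away from the viewing axis.
    tilt = 1.0 - patch[..., 2]          # deviation of n_z from 1
    return float(tilt.mean())

def is_relief_defect(normal_map, box, threshold=0.05):
    # Hypothetical threshold: regions with enough surface tilt are kept as
    # candidate 3D defects, flat regions are treated as false alarms.
    return relief_score(normal_map, box) > threshold
```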
As an optional implementation manner, performing defect detection on the target area to obtain a second target detection result includes: and performing defect detection on the target area based on a first defect detection model to obtain a second target detection result, wherein the first defect detection model is obtained by training a positive sample image and a negative sample image based on a quantity balance, the positive sample image is used for representing the surface with the real defects, and the negative sample image is used for representing the surface without the real defects.
In this embodiment, the first defect detection model may also be referred to as a second-stage defect classifier and a feature classification model, and may be obtained by training a positive sample image and a negative sample image based on quantity equalization, where the quantity equalization may refer to a ratio of the positive sample image to the negative sample image of 1:1, and is used for detecting a defect in a target region, thereby reducing false alarms. A positive sample image is used to represent a surface having a real defect and a negative sample image is used to represent a surface not having the real defect.
For example, a threshold is preset according to actual requirements, for example, the threshold is set to 0.5, and the target areas are examined: if the overlap with the ground-truth annotation rectangle is higher than the threshold, the target area is a positive sample image, that is, a real defect sample image; otherwise, if the overlap with the ground-truth annotation rectangle is lower than the threshold, the target area is a negative sample image, that is, a normal background sample image. A large number of positive and negative sample images are obtained by processing a batch of target areas, and the obtained positive and negative sample images are counted and randomly sampled, so that positive and negative sample images with a ratio of 1:1 are obtained.
Optionally, the second-stage defect classifier is obtained by training using the sampling result with the balanced number of positive and negative sample images.
As an alternative embodiment, the positive sample image and the negative sample image are generated based on a second defect detection model, and performing defect detection on the surface of the target object to obtain the first target detection result includes: performing defect detection on the surface of the target object based on the second defect detection model to obtain the first target detection result.
In this embodiment, the second defect detection model, which may also be referred to as the first-stage defect detector, is used for detection on the first target image and may be obtained by training a detector to detect defects on the surface of the target object. The core of the second defect detection model is a detection model that is substantially the same as a general detection network, such as Faster R-CNN, the single-stage detector SSD, YOLO, or the one-stage fully convolutional detector FCOS.
Optionally, according to the actual situation, the defect types of the target image are predefined, the first target image is taken as input, and the detected defect regions are cropped to obtain the first target detection result, where the first target detection result may be annotated with rectangular boxes, which is not specifically limited herein.
For example, a first target image that meets the specification, with a picture type that meets the specification, is input into the second defect detection model to obtain the first target detection result, where the first target detection result includes predictions of the defect types and regions; sample-balanced sampling is then performed based on the first target detection result obtained by the second defect detection model to obtain positive sample images and negative sample images.
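As a minimal sketch of what the second defect detection model (the first-stage detector) could look like when built on a general detection network, the following uses torchvision's Faster R-CNN for illustration; the class count, weight file name and score threshold are assumptions, and any of the detectors named above (SSD, YOLO, FCOS) could be substituted.

```python
import torch
import torchvision

def build_stage1_detector(num_classes: int, weight_path: str = "stage1.pth"):
    # Generic detection backbone; the embodiment only requires "a general
    # detection network" (Faster R-CNN, SSD, YOLO, FCOS, ...).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes)
    model.load_state_dict(torch.load(weight_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def detect_defects(model, image: torch.Tensor, score_thresh: float = 0.5):
    """image: 3 x H x W float tensor in [0, 1] (the first target image)."""
    out = model([image])[0]
    keep = out["scores"] > score_thresh
    # Boxes are (x1, y1, x2, y2); these regions are later cropped from the
    # photometric stereo image for second-stage classification.
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```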
As an alternative embodiment, a first image sample of a first sample object is obtained, where the first image sample is obtained by imaging the first sample object with a first light source in one lighting direction; defect detection is performed on the first image sample based on the second defect detection model to obtain a first sample detection result; a second image sample of the first sample object is acquired, and balanced sampling is performed on the second image sample based on the first sample detection result to obtain positive sample images and negative sample images, where the second image sample is obtained by imaging the first sample object with corresponding second light sources in a plurality of lighting directions, respectively; and the first defect detection model is obtained by training based on the positive sample images and the negative sample images.
In this embodiment, the first sample object may be the object under detection in the training phase, and the first image sample may be a conventional image such as a visible-light or infrared image in the training phase.
For example, the first sample object can be a product from any of the vertical industries under the two broad categories of heavy industry and light industry, such as the steel industry, the 3C industry, the textile industry, and the like; the first image sample may be a visible-light or infrared image, and may be obtained by imaging the first sample object with a first light source in one lighting direction, where the first light source may be a visible-light or infrared light source, and the lighting direction may be up, down, left, or right, which is not limited herein.
In this embodiment, the first sample detection result may be a prediction result of the first-stage defect detector, which may also be referred to as a full-scale inference result, and is obtained by detecting the first image sample through the second defect detection model.
For example, the second defect detection model is used for performing prediction labeling on the first image sample, and the first image sample subjected to prediction labeling is verified to obtain a first sample detection result, namely a defect prediction result.
In this embodiment, the second image sample may be a photometric stereo image in the training stage, and may be obtained by imaging the first sample object with the corresponding second light sources in a plurality of lighting directions, for example, by defining the plurality of lighting directions, acquiring visible-light images in the plurality of lighting directions, and using these visible-light images to generate the corresponding second image sample. Optionally, sample-balanced sampling is performed on the second image sample, resulting in positive sample images and negative sample images.
Optionally, the first defect detection model classifies the regions predicted by the second defect detection model into positive and negative samples, so as to output the final detection result.
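A minimal sketch of how the first defect detection model (the second-stage classifier) might be trained on the balanced crops follows; the small CNN architecture, crop handling and training details are assumptions, since the embodiment does not fix a particular classifier.

```python
import torch
from torch import nn

class CropClassifier(nn.Module):
    """Tiny CNN that labels a photometric-stereo crop as defect / background."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # 0 = background (negative), 1 = real defect (positive)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, crops, labels):
    """crops: N x C x H x W tensor of balanced positive/negative samples."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(crops), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A plain optimizer such as `torch.optim.Adam(model.parameters(), lr=1e-3)` would be one workable choice here.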
As an alternative embodiment, performing balanced sampling on the second image sample based on the first sample detection result to obtain the positive sample image and the negative sample image includes: determining a corresponding first area sample on the second image sample based on the first sample detection result; and classifying the sub-image samples of the first area sample to obtain the positive sample image and the negative sample image.
In this embodiment, the first area sample may be a cropped rectangular-box area, which may be cropped from the second image sample based on the first sample detection result obtained by the second defect detection model, and the cropped regions of the second image sample may be classified to obtain positive sample and negative sample images.
For example, a threshold is set in advance for a given condition; for example, the threshold of the area intersection-over-union is set to 0.4, and the cropped first area samples are classified. If the overlap of a cropped sample, that is, its area intersection-over-union, is higher than 0.4, the sample is a real defect sample, that is, a positive sample; if the overlap is lower than 0.4, the sample is a normal background sample, that is, a negative sample.
As an alternative implementation, classifying the sub-image samples of the first area sample to obtain a positive sample image and a negative sample image includes: determining the degree of overlap between the first area sample and a second area sample, wherein the second area sample comprises an area for representing real defects on the first sample object; determining an image corresponding to a second area sample with the overlapping degree larger than a target threshold value in the sub-image samples as a positive sample image; and determining an image corresponding to a second area sample with the overlapping degree lower than a target threshold value in the sub-image samples as a negative sample image.
In this embodiment, the second region sample may be a ground-truth annotated rectangular box used to represent a region of the first sample object having a real defect. The target threshold may be an area intersection-over-union threshold, which may be represented by an IoU threshold.
For example, according to the actual situation, a region intersection ratio threshold is preset, and positive and negative samples are classified on the cut sample image, where the positive sample image represents a sample whose overlapping degree with the real labeling rectangular frame is higher than IoU threshold, that is, a real defect sample, and the negative sample image represents a sample whose overlapping degree with the real labeling rectangular frame is lower than IoU threshold, that is, a normal background sample. Based on the above operation, a series of positive and negative sample images can be obtained.
Optionally, after the above operations, a series of positive and negative sample images can be obtained; further, the positive and negative sample images are counted and randomly sampled, so that the ratio of positive to negative sample images is close to 1:1.
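The labeling and 1:1 sampling described above can be sketched as follows; the 0.4 IoU threshold follows the example in this embodiment, while the function names and the subsampling strategy (trimming the larger set) are illustrative assumptions.

```python
import random

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def label_crops(pred_boxes, gt_boxes, iou_thresh=0.4):
    """Split predicted regions into positive (real defect) and negative
    (normal background) samples by their best overlap with ground truth."""
    positives, negatives = [], []
    for box in pred_boxes:
        best = max((iou(box, gt) for gt in gt_boxes), default=0.0)
        (positives if best > iou_thresh else negatives).append(box)
    return positives, negatives

def balance(positives, negatives, seed=0):
    """Randomly subsample the larger set so the ratio is close to 1:1."""
    random.seed(seed)
    n = min(len(positives), len(negatives))
    return random.sample(positives, n), random.sample(negatives, n)
```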
As an optional implementation manner, a third image sample of a second sample object and a corresponding second sample detection result are obtained, where the second sample detection result is obtained by labeling a defect on a surface of the second sample object based on the third image sample; and training the sub-detection model based on the third image sample and the second sample detection result to obtain a second defect detection model.
In this embodiment, the second sample detection result may be obtained by labeling the defect type with a rectangular frame, or labeling the defect on the surface of the second sample object based on the third image sample. The sub-detection model may be a defect detector, and the second defect detection model is obtained by training the defect detector.
For example, the defect detector is trained based on the third image sample and the second sample detection result, so as to obtain a second defect detection model.
As an optional implementation manner, the target object is imaged by corresponding second light sources in at least three lighting directions to obtain at least three sub-images; a second target image is generated based on the at least three sub-images.
In this embodiment, since photometric stereo imaging requires images from at least three lighting directions, the target object is imaged by the corresponding second light sources in the at least three lighting directions to obtain at least three sub-images, and the second target image is generated based on the at least three sub-images.
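For illustration, below is a minimal sketch of the classic least-squares photometric stereo reconstruction from the at least three sub-images; it assumes calibrated lighting directions and Lambertian reflectance, and is not necessarily the exact reconstruction used in this embodiment.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover albedo and surface normals from >= 3 directional images.

    images:     list of H x W grayscale arrays, one per lighting direction.
    light_dirs: K x 3 array of unit lighting-direction vectors.
    """
    I = np.stack([im.reshape(-1) for im in images], axis=0)   # K x (H*W)
    L = np.asarray(light_dirs, dtype=np.float64)              # K x 3
    # Least-squares solution of L @ G = I, where G = albedo * normal.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)                 # 3 x (H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-9)
    h, w = images[0].shape
    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)
```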
The embodiment of the invention also provides another object detection method.
Fig. 3 is a flow chart of another object detection method according to an embodiment of the invention. As shown in fig. 3, the method may include the steps of:
step S302, a first surface image and a second surface image of the industrial object to be detected are obtained, where the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions.
In the technical solution provided in step S302 of the present invention, the first surface image may be obtained by imaging the surface of the industrial object through the first light source in one lighting direction, and may be an imaging image of visible light, infrared light, or the like, and the second surface image may be obtained by imaging the surface of the industrial object through the corresponding second light sources in a plurality of lighting directions, respectively, and may be a photometric stereo imaging image.
Step S304, defect detection is carried out on the surface of the industrial object based on the first surface image, and a first surface detection result is obtained.
Optionally, the second defect detection model is used for performing defect detection on the surface of the industrial object to obtain a first surface detection result, where the first surface detection result includes a prediction result of the defect type and the defect region.
Step S306, determining a target region in the second surface image based on the first surface detection result.
In this embodiment, the first surface detection result is further classified, so as to determine a target region in the second surface image, where the target region may be a predicted defect region cropped from the second surface image.
And step S308, carrying out defect detection on the target area to obtain a second surface detection result.
Optionally, defect detection is performed on the predicted defect region cropped from the second surface image, so as to obtain the second surface detection result.
For example, for a certain industrial object to be detected, after the prediction result of the second defect detection model is obtained, the corresponding photometric stereo image is cropped according to that prediction result, the cropped photometric stereo regions are sent to the first defect detection model, and positive/negative classification results are obtained. If all the classification results are negative samples, the object is finally judged to be a normal sample; if any classification result is a positive sample, the object is judged to be a defective sample.
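The two-stage inference in this example can be summarized in a short sketch; `stage1_detect` and `stage2_classify` stand in for the second and first defect detection models discussed above, and the crop/decision logic follows the text (all crops negative means normal, any positive means defective).

```python
def inspect(first_image, photometric_image, stage1_detect, stage2_classify):
    """Two-stage surface inspection of one object.

    stage1_detect(first_image) -> iterable of (x1, y1, x2, y2) candidate boxes.
    stage2_classify(crop)      -> True for a real defect, False for a false alarm.
    """
    for box in stage1_detect(first_image):
        x1, y1, x2, y2 = map(int, box)
        crop = photometric_image[y1:y2, x1:x2]   # crop the photometric stereo image
        if stage2_classify(crop):                # any positive crop -> defective
            return "defective"
    return "normal"                              # no boxes, or all crops negative
```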
The embodiment of the invention also provides another object detection method.
Fig. 4 is a flow chart of another object detection method according to an embodiment of the invention. As shown in fig. 4, the method may include the steps of:
step S402, a first target image and a second target image of a target object to be detected are obtained by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions.
In the technical solution provided by step S402 of the present invention, the first interface may be an interface for performing data interaction between the server and the client. The client can transmit the first target image and the second target image of at least one target object to be detected into the first interface as a first parameter of the first interface, so that the purpose of uploading the first target image and the second target image of the target object to be detected to the server is achieved.
Optionally, the first target image is obtained by imaging the target object through the first light source in one lighting direction, and the second target image is obtained by imaging the target object through the corresponding second light source in a plurality of lighting directions respectively.
Optionally, the platform acquires the first target image and the second target image of the target object to be detected by calling the first interface, where the first interface is used to deploy the object detection method via the Internet and connect it to the system under test, so as to acquire the first target image and the second target image of the target object to be detected.
Step S404, defect detection is carried out on the surface of the target object based on the first target image, and a first target detection result is obtained.
Step S406, determining a target area in the second target image based on the first target detection result; and carrying out defect detection on the target area to obtain a second target detection result.
Step S408, outputting a second target detection result by calling a second interface, where the second interface includes a second parameter, and a parameter value of the second parameter is the second target detection result.
In the technical solution provided in step S408 of the present invention, the second interface may be an interface for data interaction between the server and the client, and the server may pass the second target detection result to the second interface as a parameter of the second interface, so as to deliver the second target detection result to the client. Optionally, the platform outputs the second target detection result by calling the second interface, where the second interface is used to deploy the object detection result via the Internet and deliver it to the system under test, so as to output the second target detection result.
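The first and second interfaces are described only as carriers of parameters between client and server, so the following Flask-style sketch is purely illustrative; the route name, upload field names and the `run_two_stage_detection` helper are hypothetical assumptions, not part of this disclosure.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_two_stage_detection(first_image_bytes, second_image_bytes):
    # Placeholder for the pipeline above: first-stage detection on the
    # conventional image, second-stage classification on photometric crops.
    return {"status": "normal", "defects": []}

@app.route("/detect", methods=["POST"])   # hypothetical "first interface"
def first_interface():
    # First parameter: the first target image and the second target image,
    # uploaded by the client (field names are assumptions).
    first_image = request.files["first_target_image"].read()
    second_image = request.files["second_target_image"].read()
    result = run_two_stage_detection(first_image, second_image)
    # The response plays the role of the "second interface": its second
    # parameter carries the second target detection result back to the client.
    return jsonify({"second_target_detection_result": result})
```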
The embodiment of the invention also provides another object detection method.
FIG. 5 is a flow chart of another object detection method according to an embodiment of the invention. As shown in fig. 5, the method may include the steps of:
step S502, in response to an input operation instruction acting on the operation interface, displaying a first target image and a second target image of a target object on the operation interface, where the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions, respectively.
In the technical solution provided by step S502 of the present invention, the input operation instruction may be triggered by a user to display the first target image and the second target image of the target object on the operation interface, so that the embodiment responds to the input operation instruction acting on the interactive interface to generate the first target image and the second target image.
And step S504, responding to a detection operation instruction acting on the operation interface, and displaying a second target detection result on the operation interface, wherein the second target detection result is obtained by performing defect detection on a target area in a second target image, the target area is determined in the second target image based on a first target detection result, and the first target detection result is obtained by performing defect detection on the surface of the target object based on the first target image.
In the technical solution provided in step S504 of the present invention, in response to a detection operation instruction acting on an operation interface on an interaction interface, a defect detection is performed on a surface of a target object with respect to a first target image to obtain a first target detection result, a target area is determined in a second target image based on the first target detection result, a defect detection is performed on the target area in the second target image to obtain a second target detection result, and the second target detection result is displayed on the interaction interface.
As an alternative example, the method may comprise: responding to a selection operation instruction acting on an operation interface, and selecting a plurality of lighting directions for lighting a target object; displaying a second target image on the operation interface, wherein the second target image is obtained by imaging a target object through corresponding second light sources in a plurality of lighting directions respectively; and displaying a second target detection result of a target area in a second target image on the operation interface in response to a defect detection instruction acting on the operation interface, wherein the target area is determined by a first target detection result, the first target detection result is obtained by performing defect detection on the surface of the target object based on a first target image of the target object, and the first target image is obtained by imaging the target object through a first light source in one lighting direction.
In this embodiment, a user interaction process may be provided so that the user can select the lighting directions. This embodiment may receive a selection operation instruction triggered by the user on the operation interface, where the selection operation instruction is used to select a plurality of lighting directions for lighting the target object; the plurality of lighting directions are lighting directions independently selected (customized) by the user according to actual scene requirements, which improves the flexibility of selecting the plurality of lighting directions. Optionally, in responding to the selection operation instruction, this embodiment may determine suitable lighting directions and the number of lighting directions according to the morphological characteristics of the defects of the target object.
Optionally, the operation interface of this embodiment may further display the first target image, where the first target image is obtained by imaging the target object through the first light source in one lighting direction; defect detection may be performed on the surface of the target object based on the first target image, and the target area is then determined in the second target image of the target object based on the first target detection result. This embodiment may display the position of the target area in the second target image on the operation interface, and the user may further trigger a defect detection instruction on the operation interface, where the defect detection instruction is used for further defect detection of the target area in the second target image; in response to the defect detection instruction acting on the operation interface, defect detection may be performed on the target area to obtain the second target detection result, which is then displayed on the operation interface.
In this embodiment, the method first performs defect detection on the surface of the target object to obtain a first target detection result, determines a target region in a second target image of the target object based on the first target detection result, and performs defect detection on the target region to obtain a second target detection result, thereby achieving the technical effect of improving the accuracy of detecting the surface defects of the object and solving the technical problem of low accuracy of detecting the surface defects of the object.
Example 2
Preferred embodiments of the above-described method of this embodiment are further described below.
Currently, industrial defect detection is a challenging task, and its application scenarios cover various vertical industries under the two major directions of heavy industry and light industry, such as the steel industry, the 3C industry and the textile industry. Among these tasks, surface defect detection is one of the most widely applied computer vision tasks, and it is highly difficult because surface defects vary in morphology and in surface relief.
In the related art, the industry commonly uses the conventional scheme of visible-light imaging plus a deep neural network, and no scheme has yet combined conventional imaging with photometric stereo imaging to perform two-stage suppression of defect false alarms. Surface defect detection schemes generally use conventional imaging such as visible light and infrared light together with a deep neural network for defect detection, but such schemes have the following problem: different types of surface defects appear differently in imaging, and some defects are easily confused with the background under imaging conditions such as visible light and infrared light, although they can be distinguished by their concave-convex (relief) characteristics. For example, the aluminum-leakage defect of a battery piece is easily confused with metal dust under visible light and infrared light; this causes the model to raise false alarms on the background and ultimately leads to severe over-rejection by the model.
Therefore, the present application provides a two-stage defect false-alarm suppression scheme based on photometric stereo imaging: in the first stage, defects are detected on conventional images such as visible-light and infrared images; in the second stage, the corresponding photometric stereo image information is used to further judge whether the first-stage detection result is a false alarm. In this two-stage manner, the concave-convex information from photometric stereo is used to suppress defect false alarms.
The above method of this embodiment is further described below.
The whole process of the two-stage defect false alarm suppression scheme based on photometric stereo imaging mainly comprises two stages of model training and model testing.
The first stage, the model training stage, is shown in fig. 6, and fig. 6 is a schematic flow chart of the model training stage according to the embodiment of the present invention.
Step S701, conventional imaging such as visible-light or infrared imaging.
In step S701, defects are labeled based on conventional images such as visible-light and infrared images, and the corresponding first-stage defect detector is trained.
Step S702, a first stage defect detector training.
The main function of the first-stage defect detector is as follows: it takes conventional images such as visible-light and infrared images as input and detects the predefined defect types. In the model training stage, the predefined defect types first need to be labeled with rectangular boxes on the conventional images such as visible-light and infrared images, and the defect detector is then trained with the labeled data. The core of the defect detector is a detection model that is substantially the same as a general-purpose detection network.
In the testing stage, the defect detector uses conventional imaging images such as visible light and infrared light as input, the type and specification of the input image are required to be consistent with those in the training stage, and then the prediction result of the defect type and the defect area is obtained.
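As a non-authoritative illustration of such a first-stage detector, the sketch below instantiates it with torchvision's Faster R-CNN, used here only as an example of a detection model substantially the same as a general detection network; the number of defect classes and the score threshold are assumed values, not specified by this application.

```python
# Sketch of a first-stage defect detector built on a general-purpose detection
# network (torchvision Faster R-CNN is only an example architecture).
import torch
import torchvision

def build_first_stage_detector(num_defect_types=4):
    # +1 because index 0 is reserved for the background class in torchvision detectors.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=num_defect_types + 1)

def detect_defects(model, image, score_threshold=0.5):
    """Run inference on one conventional (visible/infrared) image tensor of
    shape [3, H, W] and keep predictions above the score threshold."""
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]          # dict with 'boxes', 'labels', 'scores'
    keep = pred["scores"] >= score_threshold
    return {k: v[keep] for k, v in pred.items()}
```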
Then, based on the prediction result of the first-stage defect detector, sample equalization sampling is performed on the photometric stereo image.
Step S703, N-direction visible-light imaging.
Defining N lighting directions, acquiring images of the N lighting directions (N-direction visible light imaging), and generating corresponding luminosity stereo images by using the images.
(1) Lighting in multiple directions: multi-direction lighting means keeping the sample still at the same shooting station, switching among different lighting directions, and shooting multiple times. Multi-direction lighting is used because photometric stereo imaging requires images under at least three lighting directions.
Optionally, the number N of lighting directions may be determined according to the morphological characteristics of the defects, and suitable lighting directions are designed according to these morphological characteristics. As shown in fig. 8, which is a schematic diagram of the imaging effect of four-direction lighting according to an embodiment of the present invention, the sample is lit from four directions in sequence, that is, N is 4.
(2) Photometric stereo imaging: photometric stereo imaging is a method for reconstructing the surface of an object. It reconstructs the normal vectors of the object surface and the reflectivity (albedo) at different surface points, and relative depth information can further be calculated from the surface normal-vector information; therefore, a normal-vector map, an albedo map and a relative-depth map can all be obtained with the photometric stereo method.
Optionally, compared with conventional geometric reconstruction methods (e.g., stereo matching), the photometric stereo method does not need to solve the image matching problem: it only needs to collect three or more images of the object shot under light from different directions, and during this process the object and the camera remain stationary throughout while only the direction of the light source changes, so the captured images are already aligned. The number of light-source directions can be increased appropriately to improve the accuracy of surface reconstruction, and the photometric stereo method is simple to operate and low in cost.
It should be noted that, in the present application, the normal vector map generated by photometric stereo inversion is mainly used, and the method of the present patent is also effective for the albedo map and the relative depth map.
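For reference, the classical Lambertian least-squares formulation of photometric stereo inversion can be sketched as follows; this is the standard textbook formulation rather than code from this application, and the lighting directions in the usage comment are assumed values.

```python
# Least-squares Lambertian photometric stereo: recover per-pixel albedo and
# surface normals from K >= 3 grayscale images under known lighting directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: array [K, H, W]; light_dirs: array [K, 3] of lighting directions."""
    K, H, W = images.shape
    I = images.reshape(K, -1).astype(np.float64)       # stacked intensities [K, H*W]
    L = np.asarray(light_dirs, dtype=np.float64)
    L = L / np.linalg.norm(L, axis=1, keepdims=True)   # ensure unit lighting vectors
    # Solve L @ G = I for G = albedo * normal at every pixel (least squares).
    G, *_ = np.linalg.lstsq(L, I, rcond=None)          # [3, H*W]
    albedo = np.linalg.norm(G, axis=0)                 # reflectance map, [H*W]
    normals = G / (albedo + 1e-8)                      # unit normal vectors, [3, H*W]
    return albedo.reshape(H, W), normals.reshape(3, H, W)

# Usage with four-direction lighting as in fig. 8 (directions are assumed values):
# albedo_map, normal_map = photometric_stereo(
#     imgs, [[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1]])
```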
Step S704, a photometric stereo image.
And generating a corresponding luminosity stereo image based on the acquired images in the N lighting directions.
Step S705, sample equalization sampling.
Sample equalization sampling mainly provides a balanced number of positive and negative samples for training the second-stage defect classifier, and can be divided into the following two stages: a detector full-inference stage and a prediction-result classification and equalization stage. The main flow is shown in fig. 9, and fig. 9 is a schematic diagram of sample equalization sampling according to an embodiment of the present invention. In the detector full-inference stage, the trained first-stage defect detector performs inference and prediction on the full set of labeled data to obtain a series of defect prediction results; in this full-inference stage the output threshold of the detector needs to be lowered, for example set to 0.01, so that the detector keeps a high recall and, at the same time, generates enough background false alarms to serve as subsequent negative samples.
In the prediction-result classification and equalization stage, the corresponding rectangular-box regions are cropped from the photometric stereo image based on the obtained full-inference results, and the cropped sample images are classified into positive and negative samples based on a preset IoU (intersection-over-union) threshold, for example 0.5, where a positive sample is a sample whose overlap with a ground-truth labeled rectangular box is above the IoU threshold, i.e., a real defect sample, and a negative sample is a sample whose overlap with a ground-truth labeled rectangular box is below the IoU threshold, i.e., a normal background sample. Based on the above operations, a series of positive and negative sample images can be obtained; their numbers are then counted and randomly sampled so that the ratio of positive to negative sample images is close to 1:1.
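The classification and 1:1 balancing step can be sketched as follows; the thresholds 0.01 and 0.5 and the 1:1 ratio come from the text above, while the [x1, y1, x2, y2] box format and the helper names are assumptions.

```python
# Sketch of prediction-result classification and equalization on the
# photometric stereo image.
import random

def iou(box_a, box_b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def split_and_balance(pred_boxes, gt_boxes, photometric_image, iou_thresh=0.5):
    """Crop each predicted box from the photometric stereo image, label it as a
    positive (real defect) or negative (background false alarm) sample, and
    randomly subsample so the positive/negative ratio is close to 1:1."""
    positives, negatives = [], []
    for box in pred_boxes:
        x1, y1, x2, y2 = [int(v) for v in box]
        crop = photometric_image[:, y1:y2, x1:x2]      # [C, h, w] patch
        if gt_boxes and max(iou(box, gt) for gt in gt_boxes) >= iou_thresh:
            positives.append(crop)
        else:
            negatives.append(crop)
    n = min(len(positives), len(negatives))
    return random.sample(positives, n), random.sample(negatives, n)
```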
And finally, training a second-stage defect classifier by using positive and negative samples obtained by sample balanced sampling.
Step S706, training the defect classifier in the second stage.
The second-stage defect classifier is used to classify the prediction results of the first-stage defect detector into positive and negative samples, thereby suppressing false alarms. Some surface defects differ from the background only in their concave-convex characteristics and cannot be distinguished by the visual features of imaging such as visible light and infrared light, so the first-stage prediction results need to be further classified and judged with the help of the photometric stereo image and the second-stage defect classifier.
In the model training stage, the defect classifier is trained on the balanced photometric stereo positive and negative sample images obtained in the previous step. The core of the defect classifier is a feature classification model that is substantially the same as a general-purpose classification network (such as VGG or ResNet).
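A minimal training sketch of such a second-stage classifier is given below; ResNet-18 is used only because ResNet is one of the networks named above, and the data-loader wiring, epoch count and learning rate are assumptions.

```python
# Sketch of the second-stage defect classifier: a general classification
# backbone with a binary head, trained on the balanced photometric stereo crops.
import torch
import torch.nn as nn
import torchvision

def build_second_stage_classifier():
    model = torchvision.models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)   # positive (defect) vs. negative (background)
    return model

def train_classifier(model, loader, epochs=10, lr=1e-3):
    """loader yields (crop, label) batches; crops are resized photometric stereo
    patches and labels are 1 for positive and 0 for negative samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for crops, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(crops), labels)
            loss.backward()
            optimizer.step()
    return model
```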
The second stage, the model test stage, is shown in fig. 7, and fig. 7 is a schematic flow chart of the model test stage according to the embodiment of the invention.
Step S801, conventional imaging such as visible-light or infrared imaging.
Images are acquired with visible light, infrared light, and the like.
Step S802, a first stage defect detector.
And based on imaging images such as visible light and infrared light, defect detection is carried out by utilizing the first-stage defect detection model.
In step S803, N-direction visible light imaging is performed.
For a certain sample to be identified, images of the sample in the N lighting directions are acquired.
step S804, a photometric stereo image.
And generating a corresponding luminosity stereo image based on the acquired images in the N lighting directions.
In step S805, the predicted regions are cropped.
In the testing stage, after the prediction results of the first-stage defect detector are obtained for a certain sample to be detected, the predicted regions are cropped from the corresponding photometric stereo image, and the cropped photometric stereo patches are sent to the second-stage defect classifier.
Step S806, second-stage defect classifier.
The cropped photometric stereo patches are sent to the second-stage defect classifier, which classifies them into positive and negative samples to obtain the classification results. If all classification results are negative samples, the sample is finally judged to be a normal sample; if any classification result is a positive sample, the sample is judged to be a defective sample.
And step S807, judging and displaying the result.
The final prediction result is output and displayed on an interactive interface.
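Putting the test stage together, the per-sample decision rule (defective if any cropped region is classified as a positive sample, normal otherwise) can be sketched as follows; it reuses the detect_defects helper from the earlier detector sketch, and the crop-resizing details are assumptions.

```python
# End-to-end sketch of the test stage (steps S801-S807): detect on the
# conventional image, crop the predicted regions from the photometric stereo
# image, classify each crop, and judge the sample.
import torch
import torch.nn.functional as F

def judge_sample(detector, classifier, conventional_image, photometric_image,
                 det_threshold=0.5, crop_size=64):
    pred = detect_defects(detector, conventional_image, det_threshold)   # step S802
    classifier.eval()
    verdicts = []
    for box in pred["boxes"]:                                            # step S805
        x1, y1, x2, y2 = [int(v) for v in box]
        if x2 <= x1 or y2 <= y1:
            continue                                                     # skip degenerate boxes
        crop = photometric_image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = F.interpolate(crop, size=(crop_size, crop_size))
        with torch.no_grad():
            verdicts.append(classifier(crop).argmax(dim=1).item())       # step S806
    is_defective = any(v == 1 for v in verdicts)                         # step S807
    return {"is_defective": is_defective, "boxes": pred["boxes"], "verdicts": verdicts}
```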
In this embodiment, conventional imaging methods other than visible light and infrared light may also be used; other photometric stereo implementations may be used to acquire the photometric stereo images, and photometric stereo outputs other than the surface normal-vector map may be used as training samples. In the sample equalization sampling process, other balancing strategies may also be used to balance the samples, for example randomly cropping background regions and controlling the number of negative samples to be equal to the number of positive samples. In this way, a two-stage defect false-alarm suppression scheme based on photometric stereo imaging is provided.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the object detection method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 2.
Fig. 10 is a schematic diagram of an object detecting apparatus according to an embodiment of the present invention. As shown in fig. 10, the object detecting apparatus 110 may include: a first detection unit 112, a first determination unit 114 and a second detection unit 116.
The first detection unit 112 is configured to perform defect detection on a surface of the target object based on a first target image of the target object, so as to obtain a first target detection result, where the first target image is obtained by imaging the target object through the first light source in one lighting direction.
A first determining unit 114, configured to determine a target area in a second target image of the target object based on the first target detection result, where the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively.
The second detection unit 116 detects a defect in the target area to obtain a second target detection result.
It should be noted here that the first detecting unit 112, the first determining unit 114, and the second detecting unit 116 correspond to steps S202 to S206 in embodiment 1, and the three units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 3.
Fig. 11 is a schematic view of another object detecting apparatus according to an embodiment of the present invention. As shown in fig. 11, the object detecting apparatus 120 may include: a first acquisition unit 122, a third detection unit 124, a second determination unit 126 and a fourth detection unit 128.
The first obtaining unit 122 is configured to obtain a first surface image and a second surface image of the industrial object to be detected, where the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions.
And the third detection unit 124 is configured to perform defect detection on the surface of the industrial object based on the first surface image, so as to obtain a first surface detection result.
The second determination unit 126 is configured to determine a target region in the second surface image based on the first surface detection result.
The fourth detecting unit 128 detects defects in the target area to obtain a second surface detection result.
It should be noted here that the first acquiring unit 122, the third detecting unit 124, the second determining unit 126, and the fourth detecting unit 128 correspond to steps S302 to S308 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 4.
Fig. 12 is a schematic view of another object detecting apparatus according to an embodiment of the present invention. As shown in fig. 12, the object detecting apparatus 130 may include: a second acquisition unit 132, a fifth detection unit 134, a third determination unit 136, and a sixth detection unit 138.
The second obtaining unit 132 is configured to obtain a first target image and a second target image of a target object to be detected by calling a first interface, where the first interface includes a first parameter, a parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in multiple lighting directions.
The fifth detection unit 134 is configured to perform defect detection on the surface of the target object based on the first target image, so as to obtain a first target detection result.
A third determining unit 136 for determining a target area in the second target image based on the first target detection result; and carrying out defect detection on the target area to obtain a second target detection result.
The sixth detecting unit 138 outputs the second target detection result by invoking the second interface, where the second interface includes the second parameter, and a parameter value of the second parameter is the second target detection result.
It should be noted here that the second acquiring unit 132, the fifth detecting unit 134, the third determining unit 136, and the sixth detecting unit 138 correspond to steps S402 to S408 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 5.
Fig. 13 is a schematic diagram of another object detection apparatus according to an embodiment of the present invention. As shown in fig. 13, the object detection apparatus 140 may include: a first display unit 142 and a second display unit 144.
The first display unit 142 is configured to display, on the operation interface, a first target image and a second target image of the target object in response to an input operation instruction acting on the operation interface, where the first target image is obtained by imaging the target object through the first light source in one lighting direction, and the second target image is obtained by imaging the target object through the corresponding second light source in each of the plurality of lighting directions.
And a second display unit 144, configured to display, in response to the detection operation instruction acting on the operation interface, a second target detection result on the operation interface, where the second target detection result is obtained by performing defect detection on a target area in a second target image, the target area is determined in the second target image based on the first target detection result, and the first target detection result is obtained by performing defect detection on the surface of the target object based on the first target image.
It should be noted here that the first display unit 142 and the second display unit 144 correspond to steps S502 to S504 in embodiment 1, and the two units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
In the object detection apparatus of the embodiment, defect detection is performed on the surface of the target object based on a first target image of the target object, which is obtained by imaging the target object through the first light source in one lighting direction, to obtain a first target detection result; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result. That is to say, the present invention first performs defect detection on the surface of the target object to obtain a first target detection result, determines the target region in the second target image of the target object based on the first target detection result, and performs defect detection on the target region to obtain a second target detection result, thereby achieving the technical effect of improving the accuracy of detecting the surface defect of the object, and solving the technical problem of low accuracy of detecting the surface defect of the object.
Example 4
Embodiments of the present invention may provide an object detection system, which may include a computer terminal, and the computer terminal may be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the object detection method of the application program: the method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result.
Alternatively, fig. 14 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 14, the computer terminal A may include: one or more processors 1502 (only one of which is shown), a memory 1504, and a transmission device 1506.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the object detection method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the object detection method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computer terminal a via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: the method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result.
Optionally, the processor may further execute the program code of the following steps: and carrying out defect detection on the target area to obtain a second target detection result, wherein the defect detection comprises the following steps: and carrying out defect detection on the concave-convex information in the target area to obtain a second target detection result, wherein the concave-convex information is used for representing the defects of the three-dimensional form in the target area.
Optionally, the processor may further execute the program code of the following steps: and carrying out defect detection on the target area to obtain a second target detection result, wherein the defect detection comprises the following steps: and performing defect detection on the target area based on a first defect detection model to obtain a second target detection result, wherein the first defect detection model is obtained by training a positive sample image and a negative sample image based on a quantity balance, the positive sample image is used for representing the surface with real defects, and the negative sample image is used for representing the surface without the real defects.
Optionally, the processor may further execute the program code of the following steps: the positive sample image and the negative sample image are generated based on a second defect detection model, wherein the defect detection is performed on the surface of the target object to obtain a first target detection result, and the method comprises the following steps: and carrying out defect detection on the surface of the target object based on the second defect detection model to obtain a first target detection result.
Optionally, the processor may further execute the program code of the following steps: acquiring a first image sample of a first sample object, wherein the first image sample is obtained by imaging the first sample object through a first light source in a lighting direction; performing defect detection on the first image sample based on the second defect detection model to obtain a first sample detection result; acquiring a second image sample of the first sample object, and performing balanced sampling on the second image sample based on the first sample detection result to obtain a positive sample image and a negative sample image, wherein the images in the second image sample are obtained by imaging the first sample object through corresponding second light sources in a plurality of lighting directions respectively; and training based on the positive sample image and the negative sample image to obtain a first defect detection model.
Optionally, the processor may further execute the program code of the following steps: based on the first sample detection result, performing equalization sampling on the second image sample to obtain a positive sample image and a negative sample image, including: determining a corresponding first area sample on the second image sample based on the first sample detection result; and classifying the sub-image samples of the first area samples to obtain a positive sample image and a negative sample image.
Optionally, the processor may further execute the program code of the following steps: classifying the sub-image samples of the first area samples to obtain a positive sample image and a negative sample image, including: determining the degree of overlap between the first area sample and a second area sample, wherein the second area sample comprises an area for representing real defects on the first sample object; determining an image corresponding to a second area sample with the overlapping degree larger than a target threshold value in the sub-image samples as a positive sample image; and determining an image corresponding to a second area sample with the overlapping degree lower than a target threshold value in the sub-image samples as a negative sample image.
Optionally, the processor may further execute the program code of the following steps: acquiring a third image sample of the second sample object and a corresponding second sample detection result, wherein the second sample detection result is obtained by labeling the defect on the surface of the second sample object based on the third image sample; and training the sub-detection model based on the third image sample and the second sample detection result to obtain a second defect detection model.
Optionally, the processor may further execute the program code of the following steps: respectively imaging the target object through corresponding second light sources in at least three lighting directions to obtain at least three sub-images; a second target image is generated based on the at least three sub-images.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: acquiring a first surface image and a second surface image of an industrial object to be detected, wherein the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions respectively; detecting the defects of the surface of the industrial object based on the first surface image to obtain a first surface detection result; determining a target area in the second surface image based on the first surface detection result; and carrying out defect detection on the target area to obtain a second surface detection result.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: acquiring a first target image and a second target image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions; carrying out defect detection on the surface of the target object based on the first target image to obtain a first target detection result; determining a target area in the second target image based on the first target detection result; detecting defects of the target area to obtain a second target detection result; and outputting a second target detection result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the second target detection result.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: responding to an input operation instruction acting on an operation interface, and displaying a first target image and a second target image of a target object on the operation interface, wherein the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and responding to a detection operation instruction acting on the operation interface, and displaying a second target detection result on the operation interface, wherein the second target detection result is obtained by performing defect detection on a target area in a second target image, the target area is determined in the second target image based on a first target detection result, and the first target detection result is obtained by performing defect detection on the surface of the target object based on the first target image.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: responding to a selection operation instruction acting on an operation interface, and selecting a plurality of lighting directions for lighting a target object; displaying a second target image on the operation interface, wherein the second target image is obtained by imaging a target object through corresponding second light sources in a plurality of lighting directions respectively; and displaying a second target detection result of a target area in a second target image on the operation interface in response to a defect detection instruction acting on the operation interface, wherein the target area is determined by a first target detection result, the first target detection result is obtained by performing defect detection on the surface of the target object based on a first target image of the target object, and the first target image is obtained by imaging the target object through a first light source in one lighting direction.
The embodiment of the invention provides an object detection method. The method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result. That is to say, the present invention utilizes two stages to detect the defect on the surface of the target object, firstly performs defect detection on the surface of the target object based on the first target image (e.g., a conventional image) to obtain a first target detection result, then determines the target region in the second target image (e.g., a photometric stereo image) of the target object based on the first target detection result, and performs defect detection on the target region to obtain a second target detection result, thereby suppressing false alarm according to the second target detection result, further achieving the technical effect of improving the accuracy of detecting the surface defect of the object, and solving the technical problem of low accuracy of detecting the surface defect of the object.
It can be understood by those skilled in the art that the structure shown in fig. 14 is only an illustration, and the computer terminal a may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 14 is not intended to limit the structure of the computer terminal a. For example, the computer terminal a may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 14, or have a different configuration than shown in fig. 14.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 5
Embodiments of the present invention also provide a computer-readable storage medium. Optionally, in this embodiment, the computer-readable storage medium may be configured to store the program code executed by the object detection method provided in the first embodiment.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program codes for performing the following steps: the method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: and carrying out defect detection on the target area to obtain a second target detection result, wherein the defect detection comprises the following steps: and carrying out defect detection on the concave-convex information in the target area to obtain a second target detection result, wherein the concave-convex information is used for representing the defects of the three-dimensional form in the target area.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: and carrying out defect detection on the target area to obtain a second target detection result, wherein the defect detection comprises the following steps: and performing defect detection on the target area based on a first defect detection model to obtain a second target detection result, wherein the first defect detection model is obtained by training a positive sample image and a negative sample image based on a quantity balance, the positive sample image is used for representing the surface with real defects, and the negative sample image is used for representing the surface without the real defects.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: the positive sample image and the negative sample image are generated based on a second defect detection model, wherein the defect detection is performed on the surface of the target object to obtain a first target detection result, and the method comprises the following steps: and carrying out defect detection on the surface of the target object based on the second defect detection model to obtain a first target detection result.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: acquiring a first image sample of a first sample object, wherein the first image sample is obtained by imaging the first sample object through a first light source in a lighting direction; performing defect detection on the first image sample based on the second defect detection model to obtain a first sample detection result; acquiring a second image sample of the first sample object, and performing balanced sampling on the second image sample based on the first sample detection result to obtain a positive sample image and a negative sample image, wherein the images in the second image sample are obtained by imaging the first sample object through corresponding second light sources in a plurality of lighting directions respectively; and training based on the positive sample image and the negative sample image to obtain a first defect detection model.
As an alternative example, the computer-readable storage medium is configured to store program code for performing the following steps: based on the first sample detection result, performing equalization sampling on the second image sample to obtain a positive sample image and a negative sample image, including: determining a corresponding first area sample on the second image sample based on the first sample detection result; and classifying the sub-image samples of the first area samples to obtain a positive sample image and a negative sample image.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: classifying the sub-image samples of the first area samples to obtain a positive sample image and the negative sample image, including: determining the degree of overlap between the first area sample and a second area sample, wherein the second area sample comprises an area for representing real defects on the first sample object; determining an image corresponding to the second area sample with the overlapping degree larger than the target threshold value in the sub-image samples as a positive sample image; and determining an image corresponding to a second area sample with the overlapping degree lower than a target threshold value in the sub-image samples as a negative sample image.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring a third image sample of the second sample object and a corresponding second sample detection result, wherein the second sample detection result is obtained by labeling the defect on the surface of the second sample object based on the third image sample; and training the sub-detection model based on the third image sample and the second sample detection result to obtain a second defect detection model.
Optionally, the computer-readable storage medium may further include program code for performing the following steps: respectively imaging the target object through corresponding second light sources in at least three lighting directions to obtain at least three sub-images; a second target image is generated based on the at least three sub-images.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring a first surface image and a second surface image of an industrial object to be detected, wherein the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions respectively; detecting the defects of the surface of the industrial object based on the first surface image to obtain a first surface detection result; determining a target area in the second surface image based on the first surface detection result; and carrying out defect detection on the target area to obtain a second surface detection result.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring a first target image and a second target image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions; carrying out defect detection on the surface of the target object based on the first target image to obtain a first target detection result; determining a target area in the second target image based on the first target detection result; detecting defects of the target area to obtain a second target detection result; and outputting a second target detection result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the second target detection result.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: responding to an input operation instruction acting on an operation interface, and displaying a first target image and a second target image of a target object on the operation interface, wherein the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and responding to a detection operation instruction acting on the operation interface, and displaying a second target detection result on the operation interface, wherein the second target detection result is obtained by performing defect detection on a target area in a second target image, the target area is determined in the second target image based on a first target detection result, and the first target detection result is obtained by performing defect detection on the surface of the target object based on the first target image.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: responding to a selection operation instruction acting on an operation interface, and selecting a plurality of lighting directions for lighting a target object; displaying a second target image on the operation interface, wherein the second target image is obtained by imaging a target object through corresponding second light sources in a plurality of lighting directions respectively; and displaying a second target detection result of a target area in a second target image on the operation interface in response to a defect detection instruction acting on the operation interface, wherein the target area is determined by a first target detection result, the first target detection result is obtained by performing defect detection on the surface of the target object based on a first target image of the target object, and the first target image is obtained by imaging the target object through a first light source in one lighting direction.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (15)

1. An object detection method, comprising:
the method comprises the steps of detecting defects of the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction;
determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively;
and carrying out defect detection on the target area to obtain a second target detection result.
2. The method of claim 1, wherein performing defect detection on the target area to obtain a second target detection result comprises:
and carrying out defect detection on the concave-convex information in the target area to obtain a second target detection result, wherein the concave-convex information is used for representing the defects of the three-dimensional form in the target area.
3. The method of claim 1, wherein performing defect detection on the target area to obtain a second target detection result comprises:
and performing defect detection on the target area based on a first defect detection model to obtain a second target detection result, wherein the first defect detection model is obtained based on training of a positive sample image and a negative sample image which are balanced in number, the positive sample image is used for representing the surface with the real defects, and the negative sample image is used for representing the surface without the real defects.
4. The method of claim 3, wherein the positive sample image and the negative sample image are generated based on a second defect detection model, wherein performing defect detection on the surface of the target object to obtain a first target detection result comprises:
and carrying out defect detection on the surface of the target object based on the second defect detection model to obtain the first target detection result.
5. The method of claim 4, further comprising:
acquiring a first image sample of a first sample object, wherein the first image sample is obtained by imaging the first sample object through the first light source in a lighting direction;
performing defect detection on the first image sample based on the second defect detection model to obtain a first sample detection result;
acquiring a second image sample of the first sample object, and performing balanced sampling on the second image sample based on the first sample detection result to obtain a positive sample image and a negative sample image, wherein the second image sample is obtained by imaging the first sample object through corresponding second light sources in a plurality of lighting directions respectively;
and training to obtain the first defect detection model based on the positive sample image and the negative sample image.
6. The method of claim 5, wherein performing balanced sampling on the second image sample based on the first sample detection result to obtain the positive sample image and the negative sample image comprises:
determining a corresponding first area sample on the second image sample based on the first sample detection result;
classifying the sub-image samples of the first area sample to obtain the positive sample image and the negative sample image.
7. The method of claim 6, wherein classifying the sub-image samples of the first area sample to obtain the positive sample image and the negative sample image comprises:
determining a degree of overlap between the first area sample and a second area sample, wherein the second area sample comprises an area representing a real defect on the first sample object;
determining, as the positive sample image, an image in the sub-image samples corresponding to a second area sample whose degree of overlap is greater than a target threshold;
and determining, as the negative sample image, an image in the sub-image samples corresponding to a second area sample whose degree of overlap is lower than the target threshold.
8. The method of claim 4, further comprising:
acquiring a third image sample of a second sample object and a corresponding second sample detection result, wherein the second sample detection result is obtained by labeling the defect on the surface of the second sample object based on the third image sample;
and training a sub-detection model based on the third image sample and the second sample detection result to obtain the second defect detection model.
9. The method of any one of claims 1 to 8, further comprising:
imaging the target object through corresponding second light sources in at least three lighting directions respectively to obtain at least three sub-images;
generating the second target image based on the at least three sub-images.
10. An object detection method, comprising:
acquiring a first surface image and a second surface image of an industrial object to be detected, wherein the first surface image is obtained by imaging the surface of the industrial object through a first light source in one lighting direction, and the second surface image is obtained by imaging the surface of the industrial object through corresponding second light sources in a plurality of lighting directions respectively;
detecting defects on the surface of the industrial object based on the first surface image to obtain a first surface detection result;
determining a target area in the second surface image based on the first surface detection result;
and carrying out defect detection on the target area to obtain a second surface detection result.
11. An object detection method, comprising:
acquiring a first target image and a second target image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the first target image and the second target image, the first target image is obtained by imaging the target object through a first light source in one lighting direction, and the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions;
carrying out defect detection on the surface of the target object based on the first target image to obtain a first target detection result;
determining a target area in the second target image based on the first target detection result;
detecting defects of the target area to obtain a second target detection result;
and outputting the second target detection result by calling a second interface, wherein the second interface comprises a second parameter, and a parameter value of the second parameter is the second target detection result.
12. An object detection method, comprising:
in response to a selection operation instruction acting on an operation interface, selecting a plurality of lighting directions for lighting a target object;
displaying a second target image on the operation interface, wherein the second target image is obtained by imaging the target object through corresponding second light sources in the plurality of lighting directions respectively;
and displaying a second target detection result of a target area in the second target image on the operation interface in response to a defect detection instruction acting on the operation interface, wherein the target area is determined by a first target detection result, the first target detection result is obtained by performing defect detection on the surface of a target object based on a first target image of the target object, and the first target image is obtained by imaging the target object through a first light source in one lighting direction.
13. A computer-readable storage medium, comprising a stored program, wherein the program, when executed by a processor, controls an apparatus in which the computer-readable storage medium is located to perform the method of any of claims 1-12.
14. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 12.
15. An object detection system, comprising:
a processor;
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: carrying out defect detection on the surface of a target object based on a first target image of the target object to obtain a first target detection result, wherein the first target image is obtained by imaging the target object through a first light source in a lighting direction; determining a target area in a second target image of the target object based on the first target detection result, wherein the second target image is obtained by imaging the target object through corresponding second light sources in a plurality of lighting directions respectively; and carrying out defect detection on the target area to obtain a second target detection result.
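For readers implementing the claimed approach, the two sketches below illustrate, in simplified form, the two-stage detection flow of claims 1, 9 and 10 and the balanced sampling of claims 5 to 7. They are minimal sketches under stated assumptions, not the patented implementation: the helper names, the box format (x, y, w, h), the channel-stacking fusion of the multi-light sub-images, and the overlap threshold are all illustrative choices that the claims do not prescribe.

```python
import numpy as np

def detect_defects(first_image, sub_images, coarse_model, fine_model):
    """Two-stage surface defect detection (sketch of claims 1, 9 and 10).

    first_image : image captured under a single lighting direction.
    sub_images  : list of same-shaped images captured under several lighting
                  directions (claim 9 uses at least three of them).
    coarse_model: assumed callable returning candidate defect boxes (x, y, w, h)
                  from the single-light image (the "second defect detection model").
    fine_model  : assumed callable returning a defect verdict for a multi-light
                  crop (the "first defect detection model").
    """
    # Stage 1: coarse detection on the single-light image
    # -> first target detection result (candidate regions).
    candidate_boxes = coarse_model(first_image)

    # Build the second target image from the multi-light sub-images (claim 9).
    # Stacking along a new channel axis is one simple fusion; the claims do not
    # specify how the sub-images are combined.
    second_image = np.stack(sub_images, axis=-1)

    # Stage 2: re-inspect each candidate region on the multi-light image
    # -> second target detection result.
    second_results = []
    for (x, y, w, h) in candidate_boxes:
        target_area = second_image[y:y + h, x:x + w]
        second_results.append(fine_model(target_area))
    return candidate_boxes, second_results
```

The balanced-sampling step of claims 5 to 7 can likewise be sketched as follows: candidate areas from the coarse model are compared with the annotated real-defect areas, and the resulting crops of the multi-light sample image are split into positive and negative images with balanced counts before training the fine model. The IoU measure and the 0.5 threshold are assumptions for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def balanced_samples(candidate_boxes, defect_boxes, second_image_sample, threshold=0.5):
    """Split crops of the multi-light sample image into positive / negative sets."""
    positives, negatives = [], []
    for (x, y, w, h) in candidate_boxes:
        crop = second_image_sample[y:y + h, x:x + w]
        # Compare each candidate area with the annotated real-defect areas.
        overlap = max((iou((x, y, w, h), d) for d in defect_boxes), default=0.0)
        (positives if overlap > threshold else negatives).append(crop)
    # Keep the two classes balanced in number by truncating the larger one;
    # this is one simple strategy, the claims only require balanced counts.
    n = min(len(positives), len(negatives))
    return positives[:n], negatives[:n]
```

The positive and negative crops produced this way would then serve as the training set for the first defect detection model, as in the last step of claim 5.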
CN202111495847.8A 2021-12-08 2021-12-08 Object detection method, storage medium, processor and system Pending CN114359158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111495847.8A CN114359158A (en) 2021-12-08 2021-12-08 Object detection method, storage medium, processor and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111495847.8A CN114359158A (en) 2021-12-08 2021-12-08 Object detection method, storage medium, processor and system

Publications (1)

Publication Number Publication Date
CN114359158A true CN114359158A (en) 2022-04-15

Family

ID=81097079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111495847.8A Pending CN114359158A (en) 2021-12-08 2021-12-08 Object detection method, storage medium, processor and system

Country Status (1)

Country Link
CN (1) CN114359158A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503412A (en) * 2023-06-29 2023-07-28 宁德时代新能源科技股份有限公司 Appearance defect detection method, apparatus, computer device and storage medium
CN116503412B (en) * 2023-06-29 2023-12-08 宁德时代新能源科技股份有限公司 Appearance defect detection method, apparatus, computer device and storage medium

Similar Documents

Publication Publication Date Title
CN110544258B (en) Image segmentation method and device, electronic equipment and storage medium
US20200364802A1 (en) Processing method, processing apparatus, user terminal and server for recognition of vehicle damage
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN109472264B (en) Method and apparatus for generating an object detection model
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
CN108734684B (en) Image background subtraction for dynamic illumination scene
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN107564329B (en) Vehicle searching method and terminal
CN112750162A (en) Target identification positioning method and device
CN114359158A (en) Object detection method, storage medium, processor and system
CN111339880A (en) Target detection method and device, electronic equipment and storage medium
CN108776959B (en) Image processing method and device and terminal equipment
US10565872B2 (en) Cognitive situation-aware vision deficiency remediation
CN113742430B (en) Method and system for determining number of triangle structures formed by nodes in graph data
CN109144379B (en) Method for operating terminal, terminal detection device, system and storage medium
KR102316875B1 (en) A method for measuring fine dust concentration using a terminal having a camera and a light emitting unit, a method for sharing measured fine dust concentration information, and a server for the method
CN111599417B (en) Training data acquisition method and device of solubility prediction model
CN114022658A (en) Target detection method, device, storage medium and terminal
CN115619924A (en) Method and apparatus for light estimation
CN112560853A (en) Image processing method, device and storage medium
CN111476087A (en) Target detection method and related model training method, device and apparatus
CN110909755A (en) Object feature processing method and device
CN112383800B (en) Method and device for distributing and scheduling monitoring video data and electronic equipment
CN103679684A (en) Device, method and electronic equipment for detecting cloud in image
US11961251B2 (en) Continuous surface and depth estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination