CN108445007B - Detection method and detection device based on image fusion - Google Patents

Detection method and detection device based on image fusion

Info

Publication number
CN108445007B
CN108445007B (granted from application CN201810019410.9A / CN201810019410A)
Authority
CN
China
Prior art keywords
image
detected
light
height
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810019410.9A
Other languages
Chinese (zh)
Other versions
CN108445007A (en)
Inventor
杨洋
Current Assignee (the listed assignees may be inaccurate)
Shenzhen Huahan Weiye Technology Co ltd
Original Assignee
Shenzhen Huahan Weiye Technology Co ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Huahan Weiye Technology Co ltd filed Critical Shenzhen Huahan Weiye Technology Co ltd
Priority to CN201810019410.9A priority Critical patent/CN108445007B/en
Publication of CN108445007A publication Critical patent/CN108445007A/en
Application granted granted Critical
Publication of CN108445007B publication Critical patent/CN108445007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8806Specially adapted optical and illumination features
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

A detection method based on image fusion comprises irradiating an object to be detected from different directions with a plurality of light-emitting modules, obtaining multiple frames of images to be detected, performing image fusion processing on those frames, and performing feature screening processing on the resulting texture images or height images. The method reduces the influence of poor image acquisition quality under low-contrast conditions and obtains more complete surface feature information; it fuses the surface feature information of each frame into a texture image and a height image, from whose distinguishing features the surface defect features can be identified and screened. A device combined with this surface defect detection method can largely eliminate the influence of environmental factors while detecting the object to be detected, can synthesize a feature image comprising the surface defect features from a large number of images, enhances the screening of surface defect features, and completes surface defect detection under low-contrast conditions.

Description

Detection method and detection device based on image fusion
Technical Field
The invention relates to an image detection technology, in particular to a detection method and a detection device based on image fusion.
Background
At present, in industries such as industrial materials and printing, the surface of a workpiece or of paper is often inspected by machine vision in order to ensure product quality while removing rejects at high speed, looking for surface defects such as spots, pits, scratches, missing material, embossed characters, dirt, color difference, missing print and ink diffusion. The main surface defect detection method is image model comparison: an image model is established from a pre-selected standard image feature domain, the current image to be detected is compared with the model pixel by pixel, the difference values are evaluated, and the defect features in the image to be detected are finally judged from those differences. There are also surface defect detection methods based on image filters, which denoise the image to be detected, suppressing the noise of the target image while preserving image detail as far as possible, and then extract the abnormal parts of the image detail as defect features. However, objective factors such as light-source attenuation, light-source displacement, lens contamination and inconsistent exposure parameters often make the image model inaccurate and the image contrast poor; defect screening methods such as filtering, gray-level denoising, binarization and morphological operations place certain requirements on image quality and are not suited to screening defect features out of low-contrast images. Consequently, conventional technology cannot detect low-contrast defects such as wrinkles, speckles and color differences, and surface defect detection suffers from missed detections and false detections.
Disclosure of Invention
The invention mainly solves the technical problem of detecting low-contrast surface defects. To solve this problem, the invention provides a detection method based on image fusion and a detection device thereof.
A detection method based on image fusion comprises the following steps: sequentially controlling a plurality of light-emitting modules to irradiate an object to be detected from different directions, and controlling a camera to shoot an image of the object to be detected corresponding to each light-emitting module when each light-emitting module irradiates the object to be detected, so as to obtain a plurality of frames of images to be detected of the object to be detected; carrying out image fusion processing on a plurality of frames of images to be detected to obtain a characteristic image of the object to be detected; carrying out defect characteristic screening processing on the characteristic image to obtain the surface defect characteristics of the object to be detected; and outputting the surface defect characteristics.
An image fusion-based detection apparatus, comprising: the light source comprises a plurality of light emitting modules, and the light emitting modules are used for irradiating an object to be detected from different directions; the camera is used for acquiring an image to be detected of the object to be detected when the light-emitting module irradiates the object to be detected; the detection controller is respectively in signal connection with the light source and the camera, controls the plurality of light-emitting modules to be sequentially lightened during detection, controls the camera to shoot an object to be detected when the light-emitting modules irradiate the object to be detected, obtains a plurality of frames of images corresponding to the plurality of light-emitting modules, and is further used for processing the plurality of frames of images to obtain the surface defect characteristics of the object to be detected.
The light source adopted by embodiments of the invention has a plurality of light-emitting modules, and the image to be detected of the object to be detected is acquired while the modules light up in sequence and illuminate from different directions. On one hand this "lighting" enhances the contrast of the image to be detected and weakens the influence of poor acquisition quality caused by surface reflection and low contrast; on the other hand it captures more complete surface feature information of the object to be detected. The detection controller then performs image fusion processing on the multiple frames of images to be detected, which extracts the surface feature information from each frame and fuses it into a feature image comprising texture information and/or height information, yielding a texture image or height image with distinguishing features. Finally, the detection controller performs defect feature screening processing on the obtained texture image or height image; because these images contain useful distinguishing features, this further enhances the contrast available to the screening step and improves image quality, and in turn allows the surface defect features to be identified and screened from the distinguishing features more reliably.
A surface defect detection device using this method can largely eliminate the influence of the surrounding environment while detecting the object to be detected, can synthesize a feature image comprising the surface defect features from a large number of images, further enhances the screening of surface defect features, and helps complete surface defect detection under low-contrast conditions.
Drawings
FIG. 1 is a schematic structural diagram of a surface defect detecting apparatus;
FIG. 2 is a schematic flow chart of a surface defect detection method;
FIG. 3 is a timing control diagram of the detection controller;
FIG. 4 is a schematic flow chart of a surface defect feature screening process;
FIG. 5 is a diagram illustrating a vector relationship of light source irradiation directions;
FIG. 6 is a diagram illustrating a vector relationship in an image to be detected.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings.
An image fusion based detection device is shown in fig. 1.
In the present embodiment, the surface defect detecting apparatus includes a light source 101, a camera 103, and a detection controller 105.
In this embodiment, the light source 101 illuminates the object to be detected A1 from different directions and is communicatively connected to, and controlled by, the detection controller 105. The light source 101 comprises eight light-emitting modules arranged uniformly in a ring around the lamp body of the light source 101. Each light-emitting module is a multi-color light-emitting unit with three-color capability that can generate red, green or blue light under the control of the detection controller 105 (the detection controller 105 has a multi-spectrum setting module that selects which color each module emits; the lighting mode is usually set by the user before the test starts), providing multi-spectral illumination to suit different detection objects and detection scenes. The light source 101 receives I/O control signals from the detection controller 105 and soft-triggers each light-emitting module accordingly, i.e. the modules can be switched into the lit state in sequence according to the I/O control signal. During detection the object A1 is located centrally below the light source 101; when a single light-emitting module is lit, an inclined illumination surface is formed on the upper surface of the object A1, and when all the light-emitting modules are lit, a uniform, non-inclined illumination surface is formed.
In this embodiment, sequentially illuminating the object A1 from different directions with the light-emitting modules of the light source 101 forms inclined illumination surfaces with different irradiation directions on the upper surface of the object A1. This weakens the influence of poor image acquisition quality caused by surface reflection, improves the illumination contrast in surface defect detection, and helps capture the surface feature information of the object A1 comprehensively in the acquired images.
In the present embodiment, a station for fixing the object a1 to be detected is provided below the light source 101, and the station is suitable for placing the object to be detected and has a position limiting effect on the object to be detected. In a group of image capturing processes, the light emitting modules of the light source 101 may be sequentially turned on, or some of the light emitting modules of the light source 101 may be turned on according to user requirements.
In the present embodiment, the camera 103 captures the image to be detected of the object A1 and is communicatively connected to the detection controller 105. The camera 103 has a lens portion and a shooting control portion; the lens 1031 of the camera 103 is located at the centre of the ring of the light source 101 so that as much of the diffusely reflected light from the surface of the object A1 as possible enters it, giving a suitable image capturing effect, and the distance between the camera 103 and the object A1 can be adjusted freely according to actual requirements. To ensure the image capturing effect of the camera 103, the lens 1031 is a CCTV lens or a telecentric lens.
In the present embodiment, the detection controller 105 is configured to control the lighting state of the light emitting module of the light source 101, control the shooting state of the camera 103, and further receive the image to be detected returned by the camera 103 and process the image to obtain the surface defect characteristics of the object to be detected a 1. To facilitate the understanding of the surface defect characteristics output by the inspection controller 105 by the skilled artisan, the inspection controller 105 has an input/output interface communicatively coupled to a computer or has a display for displaying the surface defect characteristics.
In another embodiment, the light source 101 has at least three light-emitting modules; using three or more modules yields at least three frames of images to be detected, which allows the texture images and height images to be obtained by least-squares theory when processing the images to be detected. The light-emitting modules may also be arranged on the light source 101 in a polygonal ring (such as a triangle, quadrangle, pentagon or octagon); the specific shape of the light source 101 is not limited here.
The surface defect detecting apparatus of the above embodiment is used to detect the surface defect of the object a1 to be detected, and the surface defect detecting method used includes the following steps, as shown in fig. 2.
S201, calibrating the light source position and the image acquisition position.
The vector value of the light source relative to the object to be detected and the vector value of the camera relative to the object to be detected are important parameters in the processing process of the image to be detected, so that the calibration of the positions of the plurality of light-emitting modules and the position of the camera is a necessary process for surface defect detection. The basic principles and methods of calibration operation will be described in detail below.
For an image, the gray-scale imaging model at a pixel point (x, y) on the image is:

I = Σ_{i=1}^{k} ρ_i·f_i(N, L, V)  (1)

where I represents the gray value at pixel point (x, y) on the image; ρ_i represents the reflectivity of the i-th material; k represents the number of material types; and f_i is the lighting equation, which depends on the normal vector N of the object surface, the light-source direction L and the camera viewing direction V.
If the surface of the measured object is an ideal scattering surface (a surface that reflects all incident light and reflects it with the same brightness in every direction), the gray imaging model yields the reflection model of a pixel point of the image:

I₀ = ρ N₀·L₀  (2)

where N₀ represents the unit normal vector and L₀ the unit light-source direction. From this, a reflection model expression of a pixel point of the image under every illumination direction can be written in matrix form:

I = ρ L N  (3)

where N = (Nx Ny Nz)ᵀ is the vector form of the surface normal, with Nx, Ny, Nz the components at the pixel point in the x, y, z directions respectively, and L = (L₁ L₂ … Lₙ)ᵀ is the matrix form stacking the n light-source direction vectors.
Referring to fig. 1, a specular ("highlight") black calibration sphere, shown in fig. 5, is used to calibrate the positions of the light-emitting modules of the light source 101 and of the camera 103 relative to the object A1 to be detected. The highlight black ball is placed at the position of the object A1 and its position relative to the camera is adjusted until the camera 103 captures a sharp image of the ball in which a highlight point on the ball can be found; at that point the viewing direction of the camera coincides with the direction of the reflected light, i.e. R = V. Treating the viewing direction V of the camera 103 as determined, the light-source direction follows from the mirror-reflection relation:

L = 2(N·R)N − R  (4)

The normal vector N at the highlight point is obtained through a parallel projection model: the image of the highlight black ball has the same scale as the ball itself, and its maximum outline section is the plane through the ball's centre. The detection controller 105 processes the captured image of the ball to locate the ball centre at image pixel coordinates (Cx, Cy) and the highlight point at (Px, Py), so the expression for the normal vector N at the highlight point is:

N = (Px − Cx, Py − Cy, z)  (5)

where z = √(r² − (Px − Cx)² − (Py − Cy)²) and r is the ball radius in pixels; z and r are intermediate process parameters with no further practical significance.
The specific method of the calibration step comprises the following steps: the method comprises the steps of arranging a highlight black ball for calibration at a detection position (such as the position of an object A1 to be detected in fig. 1), obtaining images to be calibrated of the highlight black ball under the irradiation of each light-emitting module, constructing a gray imaging model (shown in formula 1) of each frame of images to be calibrated, and comparing according to the gray imaging model to obtain vector values of the light-emitting modules and the camera relative to the highlight black ball. The vector value of the light emitting module can refer to L in the above formula, and the vector value of the camera can refer to V in the above formula. The obtained vector values are used to participate in the calculation process of the texture information or the height information.
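The calibration relations (4) and (5) can be sketched in code. This is a minimal illustration with hypothetical function and parameter names, assuming parallel projection and a known unit viewing direction for the camera:

```python
import numpy as np

def calibrate_light_direction(cx, cy, r, px, py, view_dir=(0.0, 0.0, 1.0)):
    """Estimate one light-emitting module's direction from an image of the
    specular black calibration ball, per equations (4)-(5).

    cx, cy : pixel coordinates of the ball centre in the image
    r      : ball radius in pixels
    px, py : pixel coordinates of the specular highlight
    view_dir : unit viewing direction of the camera; at the highlight this
               equals the reflection direction R (R = V).
    """
    # Normal at the highlight point, eq. (5): N = (px-cx, py-cy, z)
    dx, dy = px - cx, py - cy
    z = np.sqrt(max(r * r - dx * dx - dy * dy, 0.0))
    n = np.array([dx, dy, z], dtype=float)
    n /= np.linalg.norm(n)

    # Mirror-reflection relation, eq. (4): L = 2(N.R)N - R
    rr = np.asarray(view_dir, dtype=float)
    rr /= np.linalg.norm(rr)
    light = 2.0 * np.dot(n, rr) * n - rr
    return n, light
```

When the highlight sits at the ball centre in the image, the normal and the recovered light direction both point straight at the camera, which is a quick sanity check on a calibration run.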
In this step, if the positions of the light source 101 and the camera 103 are not calibrated before the image to be detected of the object to be detected a1 is acquired, step S201 should be executed; if the position calibration of the light source 101 and the camera 103 is already performed before the image to be detected is acquired and the obtained correlation vector values are stored in the detection controller 105, the step S201 does not need to be executed, and the step S202 is performed directly.
S202, acquiring an image to be detected.
Referring to fig. 1, the object a1 to be detected is placed at the center under the light source, and the detection controller 105 controls the light source 101 to light a light emitting module according to the preset spectral color, at this time, the upper surface of the object a1 to be detected will form an inclined illumination surface. The detection controller 105 controls the camera 103 to shoot the object a1 to be detected, and the camera 103 transmits the acquired frame of image to be detected back to the detection controller 105. Then, the detection controller 105 continues to control the light source 101 to light the next light-emitting module, and controls the camera 103 to capture another frame of image to be detected until the light-emitting modules set by the user are sequentially lit, so as to obtain images to be detected of the object a1 to be detected in different illumination directions of the light-emitting modules, respectively.
In the present embodiment, the sequencing of the light source 101 and the camera 103 by the detection controller 105 is shown in fig. 3. The detection controller 105 sends the same trigger signal to the light source 101 and the camera 103 simultaneously; on receiving it, the light source 101 lights one light-emitting module and the camera 103 takes a shot. After the camera 103 finishes shooting, the detection controller 105 sends the trigger signal again: on receiving it, the light source 101 extinguishes the lit module, passes an LED enable signal to the next light-emitting module and lights it, while the camera 103 shoots again. The detection controller 105 then continues issuing trigger signals, and the light source 101 and the camera 103 repeat this step until one group of detection is finished.
In this step, the positions and number of light-emitting modules of the light source 101 to be lit are chosen by the technician and are not limited here. In addition, the number of images to be detected obtained should equal the number of lit light-emitting modules (at least three) so that the detection controller 105 can process these images.
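The trigger protocol above can be modelled as a small state machine. The class below is a toy sketch of that sequencing (all names are hypothetical, not the actual controller firmware): each trigger pulse extinguishes the current module, lights the next one and fires the camera, so frame k is always captured under module k.

```python
class DetectionController:
    """Simplified model of the trigger timing of FIG. 3 / step S202."""

    def __init__(self, num_modules):
        self.num_modules = num_modules
        self.lit = None            # index of the currently lit module
        self.captured = []         # log of (module_index, frame_index)

    def trigger(self):
        """One trigger pulse sent to both the light source and the camera."""
        nxt = 0 if self.lit is None else self.lit + 1
        if nxt >= self.num_modules:
            return False           # one detection group is complete
        self.lit = nxt             # light source: advance to the next module
        self.captured.append((nxt, len(self.captured)))  # camera: expose
        return True

    def run_group(self):
        """Issue trigger pulses until every configured module has been used."""
        while self.trigger():
            pass
        return self.captured
```

The invariant worth noting is the pairing: the returned list shows one frame per illumination direction, which is what the later fusion step requires.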
And S203, carrying out image fusion processing to obtain a characteristic image.
The detection controller 105 obtains n frames of images to be detected (n being the set number of images) in step S202, and performs image fusion processing on these images to obtain the feature image of the object A1 to be detected. The feature image comprises a texture image and/or a height image; the texture image carries texture information and the height image carries height information.
In this step, the image fusion processing procedure includes a step of extracting texture information and/or height information from the image to be detected. The texture information refers to defect information caused by color differences such as spots, color differences, missing prints and the like, and the texture information is used for helping to mark surface texture features with color differences of the object to be detected; the height information refers to defect information caused by scratches, concave-convex characters, defects and the like, and the height information is used for identifying the surface height characteristics with concave-convex differences of the object to be detected. Since the images required to obtain different surface defect features are different, a technician can select the processing mode of the detection controller 105 according to the preset surface defect feature type, whether to obtain a texture image or a height image, or to obtain both the texture image and the height image.
The basic principles and methods of acquiring texture images and height images will be described in detail below.
1. Basic principles and methods of obtaining a texture image.
According to steps S201 and S202, after the light-source direction L, the viewing direction V and the object-surface normal N have been calibrated, at least n (n ≥ 3) frames of the image to be detected are available. Writing the reflection model expression (3) for each frame gives the intensities I₁, I₂, …, Iₙ, from which the reflection model equation set (6) is established:

Iⱼ = ρ N·Lⱼ, j = 1, 2, …, n  (6)

Setting a process variable G with G = ρN and solving equation set (6) by the least-squares method gives G = (LᵀL)⁻¹LᵀI, and the texture information expression of the texture image is then obtained as N = G/ρ.
In this step, the image fusion processing includes a step of extracting texture information from the images to be detected, which may be described as: obtain the reflection model expression of each frame of image to be detected, establish the reflection model equation set of the images according to those expressions, and solve the equation set by the least-squares method to obtain the texture information expression.
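The texture-fusion step just described can be sketched as a vectorized least-squares solve. This is a hedged illustration, assuming calibrated unit light directions and a stack of gray-scale frames; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def fuse_texture(intensities, light_dirs):
    """Per-pixel least-squares solve of the reflection system (6),
    I = rho * (L @ N).  `intensities` has shape (n, H, W) for n frames,
    `light_dirs` has shape (n, 3) from calibration.  Returns unit surface
    normals (H, W, 3) and the albedo rho (H, W); the normal map plays the
    role of the 'texture image'."""
    n, h, w = intensities.shape
    L = np.asarray(light_dirs, dtype=float)          # (n, 3)
    I = intensities.reshape(n, -1)                   # (n, H*W)
    # G = (L^T L)^-1 L^T I, solved for every pixel at once
    G, *_ = np.linalg.lstsq(L, I, rcond=None)        # (3, H*W)
    rho = np.linalg.norm(G, axis=0)                  # albedo = |G| = |rho*N|
    N = np.where(rho > 0, G / np.maximum(rho, 1e-12), 0.0)
    return N.T.reshape(h, w, 3), rho.reshape(h, w)
```

For a flat Lambertian patch of albedo 1 lit from three known directions, the recovered normal map is constant (0, 0, 1) and the albedo map is constant 1, which is a convenient self-test of the pipeline.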
2. basic principles and methods for obtaining height images.
To illustrate the principle of the height information of a height image, assume a height expression z(x, y). From the vector relationship shown in fig. 6, the two surface tangent vectors of one frame of the image to be detected are:

V₁ = (1, 0, ∂z/∂x)ᵀ, V₂ = (0, 1, ∂z/∂y)ᵀ  (7)

Since the normal is perpendicular to both tangents, N·V₁ = 0 and N·V₂ = 0, which gives:

∂z/∂x = −Nx/Nz, ∂z/∂y = −Ny/Nz  (8)

When multiple frames of images to be detected exist, each frame is processed according to formulas (7) and (8) to obtain its surface vector expression, and a surface vector equation set is constructed. Assuming that each frame has size M × N, solving the set by the least-squares method yields the vector result for z, i.e. the height information expression of the height image:

Z = (AᵀA)⁻¹AᵀB  (9)

where A is a sparse finite-difference matrix of dimension 2MN × MN, B is the 2MN × 1 vector of gradient values satisfying B = AZ, and the solved Z has dimension MN × 1.
In this step, the image fusion processing includes a step of extracting height information from the images to be detected, which may be described as: obtain the surface vector expression of each frame of image to be detected, establish the surface vector equation set of the images according to those expressions, and solve the equation set by the least-squares method to obtain the height information expression.
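The gradient-to-height solve of equations (8) and (9) can be sketched as follows. This is a minimal sketch with hypothetical names: A is built densely here, which is only viable for small images — a real implementation would use a sparse solver as the 2MN × MN dimensions suggest.

```python
import numpy as np

def integrate_height(p, q):
    """Least-squares integration of the gradient field of eq. (8),
    p = dz/dx = -Nx/Nz and q = dz/dy = -Ny/Nz, into a height image z(x, y)
    per eq. (9): Z = (A^T A)^-1 A^T B.  A is the (2MN x MN) forward-
    difference operator; rows at the image border stay zero (0 = 0)."""
    m, n = p.shape
    mn = m * n
    A = np.zeros((2 * mn, mn))
    B = np.zeros(2 * mn)
    idx = lambda y, x: y * n + x      # flatten (row, col) -> vector index
    row = 0
    for y in range(m):
        for x in range(n):
            if x + 1 < n:             # dz/dx ~ z(y, x+1) - z(y, x)
                A[row, idx(y, x + 1)] = 1.0
                A[row, idx(y, x)] = -1.0
                B[row] = p[y, x]
            row += 1
            if y + 1 < m:             # dz/dy ~ z(y+1, x) - z(y, x)
                A[row, idx(y + 1, x)] = 1.0
                A[row, idx(y, x)] = -1.0
                B[row] = q[y, x]
            row += 1
    Z, *_ = np.linalg.lstsq(A, B, rcond=None)   # height up to a constant
    z = Z.reshape(m, n)
    return z - z.min()                # pin the free constant offset
```

Because only gradients are observed, the height is recovered up to an additive constant; subtracting the minimum is one arbitrary but convenient way to fix it.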
In this step, the detection controller 105 obtains a texture image and a height image in accordance with expressions of the texture information and the height information, and the obtained texture image and height image are stored in a digital form and await further processing.
And S204, screening the defect characteristics to obtain the surface defect characteristics.
The detection controller performs defect feature screening processing on the feature image (including the texture image and/or the height image) obtained in step S203 to obtain the surface defect feature of the object to be detected, as shown in fig. 4. The defect characteristic screening processing comprises image processing steps of contrast adjustment, gray level denoising, binarization, block analysis or contour tracking and condition screening.
In this step, contrast adjustment is used to enhance image contrast and includes mean filtering and level adjustment. Gray-level denoising is used to eliminate image reflections and includes filtering or morphological operations. Binarization sets the gray value of each pixel to 0 or 255 so that the image shows only black and white. Block analysis is mainly used for connectivity analysis of the binarized image, obtaining features of each connected region such as average gray value, area, major-axis length, minor-axis length and roundness to facilitate feature screening, while contour tracking establishes the region features of the image so that the shape or contour of the object to be detected can be judged. Condition screening judges surface defect features against preset conditions, i.e. it checks whether a distinguishing feature in the image falls within the upper and lower tolerances of the preset conditions; if so, that distinguishing feature is taken to be a surface defect feature. Since the methods involved in the steps of the defect feature screening processing are conventional means of image processing, their principles and uses are not described in detail.
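A minimal sketch of part of this screening chain (binarization, connectivity analysis and condition screening on an area tolerance band) is given below. Contrast adjustment and denoising are omitted, and the threshold and area limits are hypothetical tuning parameters a technician would set:

```python
import numpy as np

def screen_defects(feature_img, thresh, min_area, max_area):
    """Binarize a feature image, label 4-connected regions, and keep only
    regions whose pixel area lies inside the preset tolerance band."""
    binary = feature_img > thresh                  # binarization (boolean mask)
    labels = np.zeros(binary.shape, dtype=int)
    blobs = []
    for y0, x0 in zip(*np.nonzero(binary)):
        if labels[y0, x0]:
            continue                               # pixel already labelled
        label = len(blobs) + 1                     # start a new connected region
        stack, pixels = [(y0, x0)], []
        labels[y0, x0] = label
        while stack:                               # 4-connected flood fill
            y, x = stack.pop()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < binary.shape[0] and 0 <= xx < binary.shape[1]
                        and binary[yy, xx] and not labels[yy, xx]):
                    labels[yy, xx] = label
                    stack.append((yy, xx))
        blobs.append(pixels)
    # condition screening: keep regions inside the area tolerance band
    return [b for b in blobs if min_area <= len(b) <= max_area]
```

In practice the same per-region loop would also compute average gray value, axis lengths and roundness for screening, and a library routine (e.g. a connected-components function from an imaging library) would replace the hand-rolled flood fill.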
In this step, the feature image subjected to defect feature screening comprises a texture image and/or a height image. Which image is used should be determined by the defect feature type preset by the technician; for the determination process, refer to the relevant content of step S203.
In this step, defect feature screening is used to extract the surface defect features of the object to be detected from the surface texture features and/or the surface height features. In an original image (i.e., an image acquired directly, without any processing), surface texture features and surface height features are low-contrast and hard to identify, whereas the texture image isolates surface texture features with color differences and the height image isolates surface height features with convex-concave differences; the quality of the texture image or height image is therefore significantly improved relative to the original image. With such high-quality feature images, the surface defect features expected by the technician can easily be identified with common defect feature screening methods.
In another embodiment, the defect feature screening process includes one or more of contrast adjustment, grayscale denoising, binarization, blob analysis, contour tracking and condition screening. Because the quality of the images to be detected is easily affected by ambient light, the camera exposure value, the surface material of the object to be detected and the like, the resulting feature images vary in quality, especially between surface texture features and surface height features. The technician can therefore choose the processing steps used in the defect feature screening according to the actual conditions of the object to be detected and the detection environment: steps chosen to suit the actual conditions extract the surface defect features more reliably, and avoid problems such as parameter suppression or feature loss caused by unnecessary processing steps. For example, when screening the texture image, contrast adjustment, grayscale denoising, contour tracking and condition screening may mainly be used; when screening the height image, contrast adjustment, grayscale denoising and condition screening may mainly be used.
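The per-image-type step selection described above can be represented as a simple pipeline configuration. The step names below are placeholders standing in for whatever concrete operations the technician selects for the object and environment at hand; only the selection logic is illustrated.

```python
# Hypothetical, illustrative pipeline definitions; the concrete operation
# behind each step name is chosen by the technician per object/environment.
TEXTURE_PIPELINE = ["contrast_adjustment", "grayscale_denoising",
                    "contour_tracking", "condition_screening"]
HEIGHT_PIPELINE = ["contrast_adjustment", "grayscale_denoising",
                   "condition_screening"]

def build_pipeline(feature_image_type):
    """Return the ordered list of screening steps for a feature image type."""
    pipelines = {"texture": TEXTURE_PIPELINE, "height": HEIGHT_PIPELINE}
    if feature_image_type not in pipelines:
        raise ValueError(f"unknown feature image type: {feature_image_type}")
    return pipelines[feature_image_type]
```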
S205: outputting the surface defect features.
To facilitate viewing of the surface defect features obtained in step S204, the detection controller 105 can transmit or display the surface defect features through its own data interface or display.
The image-fusion-based detection method and detection device not only obtain high-quality images of the detected object, but also obtain the distinguishing features of the object's surface by means of image fusion, which helps detect distinguishing features under low-contrast conditions. The detection method and detection device of the above embodiments can therefore be used not only for surface defect detection, but also in related image detection fields such as machine vision, face recognition and motion recognition.
The present invention has been described with reference to specific examples, which are provided only to aid understanding and are not intended to limit the invention. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (8)

1. A detection method based on image fusion is characterized by comprising the following steps:
sequentially controlling a plurality of light-emitting modules to irradiate an object to be detected from different directions, and controlling a camera to shoot an image of the object to be detected corresponding to each light-emitting module when each light-emitting module irradiates the object to be detected, so as to obtain a plurality of frames of images to be detected of the object to be detected;
carrying out image fusion processing on a plurality of frames of images to be detected to obtain a characteristic image of the object to be detected; the method specifically comprises the steps of extracting texture information and/or height information from the image to be detected according to the type of the defect to be detected; the characteristic image comprises a texture image and/or a height image, the texture image comprises texture information, and the height image comprises height information; the texture information is used for identifying surface texture features with color difference of the object to be detected, and the height information is used for identifying surface height features with convex-concave difference of the object to be detected;
carrying out defect characteristic screening processing on the characteristic image to obtain the surface defect characteristics of the object to be detected;
outputting the surface defect characteristics;
wherein the extracting of the texture information comprises obtaining a reflection model expression of each frame of the image to be detected, establishing a reflection model equation set of the images to be detected according to the reflection model expressions, and expressing the reflection model equation set as

I_i(x, y) = ρ(x, y) L_i · N(x, y),  i = 1, 2, …, n

where I, L, N and n respectively denote the gray value of a pixel point, the light-source direction, the normal vector of the object surface and the number of frames of images to be detected, and x, y and z are coordinate parameters; setting a process parameter G such that G = ρN, and solving the reflection model equation set by least squares to obtain G = (L^T L)^{-1} L^T I, so that the texture information of the images to be detected is obtained through the texture information expression N = G/ρ;
wherein extracting the height information comprises obtaining a surface vector expression of each frame of the image to be detected, expressed as

V_1 = (1, 0, ∂z(x, y)/∂x),  V_2 = (0, 1, ∂z(x, y)/∂y)

and further, from N · V_1 = 0 and N · V_2 = 0, obtaining

∂z(x, y)/∂x = -N_x/N_z,  ∂z(x, y)/∂y = -N_y/N_z

where z(x, y) denotes the height image, and N and V denote the normal vector and the surface vector, respectively; establishing a surface vector equation set of the images to be detected according to the surface vector expression of each frame, solving the surface vector equation set by least squares to obtain the height information expression Z = (A^T A)^{-1} A^T B, where A is a sparse matrix of dimension 2MN × MN, B is the parameter vector in B = AZ, and the size of each frame of the image to be detected is M × N; and obtaining the height information of the images to be detected through the height information expression.
2. The image fusion-based detection method according to claim 1, wherein the defect feature screening processing on the feature image comprises: the image processing steps of contrast adjustment, grayscale denoising, binarization, blob analysis, contour tracking and condition screening, the defect feature screening processing being used to screen the surface defect features of the object to be detected from the surface texture features and/or the surface height features.
3. The image fusion-based detection method of claim 1, wherein there are at least three light-emitting modules, and at least three frames of the image to be detected are obtained.
4. The image fusion-based detection method according to any one of claims 1 to 3, further comprising, before the step of obtaining the images to be detected, a calibration step: placing a high-gloss black calibration sphere at the detection position, acquiring images to be calibrated of the sphere under the illumination of each light-emitting module, constructing a grayscale imaging model of each frame of image to be calibrated, and comparing the grayscale imaging models to obtain vector values of the light-emitting modules and the camera relative to the sphere;
the vector values are used to calculate the reflection model expression and/or the surface vector expression of the images to be detected.
5. An image fusion-based detection device, comprising:
the light source comprises a plurality of light emitting modules, and the light emitting modules are used for irradiating an object to be detected from different directions;
the camera is used for acquiring an image to be detected of the object to be detected when the light-emitting module irradiates the object to be detected;
the detection controller is respectively in signal connection with the light source and the camera, controls the plurality of light-emitting modules to be sequentially lightened during detection, controls the camera to photograph the object to be detected when the light-emitting modules irradiate the object to be detected, and obtains a plurality of frames of images corresponding to the plurality of light-emitting modules;
the detection controller is used for carrying out image fusion processing on a plurality of frames of images to be detected according to the detection method in claim 1, so as to extract texture information and/or height information from the images to be detected according to the type of the defects to be detected; the characteristic image comprises a texture image and/or a height image, the texture image comprises texture information, and the height image comprises height information; the texture information is used for identifying the surface texture features with color difference of the object to be detected, and the height information is used for identifying the surface height features with convex-concave difference of the object to be detected.
6. The image fusion-based detection device according to claim 5, wherein the detection controller is configured to perform defect feature screening on the feature images obtained from a plurality of frames of the images to be detected;
the defect feature screening process includes: contrast adjustment, grayscale denoising, binarization, blob analysis, contour tracking and condition screening, and is used to screen the surface defect features of the object to be detected from the surface texture features and/or the surface height features.
7. The image fusion-based detection device of claim 5, wherein the light source comprises a plurality of light emitting modules, each of which is configured to generate red light, green light, or blue light.
8. The image fusion-based detection device of claim 7, wherein the detection controller has a multispectral setting module for controlling which color of light is generated by the light emitting module to meet different detection scene requirements.
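The two least-squares solves recited in claim 1 — G = (L^T L)^{-1} L^T I for the normals/texture, followed by height recovery from the normal field — can be sketched in NumPy. This is a minimal illustration under the simplifying assumption of synthetic, noise-free Lambertian frames; the claim's sparse height system Z = (A^T A)^{-1} A^T B is replaced here by a naive cumulative-sum integration of the gradients, so this sketches the technique, not the claimed solver.

```python
import numpy as np

def solve_normals(I, L):
    """Per-pixel least-squares photometric stereo.

    I: (n, M, N) stack of n grayscale frames, one per light direction.
    L: (n, 3) matrix of light-source direction vectors.
    Returns unit normals (M, N, 3) and albedo rho (M, N), via
    G = (L^T L)^{-1} L^T I, rho = |G|, N = G / rho.
    """
    n, M, Nc = I.shape
    flat = I.reshape(n, -1)                       # (n, M*N)
    G = np.linalg.lstsq(L, flat, rcond=None)[0]   # (3, M*N): solves L @ G = I
    rho = np.linalg.norm(G, axis=0)
    normals = np.where(rho > 0, G / np.maximum(rho, 1e-12), 0.0)
    return normals.T.reshape(M, Nc, 3), rho.reshape(M, Nc)

def integrate_heights(normals):
    """Naive height integration from gradients p = -Nx/Nz, q = -Ny/Nz.

    Stand-in for the claim's sparse solve: integrate p along the first row,
    then q down each column (cumulative sums), for illustration only.
    """
    Nx, Ny, Nz = normals[..., 0], normals[..., 1], normals[..., 2]
    Nz = np.where(np.abs(Nz) < 1e-12, 1e-12, Nz)
    p, q = -Nx / Nz, -Ny / Nz
    return np.cumsum(q, axis=0) + np.cumsum(p, axis=1)[0:1, :]
```

With three or more non-coplanar light directions (cf. claim 3) the system L G = I is well determined at each pixel; `np.linalg.lstsq` handles the over-determined case for n > 3 frames.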
CN201810019410.9A 2018-01-09 2018-01-09 Detection method and detection device based on image fusion Active CN108445007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810019410.9A CN108445007B (en) 2018-01-09 2018-01-09 Detection method and detection device based on image fusion


Publications (2)

Publication Number Publication Date
CN108445007A CN108445007A (en) 2018-08-24
CN108445007B true CN108445007B (en) 2020-11-17

Family

ID=63190866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810019410.9A Active CN108445007B (en) 2018-01-09 2018-01-09 Detection method and detection device based on image fusion

Country Status (1)

Country Link
CN (1) CN108445007B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020082386A1 (en) * 2018-10-26 2020-04-30 合刃科技(深圳)有限公司 Character obtaining method and device
CN109584239B (en) * 2018-12-13 2024-02-06 华南理工大学 High-light object surface defect detection system and method based on reflected light
CN110044921A (en) * 2019-04-28 2019-07-23 江苏理工学院 Lithium battery open defect detection system and method
JP7317957B2 (en) * 2019-05-28 2023-07-31 京セラ株式会社 Spectrum determination device, spectrum determination method, spectrum determination program, lighting system, lighting device, and inspection device
CN110346294B (en) * 2019-06-17 2020-12-22 北京科技大学 A scanning detection system and method for panel micro-scratch defects
CN110286134A (en) * 2019-07-26 2019-09-27 上海御微半导体技术有限公司 A kind of defect detecting device and its method
CN110441315B (en) * 2019-08-02 2022-08-05 英特尔产品(成都)有限公司 Electronic component testing apparatus and method
CN110796627A (en) * 2019-08-27 2020-02-14 南京东唯电子科技有限公司 Processing method for enhancing contrast of convex text image
CN110687134B (en) * 2019-09-29 2021-03-16 武汉大学 Online detection device and method in production of banded FPC
CN112683789A (en) * 2019-10-17 2021-04-20 神讯电脑(昆山)有限公司 Object surface pattern detection system and detection method based on artificial neural network
CN111122590B (en) * 2019-12-03 2023-07-07 佛山市景瞳科技有限公司 Ceramic surface defect detection device and detection method
CN111291761B (en) * 2020-02-17 2023-08-04 北京百度网讯科技有限公司 Method and device for recognizing text
CN111627008B (en) * 2020-05-27 2023-09-12 深圳市华汉伟业科技有限公司 Object surface detection method and system based on image fusion and storage medium
CN113146427A (en) * 2020-05-29 2021-07-23 浙江大学 Steel rail surface defect detection method
CN114062367A (en) * 2020-08-07 2022-02-18 华晨宝马汽车有限公司 Apparatus and method for rating data matrix codes on parts
CN113063704B (en) * 2020-12-04 2022-03-11 湖北沛丰生物科技股份有限公司 Particle fullness analysis platform and method
CN112240887A (en) * 2020-12-14 2021-01-19 惠州高视科技有限公司 Battery appearance defect detection system and method
CN114112903A (en) * 2021-02-01 2022-03-01 苏州威华智能装备有限公司 A method, equipment and storage medium for detecting surface defects of photovoltaic cells
CN112884689B (en) * 2021-02-25 2023-11-17 景德镇陶瓷大学 Method for removing high light of strong reflection surface image
CN113658166B (en) * 2021-08-24 2024-04-12 凌云光技术股份有限公司 Point cloud defect detection method and device based on grid model
CN114235831B (en) * 2022-02-28 2022-08-23 广东省科学院智能制造研究所 Method and device for detecting sink marks of injection molding workpiece based on image detection
CN114998333B (en) * 2022-08-02 2022-10-25 山东第一医科大学(山东省医学科学院) Computer vision detection method and system for light source characteristics
CN115436288B (en) * 2022-09-20 2025-01-10 山东大学 Lens defect detection device and imaging method
CN115684172A (en) * 2022-10-09 2023-02-03 迁安市福运机动车检测有限公司 Automobile appearance detection system and using method thereof
CN115980059B (en) * 2022-12-21 2023-12-15 中科慧远视觉技术(洛阳)有限公司 Surface defect detection system, detection method, detection device, detection equipment and storage medium
CN116148265B (en) * 2023-02-14 2024-07-12 浙江迈沐智能科技有限公司 Flaw analysis method and system based on synthetic leather high-quality image acquisition
CN117347383A (en) * 2023-12-06 2024-01-05 中材新材料装备科技(天津)有限公司 System and method for detecting and automatically repairing surface defects of calcium silicate plate
CN118628500B (en) * 2024-08-14 2024-11-05 深圳西普尼精密科技股份有限公司 Method and system for detecting vacuum coating defects of watch case based on image processing
CN119086020B (en) * 2024-09-27 2025-04-11 深圳华和信科技发展有限公司 Liquid crystal screen defect detection method, system and storage medium based on machine vision

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102779354A (en) * 2012-06-21 2012-11-14 北京工业大学 Three-dimensional reconstruction method for traditional Chinese medicine inspection information surface based on photometric stereo technology
CN103884650A (en) * 2014-03-28 2014-06-25 北京大恒图像视觉有限公司 Multi-photosource linear array imaging system and method


Non-Patent Citations (1)

Title
Three-dimensional defect detection method for metal surfaces based on multiple point light sources; Xu Ke et al.; China Sciencepaper; 2017-02-28; p. 420, right column, paragraph 2 to p. 424, left column, paragraph 3 *


Similar Documents

Publication Publication Date Title
CN108445007B (en) Detection method and detection device based on image fusion
JP6945245B2 (en) Visual inspection equipment
TW583389B (en) A surface conduction examination method and a substrate examination device
US7505149B2 (en) Apparatus for surface inspection and method and apparatus for inspecting substrate
JP6348289B2 (en) Inspection apparatus and inspection method
JP7514259B2 (en) System and method for determining whether a camera component is damaged - Patents.com
KR101679205B1 (en) Device for detecting defect of device
TWI495867B (en) Application of repeated exposure to multiple exposure image blending detection method
CN111344553B (en) Method and system for detecting defects of curved object
JP4150390B2 (en) Appearance inspection method and appearance inspection apparatus
JP6647903B2 (en) Image inspection device, image inspection program, computer-readable recording medium, and recorded device
TW201516397A (en) Bubble inspection system for glass
CN114324344A (en) Non-lambert surface inspection system for line scanning
JP2013015389A (en) Inspection method of weld position and device for the method
JP3871963B2 (en) Surface inspection apparatus and surface inspection method
TWI510776B (en) Bubble inspection processing method for glass
JP4967132B2 (en) Defect inspection method for object surface
TWI655412B (en) Light source detection system and method
JP5402182B2 (en) Appearance inspection method and appearance inspection apparatus
JP2024155236A (en) Focus setting selection device, surface defect inspection device, focus setting selection method, and surface defect inspection method
CN113870242A (en) Method and equipment for detecting blind hole
CN118403811A (en) Ink color sorting method, device and sorting system for printed circuit board
JP2001184493A (en) Image processing device
JP2002328205A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant