CN117459700B - Color photometric stereo imaging method, system, electronic device and medium - Google Patents
Color photometric stereo imaging method, system, electronic device and medium
- Publication number: CN117459700B
- Application number: CN202311798734.4A
- Authority: CN (China)
- Prior art keywords: image, dimensional, detected, detection image, detection
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N13/261 — Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/254 — Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
- H04N13/257 — Image signal generators, colour aspects
- G01N21/01 — Arrangements or apparatus for facilitating the optical investigation
- G01N21/8806 — Investigating the presence of flaws or contamination; specially adapted optical and illumination features
- G01N21/8851 — Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887 — Scan or image signal processing based on image processing techniques
- G06T7/90 — Image analysis; determination of colour characteristics
- G06T2207/10024 — Image acquisition modality: color image
Abstract
A color photometric stereo imaging method, system, electronic device and medium relate to the technical field of image inspection. The method comprises the following steps: photographing the object to be detected with a black-and-white camera under three different light sources to obtain three first detection images; generating a second detection image, a two-dimensional color image, from the first detection images; extracting three-dimensional depth information from each first detection image and generating a three-dimensional third detection image from that information; and determining appearance defects of the object to be detected from the second and third detection images. A single camera thus yields both a color image and a three-dimensional image, reducing camera cost.
Description
Technical Field
The application relates to the technical field of image inspection, and in particular to a color photometric stereo imaging method, system, electronic device and medium.
Background
With the rapid development of industrial automation and machine vision, demand for product quality inspection keeps growing. Computer vision and image processing techniques have been widely used for automated quality inspection over the past decades; they allow rapid, accurate measurement of an article's shape, size, color and texture, and detection of any possible defects. Appearance-defect detection generally falls into two categories: a color camera recognizes 2D planar defects distinguished by color (e.g., dirt, foreign objects), while a black-and-white camera identifies depth defects with 3D features (e.g., scratches, burrs, bumps, pits, dimples, corrosion, solder leaks, slag inclusions, deformations, bruises).
At present, existing appearance-defect measurement methods acquire a black-and-white image and a color image of the inspected object by arranging two different cameras. In practical application, however, inspecting large product volumes requires many stations, which consumes considerable camera resources at high cost.
Disclosure of Invention
The application provides a color photometric stereo imaging method, system, electronic device and medium that obtain both a color image and a three-dimensional image with a single camera, reducing camera cost.
In a first aspect, the present application provides a method of color photometric stereo imaging, comprising:
shooting the object to be detected under three different light sources by adopting a black-and-white camera to obtain three first detection images;
generating a second detection image according to each first detection image, wherein the second detection image is a two-dimensional color image;
extracting three-dimensional depth information in each first detection image, and generating a three-dimensional third detection image according to the three-dimensional depth information;
and determining the appearance defect of the object to be detected according to the second detection image and the third detection image.
With this scheme, a low-cost black-and-white camera, paired with a simple three-color alternating flash light source, shoots images under different illumination conditions and so acquires the object to be detected under multiple lighting angles. From this set of black-and-white images, a true-color detection image is synthesized from the brightness information each image provides under its illumination angle, and is used to capture the sample's colors and planar features. A stereo analysis algorithm then computes and extracts three-dimensional depth information from the black-and-white images and builds a three-dimensional effect image capturing the sample's stereo structure and surface morphology. The system jointly analyzes the rich color detail of the two-dimensional color image and the shape features of the three-dimensional image to detect and locate appearance defects accurately. Compared with traditional professional-grade imaging equipment, the scheme uses only one black-and-white camera, so hardware cost is low and operation is simple; image processing algorithms convert black-and-white images into a color image and two-dimensional images into a three-dimensional image, building a multi-dimensional detection image containing color, shape and surface features for efficient, accurate appearance-defect inspection at reduced implementation cost.
Optionally, according to the position of the object to be detected and the position of the station of the black-and-white camera, the first position of the red light source for irradiating the object to be detected, the second position of the blue light source for irradiating the object to be detected and the third position of the green light source for irradiating the object to be detected are determined.
With this scheme, the red, blue and green light sources alternately illuminate the object to be detected, and the black-and-white camera's shutter frequency is matched to them, so multi-color light information at the same viewing angle is acquired in one imaging pass. This produces a two-dimensional color image equivalent to one taken by a color camera, enabling planar-defect detection. Because alternating tri-color illumination produces fine positional offsets of the object surface across the images, three-dimensional depth information can be extracted and a three-dimensional detection image, equivalent to dedicated black-and-white camera imaging, reconstructed for depth-defect detection. The scheme thus integrates color detection and three-dimensional detection, enabling comprehensive inspection of product appearance covering both planar and depth defects, while reducing equipment cost and improving economic benefit.
Optionally, the gray level images of the first detection images are read, and gray levels of the gray level images are mapped to an R channel, a B channel and a G channel of a preset second detection image respectively to obtain the second detection image.
With this scheme, the first detection images are shot under alternating red, green and blue illumination, each containing the brightness information of one dominant color. According to its brightness distribution, each first detection image is mapped into the corresponding color channel of the second detection image, completing the mapping of color information; the combined second detection image is a color image with complete color information. This synthesis avoids the high cost of additionally installing a color camera: a single black-and-white camera paired with multi-color illumination achieves an equivalent color imaging effect. In short, the scheme uses a simple, readily implemented image processing step to reconstruct a two-dimensional color image from the first detection images taken under multi-color illumination, reducing equipment cost while enabling color detection.
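As a sketch of the channel-mapping step described above (the patent gives no code; the array layout and function name here are illustrative), the three monochrome captures taken under red, green and blue illumination can simply be stacked into the R, G and B channels of one color image:

```python
import numpy as np

def synthesize_color(gray_r, gray_g, gray_b):
    """Map the three monochrome captures (taken under red, green and blue
    illumination respectively) onto the R, G and B channels of one image."""
    assert gray_r.shape == gray_g.shape == gray_b.shape
    return np.stack([gray_r, gray_g, gray_b], axis=-1)

# Tiny 2x2 example: each "capture" is one grayscale frame.
r = np.array([[255, 0], [0, 0]], dtype=np.uint8)
g = np.array([[0, 255], [0, 0]], dtype=np.uint8)
b = np.array([[0, 0], [255, 0]], dtype=np.uint8)
color = synthesize_color(r, g, b)
print(color.shape)   # (2, 2, 3)
print(color[0, 0])   # [255   0   0] -> a red pixel
```

In a real setup the three frames would first be registered to the same pose and white-balanced against the relative source intensities; the bare stack above shows only the channel mapping itself.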
Optionally, extracting pixels in each of the first detection images, and determining a relative position of each of the pixels; generating a depth image according to each relative position; converting each pixel into a three-dimensional coordinate point according to the depth image; and generating the third detection image according to each three-dimensional coordinate point.
With this scheme, the red, green and blue light sources illuminate alternately, and the same object exhibits fine positional offsets across the first detection images under the different colored illuminations. By tracking the positional changes of pixels corresponding to the same object across the first detection images, the depth of the object surface, i.e., its distance from the image sensing plane, can be computed. From the depth information of all image regions, a depth image describing the three-dimensional structure is generated; each pixel is then converted into a coordinate point in three-dimensional space according to its depth value, and the set of all coordinate points forms the third detection image containing the three-dimensional surface contour. This combined image analysis avoids the high cost of an additional dedicated black-and-white camera: a single black-and-white camera with multi-color illumination achieves an equivalent three-dimensional imaging effect. The scheme thus extracts three-dimensional information from the first detection images by image processing alone, reconstructs an equivalent three-dimensional detection image, and achieves three-dimensional detection at reduced equipment cost.
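The claim above describes depth recovery only at a high level. One concrete realization consistent with the document's title is classic three-light photometric stereo, which solves per-pixel surface normals (and albedo) from three images under known light directions; the light directions and the Lambertian-surface assumption below are illustrative, not taken from the patent:

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Classic three-light photometric stereo: per pixel, solve
    L @ (albedo * n) = I, where L stacks the unit light directions row-wise
    and I stacks the observed intensities."""
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])          # 3 x P intensities
    G = np.linalg.solve(np.asarray(light_dirs, dtype=float), I)
    albedo = np.linalg.norm(G, axis=0)                       # |albedo * n|
    n = G / np.maximum(albedo, 1e-9)                         # unit surface normals
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic check: a flat Lambertian patch facing the camera
# (normal = +z, albedo = 1) under three known unit light directions.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
imgs = [np.full((2, 2), ld[2]) for ld in L]   # Lambertian: intensity = n . l
normals, albedo = photometric_normals(imgs, L)
print(np.round(normals[0, 0], 3))  # [0. 0. 1.]
```

A depth map (the patent's depth image) would then follow by integrating the normal field, e.g. with a Frankot-Chellappa or Poisson solver; that step is omitted here for brevity.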
Optionally, determining a planar defect feature of the object to be detected according to the second detection image; determining depth defect characteristics of the object to be detected according to the third detection image; and determining the appearance defect according to the plane defect characteristic and the depth defect characteristic.
With this scheme, planar features such as abnormal color and blurred edges on the object surface are detected by image processing algorithms on the two-dimensional color detection image rich in color detail, while depth structural features such as surface dents and cracks are detected by three-dimensional analysis of the third detection image. The analysis results of the two image types are then compared: the planar colors and pattern details provided by the two-dimensional image and the structural features provided by the three-dimensional image together inspect the object's surface quality in all respects. Compared with detection relying on two-dimensional vision alone, fusing two-dimensional and three-dimensional images widens the detection range and dimensionality, exploits the strengths of each image type, analyzes the object from both planar and stereo perspectives, and judges surface quality defects comprehensively, improving the completeness and accuracy of detection.
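A minimal sketch of fusing the two cues described above (the thresholds, reference color and flat-surface baseline are placeholders, not values from the patent): a pixel is flagged when its color deviates from a reference (planar defect) or its depth deviates from the surface baseline (depth defect):

```python
import numpy as np

def detect_defects(color_img, depth_map, ref_color, color_tol, depth_tol):
    """Appearance defect = planar defect OR depth defect. Color deviation
    from ref_color marks planar defects; depth deviation from the median
    surface level marks depth defects. Thresholds are recipe placeholders."""
    color_dev = np.linalg.norm(color_img.astype(float) - ref_color, axis=-1)
    planar = color_dev > color_tol
    depth = np.abs(depth_map - np.median(depth_map)) > depth_tol
    return planar | depth

color = np.full((4, 4, 3), 200, dtype=np.uint8)
color[1, 1] = (30, 30, 30)                    # a dark stain: planar defect
depth = np.zeros((4, 4)); depth[2, 3] = 0.5   # a dent: depth defect
mask = detect_defects(color, depth, ref_color=(200, 200, 200),
                      color_tol=60.0, depth_tol=0.2)
print(int(mask.sum()))  # 2 defective pixels, one from each cue
```

A production system would replace the global thresholds with learned or per-region criteria, but the OR-fusion of the two masks mirrors the comprehensive judgment the paragraph describes.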
Optionally, shooting the object to be detected by adopting the black-and-white camera to obtain a primary detection image of the object to be detected; determining the shape and the size of the appearance defect on the object to be detected according to the primary detection image; determining the irradiation angles of the three different light sources according to the position of the appearance defect on the object to be detected; determining the irradiation intensities of the three different light sources according to the shape and the size of the appearance defect; and determining the three different light sources according to the irradiation angle and the irradiation intensity.
With this scheme, a preliminary inspection image of the object is first acquired with the black-and-white camera and analyzed by image processing algorithms to locate the appearance-defect region on the surface and obtain the defect's position and size. Given these parameters, the system computes the optimal illumination angles of the red, green and blue light sources for the defect, and an appropriate illumination intensity determined by the defect's size, then precisely adjusts the directions and intensities of the tri-color sources accordingly. This closed-loop control tailors the illumination of the detection image so that the light sources illuminate the defect region efficiently and accurately, and the resulting detection images highlight the visual features of the quality defect. Compared with uniform general illumination, the scheme adapts the lighting to the detection target, improves how well the images render key-region detail, and lays a foundation for subsequent automatic detection and recognition algorithms.
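The paragraph above can be sketched as a simple planning routine; the geometry and the size-to-intensity mapping below are illustrative assumptions, since the patent specifies no formulas:

```python
import math

def plan_illumination(defect_cx, defect_cy, defect_area,
                      light_x, light_y, light_h,
                      i_min=0.2, i_max=1.0, area_ref=100.0):
    """Point a light at the defect centre and scale intensity with defect
    size (smaller defects get stronger light). Coordinates are in mm on the
    inspection plane; light_h is the source height above it."""
    dx, dy = defect_cx - light_x, defect_cy - light_y
    azimuth = math.degrees(math.atan2(dy, dx))                     # aim direction
    elevation = math.degrees(math.atan2(light_h, math.hypot(dx, dy)))
    intensity = max(i_min, min(i_max, area_ref / max(defect_area, 1.0)))
    return azimuth, elevation, intensity

# A 25 mm^2 defect at (50, 50), light mounted at (0, 50) at height 100.
az, el, inten = plan_illumination(50, 50, 25.0, 0, 50, 100)
print(round(az), round(el), inten)  # 0 63 1.0
```

The same routine would run once per color source; the inverse-area intensity rule is one plausible reading of "intensity determined by the size of the defect", not the patented mapping.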
Optionally, acquiring the moving speed of the object to be detected and the flicker frequency of the three different light sources; determining a first shutter frequency range of the black-and-white camera according to the moving speed; determining a second shutter frequency range of the black-and-white camera according to the flicker frequency; and determining a target shutter frequency of the black-and-white camera according to the first shutter frequency range and the second shutter frequency range, and shooting the object to be detected by adopting the black-and-white camera according to the target shutter frequency.
With this scheme, a first shutter-frequency range of the camera is determined from the moving speed of the object to be detected, ensuring that the shot image stays sharp, and a second range is determined from the flicker frequency of the tri-color light source, ensuring that each exposure receives only monochromatic illumination. A frequency in the intersection of the two ranges is selected as the camera's target shutter frequency, so that frame-by-frame capture of the multi-color flashes is possible while the article moves. This matched exposure captures the alternating multi-color illumination in a moving-object scenario: stereo imaging can be completed without stopping the motion of the detected object, extending the applicable detection scenarios and simplifying the operating flow.
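A minimal sketch of the shutter-frequency selection, under two illustrative modeling assumptions not stated in the patent: motion blur bounds the slowest usable shutter (exposure time roughly the reciprocal of shutter frequency), and the camera must expose once per color slot, i.e. three exposures per full RGB flash cycle:

```python
def target_shutter_range(speed_mm_s, blur_limit_mm, flash_cycle_hz, cam_max_hz):
    """Intersect two shutter-frequency ranges: one keeping motion blur under
    blur_limit_mm for an object moving at speed_mm_s, one guaranteeing a
    separate exposure for each of the three colour slots per flash cycle."""
    f1 = (speed_mm_s / blur_limit_mm, cam_max_hz)   # sharpness constraint
    f2 = (3.0 * flash_cycle_hz, cam_max_hz)         # one colour per frame
    lo, hi = max(f1[0], f2[0]), min(f1[1], f2[1])
    return (lo, hi) if lo <= hi else None           # None: no feasible shutter

rng = target_shutter_range(speed_mm_s=200.0, blur_limit_mm=0.1,
                           flash_cycle_hz=500.0, cam_max_hz=5000.0)
print(rng)  # (2000.0, 5000.0)
```

If the intersection is empty (e.g. the flash cycle demands a faster shutter than the camera supports), the routine returns None, signaling that the line speed or flash frequency must be changed.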
In a second aspect, the present application provides a color photometric stereo imaging system, comprising:
The image acquisition module is used for shooting the object to be detected under three different light sources by adopting a black-and-white camera to obtain three first detection images;
the first image processing module is used for generating a second detection image according to each first detection image, and the second detection image is a two-dimensional color image;
the second image processing module is used for extracting three-dimensional depth information in each first detection image and generating a three-dimensional third detection image according to the three-dimensional depth information;
and the defect detection module is used for determining the appearance defect of the object to be detected according to the second detection image and the third detection image.
In a third aspect of the present application, an electronic device is provided.
An electronic device includes a memory, a processor, and a program stored on the memory and executable on the processor, the program implementing a color photometric stereo imaging method when loaded and executed by the processor.
In a fourth aspect of the present application, a computer-readable storage medium is provided.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement a method of color photometric stereo imaging.
In summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. With this scheme, a low-cost black-and-white camera paired with a simple three-color alternating flash light source shoots images under different illumination conditions, acquiring the object to be detected under multiple lighting angles. From this set of black-and-white images, a true-color detection image is synthesized from the brightness information each image provides under its illumination angle, capturing the sample's colors and planar features. Next, a stereo analysis algorithm computes and extracts three-dimensional depth information from the black-and-white images and builds a three-dimensional effect image capturing the sample's stereo structure and surface morphology. Finally, the system jointly analyzes the rich color detail of the two-dimensional color image and the shape features of the three-dimensional image, detecting and locating appearance defects accurately. Compared with traditional professional-grade imaging equipment, the scheme uses only one black-and-white camera, with low hardware cost and simple operation; image processing algorithms convert black-and-white images into a color image and two-dimensional images into a three-dimensional image, building a multi-dimensional detection image containing color, shape and surface features for efficient, accurate appearance-defect inspection at reduced implementation cost.
2. The application extracts pixel information from a group of black-and-white detection images taken under different illuminations, computes each pixel's three-dimensional spatial coordinates relative to the camera from its positional change across the images, and so converts two-dimensional pixels into a three-dimensional point cloud. From this point cloud carrying scene depth information, a three-dimensional reconstruction algorithm then generates a three-dimensional-effect black-and-white image reflecting the real stereo structure, which serves as the important third detection image. The scheme applies the idea of stereo matching: parallax analysis of plain black-and-white images restores the two-dimensional data to a three-dimensional structure, avoiding complex three-dimensional scanning equipment and lowering technical difficulty. The resulting three-dimensional image contains rich scene geometry, compensates for the limitations of traditional two-dimensional imaging, provides reliable stereo visual data for subsequent three-dimensional feature extraction, structural analysis and other processing, and greatly widens the dimensional range of detection, taking the analysis from planar to stereo and making it more comprehensive and accurate.
3. The application acquires a preliminary inspection image of the object with the black-and-white camera and analyzes it with image processing algorithms to locate the appearance-defect region on the surface and obtain the defect's position and size. Given these parameters, the system computes the optimal illumination angles of the red, green and blue light sources for the defect and an appropriate illumination intensity determined by the defect's size, then precisely adjusts the directions and intensities of the tri-color sources accordingly. This closed-loop control tailors the illumination of the detection image so that the light sources illuminate the defect region efficiently and accurately, and the resulting detection images highlight the visual features of the quality defect. Compared with uniform general illumination, the scheme adapts the lighting to the detection target and improves how well the images render key-region detail.
Drawings
Fig. 1 is a schematic flowchart of a color photometric stereo imaging method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a color photometric stereo imaging system according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 300. an electronic device; 301. a processor; 302. a communication bus; 303. a user interface; 304. a network interface; 305. a memory.
Detailed Description
In order to help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments. Evidently, the described embodiments are only some, not all, of the embodiments of the present application.
In the description of the embodiments of the present application, words such as "such as" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein as "such as" or "for example" should not be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to facilitate understanding of the methods and systems provided in the embodiments of the present application, a description of the background of the embodiments of the present application is provided before the description of the embodiments of the present application.
Currently, the existing measuring method for detecting the appearance defects acquires a black-and-white image and a color image of a detection object by arranging two different cameras. However, in practical application, when appearance defect detection is performed on a large number of products, a plurality of stations are required to be arranged, so that a large amount of camera resources are consumed, and the cost is high.
The embodiment of the application discloses a color luminosity three-dimensional imaging method, which comprises the steps of arranging a black-and-white camera at a single station, shooting an object to be detected under the cooperation of three different light sources, fusing the obtained images to obtain a color image, generating a three-dimensional image according to each image, and detecting defects of the object to be detected according to the color image and the three-dimensional image. The method is mainly used for solving the problems that a single black-and-white camera cannot acquire color images and three-dimensional images of an object to be detected at the same time, and multiple cameras are required to be arranged for detection, so that the cost is high.
With the problems in the prior art set out in the foregoing background, a detailed description of the technical solutions in the embodiments of the present application is provided below with reference to the drawings in the embodiments. The described embodiments are only some embodiments of the present application, not all embodiments.
Referring to fig. 1, a color photometric stereo imaging method includes S10 to S40, specifically including the steps of:
s10: shooting the object to be detected under three different light sources by adopting a black-and-white camera to obtain three first detection images.
Specifically, according to the position of the object to be detected and the position of the black-and-white camera, the first position of the red light source, the second position of the blue light source and the third position of the green light source are determined. The red light source is turned on and adjusted to the first position, and the black-and-white camera shoots the object to be detected to obtain a first image. The red light source is turned off, the blue light source is turned on and adjusted to the second position, and the same black-and-white camera shoots the object to be detected to obtain a second image. The blue light source is turned off, the green light source is turned on and adjusted to the third position, and the black-and-white camera again shoots the object to be detected to obtain a third image. Through these operations, the same object to be detected is shot by a single black-and-white camera under the red, blue and green light sources respectively, finally obtaining three first detection images under different illumination conditions. This avoids the complex operation of arranging a plurality of cameras to shoot at different positions, simplifies equipment installation, reduces cost, ensures that images are obtained under each light source, and provides a foundation for the subsequent generation of the color image and the three-dimensional image.
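The relationship this capture sequence exploits — that the grayscale value a black-and-white camera records depends on the angle between the surface normal and each light direction — can be sketched with a simple Lambertian reflection model. This is an illustrative assumption: the patent does not prescribe a reflectance model, and the light directions below are invented for the example.

```python
import math

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Grayscale value recorded by the black-and-white camera for one
    surface point under one directional light (Lambertian model)."""
    def unit(v):  # normalize a 3-vector
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    n, l = unit(normal), unit(light_dir)
    dot = sum(a * b for a, b in zip(n, l))
    return albedo * max(0.0, dot)  # clamp: points facing away stay dark

# One surface point facing the camera, three assumed light directions
normal = (0.0, 0.0, 1.0)
lights = {"red":   (1.0, 0.0, 1.0),
          "green": (-0.5, 0.8, 1.0),
          "blue":  (-0.5, -0.8, 1.0)}
# One scalar per light stands in for one whole first detection image
first_detection_images = {name: lambertian_intensity(normal, d)
                          for name, d in lights.items()}
```

Because the green and blue directions here are mirror images of each other, they produce identical intensities at this point; a tilted normal would break that symmetry, which is exactly the cue the later three-dimensional reconstruction uses.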
On the basis of the above embodiment, the specific step of acquiring the first detection image further includes S11:
s11: and determining a first position of the red light source for irradiating the object to be detected, a second position of the blue light source for irradiating the object to be detected and a third position of the green light source for irradiating the object to be detected according to the position of the object to be detected and the station position of the black-white camera.
In the color photometric stereo imaging method, for example, the positions from which the three-color light sources irradiate the object to be detected need to be determined, so that light of each color can sufficiently and uniformly illuminate all surfaces of the object to be detected, realizing omnidirectional stereo detection. In implementation, according to the specific position coordinates of the object to be detected on the production line and the fixed coordinate position of the black-and-white camera arranged beside the production line, the normal vector direction of each surface of the object can be determined by calculation. The position coordinates from which the red, blue and green light sources should irradiate each surface are then calculated according to the normal vector direction and the orientation of each surface, that is, the coordinate values of the first position, the second position and the third position are determined. Finally, the light sources are arranged according to these coordinate values, so that the three-color light reaches the surface of the article from different angles and covers all surface positions to be detected. Thus, by calculating the specific irradiation positions of the three-color light sources, comprehensive three-dimensional illumination of the statically placed object to be detected is realized, ensuring that images containing complete three-dimensional structure information are acquired, thereby improving detection accuracy and reliability.
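One plausible way to turn the computed geometry into concrete first, second and third light positions is to place the three sources at a fixed elevation and evenly spaced azimuths around the object. The 120-degree spacing, the 0.5 m distance and the 45-degree elevation below are assumptions for illustration, not values fixed by the patent.

```python
import math

def light_position(obj_pos, distance, elevation_deg, azimuth_deg):
    """Coordinates of a light source placed at a given distance,
    elevation and azimuth relative to the object to be detected."""
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    x = obj_pos[0] + distance * math.cos(el) * math.cos(az)
    y = obj_pos[1] + distance * math.cos(el) * math.sin(az)
    z = obj_pos[2] + distance * math.sin(el)
    return (x, y, z)

obj = (0.0, 0.0, 0.0)  # object position on the production line
# Three sources 120 degrees apart at the same elevation: the usual
# photometric-stereo arrangement (an assumption, not fixed by the patent)
first_pos  = light_position(obj, 0.5, 45, 0)    # red light source
second_pos = light_position(obj, 0.5, 45, 120)  # blue light source
third_pos  = light_position(obj, 0.5, 45, 240)  # green light source
```

Keeping the three directions well separated matters later: the depth recovery step needs the per-pixel intensity triples to respond differently to each light, which degenerates if two sources sit too close together.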
In an alternative embodiment of the present application, the specific step of determining the illumination parameters of the three different light sources further comprises S12 to S14:
s12: shooting the object to be detected by adopting a black-and-white camera, and obtaining a primary detection image of the object to be detected.
Illustratively, a detection location is determined and an item to be detected is placed within the detection zone. The black-and-white camera is adjusted to maintain a proper distance from the object, and the object is ensured to be in the optimal shooting range through the viewfinder. The camera parameters including focusing mode, exposure time, white balance and the like are configured to ensure clear image, moderate contrast and no overexposure distortion. And opening a shutter for shooting, and storing the acquired picture as a primary detection image. Through the operation, an initial image of the object to be detected is acquired by using a simple and economical black-and-white camera, and a data basis is provided for generating a first detection image, a second detection image, a third detection image and the like subsequently. And the common black-and-white camera is adopted to replace an expensive industrial camera, so that the equipment investment is reduced.
S13: determining the shape and the size of the appearance defect on the object to be detected according to the primary detection image; and determining the irradiation angles of three different light sources according to the position of the appearance defect on the object to be detected.
Illustratively, the primary inspection image is subjected to image processing, a defective area existing on the surface of the article is detected, and the shape and size parameters thereof are measured. And establishing a three-dimensional model of the article, and marking the position corresponding to the defect. According to the position and shape characteristics of the defect, the optimal angles of the red, green and blue light sources irradiating the position are calculated, so that the light can completely cover the defect area. Thus, parameters and position information of the defect are determined by analyzing the initial image, and a light source irradiation scheme with high pertinence is calculated. This allows different light sources to be focused at the defect, illuminating from different angles to better reflect the color and shape characteristics of the defect. The use effect of the light source is optimized, and the quality of the subsequently generated detection image is improved.
S14: determining the irradiation intensity of three different light sources according to the shape and the size of the appearance defect; according to the irradiation angle and the irradiation intensity, three different light sources are adjusted.
Illustratively, the size of the illuminated area to be covered is calculated from the detected defect shape parameters. And then selecting a proper light source and keeping a proper distance from the light source according to the coverage area and the defect depth position, and determining the irradiation intensity of the red, green and blue three-color light by adjusting the output power of the light source. The three light sources are adjusted to make the irradiation angles respectively correspond to the optimal angles calculated in the front, and the light intensity is adjusted according to the distance between the light sources and the defect, so that the light can fully cover the defect area. Thus, parameters of the light source are adjusted according to specific conditions of the defects, each defect can be well illuminated, and the influence of gloss and reflection caused by excessive light on image quality is avoided, so that images reflecting defect details can be obtained through different light sources.
S20: a second detection image is generated from each of the first detection images, and the second detection image is a two-dimensional color image.
Specifically, three first detection images, that is, black-and-white images obtained by shooting under red light, green light and blue light are respectively read, and gray value distribution conditions of the three images are respectively obtained. The gray values of the first image are mapped to the R channel of the color image, the gray values of the second image are mapped to the G channel, and the gray values of the third image are mapped to the B channel. The three channels are combined to generate a two-dimensional color image based on the RGB values of each pixel. In this way, the effect of restoring the true color by using a single black-and-white camera is achieved by processing and converting three black-and-white images and utilizing the reflection characteristics of objects under different light sources. The complex multispectral system is avoided to obtain the color information, and the equipment cost is reduced.
Based on the above embodiment, the specific step of obtaining the color image further includes S21:
s21: and (3) reading gray images of the first detection images, and mapping the gray of each gray image to an R channel, a B channel and a G channel of a preset second detection image to obtain the second detection image.
In this color photometric stereo imaging method, for example, in order to extract color information from the first detection images captured under the illumination of the three-color light sources, the gray value distribution of each first detection image is read first. Specifically, each first detection image is read as a grayscale image containing only brightness information; since it was captured by a black-and-white camera, each image directly records the reflection of every part of the object surface under illumination of one color. Although each image was formed by a specific colored light source, the black-and-white sensor records only the luminance response, free of color interference. The grayscale image is thus precisely the brightness information of the image, that is, the intensity distribution of the corresponding light. By reading this light intensity distribution, the relative conditions of illumination and reflection can be obtained, and the three-dimensional structure of the object surface can be further determined.
In the color photometric stereo imaging method, in order to reconstruct a two-dimensional image of one color from a plurality of grayscale images illuminated by three-color light sources, each grayscale image needs to be mapped into RGB different color channels of a preset color image. In practice, since the light of the three primary colors of red, green and blue is used, each first detection image mainly contains the brightness information of one color component. Therefore, the gray scale of the image shot by red illumination can be directly mapped into the R channel of the preset color image, green is mapped into the G channel, and blue is mapped into the B channel. Thus, the gray scale of the different light sources is matched to the corresponding color channels, and a two-dimensional color image containing complete color information can be reconstructed. This avoids the high hardware costs of having to provide a color camera in addition. The mapping mode can be realized by table lookup correspondence or linear conversion, and the input gray value is converted into gray output of the target color channel. Finally, three channels of gray levels are overlapped to form a visual color effect. The method effectively utilizes image information under different illumination conditions through a simple and rapid image processing algorithm, and reconstructs an equivalent color two-dimensional image for subsequent planar defect detection analysis.
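The channel mapping of S21 can be sketched directly: each grayscale image becomes one channel of the second detection image, pixel for pixel. This is a minimal sketch using nested lists of 0–255 values in place of real image buffers.

```python
def fuse_to_color(red_gray, green_gray, blue_gray):
    """Map the grayscale image shot under red light onto the R channel,
    green onto G and blue onto B, producing an RGB tuple per pixel."""
    h, w = len(red_gray), len(red_gray[0])
    assert all(len(img) == h and len(img[0]) == w
               for img in (green_gray, blue_gray)), "image size mismatch"
    return [[(red_gray[y][x], green_gray[y][x], blue_gray[y][x])
             for x in range(w)] for y in range(h)]

# 2x2 toy first detection images captured under red, green, blue light
r = [[255, 0], [10, 20]]
g = [[0, 255], [30, 40]]
b = [[0, 0], [50, 60]]
second_detection_image = fuse_to_color(r, g, b)
```

A pixel bright only under red light comes out pure red, which matches the intuition in the text: the object's reflectance under each colored source stands in for that color component of its true appearance.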
S30: three-dimensional depth information in each first detection image is extracted, and a three-dimensional third detection image is generated according to the three-dimensional depth information.
Wherein the three-dimensional depth information refers to information describing depth or distance properties of a three-dimensional scene or object. Three-dimensional depth information generally refers to the depth value of each pixel in an image relative to a camera or other reference frame, reflecting three-dimensional structural information of a scene or object.
Specifically, the three first detection images are preprocessed, including denoising, contrast enhancement and the like, to improve image quality. From the images captured under the different illumination conditions, the distance from each pixel point to the camera is calculated, generating a depth map carrying distance information. According to the depth map, the image pixels are converted into point cloud coordinates in three-dimensional space. Three-dimensional reconstruction is performed on the point cloud data to obtain a third detection image with a three-dimensional effect. In this way, the three-dimensional coordinates of the object surface points are obtained through stereo analysis of the multiple black-and-white images, realizing the conversion from two-dimensional images to a three-dimensional image. This scheme intuitively reflects the real three-dimensional structure of the object and rich surface details, allows surface defects to be detected, avoids the use of complex and expensive 3D scanning equipment, and reduces cost.
On the basis of the above embodiment, the specific step of acquiring the third detection image further includes S31 to S33:
s31: pixels in each first detection image are extracted, and the relative positions of the pixels are determined.
Illustratively, each first detection image is subjected to pixelation processing, and the position coordinates of each image pixel are extracted. In the different first detection images, pixels of the same object point are identified. Since the same object point is photographed under different color illumination, there is a slight deviation in the coordinate position of each image, which is caused by the normal vector direction of the object surface. By comparing the coordinate values of the same point in the two images, the relative displacement of the point under different illumination can be obtained. And analyzing the relative displacement of all object points among a plurality of images to obtain three-dimensional structure information of the object surface, such as normal vectors, surface inclination angles and the like. This is the basic data for achieving the subsequent three-dimensional image reconstruction. The step acquires the detail change of the image structure under different illumination conditions by extracting and comparing pixel coordinate information, and provides input data for calculating a three-dimensional image according to the multi-view image.
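In classical photometric stereo — the standard technique this kind of multi-illumination analysis corresponds to — the three intensities measured at one pixel and the three known light directions determine the scaled surface normal through the linear system L·g = I, where each row of L is a light direction and g is albedo times the unit normal. Below is a minimal per-pixel solve via Cramer's rule; the light matrix and intensity values are illustrative, not taken from the patent.

```python
def solve3(L, I):
    """Solve the 3x3 system L @ g = I by Cramer's rule.
    L: rows are unit light directions; I: measured intensities."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(L)
    g = []
    for col in range(3):
        Lc = [row[:] for row in L]          # copy L
        for row in range(3):
            Lc[row][col] = I[row]           # substitute column with I
        g.append(det3(Lc) / d)
    return g

# Toy case: three orthogonal light directions, one pixel's intensities
L = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
I = [0.2, 0.3, 0.9]          # from the red, green and blue images
g = solve3(L, I)             # scaled normal
albedo = sum(c * c for c in g) ** 0.5
normal = [c / albedo for c in g]  # unit surface normal at this pixel
```

Repeating this at every pixel yields the normal field — the "three-dimensional structure information such as normal vectors and surface inclination angles" the paragraph above refers to — which can then be integrated into a depth map.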
S32: generating a depth image according to each relative position; each pixel is converted into a three-dimensional coordinate point according to the depth image.
Illustratively, generating a depth image from the relative positions of pixels and converting it into a three-dimensional coordinate point cloud is a key step in achieving the transition of a two-dimensional image to a three-dimensional structure. According to the extracted position information of each pixel point relative to the camera, the actual physical distance, namely the depth value, between each pixel point and the camera can be calculated. According to the depth values of all the pixel points, a depth map of the whole scene can be constructed and used as a depth image. According to the depth value of each pixel in the depth image, the actual coordinate value of the pixel point in the three-dimensional physical space can be determined. And repeating the steps, and calculating the three-dimensional coordinates of all the pixel points to form point cloud data with three-dimensional space information. The conversion from the two-dimensional image space to the three-dimensional physical space is realized, and the three-dimensional point cloud is generated by using the image of the single camera. By the scheme, the three-dimensional information of the target can be acquired at low cost, and a data basis is provided for subsequent three-dimensional modeling and other processing.
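Under a pinhole camera model, the pixel-to-three-dimensional-coordinate conversion of S32 is a per-pixel back-projection: x = (u − cx)·z/fx, y = (v − cy)·z/fy. The focal lengths and principal point below are placeholder values standing in for real calibration data.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project each pixel (u, v) with depth z into camera-space
    coordinates using the pinhole model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            points.append(((u - cx) * z / fx,
                           (v - cy) * z / fy,
                           z))
    return points

depth_image = [[2.0, 2.0],
               [2.0, 4.0]]   # toy 2x2 depth image (metres)
# Assumed intrinsics: unit focal length, principal point at image centre
point_cloud = depth_to_points(depth_image, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Note how the bottom-right pixel, being twice as far away, lands proportionally farther from the optical axis — the depth value scales the lateral coordinates as well, which is what gives the point cloud its true metric shape.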
S33: and generating a 3D black-and-white image according to each three-dimensional coordinate point, and taking the 3D black-and-white image as a third detection image.
Illustratively, from the resulting three-dimensional point cloud, three-dimensional coordinates of each spatial point in the three-dimensional scene may be determined. And recovering the geometric structure of the three-dimensional scene by using the coordinate information of the three-dimensional points through a three-dimensional reconstruction algorithm, such as a three-dimensional camera correction method and the like, so as to obtain a three-dimensional point cloud model. And generating a three-dimensional image according to the point cloud distribution, mapping the three-dimensional coordinates of the point cloud into pixel positions and gray values on the image, and reconstructing a three-dimensional structure by using a black-and-white image shot by a single view angle. A black-and-white stereoscopic image containing real three-dimensional information of the scene is obtained, and basic data is provided for subsequent three-dimensional image analysis and processing. With this flow, a three-dimensional effect image is successfully generated using images acquired by a single normal camera.
S40: and determining the appearance defect of the object to be detected according to the second detection image and the third detection image.
Illustratively, determining the appearance defect of the article to be inspected from the obtained second inspection image (color image) and third inspection image (three-dimensional black-and-white image) is the key step for achieving the final objective. On the second detection image, the edge profile, color features and the like of the article are extracted and analyzed by an image processing algorithm. On the third detection image, the surface shape, angle, curvature and surface detail characteristics of the object are obtained through three-dimensional modeling and rendering techniques. The processing results of the second detection image and the third detection image are then integrated, the theoretical model of the article is compared with its actual condition, and whether the product surface has appearance defects such as scratches, pits, cracks and bulges is determined according to a preset defect judgment standard. By comprehensively utilizing color image analysis and three-dimensional structure analysis, whether the surface of the article has appearance quality problems can be judged accurately and comprehensively, completing automatic visual detection of the product and replacing the traditional manual inspection mode.
On the basis of the above embodiment, the specific step of determining the appearance defect further includes S41 to S42:
S41: and determining the plane defect characteristics of the object to be detected according to the second detection image.
The planar defect characteristics of the object to be inspected are determined from the second inspection image, i.e. the color image, because the color image better reflects the surface color state of the object. Specifically, the second detection image is preprocessed, including denoising, contrast enhancement and the like, to improve image quality. Edge detection and shape segmentation are performed on the image by a selected image processing algorithm, extracting the color-block areas and edge contours of the object. The color value of each block and the gradient smoothness of each edge are analyzed against the theoretical model of the object and the set color parameters. Whether the color consistency among the blocks reveals peeling or color-loss phenomena is judged; whether jagged edges and burrs exist on the contours is judged; and from these, whether the product plane has defects is determined. Thus, by analyzing the high-quality color image, two-dimensional planar defects such as color loss and burrs on the product surface can be effectively detected, providing support for the final judgment, enlarging detection coverage and improving detection quality.
S42: determining depth defect characteristics of the object to be detected according to the third detection image; and determining the appearance defect according to the plane defect characteristic and the depth defect characteristic.
Illustratively, the depth defect feature of the object to be inspected is determined from the third inspection image, i.e., the three-dimensional image, because the three-dimensional image may better reflect the three-dimensional structure and surface features of the object. Specifically, a three-dimensional reconstruction algorithm is utilized to model the third detection image, and a three-dimensional digital model of the object is obtained. And detecting whether the surface of the model has defects in depth directions such as cracks, scratches, pits and the like through a three-dimensional analysis algorithm. And (3) comprehensively analyzing and comparing the planar defect characteristics detected in the previous step with the depth defect characteristics detected in the previous step. And judging the final appearance defect condition of the surface of the object according to the two types of defect results and a preset rule. By analyzing the three-dimensional image, the depth structure defect existing on the surface of the product can be effectively detected, and the surface quality condition of the object can be comprehensively judged by combining the depth structure defect with the planar image analysis result, so that the detection accuracy and coverage are improved, and the automatic and efficient quality defect detection is realized.
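The combination rule of S41 and S42 can be sketched as a simple threshold check over both feature sets. The feature names and tolerance values below are hypothetical, standing in for whatever the preset defect judgment standard actually defines.

```python
def judge_defect(plane_features, depth_features,
                 color_tol=10, jag_tol=10, depth_tol=0.2):
    """Combine planar features (from the second detection image) and
    depth features (from the third detection image): flag a defect
    whenever any feature exceeds its preset tolerance."""
    defects = []
    if plane_features.get("color_deviation", 0) > color_tol:
        defects.append("planar: color inconsistency")
    if plane_features.get("edge_jaggedness", 0) > jag_tol:
        defects.append("planar: burr or jagged edge")
    if abs(depth_features.get("max_deviation", 0)) > depth_tol:
        defects.append("depth: pit or bulge")
    return defects

# Hypothetical measurements: strong color deviation, clean edges,
# and a 0.5 mm depression on the surface
result = judge_defect({"color_deviation": 25, "edge_jaggedness": 3},
                      {"max_deviation": -0.5})
```

The OR-style rule reflects the text's intent: a part passes only if both the planar analysis and the depth analysis find nothing; either channel alone is sufficient to reject.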
In another alternative embodiment of the present application, there is a process of determining the shutter frequency of the black-and-white camera, and the specific steps include S43 to S44:
s43: acquiring the moving speed of an object to be detected and the flicker frequency of three different light sources; determining a first shutter frequency range of the black-and-white camera according to the moving speed; a second shutter frequency range of the black and white camera is determined based on the flicker frequency.
For example, a speed sensor is used to detect the actual running speed of the object to be detected on the production line, and the speed is calculated by image processing software to determine the speed range, i.e. the first frequency range, required to be set by the camera shutter at the speed so as to ensure the shooting definition effect. Detecting a flicker control signal of the three-color light source, measuring parameters of flicker frequency of the flicker control signal, confirming that the parameters are within the frequency range, and setting a second range of camera shutter frequency to ensure that the shutter only receives monochromatic illumination each time. The first frequency range and the second frequency range are intersected to determine a target shutter frequency value of the final camera. The image acquisition of multi-color alternate illumination can be completed when the object moves by shooting with the frequency. Therefore, by matching the calculated parameters, multicolor illumination three-dimensional imaging of the moving object is realized, the detection scene is enlarged, the production stopping detection is avoided, and the efficiency is improved.
S44: and determining the target shutter frequency of the black-and-white camera according to the first shutter frequency range and the second shutter frequency range, and shooting the object to be detected by adopting the black-and-white camera according to the target shutter frequency.
Illustratively, the first frequency range calculated to ensure the definition of the image and the second frequency range calculated to ensure the effect of the monochromatic illumination are taken as intersection parts of the two ranges. The intersection part is the frequency value which meets the two requirements simultaneously. A specific value is selected within this intersection range as the target shutter frequency of the camera. The target frequency takes into account both the effects of motion blur and the switching of polychromatic illumination. And setting an imaging system of the black-and-white camera according to the determined target shutter frequency parameter. Causing it to perform image acquisition at the predetermined frequency. In this way, in the detection process, the image information under the alternate illumination of the polychromatic light can be correctly acquired. By matching the shutter frequency, multicolor illumination imaging control under the motion state is realized, the application range of detection is enlarged, and the effect that the detection can be completed without shutdown is achieved.
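The intersection logic of S43 and S44 reduces to clipping the two ranges against each other and choosing any frequency inside the overlap; taking the midpoint, as below, is an arbitrary choice for illustration, and the example frequencies are invented.

```python
def target_shutter_frequency(first_range, second_range):
    """Intersect the motion-blur range (from the moving speed) and the
    flicker range (from the light sources); return a frequency in Hz
    that satisfies both."""
    lo = max(first_range[0], second_range[0])
    hi = min(first_range[1], second_range[1])
    if lo > hi:
        raise ValueError("ranges do not overlap; adjust belt speed "
                         "or light flicker frequency")
    return (lo + hi) / 2.0  # midpoint: any value in [lo, hi] works

# e.g. 500-1000 Hz needed for the belt speed, 300-800 Hz from the
# flicker control signal of the three-color light source
freq = target_shutter_frequency((500.0, 1000.0), (300.0, 800.0))
```

Raising an error when the ranges are disjoint mirrors the physical situation: if the line moves too fast for the light-source switching rate, no shutter setting can give both sharp images and single-color exposures, and one of the two inputs must change.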
Referring to fig. 2, a color photometric stereo imaging system according to an embodiment of the present application is provided, the system comprising: the device comprises an image acquisition module, a first image processing module, a second image processing module and a defect detection module, wherein:
the image acquisition module is used for shooting the object to be detected under three different light sources by adopting a black-and-white camera to obtain three first detection images;
the first image processing module is used for generating a second detection image according to each first detection image, and the second detection image is a two-dimensional color image;
the second image processing module is used for extracting three-dimensional depth information in each first detection image and generating a three-dimensional third detection image according to the three-dimensional depth information;
and the defect detection module is used for determining the appearance defect of the object to be detected according to the second detection image and the third detection image.
On the basis of the above embodiment, the image acquisition module is further used for determining a first position of the red light source for irradiating the object to be detected, a second position of the blue light source for irradiating the object to be detected and a third position of the green light source for irradiating the object to be detected according to the position of the object to be detected and the station position of the black-and-white camera.
On the basis of the above embodiment, the first image processing module is further configured to read gray-scale images of each first detection image, and map gray scales of each gray-scale image to an R channel, a B channel, and a G channel of a preset second detection image, respectively, so as to obtain the second detection image.
On the basis of the above embodiment, the second image processing module is further configured to extract pixels in each first detection image, and determine a relative position of each pixel; generating a depth image according to each relative position; converting each pixel into a three-dimensional coordinate point according to the depth image; and generating a third detection image according to each three-dimensional coordinate point.
On the basis of the above embodiment, the defect detection module is further configured to determine a planar defect feature of the object to be detected according to the second detection image; determining depth defect characteristics of the object to be detected according to the third detection image; and determining the appearance defect according to the plane defect characteristic and the depth defect characteristic.
On the basis of the above embodiment, the image acquisition module is further used for shooting the object to be detected by adopting a black-and-white camera to obtain a primary detection image of the object to be detected; determining the shape and size of the appearance defect on the object to be detected according to the primary detection image; determining the irradiation angles of the three different light sources according to the position of the appearance defect on the object to be detected; determining the irradiation intensity of the three different light sources according to the shape and size of the appearance defect; and adjusting the three different light sources according to the irradiation angle and the irradiation intensity.
On the basis of the above embodiment, the defect detection module is further used for acquiring the moving speed of the object to be detected and the flicker frequency of the three different light sources; determining a first shutter frequency range of the black-and-white camera according to the moving speed; determining a second shutter frequency range of the black-and-white camera according to the flicker frequency; determining the target shutter frequency of the black-and-white camera according to the first shutter frequency range and the second shutter frequency range; and shooting the object to be detected by the black-and-white camera according to the target shutter frequency.
It should be noted that, in the device provided by the above embodiment, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
The present application also discloses an electronic device. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application. The electronic device 300 may include: at least one processor 301, at least one network interface 304, a user interface 303, a memory 305, and at least one communication bus 302.
The communication bus 302 is used to enable communication between these components.
The user interface 303 may include a display screen (Display) interface and a camera (Camera) interface; optionally, the user interface 303 may further include a standard wired interface and a standard wireless interface.
The network interface 304 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface).
The processor 301 may include one or more processing cores. The processor 301 connects the various parts of the overall server via various interfaces and lines, and performs the server's functions and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 305 and invoking the data stored in the memory 305. Optionally, the processor 301 may be implemented in hardware in at least one of the forms of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 301 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It will be appreciated that the modem may instead not be integrated into the processor 301 and be implemented by a separate chip.
The memory 305 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 305 includes a non-transitory computer-readable storage medium. The memory 305 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 305 may include a program storage area and a data storage area: the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 305 may also be at least one storage device located remotely from the processor 301. As shown in fig. 3, the memory 305, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of a color photometric stereo imaging method.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used to provide an input interface for a user and to acquire the data the user inputs, while the processor 301 may be used to invoke the application program of the color photometric stereo imaging method stored in the memory 305; when executed by the one or more processors 301, the application causes the electronic device 300 to perform the method of one or more of the embodiments described above. It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as series of combined actions; however, those skilled in the art will understand that the present application is not limited by the order of actions described, as some steps may be performed in other orders or simultaneously. Those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through service interfaces, devices, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes media capable of storing program code, such as a USB flash drive, a portable hard drive, a magnetic disk, or an optical disk.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit its scope; equivalent changes and modifications made according to the teachings of this disclosure fall within its scope. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the disclosure indicated by the claims.
Claims (8)
1. A color photometric stereo imaging method, comprising:
photographing an object to be detected under three different light sources with a black-and-white camera to obtain three first detection images;
generating a second detection image according to each first detection image, wherein the second detection image is a two-dimensional color image;
extracting three-dimensional depth information in each first detection image, and generating a three-dimensional third detection image according to the three-dimensional depth information;
determining appearance defects of the object to be detected according to the second detection image and the third detection image;
wherein before the photographing of the object to be detected under the three different light sources with the black-and-white camera to obtain the three first detection images, the method further comprises:
shooting the object to be detected by adopting the black-and-white camera to obtain a primary detection image of the object to be detected;
determining the shape and the size of the appearance defect on the object to be detected according to the primary detection image;
determining the irradiation angles of the three different light sources according to the position of the appearance defect on the object to be detected;
determining the irradiation intensities of the three different light sources according to the shape and the size of the appearance defect;
determining the three different light sources according to the irradiation angle and the irradiation intensity;
wherein the extracting of the three-dimensional depth information in each first detection image and the generating of the three-dimensional third detection image according to the three-dimensional depth information comprise:
extracting pixels in each first detection image, and determining the relative position of each pixel;
generating a depth image according to each relative position;
converting each pixel into a three-dimensional coordinate point according to the depth image;
and generating the third detection image according to each three-dimensional coordinate point.
2. The method of claim 1, wherein the three different light sources include a red light source, a blue light source and a green light source, and the photographing of the object to be detected with a black-and-white camera under the three different light sources, before obtaining three first detection images, further comprises:
determining a first position of the red light source for irradiating the object to be detected, a second position of the blue light source for irradiating the object to be detected, and a third position of the green light source for irradiating the object to be detected according to the position of the object to be detected and the station position of the black-and-white camera.
3. The method of claim 1, wherein generating a second detection image from each of the first detection images comprises:
and reading gray images of the first detection images, and mapping the gray of each gray image to an R channel, a B channel and a G channel of a preset second detection image to obtain the second detection image.
4. The method according to claim 1, wherein determining an appearance defect of the object to be detected from the second detection image and the third detection image includes:
determining the plane defect characteristics of the object to be detected according to the second detection image;
determining depth defect characteristics of the object to be detected according to the third detection image;
and determining the appearance defect according to the plane defect characteristic and the depth defect characteristic.
5. The method of claim 1, wherein the photographing of the object to be detected under the three different light sources with the black-and-white camera to obtain the three first detection images further comprises:
acquiring the moving speed of the object to be detected and the flicker frequency of the three different light sources;
determining a first shutter frequency range of the black-and-white camera according to the moving speed;
determining a second shutter frequency range of the black-and-white camera according to the flicker frequency;
determining a target shutter frequency of the black-and-white camera based on the first shutter frequency range and the second shutter frequency range,
and shooting the object to be detected by adopting the black-and-white camera according to the target shutter frequency.
6. A color photometric stereo imaging system, the system comprising:
the image acquisition module is used for shooting the object to be detected under three different light sources by adopting a black-and-white camera to obtain three first detection images;
the first image processing module is used for generating a second detection image according to each first detection image, and the second detection image is a two-dimensional color image;
the second image processing module is used for extracting three-dimensional depth information in each first detection image and generating a three-dimensional third detection image according to the three-dimensional depth information;
the defect detection module is used for determining appearance defects of the object to be detected according to the second detection image and the third detection image;
wherein before photographing the object to be detected under the three different light sources with the black-and-white camera, the image acquisition module is further configured to:
shooting the object to be detected by adopting the black-and-white camera to obtain a primary detection image of the object to be detected;
determining the shape and the size of the appearance defect on the object to be detected according to the primary detection image;
determining the irradiation angles of the three different light sources according to the position of the appearance defect on the object to be detected;
determining the irradiation intensities of the three different light sources according to the shape and the size of the appearance defect;
determining the three different light sources according to the irradiation angle and the irradiation intensity;
wherein the extracting of the three-dimensional depth information in each first detection image and the generating of the three-dimensional third detection image according to the three-dimensional depth information comprise:
extracting pixels in each first detection image, and determining the relative position of each pixel;
generating a depth image according to each relative position;
converting each pixel into a three-dimensional coordinate point according to the depth image;
and generating the third detection image according to each three-dimensional coordinate point.
7. An electronic device comprising a processor, a memory, a user interface, and a network interface, the memory being for storing instructions, the user interface and the network interface being for communicating with other devices, and the processor being for executing the instructions stored in the memory to cause the electronic device to perform the color photometric stereo imaging method of any one of claims 1-6.
8. A computer-readable storage medium storing instructions which, when executed, perform the steps of the color photometric stereo imaging method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311798734.4A CN117459700B (en) | 2023-12-26 | 2023-12-26 | Color luminosity three-dimensional imaging method, system, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117459700A CN117459700A (en) | 2024-01-26 |
CN117459700B true CN117459700B (en) | 2024-03-26 |
Family
ID=89580394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311798734.4A Active CN117459700B (en) | 2023-12-26 | 2023-12-26 | Color luminosity three-dimensional imaging method, system, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117459700B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118348029A (en) * | 2024-05-09 | 2024-07-16 | 山东中清智能科技股份有限公司 | Surface defect detection method and device for light-emitting chip |
CN118425148A (en) * | 2024-05-11 | 2024-08-02 | 山东中清智能科技股份有限公司 | AOI detection method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102980526A (en) * | 2012-08-23 | 2013-03-20 | 杭州先临三维科技股份有限公司 | Three-dimensional scanister using black and white camera to obtain color image and scan method thereof |
KR20150054656A (en) * | 2013-11-12 | 2015-05-20 | 엘지전자 주식회사 | Digital device and method for processing three dimensional image thereof |
CN115393555A (en) * | 2022-08-24 | 2022-11-25 | 奕目(上海)科技有限公司 | Three-dimensional image acquisition method, terminal device and storage medium |
CN115460386A (en) * | 2022-08-31 | 2022-12-09 | 武汉精立电子技术有限公司 | Method and system for acquiring color image by using black and white camera |
CN117011214A (en) * | 2022-08-30 | 2023-11-07 | 腾讯科技(深圳)有限公司 | Object detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||