CN117692737A - 3.5D camera measuring method for measuring texture and three-dimensional morphology of object - Google Patents


Info

Publication number: CN117692737A
Authority: CN (China)
Prior art keywords: objective lens, image, camera, information, texture
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202311705925.1A (Chinese-language application)
Inventors: 杨佳苗, 沈阳, 李林
Current assignee: Shanghai Jiaotong University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shanghai Jiaotong University
Application filed by Shanghai Jiaotong University
Classification: Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object, belonging to the technical field of optical equipment. The 3.5D camera mainly comprises an integrated base and, mounted on it in sequence along the light incidence direction, an objective lens, a barrel lens and an image sensor. In the measuring method, the objective lens scans along the optical axis direction so that the object plane conjugate to the photosensitive surface of the image sensor changes; as the objective lens moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes, and processing these images yields the texture information and three-dimensional morphology information of the object simultaneously. The method accurately matches the surface texture information with the three-dimensional morphology information of the measured object, reduces the dependence on the surface quality of the measured object, addresses a bottleneck of machine vision in industrial inspection applications, and is of significance for promoting the development of "Made in China 2025" and "Industry 4.0".

Description

3.5D camera measuring method for measuring texture and three-dimensional morphology of object
Technical Field
The invention belongs to the technical field of optical equipment, and particularly relates to a measuring method of a 3.5D camera for measuring texture and three-dimensional morphology of an object.
Background
The level of domestic industrial automation is rising continuously, and the demand for machine-vision-based appearance inspection of industrial products is growing rapidly. Traditional machine vision is mainly aimed at planar two-dimensional imaging: computer image-processing algorithms enable rapid inspection of parameters such as the texture, shape and color of an object, and the technique is widely applied in industrial automated inspection. However, conventional two-dimensional visual inspection can only estimate the three-dimensional shape of an object by combining two-dimensional images with empirical algorithms; its accuracy is low, it is prone to misjudgment caused by shooting angle and illumination, and it is therefore unsuitable for inspecting three-dimensional shape information.
To solve the above problems, three-dimensional machine vision has been proposed and widely adopted for industrial inspection. The laser line-scanning three-dimensional inspection camera is the most widely used: it acquires an image of a laser line modulated by the object, from which the height of each point on the object is calculated to obtain its three-dimensional information. Working on a different principle, the light-field three-dimensional camera uses a microlens array to resolve the light field of the measurement light and then inverts the three-dimensional shape of the object using the image-space spectrum principle. However, owing to their imaging mechanisms, both the laser line-scanning camera and the light-field camera place very high demands on the surface quality of the object under test and cannot meet the three-dimensional imaging requirements of diverse objects.
In industrial practice it is often necessary to acquire both the surface texture information and the three-dimensional topography information of an object. Conventional two-dimensional machine vision acquires only the surface texture of the object, and that texture is blurred wherever the imaging is out of focus. Existing three-dimensional machine vision can collect the three-dimensional information of an object but loses its surface texture, which limits many application scenarios; moreover, inconsistency in the surface quality of the object strongly affects the measurement result.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a measuring method of a 3.5D camera for measuring the texture and the three-dimensional morphology of an object.
The invention is realized by the following technical scheme:
The 3.5D camera comprises an integrated base on which an objective lens, a barrel lens and an image sensor are arranged in sequence along the light incidence direction; the objective lens is mounted on an objective lens translation stage, and the translation stage is slidably connected with the integrated base.
The measuring method of the 3.5D camera comprises the following steps:
the objective lens of the S1.3.5D camera is arranged at the initial position, an object to be measured is arranged in the field of view of the 3.5D camera, and the illumination brightness and the exposure parameters of the 3.5D camera are adjusted, so that the whole appearance of the object to be measured can be primarily captured by the 3.5D camera.
S2. Driven by the objective lens translation stage, the objective lens moves along the optical axis direction and scans each characteristic region of the object to be measured at fixed intervals, changing the object plane conjugate to the photosensitive surface of the image sensor; as the objective lens moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes.
Based on the working mode of the objective lens moving scanning, the 3.5D camera realizes two functions:
the first is a fast auto-focus function, which enables fast focusing of a specific area of an object to be measured.
The second is a measurement function, which realizes synchronous detection of the fused texture information and three-dimensional morphology information of the object to be measured. The measurement function comprises the following steps: 1) set the start and stop positions of the objective scan and the shooting interval of the image sensor according to the detection requirements; 2) the objective lens is driven by the translation stage to the start position along the optical axis direction; 3) starting from the start position, the objective scans each characteristic region of the object to be measured at the set shooting interval, changing the object plane conjugate to the photosensitive surface of the image sensor; as the objective moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes, and movement and shooting stop at the end position; 4) finally, the texture information and three-dimensional morphology information of the object are calculated from the images and position information acquired at the different positions by the texture and three-dimensional morphology information processing algorithms.
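The scan-and-record cycle of steps 1)-3) can be sketched as follows. The `stage` and `camera` objects are hypothetical stand-ins for the translation-stage and image-sensor drivers, which the patent does not specify:

```python
# Schematic sketch of the measurement scan: step the objective from the start
# to the stop position and store each frame together with the objective
# position that produced it. `stage` and `camera` are hypothetical drivers.
def scan_stack(stage, camera, z_start, z_stop, step):
    stage.move_to(z_start)                   # step 2): go to the start position
    stack = []                               # (z, image) pairs for later processing
    z = z_start
    while z <= z_stop:                       # step 3): scan at the shooting interval
        stack.append((z, camera.capture()))  # pair each image with its position
        z += step
        stage.move_to(z)
    return stack                             # input to the step-4) algorithms
```

The returned list of (position, image) pairs is exactly what the texture and morphology processing algorithms of step 4) consume.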
Further, in the 3.5D camera, a driving motor is installed on the integrated base, the output end of the driving motor is connected with a screw rod, and the objective lens translation stage is installed on the screw rod.
Further, in the 3.5D camera, the image sensor is a gray-scale or color image sensor.
Further, in the measuring method of the 3.5D camera, in step S1 adjusting the illumination brightness means adjusting the brightness of the active illumination of the object to be measured so as to improve image acquisition. The active illumination methods include: 1) annular illumination, in which an annular light source is mounted on the objective lens; 2) coaxial illumination, in which a beam splitter is added between the objective lens and the barrel lens and a coaxial illumination system is added in the reflection direction of the beam splitter.
Further, in the measuring method of the 3.5D camera, in step S2 the 3.5D camera reads the displacement of the objective lens in real time and triggers the image sensor to take one shot each time the objective lens moves one unit step; after each shot, the image is paired with the objective position, and the image at each height is evaluated in real time by the saliency evaluation function.
Further, in the measuring method of the 3.5D camera, the fast auto-focusing function includes the following steps:
1) Select a position (x₀, y₀) to be focused.
2) The objective lens is moved to the initial position by the objective lens translation stage and then scanned along the specified direction, while the image sensor acquires, at fixed time intervals, the image of the object plane corresponding to the current objective position.
3) Each time the image sensor acquires an image, the information at position (x₀, y₀) in that image is quickly processed to obtain the saliency evaluation function value f_sal(·) at that position, and f_sal(·) is stored together with the position information z of the objective lens translation stage.
4) From the saliency evaluation function values f_sal(·) at the different positions z, the position z₀ corresponding to the extremum of f_sal(·) is quickly obtained; z₀ is the objective position at which the specified position (x₀, y₀) is in focus.
5) The objective lens is quickly moved to z₀ by the translation stage, completing the system's fast focusing on position (x₀, y₀).
6) A start and an end position are set near z₀, the motion resolution of the translation stage is increased, and the image information near z₀ is scanned more finely; step 4) is then repeated to achieve higher-precision focusing.
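The coarse-to-fine search of steps 2)-6) can be sketched as below, assuming hypothetical `capture_at` (move the stage and grab a frame) and `f_sal` (any saliency measure) callables; the real system works against hardware drivers rather than these stand-ins:

```python
# Minimal coarse-to-fine autofocus sketch: scan z, keep (z, saliency) pairs,
# take the extremum, then rescan a narrow bracket around it at a finer step.
import numpy as np

def autofocus(capture_at, f_sal, z_start, z_stop, coarse_step, fine_step):
    """Return the objective position z0 that maximizes the saliency measure."""
    def scan(z0, z1, step):
        zs = np.arange(z0, z1 + step / 2, step)
        vals = [f_sal(capture_at(z)) for z in zs]   # steps 2)-3): acquire + evaluate
        return zs[int(np.argmax(vals))]             # step 4): extremum position
    z_coarse = scan(z_start, z_stop, coarse_step)   # step 5): coarse focus
    lo, hi = z_coarse - coarse_step, z_coarse + coarse_step
    return scan(lo, hi, fine_step)                  # step 6): finer rescan near z0
```

The bracket width of one coarse step on either side of the coarse optimum guarantees the true peak is inside the fine rescan range.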
Further, in the measuring method of the 3.5D camera, the saliency evaluation function is realized in the following ways:
1) Based on information statistics: for an image f(x, y) of size M×N, the gray variance function Variance is

$$\text{Variance}=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x,y)-\mu\right]^{2}$$

where μ is the mean of the image gray values:

$$\mu=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}f(x,y)$$

2) Based on frequency-domain information: for an image f(x, y) of size M×N, the spatial frequency function SF is

$$SF=\sqrt{RF^{2}+CF^{2}}$$

where RF and CF are the row and column frequencies, respectively:

$$RF=\sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[f(x,y)-f(x,y-1)\right]^{2}},\qquad CF=\sqrt{\frac{1}{MN}\sum_{x=2}^{M}\sum_{y=1}^{N}\left[f(x,y)-f(x-1,y)\right]^{2}}$$

3) Based on the spatial domain: for a window of size N×N centered at (x₀, y₀), the sum-modified-Laplacian function SML is

$$SML(x_{0},y_{0})=\sum_{x=x_{0}-N}^{x_{0}+N}\ \sum_{y=y_{0}-N}^{y_{0}+N}\nabla_{ML}^{2}f(x,y)\quad\text{for}\ \nabla_{ML}^{2}f(x,y)\geq T$$

where T is the discrimination threshold, N is the window size, and ∇²_ML f(x, y) is the discrete approximation of the modified Laplacian ML:

$$\nabla_{ML}^{2}f(x,y)=\left|2f(x,y)-f(x-\Delta,y)-f(x+\Delta,y)\right|+\left|2f(x,y)-f(x,y-\Delta)-f(x,y+\Delta)\right|$$

where Δ is the pixel spacing (typically Δ = 1).
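Under the assumption that the three measures take the standard forms described above, a minimal NumPy sketch is (function names are illustrative, not from the patent):

```python
# Sketch of the three saliency (focus) measures, assuming a grayscale image
# held in a NumPy array. These are textbook focus-measure forms.
import numpy as np

def variance_measure(img):
    """Gray-variance measure: mean squared deviation from the image mean."""
    f = img.astype(np.float64)
    return np.mean((f - f.mean()) ** 2)

def spatial_frequency(img):
    """Frequency-domain measure SF = sqrt(RF^2 + CF^2)."""
    f = img.astype(np.float64)
    rf2 = np.mean(np.diff(f, axis=1) ** 2)   # row frequency: horizontal differences
    cf2 = np.mean(np.diff(f, axis=0) ** 2)   # column frequency: vertical differences
    return np.sqrt(rf2 + cf2)

def sml(img, x, y, n=4, step=1, thresh=0.0):
    """Sum-modified-Laplacian accumulated in a window around (x, y)."""
    f = img.astype(np.float64)
    s = step
    # modified Laplacian on the interior grid (offset by s on each side)
    ml = (np.abs(2 * f[s:-s, s:-s] - f[:-2*s, s:-s] - f[2*s:, s:-s]) +
          np.abs(2 * f[s:-s, s:-s] - f[s:-s, :-2*s] - f[s:-s, 2*s:]))
    win = ml[max(x - n, 0):x + n + 1, max(y - n, 0):y + n + 1]
    return win[win >= thresh].sum()          # keep only values above the threshold
```

All three return larger values for sharper (in-focus) image content, which is what the scan procedure exploits.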
further, in the measurement method of the 3.5D camera, in step S2, the processing method for the texture information in the measurement function is as follows:
1) Let the image size be H W, pass through the feature space transform function f trans (. Cndot.) performing feature decomposition on the image; each image I in the original image sequence obtained by equally-spaced scanning of the objective lens n WhereinPerforming multi-scale decomposition into sub-images respectively containing different frequency components>Where n=1, 2..where N is the image sequence number, c e [2, c]For component labels, C is the decomposition level.
2) By evaluating the function f by saliency sal (. Cndot.) for each image componentEvaluation was carried out, and the evaluation result was +.>And constructing a fusion weight map ++according to the significance level evaluation result>Wherein->And->
3) By filtering function, the weight map is fusedThe weight values with high degree of correlation and adjacent positions are subjected to cross optimization, so that the overall signal-to-noise ratio SNR of the fused weight graph is improved overall Signal-to-noise ratio SNR for each sub-region local
4) By means of weightsCarrying out weighted fusion on the image sequence of the multi-scale decomposition; specifically, for each sub-picture +.>And its corresponding weight map->Fusion component->Is obtained by weighted averaging of pixel values at the same position:
where (x, y) is the pixel position, W i,j (xY) is the fusion weight of the image component I, j at position (x, y), I i,j (x, y) is the pixel value of the image component i, j at position (x, y); thus, the fused pixel value for each location (x, y)I.e. a weighted average thereof over all image sequences and frequency components.
By means of a feature space transformation function f trans Inverse transformation of (-)These fusion components can be recombined to obtain a globally clear fusion texture image +.>
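A deliberately simplified, single-scale version of this weighted fusion can be sketched as follows. It uses local variance as the saliency measure and skips the multi-scale decomposition and weight-map filtering of steps 1) and 3), so it illustrates only the per-pixel weighting idea:

```python
# Simplified all-in-focus fusion: each frame of the stack gets a per-pixel
# weight from a local-contrast map, weights are normalized across the stack,
# and the fused texture is the weighted average. (The patent's full scheme
# additionally decomposes each frame into frequency components.)
import numpy as np

def local_contrast(img, k=3):
    """Local variance in a k x k neighbourhood via a sliding window."""
    f = img.astype(np.float64)
    pad = k // 2
    fp = np.pad(f, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(fp, (k, k))
    return win.var(axis=(-2, -1))

def fuse_stack(frames):
    """Weighted average of the stack; sharper pixels dominate the result."""
    w = np.stack([local_contrast(f) for f in frames])
    w = w / np.maximum(w.sum(axis=0), 1e-12)   # normalize weights per pixel
    return (w * np.stack([f.astype(np.float64) for f in frames])).sum(axis=0)
```

Pixels that are in focus in some frame have high local contrast there and so carry essentially all of the fusion weight at that position.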
Further, in the measuring method of the 3.5D camera, in step S2 the three-dimensional morphology information in the measurement function is processed as follows:
1) Let the image cube formed by the original image sequence be C_orig ∈ R^{H×W×N}, where H and W are the height and width of the images and N is the number of images in the sequence. C_orig is evaluated at pixel level or window level with the saliency evaluation function f_sal(·) to obtain an initial saliency evaluation cube S_init ∈ R^{H×W×N}.
2) The fused texture image I_fused obtained by the texture information processing provides prior knowledge of color, shape and distance; an edge-preserving guided filtering method is applied to optimize S_init, yielding a corrected saliency evaluation cube S_corr.
3) For each pixel (i, j) of S_corr, the saliency evaluation values of that pixel in the N image sequences are extracted to form a set {v_1, v_2, …, v_N}; this set is processed to establish the saliency evaluation curve Curve_ij(k) = v_k, where k ∈ [1, N]. A higher-order polynomial p(k) is then fitted to Curve_ij and the position corresponding to the maximum saliency value on the fitted curve is marked; these marked positions correspond to depth positions of the cube C_orig along the optical axis direction and together form the initial depth map D_init.
4) Using the saliency evaluation values and D_init, initial topography confidence information is constructed and encoded as a binary mask M; with I_fused as the guide image, the regions to be processed are specified according to M and weighted median filtering is applied to D_init, so that strong-texture regions and weak-texture regions are treated separately, yielding the final corrected topography map D_final.
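A minimal depth-from-focus sketch of steps 1) and 3) follows; it fits a polynomial to each pixel's saliency-versus-frame curve and reads off the peak, while the guided filtering of step 2) and the weighted-median correction of step 4) are omitted:

```python
# Depth from focus: for each pixel, fit a polynomial to its saliency curve
# across the N frames, locate the fitted maximum on a dense grid, and map
# the peak frame index to a physical objective position.
import numpy as np

def depth_from_focus(saliency_cube, z_positions, deg=4):
    """saliency_cube: (N, H, W) focus values per frame; returns an (H, W) depth map."""
    n, h, w = saliency_cube.shape
    k = np.arange(n)
    zf = np.linspace(0, n - 1, 20 * n)   # dense grid for evaluating the fitted curve
    depth = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            p = np.polynomial.polynomial.polyfit(k, saliency_cube[:, i, j], deg)
            curve = np.polynomial.polynomial.polyval(zf, p)
            peak = zf[int(np.argmax(curve))]             # sub-frame peak index
            depth[i, j] = np.interp(peak, k, z_positions)  # index -> physical z
    return depth
```

The per-pixel loop keeps the sketch readable; a production version would vectorize the fit or restrict it to a few frames around each discrete maximum.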
In summary, the main innovation points of the technical scheme of the invention are summarized as follows:
1) The invention discloses a structure composition of a 3.5D camera for simultaneously measuring texture information and three-dimensional morphology information of a fusion object.
2) The invention discloses a detection mode for realizing three-dimensional imaging by moving an objective lens 2 to scan at different heights.
3) The invention discloses a function capable of synchronously measuring surface texture information and three-dimensional morphology information of an object to be measured.
4) The invention discloses a saliency evaluation function, a texture information processing method of an object and a three-dimensional morphology information processing method.
Compared with the prior art, the method has the following beneficial effects:
1) The 3.5D camera is simple in structure and small in size, can perform detection work in environments with limited space, and is high in applicability.
2) The measuring method can synchronously measure the surface texture information and the three-dimensional morphology information of the object to be measured, including color texture information, 2D morphology information and 3D morphology information.
3) The invention has no strict requirement on the surface quality distribution of the object to be detected, and can adapt to more detection scenes.
4) The invention detects the surface texture information and three-dimensional morphology information of an object more comprehensively, with more reasonable evaluation means, faster calculation and higher accuracy.
In summary, the invention accurately matches the surface texture information with the three-dimensional morphology information of the measured object, reduces the dependence on the surface quality of the measured object, and addresses a bottleneck of machine vision in industrial inspection applications.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention.
Fig. 1 is a schematic structural diagram of a 3.5D camera in the method of the present invention.
In the figure: 1-integrated base, 2-objective, 3-barrel lens, 4-image sensor, 5-objective translation stage.
Fig. 2 is an optical schematic of the method of the present invention employing annular illumination.
Fig. 3 is an optical schematic of the method of the present invention employing coaxial illumination.
Fig. 4 is an optical schematic of the method of the present invention employing both on-axis illumination and annular illumination.
FIG. 5 is a physical diagram of a first sample to be tested by the method of the present invention.
FIG. 6 is a three-dimensional reconstruction of a first sample under test by the method of the present invention.
FIG. 7 is a pseudo-color height map and partial height data of a first sample under test by the method of the present invention.
FIG. 8 is a physical diagram of a second sample to be tested by the method of the present invention.
FIG. 9 is a three-dimensional reconstruction of a second sample under test by the method of the present invention.
FIG. 10 is a pseudo-color height map and partial height data of a second sample to be tested by the method of the present invention.
Detailed Description
For a better understanding of the present invention, reference will be made to the following description of the invention taken in conjunction with the accompanying drawings and examples. In addition, features in the embodiments and examples of the present application may be combined with each other without conflict.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted", "connected" and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected or integrally connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art in a specific case.
Example 1
This embodiment provides a 3.5D camera for measuring the texture and three-dimensional morphology of an object, as shown in fig. 1. The camera comprises an integrated base 1 with a U-shaped cross-section; an objective lens 2, a barrel lens 3 and an image sensor 4 are arranged between the two side walls of the integrated base 1, in sequence along the light incidence direction, so that light from the object plane is imaged on the image sensor 4 after passing through the objective lens 2 and the barrel lens 3. The image sensor 4, a gray-scale or color sensor used for image recording, is connected at one end to the barrel lens 3 via a C-mount lens thread; the other end of the barrel lens 3 is fixedly connected to the integrated base 1, and the barrel lens 3 cooperates with the objective lens 2 for optical imaging. The objective lens 2 is mounted on the objective lens translation stage 5, the two sides of whose bottom are slidably connected to linear guide rails arranged on the tops of the two side walls of the integrated base 1. A driving motor is installed in the integrated base 1; its output end is connected with a screw rod on which the translation stage 5 is mounted, so that the motor provides the power input and the screw rod drives the translation stage 5 to reciprocate. In this way the objective lens 2, connected through the translation stage 5 to the screw rod and the linear guide rails, reciprocates along the optical axis direction.
Structurally, all components of the 3.5D camera are connected through the integrated base 1, which reduces the complexity of the internal parts and the volume of the whole camera. The integrated base 1 is also provided with threaded holes, allowing flexible mounting according to external installation requirements and meeting the complex demands of industrial visual inspection.
Example 2
This embodiment provides the measuring method of the 3.5D camera described in embodiment 1. The objective lens translation stage 5 drives the objective lens 2 to scan along the optical axis direction, changing the object plane conjugate to the photosensitive surface of the image sensor 4; as the objective lens 2 moves along the optical axis, the image sensor 4 sequentially records images corresponding to the different object planes, and processing these images yields the texture information and three-dimensional morphology information of the object simultaneously. The specific measuring method comprises the following steps:
the objective lens of the S1.3.5D camera is arranged at the initial position, an object to be measured is arranged in the field of view of the 3.5D camera, and the illumination brightness and the exposure parameters of the 3.5D camera are adjusted, so that the whole appearance of the object to be measured can be primarily captured by the 3.5D camera.
In this step, for stable external illumination conditions and the same kind of object to be measured, the illumination and exposure brightness of the 3.5D camera must be controlled when the object is placed in the field of view. Adjusting the illumination brightness means adjusting the brightness of the active illumination of the object so as to illuminate it better and thus improve image acquisition. The active illumination methods include: 1) annular illumination, in which an annular light source is mounted on the objective lens (optical schematic in fig. 2); 2) coaxial illumination, in which a beam splitter is added between the objective lens and the barrel lens and a coaxial illumination system is added in the reflection direction of the beam splitter (optical schematic in fig. 3); 3) annular plus coaxial illumination (optical schematic in fig. 4). The active illumination of this embodiment may therefore employ annular illumination, coaxial illumination, or both. The annular illumination is an illumination system fixed on the objective lens. The coaxial illumination comprises a light source, a collimating mirror, a reflecting mirror and a beam splitter; the illumination beam is reflected from the surface of the measured object, enters the objective lens and, after passing through the barrel lens, is collected by the image sensor.
S2. Driven by the objective lens translation stage, the objective lens moves along the optical axis direction and scans each characteristic region of the object to be measured at fixed intervals, changing the object plane conjugate to the photosensitive surface of the image sensor; as the objective lens moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes. Specifically, the 3.5D camera reads the displacement of the objective lens in real time and triggers the image sensor to take one shot each time the objective lens moves one unit step; after each shot, the image is paired with the objective position, and the image at each height is evaluated in real time by the saliency evaluation function.
The saliency evaluation function is realized in the following ways:
1) Based on information statistics: for an image f(x, y) of size M×N, the gray variance function Variance is

$$\text{Variance}=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x,y)-\mu\right]^{2}$$

where μ is the mean of the image gray values:

$$\mu=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}f(x,y)$$

2) Based on frequency-domain information: for an image f(x, y) of size M×N, the spatial frequency function SF is

$$SF=\sqrt{RF^{2}+CF^{2}}$$

where RF and CF are the row and column frequencies, respectively:

$$RF=\sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[f(x,y)-f(x,y-1)\right]^{2}},\qquad CF=\sqrt{\frac{1}{MN}\sum_{x=2}^{M}\sum_{y=1}^{N}\left[f(x,y)-f(x-1,y)\right]^{2}}$$

3) Based on the spatial domain: for a window of size N×N centered at (x₀, y₀), the sum-modified-Laplacian function SML is

$$SML(x_{0},y_{0})=\sum_{x=x_{0}-N}^{x_{0}+N}\ \sum_{y=y_{0}-N}^{y_{0}+N}\nabla_{ML}^{2}f(x,y)\quad\text{for}\ \nabla_{ML}^{2}f(x,y)\geq T$$

where T is the discrimination threshold, N is the window size, and ∇²_ML f(x, y) is the discrete approximation of the modified Laplacian ML:

$$\nabla_{ML}^{2}f(x,y)=\left|2f(x,y)-f(x-\Delta,y)-f(x+\Delta,y)\right|+\left|2f(x,y)-f(x,y-\Delta)-f(x,y+\Delta)\right|$$

where Δ is the pixel spacing (typically Δ = 1).
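As an illustration that such measures behave as focus indicators, the toy sketch below (an assumption-laden stand-in, not part of the patent) applies an increasingly strong box blur to a test pattern as a proxy for defocus and computes the gray variance of the central crop; the score falls monotonically as the blur radius grows:

```python
# Toy defocus experiment: the gray-variance saliency of a checkerboard
# pattern decreases monotonically as a box blur (defocus proxy) widens.
import numpy as np

def box_blur(img, r):
    """Separable box blur of radius r (identity when r = 0)."""
    if r == 0:
        return img.astype(float)
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"),
                              0, img.astype(float))
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

pattern = ((np.indices((32, 32)).sum(axis=0) % 2) * 255).astype(float)
# Gray variance of the central crop (edges excluded to avoid padding effects)
scores = [box_blur(pattern, r)[4:-4, 4:-4].var() for r in (0, 1, 2, 3)]
```

The same monotone behaviour is what lets the scan procedure locate the in-focus objective position as the extremum of the saliency curve.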
based on the working mode of the objective lens moving scanning, the 3.5D camera realizes two functions:
A. the first is a fast auto-focus function, which enables fast focusing of a specific area of an object to be measured.
The quick auto-focusing function includes the following steps:
1) Select a position (x₀, y₀) to be focused.
2) The objective lens is moved to the initial position by the objective lens translation stage and then scanned along the specified direction, while the image sensor acquires, at fixed time intervals, the image of the object plane corresponding to the current objective position.
3) Each time the image sensor acquires an image, the information at position (x₀, y₀) in that image is quickly processed to obtain the saliency evaluation function value f_sal(·) at that position, and f_sal(·) is stored together with the position information z of the objective lens translation stage.
4) From the saliency evaluation function values f_sal(·) at the different positions z, the position z₀ corresponding to the extremum of f_sal(·) is quickly obtained; z₀ is the objective position at which the specified position (x₀, y₀) is in focus.
5) The objective lens is quickly moved to z₀ by the translation stage, completing the system's fast focusing on position (x₀, y₀).
6) A start and an end position are set near z₀, the motion resolution of the translation stage is increased, and the image information near z₀ is scanned more finely; step 4) is then repeated to achieve higher-precision focusing.
B. The second is a measurement function, which realizes synchronous detection of the fused texture information and three-dimensional morphology information of the object to be measured.
The measurement function includes the steps of:
1) The start-stop position of the objective lens scanning and the shooting interval of the image sensor are set according to the detection requirement.
2) The objective lens is driven by the objective lens translation stage to move to a starting position along the optical axis direction.
3) Starting from the start position, the objective scans each characteristic region of the object to be measured at the set shooting interval, changing the object plane conjugate to the photosensitive surface of the image sensor; as the objective moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes, and movement and shooting stop at the end position.
4) Finally, the texture information and three-dimensional morphology information of the object are calculated from the images and position information acquired at the different positions by the texture and three-dimensional morphology information processing algorithms.
The processing method for texture information in the measurement function is as follows:
1) Let the image size be H W, pass through the feature space transform function f trans (. Cndot.) performing feature decomposition on the image; each image I in the original image sequence obtained by equally-spaced scanning of the objective lens n WhereinPerforming multi-scale decomposition into sub-images respectively containing different frequency components>Where n=1, 2..where N is the image sequence number, c e [2, c]For component labels, C is the decomposition level.
2) By evaluating the function f by saliency sal (. Cndot.) for each image componentEvaluation was carried out, and the evaluation result was +.>And constructing a fusion weight map ++according to the significance level evaluation result>Wherein->And->
3) By filtering function, the weight map is fusedThe weight values with high degree of correlation and adjacent positions are subjected to cross optimization, so that the overall signal-to-noise ratio SNR of the fused weight graph is improved overall AndSignal-to-noise ratio SNR for each sub-region local
4) The multi-scale-decomposed image sequence is fused by weighting with W_n^c. Specifically, for each sub-image I_{i,j} and its corresponding weight map W_{i,j}, the fusion component F_j is obtained by weighted averaging of the pixel values at the same position:

F_j(x, y) = Σ_i W_{i,j}(x, y) · I_{i,j}(x, y)

where (x, y) is the pixel position, W_{i,j}(x, y) is the fusion weight of image component i, j at position (x, y), and I_{i,j}(x, y) is the pixel value of image component i, j at position (x, y). The fused pixel value at each position (x, y) is thus the weighted average over all image sequences and frequency components.
Through the inverse transform f_trans^(−1)(·) of the feature-space transform function, these fusion components are recombined to obtain a globally sharp fused texture image I_fused.
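As an illustration only, the fusion pipeline above can be sketched in Python at a single scale: a local-contrast measure stands in for the saliency function f_sal, and the multi-scale transform f_trans and the weight-map filtering step are omitted for brevity:

```python
import numpy as np

def fuse_texture(stack):
    """stack: sequence of N images (each H x W) from the focal scan.
    Returns a fused image whose pixels favour the sharpest frame.
    Saliency proxy: squared deviation from a 3x3 box mean."""
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    pad = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
    # 3x3 box mean via nine shifted views
    box = sum(pad[:, i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    sal = (stack - box) ** 2 + 1e-12      # per-pixel saliency S_n
    weights = sal / sal.sum(axis=0)       # normalised fusion weights W_n
    return (weights * stack).sum(axis=0)  # weighted average over the sequence
```

In the full method each frequency component would carry its own weight map, and the fused components would be recombined through the inverse transform.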
The processing method for the three-dimensional morphology information in the measurement function is as follows:
1) Let the image cube formed by the original image sequence be C_orig ∈ R^(H×W×N), where H and W are the height and width of the images and N is the number of images in the sequence. Pixel-level or window-level saliency evaluation of C_orig by the saliency evaluation function f_sal(·) yields an initial saliency evaluation cube S_init ∈ R^(H×W×N).
2) The fused texture image I_fused obtained by the texture-information processing provides prior knowledge of color, shape, and distance; an edge-preserving guided filtering method is applied to S_init to optimize it, giving a corrected saliency evaluation cube S_refined.
3) For each pixel (i, j), its saliency evaluation values across the N image sequences are extracted to form a set {v_1, v_2, …, v_N}. This set is processed to establish a saliency evaluation curve Curve_ij(k) = v_k, where k ∈ [1, N]. A higher-order polynomial p(k) is then fitted to Curve_ij, and the position corresponding to the maximum saliency value on the curve is marked. These marked positions correspond to positions along the optical-axis direction of the cube C_orig, thereby forming an initial depth map D_init.
4) Initial topography confidence information is constructed from the saliency evaluation values and D_init and encoded as a binary mask M. Using I_fused as the feature map and designating the regions to be processed according to M, weighted median filtering is applied to D_init so that strong-texture and weak-texture regions are treated separately, giving the final corrected topography map D_final.
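The depth-extraction step 3) can be sketched in Python as follows; a quadratic fit stands in for the patent's "higher-order polynomial", the curve is fitted over the objective positions rather than the frame index k, and the guided filtering of step 2) and the mask-based correction of step 4) are omitted:

```python
import numpy as np

def depth_from_saliency(sal_cube, z_positions, deg=2):
    """sal_cube: (H, W, N) saliency evaluation cube; z_positions: the N
    objective positions. For each pixel, fit a polynomial to the saliency
    curve and take the position of its maximum as the depth estimate."""
    h, w, n = sal_cube.shape
    z = np.asarray(z_positions, dtype=float)
    depth = np.empty((h, w))
    fine = np.linspace(z[0], z[-1], 10 * n)   # dense grid for the argmax
    for i in range(h):
        for j in range(w):
            v = sal_cube[i, j, :]             # Curve_ij(k) = v_k
            coeffs = np.polyfit(z, v, deg)    # polynomial fit p(k)
            depth[i, j] = fine[np.argmax(np.polyval(coeffs, fine))]
    return depth
```

Sub-frame depth resolution comes from evaluating the fitted polynomial on a grid finer than the scan step, which is why the fit is preferred over a plain per-pixel argmax.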
The welding area of a welded plate was inspected with the 3.5D camera and the above measuring method; the detection results are shown in Figs. 5 to 10. The texture information and the two- and three-dimensional information of the welding area are clearly obtained from the results, enabling the identification and detection of problem areas in the weld.
The above example describes only one embodiment of the invention in some detail and is not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, and all of these fall within the scope of protection of the invention.

Claims (9)

1. A measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object, characterized in that: the 3.5D camera comprises an integrated base on which an objective lens, a tube lens and an image sensor are arranged in sequence along the light incidence direction; the objective lens is mounted on an objective lens translation stage, and the objective lens translation stage is slidably connected to the integrated base;
the measuring method of the 3.5D camera comprises the following steps:
S1. The objective lens of the 3.5D camera is placed at an initial position, the object to be detected is placed in the field of view of the 3.5D camera, and the illumination brightness and the exposure parameters of the 3.5D camera are adjusted so that the overall appearance of the object to be detected can be preliminarily captured;
S2. The objective lens of the 3.5D camera is driven by the objective lens translation stage to move along the optical-axis direction, scanning each characteristic region of the object to be detected at fixed distance intervals, so that the object plane conjugate to the photosensitive surface of the image sensor changes; as the objective lens moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes;
based on this moving-scan working mode of the objective lens, the 3.5D camera realizes two functions:
the first is a rapid auto-focusing function, which realizes rapid focusing on a designated area of the object to be detected;
the second is a measurement function, which realizes synchronized detection of the fused texture information and three-dimensional morphology information of the object to be detected; the measurement function comprises the following steps: 1) setting the start and end positions of the objective lens scan and the shooting interval of the image sensor according to the detection requirements; 2) driving the objective lens with the objective lens translation stage to the start position along the optical-axis direction; 3) starting from the start position, scanning each characteristic region of the object to be detected at the shooting interval, so that the object plane conjugate to the photosensitive surface of the image sensor changes; as the objective lens moves along the optical axis, the image sensor sequentially records the images corresponding to the different object planes, and movement and shooting stop at the end position; 4) finally, computing the texture information and three-dimensional morphology information of the object from the images and position information acquired at the different positions by a texture-information and three-dimensional-morphology-information processing algorithm.
2. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 1, wherein: a driving motor is arranged on the integrated base, the output end of the driving motor is connected to a lead screw, and the objective lens translation stage is mounted on the lead screw.
3. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 1, wherein: the image sensor is a grayscale or color image sensor.
4. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 1, wherein: in step S1, adjusting the illumination brightness means adjusting the brightness of the active illumination of the object to be detected so as to improve the image acquisition quality; the methods of actively illuminating the object to be detected include: 1) ring illumination: a ring light source is mounted on the objective lens; 2) coaxial illumination: a beam splitter is added between the objective lens and the tube lens, and a coaxial illumination system is added in the reflection direction of the beam splitter.
5. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 1, wherein: in step S2, the 3.5D camera reads the moving distance of the objective lens in real time and triggers the image sensor to take one shot for every unit step the objective lens moves; after each shot, the image is paired with the objective lens position, and the image at each height is evaluated in real time by a saliency evaluation function.
6. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 5, wherein the rapid auto-focusing function comprises the following steps:
1) Selecting a position (x0, y0);
2) The objective lens is moved to the initial position by the objective lens translation stage and then performs a scanning movement along the specified direction, while the image sensor acquires image information of the object plane corresponding to the objective lens at fixed time intervals;
3) After the image sensor acquires each image, the information at the position (x0, y0) of the image is rapidly processed to obtain the saliency evaluation function value f_sal(·) at that position, and this value is stored together with the position information z of the objective lens translation stage;
4) From the saliency evaluation function values f_sal(·) at the different positions z, the position z0 corresponding to the extremum of f_sal(·) is rapidly obtained; z0 is the position value of the objective lens when focused on the designated position (x0, y0);
5) The objective lens is rapidly moved to the position z0 by the objective lens translation stage, completing the rapid focusing of the system on the position (x0, y0);
6) Start and end positions are set near z0, the movement resolution of the objective lens translation stage is increased, the image information near z0 is scanned more finely, and step 4) is repeated, thereby achieving higher-precision focusing.
7. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 5 or 6, wherein the saliency evaluation function is implemented in any one of the following ways:
1) Based on information statistics:
For an image f(x, y) of size M × N, the gray-level variance function Variance is

Variance = (1 / (M·N)) · Σ_{x=1..M} Σ_{y=1..N} [f(x, y) − μ]²

where μ is the mean of the image gray values:

μ = (1 / (M·N)) · Σ_{x=1..M} Σ_{y=1..N} f(x, y)
2) Based on frequency-domain information:
For an image f(x, y) of size M × N, the spatial frequency function SF is

SF = sqrt(RF² + CF²)

where RF and CF are the row frequency and column frequency, respectively:

RF = sqrt( (1 / (M·N)) · Σ_{x=1..M} Σ_{y=2..N} [f(x, y) − f(x, y−1)]² )

CF = sqrt( (1 / (M·N)) · Σ_{x=2..M} Σ_{y=1..N} [f(x, y) − f(x−1, y)]² )
3) Based on the spatial domain:
For a window of size N × N, the sum-of-modified-Laplacian function SML is

SML = Σ_{(x, y) in the N × N window} ∇²_ML f(x, y),  for ∇²_ML f(x, y) ≥ T

where T is the discrimination threshold, N is the window size, and ∇²_ML f(x, y) is the discrete approximation of the modified Laplacian operator ML:

∇²_ML f(x, y) = |2f(x, y) − f(x−1, y) − f(x+1, y)| + |2f(x, y) − f(x, y−1) − f(x, y+1)|
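For illustration, the three evaluation functions of this claim can be written compactly with NumPy. These follow the standard forms of the variance, spatial-frequency, and sum-of-modified-Laplacian focus measures; the exact index conventions of the original equations are not recoverable from the text, so the whole image is used as the window here:

```python
import numpy as np

def variance_measure(f):
    """Gray-level variance: mean squared deviation from the image mean."""
    return np.mean((f - f.mean()) ** 2)

def spatial_frequency(f):
    """SF = sqrt(RF^2 + CF^2) from row/column first differences."""
    rf2 = np.mean(np.diff(f, axis=1) ** 2)  # row frequency (horizontal diffs)
    cf2 = np.mean(np.diff(f, axis=0) ** 2)  # column frequency (vertical diffs)
    return np.sqrt(rf2 + cf2)

def sml(f, t=0.0):
    """Sum of modified Laplacian over interior pixels, thresholded at t."""
    ml = (np.abs(2 * f[1:-1, :] - f[:-2, :] - f[2:, :])[:, 1:-1]
          + np.abs(2 * f[:, 1:-1] - f[:, :-2] - f[:, 2:])[1:-1, :])
    return ml[ml >= t].sum()
```

All three increase with image sharpness, which is what makes them usable as f_sal for both the autofocus search and the per-pixel depth extraction.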
8. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 7, wherein: in step S2, the texture information in the measurement function is processed as follows:
1) let the image size be H × W; each image I_n in the original image sequence obtained by the equally spaced scan of the objective lens, where I_n ∈ R^(H×W), is decomposed by the feature-space transform function f_trans(·) into multi-scale sub-images I_n^c that contain different frequency components, where n = 1, 2, …, N is the image sequence number, c ∈ [1, C] is the component label, and C is the number of decomposition levels;
2) each image component I_n^c is evaluated by the saliency evaluation function f_sal(·), giving the evaluation result S_n^c, and a fusion weight map W_n^c is constructed from the saliency evaluation results, where S_n^c, W_n^c ∈ R^(H×W);
3) a filtering function cross-optimizes the weight values of the fusion weight maps W_n^c that are highly correlated and adjacent in position, improving both the overall signal-to-noise ratio SNR_overall of the fused weight map and the signal-to-noise ratio SNR_local of each sub-region;
4) the multi-scale-decomposed image sequence is fused by weighting with W_n^c; specifically, for each sub-image I_{i,j} and its corresponding weight map W_{i,j}, the fusion component F_j is obtained by weighted averaging of the pixel values at the same position:

F_j(x, y) = Σ_i W_{i,j}(x, y) · I_{i,j}(x, y)

where (x, y) is the pixel position, W_{i,j}(x, y) is the fusion weight of image component i, j at position (x, y), and I_{i,j}(x, y) is the pixel value of image component i, j at position (x, y); the fused pixel value at each position (x, y) is thus the weighted average over all image sequences and frequency components;
through the inverse transform f_trans^(−1)(·) of the feature-space transform function, these fusion components are recombined to obtain a globally sharp fused texture image I_fused.
9. The measuring method of a 3.5D camera for measuring the texture and three-dimensional morphology of an object according to claim 8, wherein: in step S2, the three-dimensional morphology information in the measurement function is processed as follows:
1) let the image cube formed by the original image sequence be C_orig ∈ R^(H×W×N), where H and W are the height and width of the images and N is the number of images in the sequence; pixel-level or window-level saliency evaluation of C_orig by the saliency evaluation function f_sal(·) yields an initial saliency evaluation cube S_init ∈ R^(H×W×N);
2) the fused texture image I_fused obtained by the texture-information processing provides prior knowledge of color, shape and distance; an edge-preserving guided filtering method is applied to S_init to optimize it, giving a corrected saliency evaluation cube S_refined;
3) for each pixel (i, j), its saliency evaluation values across the N image sequences are extracted to form a set {v_1, v_2, …, v_N}; this set is processed to establish a saliency evaluation curve Curve_ij(k) = v_k, where k ∈ [1, N]; a higher-order polynomial p(k) is then fitted to Curve_ij, and the position corresponding to the maximum saliency value on the curve is marked; these marked positions correspond to positions along the optical-axis direction of the cube C_orig, thereby forming an initial depth map D_init;
4) initial topography confidence information is constructed from the saliency evaluation values and D_init and encoded as a binary mask M; using I_fused as the feature map and designating the regions to be processed according to M, weighted median filtering is applied to D_init so that strong-texture and weak-texture regions are treated separately, giving the final corrected topography map D_final.
CN202311705925.1A 2023-12-13 2023-12-13 3.5D camera measuring method for measuring texture and three-dimensional morphology of object Pending CN117692737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311705925.1A CN117692737A (en) 2023-12-13 2023-12-13 3.5D camera measuring method for measuring texture and three-dimensional morphology of object


Publications (1)

Publication Number Publication Date
CN117692737A true CN117692737A (en) 2024-03-12

Family

ID=90126001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311705925.1A Pending CN117692737A (en) 2023-12-13 2023-12-13 3.5D camera measuring method for measuring texture and three-dimensional morphology of object

Country Status (1)

Country Link
CN (1) CN117692737A (en)

Similar Documents

Publication Publication Date Title
US7929044B2 (en) Autofocus searching method
US7456377B2 (en) System and method for creating magnified images of a microscope slide
US10477097B2 (en) Single-frame autofocusing using multi-LED illumination
US8675992B2 (en) Digital microscope slide scanning system and methods
CN111912835B (en) LIBS device and LIBS method with ablation measuring function
US8373789B2 (en) Auto focus system and auto focus method
US10120163B2 (en) Auto-focus method for a coordinate-measuring apparatus
CA3061440C (en) Optical scanning arrangement and method
US8810799B2 (en) Height-measuring method and height-measuring device
US8005290B2 (en) Method for image calibration and apparatus for image acquiring
CN110082360A (en) A kind of sequence optical element surface on-line detection device of defects and method based on array camera
US9508139B2 (en) Apparatus and method to automatically distinguish between contamination and degradation of an article
CN113219622A (en) Objective lens focusing method, device and system for panel defect detection
CN115484371A (en) Image acquisition method, image acquisition device and readable storage medium
CN117692737A (en) 3.5D camera measuring method for measuring texture and three-dimensional morphology of object
TW201520669A (en) Bevel-axial auto-focus microscopic system and method thereof
CN102313524B (en) Image acquiring device and method
JP2021124429A (en) Scanning measurement method and scanning measurement device
KR20100032742A (en) Living body surface morphological measuring system
CN219641581U (en) Concave defect detection device
JP3960862B2 (en) Height measurement method
Teng et al. Autofocus optical imaging system based on image processing
CN114137714B (en) Different-color light source matching detection method of rapid focusing device of amplification imaging system
CN110646168B (en) Longitudinal spherical aberration measurement method of self-focusing lens
US20220358631A1 (en) Optical Measurement of Workpiece Surface using Sharpness Maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination