CN115711592A - Object morphology measuring method based on single-pixel imaging binocular deflection technology

Object morphology measuring method based on single-pixel imaging binocular deflection technology

Info

Publication number
CN115711592A
CN115711592A (application number CN202211311856.1A)
Authority
CN
China
Prior art keywords
display screen
fourier
pixel
binocular
horizontal
Prior art date
Legal status
Pending
Application number
CN202211311856.1A
Other languages
Chinese (zh)
Inventor
肖昌炎
黄威
夏立元
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202211311856.1A
Publication of CN115711592A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an object morphology measuring method based on single-pixel imaging binocular deflection, which comprises the following steps. Step S1: generate multiple horizontal and vertical Fourier base-frequency fringe images based on the Fourier single-pixel imaging principle and the Fourier central slice. Step S2: display the generated horizontal and vertical Fourier base-frequency fringe patterns on a display screen, and collect, through cameras, the Fourier base-frequency fringe patterns distorted by reflection from the surface of the object to be detected. Step S3: resolve the images acquired by the cameras based on the single-pixel imaging principle. Step S4: combine the results into the coordinates of the corresponding display screen points. Step S5: based on binocular deflection, iteratively reconstruct the three-dimensional point cloud and surface normals of the upper surface of the object to be detected from the obtained display screen coordinates within the binocular deflection framework, and obtain the three-dimensional surface morphology of the object through a wavefront reconstruction algorithm. The invention has the advantages of a simple principle, convenient operation, strong anti-interference capability and a wide application range.

Description

Object morphology measuring method based on single-pixel imaging binocular deflection technology
Technical Field
The invention relates to the technical field of object topography measurement, and in particular to an object topography measuring method based on single-pixel imaging and binocular deflection.
Background
In industrial production, there is an increasing demand for topography measurement of specularly reflective objects and of smooth transparent objects with a certain thickness (thick transparent surfaces), such as wafers, glass and lenses.
In the traditional approach, practitioners use the fringe reflection method, which captures the light reflected from the object surface and reconstructs the surface topography of the object under test from the geometric relationship between the camera and the display screen. The method has the advantages of high accuracy and strong flexibility and is often used for measuring the topography of mirror-like objects. However, a thick transparent surface also reflects light from its lower surface; this reflection superposes with the upper-surface reflection to form a "ghost image", which poses a great challenge for subsequent reconstruction and directly affects the resulting accuracy.
Unlike diffusely reflecting materials such as gypsum and wood, the surfaces of smooth objects such as mirrors and thick transparent surfaces are dominated by specular reflection, i.e., the reflected rays are strongly directional. If a projector is used as the modulated light source, the incident light also carries directional information, so only part of the light reflected by the object can be collected by the camera, making subsequent reconstruction difficult. In contrast, the light emitted by a diffuse light source such as an LCD display screen is not directional, so the camera can acquire the modulation pattern as completely as possible and the subsequent reconstruction can proceed smoothly. The approach of using an LCD screen as the modulated light source of the system is called deflectometry; it is suitable for topography measurement of smooth surfaces and has been widely used.
In deflectometric imaging, Wan et al. proposed a frequency-shift method based on fringe-frequency optimization and the Fourier transform, which can measure thick transparent surfaces by separating the lower-surface reflection component in the frequency domain. However, its requirements on image acquisition are very high, and it is susceptible to interference from the DC component and ambient light. Ye et al. proposed a phase-decoupling method that directly solves the phase of the upper surface by projecting frequency-shifted images, thereby realizing lens topography measurement, but it requires high-quality initial values for the iteration and is also sensitive to illumination. In addition, both methods were developed for monocular deflectometry; to resolve the depth ambiguity, monocular deflectometry needs to acquire the spatial pose of the system, including that of the measured object, which places high demands on installation accuracy that are difficult to guarantee in an actual industrial environment.
A further difficulty in solving the "ghost image" problem is that conventional cameras record only light intensity; lacking the light-direction dimension, it is difficult to separate the reflected intensities of the upper and lower surfaces directly from the image captured by the camera.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the technical problems in the prior art, the invention provides an object topography measuring method based on the single-pixel imaging binocular deflection technology that has a simple principle, is convenient to operate, has strong anti-interference capability and has a wide application range.
In order to solve the technical problem, the invention adopts the following technical scheme.
An object topography measuring method based on single-pixel imaging binocular deflection comprises the following steps:
Step S1: generating a plurality of horizontal and vertical Fourier base-frequency fringe images based on the Fourier single-pixel imaging principle and the Fourier central slice;
Step S2: displaying the generated horizontal and vertical Fourier base-frequency fringe patterns through a display screen, and collecting, through cameras, the Fourier base-frequency fringe patterns distorted by reflection from the surface of the object to be detected;
Step S3: resolving the images acquired by the cameras based on the single-pixel imaging principle, and respectively acquiring the horizontal and vertical projection curves of the light transmission coefficient (LTC) corresponding to pixel points on the two camera imaging planes;
Step S4: respectively fitting the peak-point coordinates of the LTC horizontal projection curves and of the vertical projection curves with a Gaussian function, and combining them into the corresponding point coordinates of the display screen;
Step S5: based on binocular deflection, iteratively reconstructing the three-dimensional point cloud and surface normals of the upper surface of the object to be measured from the obtained display screen coordinates within the binocular deflection framework, and obtaining the three-dimensional surface morphology of the object to be measured through a wavefront reconstruction algorithm.
As a further improvement of the method of the invention: in step S1, the Fourier base-frequency fringe images are generated according to

S_i^h(x, y) = A + B·cos(2π f_x·x + iπ/2),  S_i^v(x, y) = A + B·cos(2π f_y·y + iπ/2),  i = 0, 1, 2, 3

where S_i^h(x, y) denotes a horizontal Fourier base-frequency fringe image, S_i^v(x, y) denotes a vertical Fourier base-frequency fringe image, A is the average intensity, B is the modulation intensity, f_x = x/N_s denotes the horizontal frequency, with x the horizontal pixel coordinate of the display screen and N_s the horizontal resolution of the display screen, and f_y = y/M_s denotes the vertical frequency, with y the vertical pixel coordinate of the display screen and M_s the vertical resolution of the display screen.
As a further improvement of the method of the invention: each pixel unit of the camera imaging plane is regarded as an independent individual, and the light intensity values collected in the time sequence of each pixel unit are extracted.
As a further improvement of the method of the invention: the light intensity signal collected by the camera is defined as

I_i(u, v) = I_0 + ∬_Ω P(x, y, u, v)·S_i(x, y) dx dy

where (u, v) denotes the coordinates of a point on the camera imaging plane; (x, y) denotes the coordinates of a point on the display screen; I_0 denotes the ambient light; Ω denotes the pattern area reflected by the object to be measured; P(x, y, u, v) denotes the horizontal or vertical projection of the light transmission coefficient (LTC) of the display screen with respect to that point of the camera imaging plane; and S_i(x, y) denotes a horizontal or vertical Fourier base-frequency fringe pattern.
As a further improvement of the method of the invention: suppose that, for a certain frequency f_x, the light intensity values acquired at a camera pixel (u, v) under the four horizontal base-frequency fringe patterns are I_0(u, v, f_x), I_1(u, v, f_x), I_2(u, v, f_x) and I_3(u, v, f_x); the four acquired light intensity values are processed as

F{P_v(x, y, u, v)}(f_x) ∝ [I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]

where F{·} denotes the forward Fourier transform and P_v(x, y, u, v) denotes the horizontal projection of the light transmission coefficient (LTC) of the display screen with respect to the point (u, v) of the camera imaging plane. It follows that

P_v(x, y, u, v) ∝ F⁻¹{[I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]}

where F⁻¹{·} denotes the inverse Fourier transform. Similarly, collecting the four vertical base-frequency fringe patterns generated for a given frequency yields, by the same procedure, the corresponding vertical LTC projection P_h(x, y, u, v).
As a further improvement of the method of the invention: in step S4, Gaussian fitting is performed on the highest peak of the P_v(x, y, u, v) and P_h(x, y, u, v) obtained in step S3, the abscissas of the fitted peak points are obtained, and they are combined into the coordinates of the corresponding point of the display screen.
As a further improvement of the method of the invention: the obtained LTC curve is a superposition of a plurality of pulse curves, which is equivalent to

P(x, y, u, v) = Σ_i λ_i·P_i(x, y, u, v)

where P_1(x, y, u, v) denotes the LTC of the 1st reflection, i.e., the upper-surface reflection, and P_i(x, y, u, v) denotes the LTC of the i-th reflection; light is both reflected and refracted when passing through a transparent object, and the number of reflections equals the number of refractions; λ_i denotes the attenuation coefficient of the corresponding reflected light.
As a further improvement of the method of the invention: in step S5, the iteration is performed for all pixel points to be reconstructed on the camera imaging plane to obtain the point cloud of the target; during the reconstruction, the surface normal of the object to be measured is calculated as an indirect quantity, and the surface topography map of the object to be measured is reconstructed according to the Zernike polynomials.
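The normal-to-topography step above can be illustrated with a minimal sketch. The patent specifies a Zernike-polynomial wavefront reconstruction; the example below instead uses the classic Frankot-Chellappa least-squares integration of the gradient field derived from the normals, purely to show how a height map follows from a recovered normal field. The function names, the synthetic paraboloid test surface and the choice of integrator are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def normals_to_gradients(normals):
    """Convert unit surface normals of shape (H, W, 3) into slopes dz/dx, dz/dy.

    Assumes a height field z(x, y) whose unnormalized normal is (-zx, -zy, 1),
    so zx = -nx / nz and zy = -ny / nz.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    return -nx / nz, -ny / nz

def frankot_chellappa(zx, zy):
    """Least-squares integration of a gradient field via the FFT.

    Stand-in for the patent's Zernike-based wavefront reconstruction: returns
    the height map whose gradients best match (zx, zy), up to a constant offset.
    """
    h, w = zx.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)      # row (vertical) frequencies, cycles/pixel
    fx = np.fft.fftfreq(w).reshape(1, -1)      # column (horizontal) frequencies, cycles/pixel
    wx, wy = 2j * np.pi * fx, 2j * np.pi * fy
    Zx, Zy = np.fft.fft2(zx), np.fft.fft2(zy)
    denom = wx ** 2 + wy ** 2
    denom[0, 0] = 1.0                          # avoid division by zero at the DC term
    Z = (wx * Zx + wy * Zy) / denom
    Z[0, 0] = 0.0                              # the absolute height offset is unrecoverable
    return np.real(np.fft.ifft2(Z))

# Self-check on a synthetic paraboloid (a rough stand-in for a lens surface).
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
z_true = 0.1 * (xx ** 2 + yy ** 2)
zy_t, zx_t = np.gradient(z_true)               # np.gradient returns d/d(row), d/d(col)
normals = np.dstack([-zx_t, -zy_t, np.ones_like(z_true)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
z_rec = frankot_chellappa(*normals_to_gradients(normals))
# Boundary effects dominate the residual because the FFT assumes periodicity.
print("relative RMS error:", np.std(z_rec - z_true) / np.std(z_true))
```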
Compared with the prior art, the invention has the advantages that:
1. The object topography measuring method based on the single-pixel imaging binocular deflection technology has a simple principle and is convenient to operate; it inherits the advantages of single-pixel imaging and therefore has strong resistance to ambient-light interference. Because the position of the measured object does not need to be known a priori, the method adapts well to industrial production environments and meets practical measurement requirements.
2. The method treats each pixel unit of the camera imaging plane as an independent unit and, within the binocular deflection framework, directly reconstructs the one-dimensional light transmission curve of the light source plane using the single-pixel imaging principle. It thereby obtains the light-source coordinates acting on the upper surface of the object, separates the surface reflection component at the source of the ghost image, and solves the ghost-image problem caused by reflection from the lower surface of the object.
3. The method computes the display screen coordinates from single-pixel imaging, which resolves the ghost-image problem at its source, inherits the advantages of single-pixel imaging, and provides strong anti-interference capability. By fusing Fourier single-pixel imaging with the Fourier central slice, the display screen coordinates are obtained from one-dimensional information only, which improves acquisition efficiency.
4. The method combines Fourier single-pixel imaging with the binocular deflection framework and uses an LCD display screen to project horizontal and vertical Fourier base-frequency fringe images. By obtaining the light transmission coefficient of each camera pixel, the one-dimensional light transmission curve of the light reflected from the surface of a transparent or mirror-like object is extracted directly, and the surface topography is then reconstructed based on the binocular deflection principle. The method inherits the advantages of single-pixel imaging and has strong anti-interference capability; being based on binocular deflection, it decouples the depth and the topography of the object and requires no shape prior of the object under test, making it better suited to industrial applications.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 is a schematic diagram of a measuring system built in a specific application example of the invention.
Fig. 3 is a schematic diagram of a single-pixel imaging system constructed in a specific application example of the invention.
Fig. 4 is a diagram of discrete spots in an embodiment of the present invention.
Fig. 5 is a schematic diagram of a partial fourier fundamental frequency fringe pattern in a specific application example of the present invention.
FIG. 6 is a diagram showing a light transmission coefficient curve in a specific application example of the present invention.
Fig. 7 is a schematic diagram of binocular deflectometric reconstruction in a specific application example of the present invention.
FIG. 8 shows the measurement results in a specific application example of the present invention, wherein (a) is the measurement result for a plane mirror and (b) is the measurement result for a plano-convex lens.
Detailed Description
The invention will be described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1 and 2, the object topography measuring method based on single-pixel imaging binocular deflection of the invention comprises the following steps.
Step S1: generate multiple horizontal and vertical Fourier base-frequency fringe images based on the Fourier single-pixel imaging principle and the Fourier central slice; the total number of images under full sampling is (M_s × N_s)·2·r, where M_s and N_s are the vertical and horizontal resolutions of the display screen, respectively.
Step S2: the computer sends a control command to the LCD display screen to display the generated horizontal and vertical Fourier base-frequency fringe patterns; the cameras collect the fringe patterns distorted by reflection from the surface of the object under test, and the images are then transmitted to the computer.
Step S3: based on the single-pixel imaging principle, the computer resolves the images acquired by the cameras and obtains the horizontal and vertical LTC projection curves corresponding to each pixel point (u, v) on the two camera imaging planes, where (u, v) is a pixel point of the main camera and the horizontal and vertical resolutions of the camera are M_c and N_c, respectively.
Step S4: the peak-point coordinates of the LTC horizontal projection curves and of the vertical projection curves are fitted with a Gaussian function and combined into the coordinates of the corresponding display screen points. That is, Gaussian fitting is applied to the highest peak of the P_v(x, y, u, v) and P_h(x, y, u, v) obtained in step S3, the abscissas of the fitted peak points are obtained, and they are combined into the coordinates of the corresponding point on the display screen.
Step S5: based on binocular deflection, the three-dimensional point cloud and surface normals of the upper surface of the object under test are iteratively reconstructed from the obtained display screen coordinates within the binocular deflection framework, and the three-dimensional surface morphology of the object is then obtained through a wavefront reconstruction algorithm.
According to the invention, each pixel unit in the camera imaging plane is regarded as an independent individual, and the appearance measurement of the transparent and thick transparent surfaces is completed by taking a binocular deflection technology as a frame.
In a specific application example, in step S1, the Fourier base-frequency fringe images are generated according to

S_i^h(x, y) = A + B·cos(2π f_x·x + iπ/2),  S_i^v(x, y) = A + B·cos(2π f_y·y + iπ/2),  i = 0, 1, 2, 3

where S_i^h(x, y) denotes a horizontal Fourier base-frequency fringe image, S_i^v(x, y) denotes a vertical Fourier base-frequency fringe image, A is the average intensity, B is the modulation intensity, f_x = x/N_s denotes the horizontal frequency, with x the horizontal pixel coordinate of the display screen and N_s the horizontal resolution of the display screen, and f_y = y/M_s denotes the vertical frequency, with y the vertical pixel coordinate of the display screen and M_s the vertical resolution of the display screen. It should be noted that, by combining Fourier single-pixel imaging with the Fourier central slice, only horizontal and vertical Fourier base-frequency fringe images need to be projected, which greatly reduces the number of projected patterns.
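As an illustration of step S1, the following NumPy sketch generates the four-step phase-shifted horizontal and vertical base-frequency fringe sequences under the assumptions made in the reconstruction above (phase shifts of iπ/2 with i = 0…3, and A = B = 0.5); the function names and the small display size are illustrative, not part of the patent.

```python
import numpy as np

def horizontal_fringe(k, i, Ns, Ms, A=0.5, B=0.5):
    """Horizontal Fourier base-frequency fringe: frequency f_x = k/Ns, phase i*pi/2,
    displayed on an Ms-row by Ns-column screen (constant along the vertical direction)."""
    x = np.arange(Ns)                              # display column coordinate
    row = A + B * np.cos(2 * np.pi * (k / Ns) * x + i * np.pi / 2)
    return np.tile(row, (Ms, 1))

def vertical_fringe(k, i, Ns, Ms, A=0.5, B=0.5):
    """Vertical Fourier base-frequency fringe: frequency f_y = k/Ms, phase i*pi/2."""
    y = np.arange(Ms)                              # display row coordinate
    col = A + B * np.cos(2 * np.pi * (k / Ms) * y + i * np.pi / 2)
    return np.tile(col[:, None], (1, Ns))

# Full projection sequence for a small (assumed) display: only the two frequency axes
# are sampled, so 4*(Ns + Ms) patterns are needed instead of a full 2-D frequency sweep.
Ns, Ms = 64, 48
sequence = [horizontal_fringe(k, i, Ns, Ms) for k in range(Ns) for i in range(4)]
sequence += [vertical_fringe(k, i, Ns, Ms) for k in range(Ms) for i in range(4)]
print(len(sequence), "patterns of shape", sequence[0].shape)
```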
In a specific application example of step S3, each pixel unit of the camera imaging plane is regarded as an independent individual and the light intensity values collected in its time sequence are extracted. Suppose that, for a certain frequency f_x, the light intensity values acquired at a camera pixel (u, v) under the four horizontal base-frequency fringe patterns are I_0(u, v, f_x), I_1(u, v, f_x), I_2(u, v, f_x) and I_3(u, v, f_x), and the four acquired light intensity values are processed as

F{P_v(x, y, u, v)}(f_x) ∝ [I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]

where F{·} denotes the forward Fourier transform and P_v(x, y, u, v) denotes the horizontal projection of the light transmission coefficient (LTC) of the display screen with respect to the point (u, v) of the camera imaging plane. It follows that

P_v(x, y, u, v) ∝ F⁻¹{[I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]}

where F⁻¹{·} denotes the inverse Fourier transform. Similarly, collecting the four vertical base-frequency fringe patterns generated for a given frequency yields, by the same method, the corresponding vertical LTC projection P_h(x, y, u, v).
Referring to fig. 2, to implement the above method the invention builds a measurement system comprising a display screen 1, an object to be measured 2, a secondary camera 3, a primary camera 4, an optical support 5, a lens clamp 6 and an optical breadboard 7. The display screen 1 receives the Fourier base-frequency patterns transmitted by the computer and displays them full screen. The object to be measured 2 reflects the displayed patterns toward the cameras. The secondary camera 3 and the primary camera 4 receive the distorted Fourier base-frequency patterns reflected by the object. Based on the single-pixel imaging principle, the invention directly reconstructs the one-dimensional light transmission coefficient curve of the light source plane, acquires the light-source coordinates acting on the upper surface of the object, and separates the surface reflection component at the source of the ghost image. The invention can adapt to different inspection environments, reduces the required installation accuracy of the equipment, and suits smooth objects of different materials such as mirrors and thick transparent surfaces.
As shown in fig. 3, a single-pixel imaging system includes a single-pixel sensor 8 (e.g., a photodiode) and a light source 9 (e.g., a projector, a spatial light modulator or a display screen), both facing the object 2 to be measured. During measurement, the modulation patterns emitted by the light source (such as random speckle, Fourier or Hadamard patterns) interact with the object, and the single-pixel sensor 8, which has no spatial resolution, collects a one-dimensional light intensity signal. After a number of projections, an image of the object can be reconstructed from the correlation between the sequence of modulation patterns and the corresponding sequence of light intensity signals.
In the present invention, each pixel unit in the camera imaging plane is regarded as a single pixel sensor 8, so that the whole imaging plane can form a group of single pixel sensor arrays.
During image acquisition, for a pixel at a given position of the imaging plane, the acquired light intensity value can be expressed as

I_i(u, v) = I_0 + ∬_Ω P(x, y, u, v)·S_i(x, y) dx dy

where (u, v) are the coordinates of a point on the camera imaging plane; (x, y) are the coordinates of a point on the display screen; I_0 denotes the ambient light; Ω is the pattern area reflected by the object under test; P(x, y, u, v) denotes the horizontal or vertical projection of the light transmission coefficient (LTC) of the display screen with respect to that point of the camera imaging plane; and S_i(x, y) denotes a horizontal or vertical Fourier base-frequency fringe pattern. In this embodiment, when S_i(x, y) and P(x, y, u, v) are written without specifying horizontal or vertical, the expression applies to both directions. For conventional single-pixel imaging, P(x, y, u, v) appears as a set of discrete spots (as shown in fig. 4).
However, single pixel imaging often requires thousands of illuminations. Especially in the deflection imaging, after the camera imaging plane is regarded as a single-pixel sensor array, single-pixel reconstruction needs to be performed on each pixel point, which brings great challenges to signal acquisition and data storage and reading. To this end, the present invention combines a fourier center slice technique to project only horizontal and vertical fourier fundamental fringe images.
The fringe images are generated according to

S_i^h(x, y) = A + B·cos(2π f_x·x + iπ/2),  S_i^v(x, y) = A + B·cos(2π f_y·y + iπ/2),  i = 0, 1, 2, 3

where S_i^h(x, y) denotes a horizontal Fourier base-frequency fringe image, S_i^v(x, y) denotes a vertical Fourier base-frequency fringe image, A is the average intensity, B is the modulation intensity, f_x = x/N_s denotes the horizontal frequency, with x the horizontal pixel coordinate of the display screen and N_s the horizontal resolution of the display screen, and f_y = y/M_s denotes the vertical frequency, with y the vertical pixel coordinate of the display screen and M_s the vertical resolution of the display screen. It should be noted that, by combining Fourier single-pixel imaging with the Fourier central slice, only horizontal and vertical Fourier base-frequency fringe images are projected, which greatly reduces the number of projected patterns; fig. 5 shows some of the Fourier base-frequency fringe patterns.
In one embodiment, each pixel unit of the camera imaging plane is regarded as an independent individual and the light intensity values collected in its time sequence are extracted. Suppose that, for a certain frequency f_x, the light intensity values acquired at a camera pixel (u, v) under the four horizontal base-frequency fringe patterns are I_0(u, v, f_x), I_1(u, v, f_x), I_2(u, v, f_x) and I_3(u, v, f_x). The four acquired light intensity values are processed as

F{P_v(x, y, u, v)}(f_x) ∝ [I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]

where F{·} denotes the forward Fourier transform and P_v(x, y, u, v) denotes the horizontal projection of the light transmission coefficient function (LTC) of the display screen with respect to the point (u, v) of the camera imaging plane. It follows that

P_v(x, y, u, v) = F⁻¹{[I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]}

where F⁻¹{·} denotes the inverse Fourier transform. According to this analysis, by displaying four-step phase-shifted Fourier patterns at different spatial frequencies on the display screen, the Fourier coefficients of the LTC at those frequencies can be obtained; the DC component is eliminated in the computation, so the algorithm has a certain anti-interference capability. Similarly, collecting the four vertical base-frequency fringe patterns yields, by the same method, the vertical projection P_h(x, y, u, v) of the LTC. The combination of Fourier single-pixel imaging with the Fourier central slice technique therefore greatly reduces the required number of projections.
In this embodiment, if the object under test is a thick transparent surface, light is reflected multiple times as it passes through it; as shown in fig. 6, the LTC curve obtained by the invention is actually a superposition of several pulse curves, which is equivalent to

P(x, y, u, v) = Σ_i λ_i·P_i(x, y, u, v)

where P_1(x, y, u, v) denotes the LTC of the 1st reflection, i.e., the upper-surface reflection, and P_i(x, y, u, v) denotes the LTC of the i-th reflection; light is both reflected and refracted when passing through a transparent object, and the number of reflections equals the number of refractions. The intensity of the reflected light decays as the number of refractions increases, and λ_i denotes the attenuation coefficient of the corresponding reflected light. Assuming the curvature of the object surface does not vary greatly, the light reflected from different positions on the surface of the transparent object can be regarded as having the same intensity, while light reflected by the subsurface has lower intensity than the surface reflection because of the additional refractions. The curve therefore appears as pulses of different heights, with the highest pulse corresponding to the surface reflection. The abscissa of the peak of the highest pulse is thus the required corresponding-point coordinate: Gaussian fitting is performed on the highest peak of the P_v(x, y, u, v) and P_h(x, y, u, v) obtained in step S3, the fitted peak abscissas are obtained, and they are combined into the coordinates of the corresponding point on the display screen.
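A hedged sketch of this corresponding-point extraction: a Gaussian is fitted to a small window around the highest peak of the recovered LTC projection, and the fitted mean gives a sub-pixel estimate of the display coordinate of the upper-surface reflection. The window size, initial guesses and synthetic two-pulse curve are assumptions, not values from the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def subpixel_peak(P, window=7):
    """Fit a Gaussian around the highest peak of an LTC projection P and
    return the fitted peak abscissa (sub-pixel display coordinate)."""
    k = int(np.argmax(P))
    lo, hi = max(k - window, 0), min(k + window + 1, len(P))
    xs = np.arange(lo, hi, dtype=float)
    ys = P[lo:hi]
    p0 = [P[k], float(k), 2.0, float(ys.min())]      # rough initial guess
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, xs, ys, p0=p0)
    return mu

# Synthetic two-pulse LTC: upper-surface pulse at column 120.3, weaker ghost pulse at 131.
x = np.arange(256, dtype=float)
P = np.exp(-0.5 * ((x - 120.3) / 1.5) ** 2) + 0.4 * np.exp(-0.5 * ((x - 131.0) / 1.5) ** 2)
print("fitted display coordinate:", subpixel_peak(P))   # close to 120.3
```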
Then, the invention performs the three-dimensional reconstruction of the surface of the object under test based on the principle of binocular deflection. As shown in fig. 7, suppose a ray emitted from display point S_1 reaches pixel C_1 of camera A after being reflected by the object, and let i_1 be the incident ray at C_1, i.e., the ray through the optical center O_1 of camera A and C_1; the sought specular point lies on this ray. From the previous step, the display screen point S_1 corresponding to C_1 is already known. The position of the target point is then searched for by an iterative algorithm, the known quantities being C_1, S_1 and the ray i_1, with the target point located on i_1. Assume the target point is located at P_1; using the intrinsic and extrinsic parameters between the cameras, P_1 can be mapped to the pixel C_2 of camera 2, whose corresponding display screen point S_2 is likewise known. From the vectors from P_1 toward C_1 and toward S_1, the normal n_1 can be calculated, and from the vectors from P_1 toward C_2 and toward S_2, the normal n_2 can be calculated. If n_1 and n_2 are equal, P_1 is the true target point; in practice, because of noise and errors, n_1 and n_2 are rarely exactly equal, so a threshold τ is set, and when the deviation between n_1 and n_2 falls below τ the position of the target point is considered to have been acquired. In addition, during the reconstruction, the surface normal of the object under test is calculated as an indirect quantity, and the surface topography map of the object can be reconstructed according to the Zernike polynomials. FIG. 8 shows the topography measurement results for a flat lens and a concave lens, wherein (a) is the flat lens and (b) is the plano-concave lens.
A traditional deflectometric measurement system uses a diffusely reflecting light source, which solves the imaging problem but introduces a depth ambiguity: the object depth and the topography are coupled and jointly determine the reflected fringe pattern acquired by the camera. Monocular deflection takes the object depth as a prior to remove this coupling and complete the topography measurement; however, in scenes where a depth prior of the object is not available, that approach no longer applies. The rationale for choosing binocular deflection in the invention is that, by analysing the reflected rays of the two cameras, the object depth and the topography can be jointly decoupled without any depth prior, which better fits practical use scenarios.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (8)

1. An object topography measuring method based on single-pixel imaging binocular deflection, characterized by comprising the following steps:
step S1: generating a plurality of horizontal and vertical Fourier base-frequency fringe images based on the Fourier single-pixel imaging principle and the Fourier central slice;
step S2: displaying the generated horizontal and vertical Fourier base-frequency fringe patterns through a display screen, and collecting, through cameras, the Fourier base-frequency fringe patterns distorted by reflection from the surface of the object to be detected;
step S3: resolving the images acquired by the cameras based on the single-pixel imaging principle, and respectively acquiring the LTC horizontal projection curves and vertical projection curves corresponding to pixel points on the two camera imaging planes;
step S4: respectively fitting the peak-point coordinates of the LTC horizontal projection curves and of the vertical projection curves with a Gaussian function, and combining them into the corresponding point coordinates of the display screen;
step S5: based on binocular deflection, iteratively reconstructing the three-dimensional point cloud and surface normals of the upper surface of the object to be measured from the obtained display screen coordinates within a binocular deflection framework, and obtaining the three-dimensional surface morphology of the object to be measured through a wavefront reconstruction algorithm.
2. The method for measuring the object morphology based on single-pixel imaging binocular deflectometry as claimed in claim 1, wherein in step S1 the Fourier base-frequency fringe images are generated according to

S_i^h(x, y) = A + B·cos(2π f_x·x + iπ/2),  S_i^v(x, y) = A + B·cos(2π f_y·y + iπ/2),  i = 0, 1, 2, 3

wherein S_i^h(x, y) represents a horizontal Fourier base-frequency fringe image, S_i^v(x, y) represents a vertical Fourier base-frequency fringe image, A is the average intensity, B is the modulation intensity, f_x = x/N_s represents the horizontal frequency, where x is the horizontal pixel coordinate of the display screen and N_s is the horizontal resolution of the display screen, and f_y = y/M_s represents the vertical frequency, where y is the vertical pixel coordinate of the display screen and M_s is the vertical resolution of the display screen.
3. The object topography measuring method based on single-pixel imaging binocular deflection, as recited in claim 1, wherein the light intensity values collected in the time series of each pixel unit are extracted by regarding each pixel unit of the camera imaging plane as an independent individual.
4. The object topography measuring method based on the single-pixel imaging binocular deflection technique as claimed in claim 3, wherein the light intensity signal collected by the camera is defined as

I_i(u, v) = I_0 + ∬_Ω P(x, y, u, v)·S_i(x, y) dx dy

wherein (u, v) represents the coordinates of a point in the camera imaging plane; (x, y) represents the coordinates of a point on the display screen; I_0 denotes the ambient light; Ω represents the pattern area reflected by the object to be measured; P(x, y, u, v) represents the horizontal or vertical projection of the LTC of the display screen with respect to that point of the camera imaging plane; and S_i(x, y) represents a horizontal or vertical Fourier base-frequency fringe pattern.
5. The object morphology measuring method based on single-pixel imaging binocular deflectometry as claimed in claim 4, wherein, for a certain frequency f_x, the light intensity values acquired at a camera pixel (u, v) under the four horizontal base-frequency fringe patterns are respectively I_0(u, v, f_x), I_1(u, v, f_x), I_2(u, v, f_x) and I_3(u, v, f_x), and the four acquired light intensity values are processed as

F{P_v(x, y, u, v)}(f_x) ∝ [I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]

wherein F{·} represents the forward Fourier transform and P_v(x, y, u, v) represents the horizontal projection of the light transmission coefficient (LTC) of the display screen with respect to the point of the camera imaging plane; it follows that

P_v(x, y, u, v) ∝ F⁻¹{[I_0(u, v, f_x) − I_2(u, v, f_x)] + j[I_1(u, v, f_x) − I_3(u, v, f_x)]}

wherein F⁻¹{·} denotes the inverse Fourier transform; similarly, collecting the four vertical base-frequency fringe patterns yields the corresponding vertical LTC projection P_h(x, y, u, v).
6. The method for measuring the topography of an object based on the binocular deflection technology of single-pixel imaging according to any one of claims 1 to 5, wherein in step S4, Gaussian fitting is performed on the highest peak of the P_v(x, y, u, v) and P_h(x, y, u, v) obtained in step S3, the abscissas of the fitted peak points are obtained, and they are combined into the coordinates of the corresponding point of the display screen.
7. The object topography measuring method based on single-pixel imaging binocular deflection according to any one of claims 1 to 5, wherein the acquired LTC curve is a superposition of a plurality of pulse curves, which is equivalent to

P(x, y, u, v) = Σ_i λ_i·P_i(x, y, u, v)

wherein P_1(x, y, u, v) represents the LTC of the 1st reflection, i.e., the upper-surface reflection, and P_i(x, y, u, v) represents the LTC of the i-th reflection; light is both reflected and refracted when passing through the transparent object, and the number of reflections equals the number of refractions; λ_i represents the attenuation coefficient of the reflected light.
8. The method for measuring the topography of an object based on the binocular deflection technique through single-pixel imaging according to any one of claims 1 to 5, wherein in step S5, the iteration is performed on all pixel points to be reconstructed on the camera imaging plane to obtain the point cloud image of the target; during the reconstruction, the surface normal of the object to be measured is calculated as an indirect quantity, and a surface topography map of the object to be measured is reconstructed according to a Zernike polynomial.
CN202211311856.1A 2022-10-25 2022-10-25 Object morphology measuring method based on single-pixel imaging binocular deflection technology Pending CN115711592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211311856.1A CN115711592A (en) 2022-10-25 2022-10-25 Object morphology measuring method based on single-pixel imaging binocular deflection technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211311856.1A CN115711592A (en) 2022-10-25 2022-10-25 Object morphology measuring method based on single-pixel imaging binocular deflection technology

Publications (1)

Publication Number Publication Date
CN115711592A true CN115711592A (en) 2023-02-24

Family

ID=85231718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211311856.1A Pending CN115711592A (en) 2022-10-25 2022-10-25 Object morphology measuring method based on single-pixel imaging binocular deflection technology

Country Status (1)

Country Link
CN (1) CN115711592A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination