WO2021143330A1 - 一种基于边缘感知的投影仪失焦校正方法 - Google Patents

一种基于边缘感知的投影仪失焦校正方法 (Edge-perception-based projector defocus correction method)

Info

Publication number
WO2021143330A1
Authority
WO
WIPO (PCT)
Prior art keywords
projector
pixel
circle
defocus
image
Prior art date
Application number
PCT/CN2020/128625
Other languages
English (en)
French (fr)
Inventor
何再兴
李沛隆
赵昕玥
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Publication of WO2021143330A1 publication Critical patent/WO2021143330A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/14Details
    • G03B21/142Adjusting of projection optics

Definitions

  • The invention relates to the field of projection equipment, mainly to a method for improving projector imaging quality, and in particular to an edge-perception-based projector defocus correction method.
  • Projection equipment has a wide range of applications in video projection, slide presentation, virtual reality and other fields.
  • In these applications, the image formed by the projector on the projection surface often appears blurred, which degrades the user experience.
  • The blur mainly arises from two causes. First, when the projection surface is planar and perpendicular to the projection direction, the blur is chiefly caused by thermal defocus: the projector light source is a high-power, high-heat device, and thermal expansion and contraction of the imaging elements, including the projector lens, change the optical characteristics of the imaging system, so the light can no longer converge on the projection surface. Second, when the projection surface is non-planar with large height variations, the projector, with its shallow depth of field and single focal length, cannot be in focus everywhere on that surface at once, so blurred imaging is unavoidable. With the development of virtual-reality technology, scenarios with non-planar projection surfaces are becoming increasingly common, so solving the image blur caused by projector defocus is of great significance for extending the applicability of projectors.
  • Existing research offers two main solutions to the defocus problem: automatically adjusting the projector focal length with a motor, and processing the picture fed to the projector to generate a compensation picture that suppresses blur at projection time (for example, Zhang and Nayar calibrate the defocus kernel of a defocus model and compute the compensation picture by steepest-descent optimisation).
  • The above methods mainly have the following problems:
  • The first approach, adjusting the focal length, mainly addresses thermal defocus. Because the projector has a shallow depth of field and only a single focal length, simply adjusting the focal length cannot remove the blur that appears when the projection surface is non-planar with large height variations.
  • The second approach, based on a compensation image, is suitable not only for correcting thermal defocus but also for defocus caused by a non-planar projection surface.
  • However, the compensation picture is computed with an iterative algorithm, which is inefficient and falls far short of the requirements of real-time defocus correction, limiting the application of this approach.
  • Moreover, as a prerequisite for computing the compensation picture, existing defocus convolution kernel calibration methods apply only to simple non-planar projection surfaces of specific shapes, such as smooth curved surfaces. On complex non-planar projection surfaces they produce large kernel calibration errors, weaken the defocus compensation, and further restrict the application range of compensation-image-based projector defocus correction.
  • In view of the above problems, the present invention proposes a projector defocus correction method based on edge perception.
  • The method uses a projector-camera system to capture the projection results of the projector.
  • The projector-camera system includes the camera, the projector, and the projection surface;
  • the projector lens and the camera lens both face the projection surface;
  • a special input image is fed to the projector and projected onto the projection surface, and the camera captures the resulting projection as the output image;
  • the special input image and the output image are used for defocus convolution kernel calibration and compensation picture calculation, and the compensation picture is then used as the projector input to correct and compensate the picture to be projected, so that projecting it yields a sharp result after defocus compensation.
  • The special input image includes a sparse dot pattern and a sinusoidal fringe pattern sequence;
  • the sparse dot pattern consists of a number of square luminous pixel dots of equal length and width, arranged in an equidistant rectangular array along the horizontal and vertical directions so that the array of luminous dots fills the whole sparse dot pattern;
  • the sinusoidal fringe pattern sequence consists of several sinusoidal fringe patterns offset from one another along the horizontal/vertical direction; each fringe pattern is made up of stripes running in the vertical/horizontal direction, and the grey level of each stripe along the horizontal/vertical direction follows a sinusoidal periodic distribution. A minimal generation sketch is given below.
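For illustration only, the following NumPy sketch generates the two kinds of special input images. The resolution, dot pitch, dot size, fringe frequency and number of phase shifts are assumed example values, and the function names are ours rather than the patent's.

```python
import numpy as np

def sparse_dot_pattern(height=1080, width=1920, pitch=60, dot=4):
    """Square luminous dots of equal size on an equidistant rectangular grid."""
    img = np.zeros((height, width), dtype=np.uint8)
    for y in range(pitch // 2, height, pitch):
        for x in range(pitch // 2, width, pitch):
            img[y:y + dot, x:x + dot] = 255
    return img

def sinusoidal_fringes(height=1080, width=1920, freq=1 / 32, shifts=4, vertical=True):
    """Phase-shifted fringe sequence; stripes vertical (vary along x) or horizontal."""
    axis = np.arange(width) if vertical else np.arange(height)
    seq = []
    for n in range(shifts):
        profile = 127.5 + 127.5 * np.cos(2 * np.pi * freq * axis + 2 * np.pi * n / shifts)
        img = np.tile(profile, (height, 1)) if vertical else np.tile(profile[:, None], (1, width))
        seq.append(img.astype(np.uint8))
    return seq
```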
  • The present invention establishes the following mathematical model of the projector defocus process:

    I_0(x_p, y_p) = P(x_p, y_p) * f = \sum_{i=-r}^{r}\sum_{j=-r}^{r} P(x_p - i,\, y_p - j)\, f(i, j)

  • where * is the convolution operator;
  • (x_p, y_p) and (x_p - i, y_p - j) are coordinates in the projector's imaging-plane coordinate system;
  • P is the input image;
  • I_0 is the output image after pixel matching;
  • f is the defocus convolution kernel, which characterises the degree of defocus with which the light source at a given coordinate of the projector imaging plane illuminates the projection surface;
  • r is the radius of the defocus convolution kernel.
  • I_0, the output image after pixel matching, is distinct from the input image.
  • The input image is the image fed to the projector and lies in the projector's imaging-plane coordinate system; the output image is obtained by the camera directly capturing the projection produced when the projector's light illuminates the projection surface, so the output image lies in the camera's imaging-plane coordinate system; the output image after pixel matching is the result of transforming the camera-frame output image into the projector's imaging-plane coordinate system through the spatial transformation relationship.
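As a rough illustration of the model above (not the patent's implementation), the sketch below blurs an input picture P with a single, spatially uniform kernel f using SciPy; in the method itself the kernel is calibrated per pixel.

```python
import numpy as np
from scipy.signal import convolve2d

def defocus_forward(P, f):
    """Simulate I0 = P * f for one uniform defocus kernel f (both 2-D arrays)."""
    f = f / f.sum()  # normalise the kernel so overall brightness is preserved
    return convolve2d(P.astype(float), f, mode="same", boundary="symm")
```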
  • the pixel matching is realized by projecting a sinusoidal fringe pattern sequence by a projector.
  • the present invention inputs the sparse point map into the projector and performs projection. For each light-emitting pixel point in the sparse point map, there is a unique light source corresponding to it in the imaging plane of the projector, and the light source emits a light beam to illuminate the projection surface. Due to the out-of-focus phenomenon, the light beam emitted by the light source cannot converge at a point on the projection surface, but forms a disc area with a certain diameter on the projection surface. The disc area is also called the circle of confusion. According to the above analysis, the circle of confusion reflects the degree of defocus of the projector's light source on the projection surface, and can be used as a basis for defocusing convolution kernel calibration.
  • The present invention uses the camera to capture the circles of confusion produced on the projection surface by the projector's light sources. Since the circles of confusion captured by the camera lie in the camera's imaging-plane coordinate system, the circles of confusion in the pixel-matched output image are used as the basis for defocus convolution kernel calibration, so that a one-to-one correspondence between each circle of confusion and each luminous pixel dot in the sparse dot pattern can be obtained.
  • the calibration of the out-of-focus convolution kernel includes the following steps:
  • Step 1.1: Input the sinusoidal fringe pattern sequence into the projector (2); the projector (2) projects the sinusoidal fringe patterns of the sequence onto the projection surface (3) one by one, and the camera (1) captures the projection result of each fringe pattern in turn as an output image;
  • Step 1.2: Solve the spatial transformation relationship from the output images obtained for the sinusoidal fringe pattern sequence;
  • Step 1.3: Input the sparse dot pattern into the projector (2); the projector (2) projects the sparse dot pattern onto the projection surface (3), and the camera (1) captures its projection result as an output image; the output image of the sparse dot pattern is then transformed with the spatial transformation relationship to obtain the pixel-matched output image;
  • the spatial transformation relationship is a one-to-one correspondence between the output image located in the camera's imaging plane coordinate system and the input image located in the projector's imaging plane coordinate system.
  • Pixel matching is the process of transforming the output image in the camera's imaging-plane coordinate system, according to the spatial transformation relationship, into the pixel-matched output image in the projector's imaging-plane coordinate system, as sketched below.
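A hedged sketch of pixel matching, under the assumption that the spatial transformation T is available as two integer coordinate maps (xp_map, yp_map) of the camera's size; function and parameter names are ours, not the patent's.

```python
import numpy as np

def pixel_match(output_img, xp_map, yp_map, proj_h, proj_w):
    """Scatter camera pixels to projector coordinates; average any collisions."""
    acc = np.zeros((proj_h, proj_w), dtype=float)
    cnt = np.zeros((proj_h, proj_w), dtype=float)
    valid = (xp_map >= 0) & (xp_map < proj_w) & (yp_map >= 0) & (yp_map < proj_h)
    np.add.at(acc, (yp_map[valid], xp_map[valid]), output_img[valid].astype(float))
    np.add.at(cnt, (yp_map[valid], xp_map[valid]), 1.0)
    # projector pixels never hit by a camera pixel stay zero
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```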
  • Step 1.4: For each luminous pixel dot in the sparse dot pattern, a circular neighbourhood is established in the pixel-matched output image, centred on the point at the same position as that luminous dot; this yields the circle of confusion in one-to-one correspondence with the luminous dot, which is then normalised, and the normalised circle of confusion is taken as the calibration map of the defocus convolution kernel at that luminous dot's position in the projector coordinate system;
  • Step 1.5: For each non-luminous pixel in the sparse dot pattern, find the 4 nearest luminous dots that enclose a rectangular area containing that non-luminous pixel; if 4 such luminous dots are found, combine the defocus convolution kernel calibration maps at those 4 positions obtained in step 1.4 and obtain the calibration map at the non-luminous pixel by bilinear interpolation. Before the interpolation, find the maximum radius among the 4 calibration maps; any kernel smaller than this maximum radius is expanded to that radius, the added pixels being filled with zero grey values, so that the 4 defocus kernels have the same size (the same number of rows and columns) and the bilinear interpolation can be carried out, as in the sketch below;
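A sketch of the zero-padding and bilinear blending of step 1.5, assuming square kernels of odd side length; the corner convention in the comments is an assumption made for illustration.

```python
import numpy as np

def pad_to(kernel, radius):
    """Zero-pad a square kernel so that its radius equals `radius`."""
    r = kernel.shape[0] // 2
    p = radius - r
    return np.pad(kernel, p, mode="constant") if p > 0 else kernel

def interpolate_kernel(f1, f2, f3, f4, x1, x2, y1, y2, xp, yp):
    """Bilinear blend of the four surrounding calibrated kernels at (xp, yp).
    Assumed corners: f1 at (x1, y1), f2 at (x1, y2), f3 at (x2, y1), f4 at (x2, y2)."""
    rmax = max(k.shape[0] // 2 for k in (f1, f2, f3, f4))
    f1, f2, f3, f4 = (pad_to(k, rmax) for k in (f1, f2, f3, f4))
    wx = (xp - x1) / (x2 - x1)
    wy = (yp - y1) / (y2 - y1)
    return ((1 - wx) * (1 - wy) * f1 + (1 - wx) * wy * f2
            + wx * (1 - wy) * f3 + wx * wy * f4)
```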
  • Step 1.6: For each non-luminous pixel in the sparse dot pattern for which the 4 luminous dots required in step 1.5 cannot be found, the pixel lies at the edge of the projector imaging coordinate system; find the nearest pixel for which a defocus convolution kernel calibration map has already been obtained and use that map as the calibration map at the non-luminous pixel's position. At this point every pixel in the projector's imaging-plane coordinate system has a one-to-one corresponding defocus convolution kernel calibration map, and the defocus convolution kernel calibration step is complete.
  • the circle of confusion is composed of several adjacent pixel points with different gray values.
  • the normalized circle of confusion is the result of dividing each gray value in the circle of confusion by the maximum gray value in the circle of confusion.
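A one-function sketch of this normalisation, turning a circle-of-confusion patch into a kernel calibration map; the function name is ours.

```python
import numpy as np

def normalize_confusion_circle(M):
    """Defocus-kernel calibration map for one luminous dot: M divided by its maximum."""
    M = M.astype(float)
    return M / M.max() if M.max() > 0 else M
```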
  • A block-scanning approach is used to project the special input image used for defocus convolution kernel calibration: the special input image is divided into a number of small rectangular areas of equal size that do not overlap; each time, the content of the special input image within one small rectangular area is projected and captured, giving the partial defocus convolution kernel calibration maps inside that area; traversing all the small rectangular areas yields the full calibration map. In this way the projector's normal projection content is not disturbed and the user experience is preserved as far as possible. A tiling sketch follows.
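An illustrative tiling helper for block scanning, assuming single-channel images of identical size and a 4×4 split; the tile counts and names are example choices, not values from the patent.

```python
import numpy as np

def block_frames(calib_pattern, normal_frame, rows=4, cols=4):
    """Yield one frame per tile: normal content with one calibration tile pasted in."""
    h, w = calib_pattern.shape
    th, tw = h // rows, w // cols
    for i in range(rows):
        for j in range(cols):
            frame = normal_frame.copy()
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            frame[ys, xs] = calib_pattern[ys, xs]  # only this tile occludes the content
            yield (i, j), frame
```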
  • The radius r of the circular neighbourhood of the circle of confusion is obtained as follows (a code sketch is given after this list):
  • Step 1.4.1: In a circular neighbourhood centred at the position of the luminous dot in the sparse dot pattern, with an initial radius of 1 pixel, compute the sum of the grey values of all pixels of the pixel-matched output image that lie inside the neighbourhood:

    sum_k = \sum_{i^2 + j^2 \le r_k^2} I_M(x_p - i,\, y_p - j)

  • where sum_k is the sum of the grey values of all pixels inside the circle of confusion when r_k is used as its radius in the k-th iteration, and the initial radius in the first iteration is r_1 = 1;
  • Step 1.4.2: Increase the radius of the circular neighbourhood in increments of 1 pixel and repeat the grey-value summation of step 1.4.1; compare the sum obtained in the k-th iteration with the sum obtained in the (k+1)-th iteration;
  • Step 1.4.3: Repeat step 1.4.2 until the difference of the sums before and after the radius increase is smaller than a threshold, then stop; the radius of the circular neighbourhood when the iteration stops is taken as the radius of the circle of confusion;
  • the termination condition of the iteration is

    sum_{k+1} - sum_k < \varepsilon

  • where ε is the iteration accuracy; when the (k+1)-th iteration satisfies this condition, the radius r_{k+1} of that iteration is the required radius of the circle of confusion.
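A sketch of the radius search of steps 1.4.1-1.4.3, assuming the dot lies far enough from the image border for full patches to be sliced; the tolerance eps and cap r_max are assumed example values.

```python
import numpy as np

def circle_sum(I_M, cx, cy, r):
    """Sum of grey values inside the circle of radius r centred at (cx, cy)."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    mask = x * x + y * y <= r * r
    patch = I_M[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    return float((patch * mask).sum())

def confusion_radius(I_M, cx, cy, eps=1.0, r_max=50):
    """Grow the neighbourhood until the summed grey value stops increasing by eps."""
    prev = circle_sum(I_M, cx, cy, 1)          # r_1 = 1
    for r in range(2, r_max + 1):
        cur = circle_sum(I_M, cx, cy, r)
        if cur - prev < eps:                   # termination: sum_{k+1} - sum_k < eps
            return r
        prev = cur
    return r_max
```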
  • Using the compensation picture as the input of the projector (2) to correct and compensate the picture to be projected by the projector (2) is specifically as follows (a code sketch is given after this list):
  • Step 2.1: For the input picture waiting to be projected, convolve the input picture with the calibration map of the defocus convolution kernel to obtain the pre-blurred input picture:

    P_{blur}(x_p, y_p) = (P * f)(x_p, y_p)

  • Step 2.2: Divide the input picture by the pre-blurred input picture to obtain the edge perception matrix; specifically, the grey value of each pixel of the input picture is divided by the grey value of the pixel at the same position of the pre-blurred input picture:

    E(x_p, y_p) = P(x_p, y_p) / (P * f)(x_p, y_p)

  • where E is the edge perception matrix and E(x_p, y_p) is its element value at coordinates (x_p, y_p).
  • Step 2.3: Multiply the input picture by the edge perception matrix to obtain the compensation picture; specifically, the grey value of each pixel of the input picture is multiplied by the element value of the edge perception matrix at the same position:

    \tilde{P}(x_p, y_p) = P(x_p, y_p) \cdot E(x_p, y_p)

  • The above formula computes the compensation picture \tilde{P} through the edge perception matrix; the process of computing the compensation picture is called edge perception.
  • The compensation picture \tilde{P} replaces the input picture P that was originally fed to the projector and would produce a blurred projection; projecting the compensation picture \tilde{P} with the projector yields a sharp projection result after defocus compensation.
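A compact sketch of steps 2.1-2.3 with a single global kernel f (in the method the kernel varies per pixel); the epsilon guard against division by zero and the final clipping are our additions.

```python
import numpy as np
from scipy.signal import convolve2d

def compensate(P, f, eps=1e-6):
    """Edge-perception compensation: pre-blur, divide, multiply."""
    P = P.astype(float)
    blurred = convolve2d(P, f / f.sum(), mode="same", boundary="symm")  # step 2.1
    E = P / np.maximum(blurred, eps)                                    # step 2.2
    P_comp = np.clip(P * E, 0, 255)                                     # step 2.3
    return P_comp.astype(np.uint8)
```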
  • The spatial transformation relationship is solved from the output images obtained for the sinusoidal fringe pattern sequence as follows (a code sketch is given after this list):
  • The grey value at a pixel of the output image corresponding to each sinusoidal fringe pattern is

    I_{kn}(x_c, y_c) = A(x_c, y_c) + B(x_c, y_c)\,\cos[\varphi(x_c, y_c) + 2\pi n / 4]

  • where (x_c, y_c) is a coordinate point in the camera's imaging-plane coordinate system; I_{kn}(x_c, y_c) is the grey value at pixel (x_c, y_c) of the image captured by the camera after the fringe sequence is projected onto the projection surface, the subscript kn denoting the n-th projection of the sinusoidal fringe pattern of frequency f_k; A(x_c, y_c) is the background light intensity at pixel (x_c, y_c); B(x_c, y_c) is the modulation amplitude at pixel (x_c, y_c); and \varphi(x_c, y_c) is the phase value corresponding to pixel (x_c, y_c);
  • after the sinusoidal fringe patterns are projected in turn, the phase value \varphi(x_c, y_c) at every pixel (x_c, y_c) is obtained with the four-step phase-shift method, and the coordinates (x_p, y_p) in the projector coordinate system corresponding to pixel (x_c, y_c) are then obtained as

    T(x_c, y_c) = (x_p, y_p),  with  x_p = \varphi_v(x_c, y_c) / (2\pi f_k)  and  y_p = \varphi_h(x_c, y_c) / (2\pi f_k)

  • where T denotes the spatial transformation relationship, i.e. the one-to-one correspondence from the camera imaging-plane coordinate system to the projector imaging-plane coordinate system;
  • T(x_c, y_c) denotes the coordinate point (x_p, y_p) in the projector's imaging-plane coordinate system corresponding to (x_c, y_c) in the camera's imaging-plane coordinate system;
  • f_k is the frequency of the grey-level variation of the fringes along the horizontal/vertical direction in the sinusoidal fringe pattern,
  • the subscript k denoting the k-th frequency.
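A sketch of the four-step phase-shift recovery and the phase-to-coordinate mapping, assuming phase shifts of 0, π/2, π and 3π/2 and leaving multi-period phase unwrapping aside (unwrapping would be needed for real multi-period fringes); names are ours.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(I4.astype(float) - I2.astype(float),
                      I1.astype(float) - I3.astype(float))

def phase_to_coordinate(phi, f_k):
    """Map an (unwrapped) phase map to projector-plane coordinates: phi / (2*pi*f_k)."""
    return phi / (2 * np.pi * f_k)
```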
  • The invention performs edge perception on the original picture according to the defocus convolution kernel calibration maps, strengthens the high-frequency details of the original picture and generates a compensation picture; the compensation picture is fed to the projector and projected, and a sharp projection result after defocus compensation is obtained.
  • The invention achieves projector defocus correction purely by processing the original picture fed to the projector, without adjusting the projector's physical focal length; it removes the inconvenience of traditional defocus correction methods, which need a motor to move the projector lens and change the physical focal length, lowers the hardware requirements for projector defocus correction, and improves its convenience and operability.
  • the present invention calculates the compensation picture through edge perception, solves the calculation efficiency problem that is generally not considered in the existing compensation picture-based defocus correction technology, and avoids the complicated and time-consuming iterative process required by the traditional compensation picture calculation method. This not only improves the efficiency of calculating the compensation picture, but also guarantees the quality of the compensation picture calculation, makes real-time projector defocus correction possible, and expands the applicability of the projector defocus correction method.
  • The invention realises defocus convolution kernel calibration by projecting the sparse dot pattern and the sinusoidal fringe pattern sequence, achieving high-precision kernel calibration on projection surfaces of arbitrary shape; it overcomes the limitation of traditional calibration methods, which are restricted to projection surfaces of specific shapes and suffer large calibration errors on complex surfaces, and thereby ensures the accuracy of the subsequent compensation picture calculation and defocus correction.
  • By dividing the special input pattern into small rectangular areas and projecting them one by one in a scanning fashion, the invention achieves block-wise calibration of the defocus convolution kernel and obtains the required global calibration map by integrating the results of all blocks; this avoids interference of the kernel calibration with the user's normal projection content and improves the user experience, while also enlarging the sampling range of the defocus kernel, achieving global calibration, avoiding the use of a single local area as the basis for the whole defocus correction, and improving the correction effect.
  • The method of the invention effectively alleviates projector defocus blur, meets the need to use projectors in complex application scenarios such as thermal defocus or projection surfaces with large height variations, and to a certain extent extends the adaptability of projector equipment.
  • FIG. 1 is a schematic diagram of the arrangement and connection of the projector-camera system of the present invention and the effects achieved by the present invention
  • Figure 2 is a flow chart of the method of the present invention
  • FIG. 3 is a schematic diagram of the calibration step of the convolution kernel in the method described in FIG. 2;
  • FIG. 4 is a schematic diagram of implementing the calibration step of the convolution kernel in FIG. 3 by means of block scanning in an embodiment
  • FIG. 5 is a schematic diagram of a compensation picture calculation step in the method described in FIG. 2;
  • FIG. 6 is a projection result of the projector before defocus correction in the embodiment
  • FIG. 7 shows the projection result of the projector after the defocus correction is performed by the method of the present invention in the embodiment.
  • As shown in FIG. 1, the projector-camera system includes the camera 1, the projector 2 and the projection surface 3; the lens of projector 2 and the lens of camera 1 both face the projection surface 3; a special input image is fed to projector 2 and projected onto the projection surface 3, and camera 1 captures the projection result of the light produced by projector 2 on the projection surface 3 as the output image; the special input image and the output image are then used in the two successive steps of defocus convolution kernel calibration and compensation picture calculation, as shown in FIG. 2; the computed compensation picture is used as the input of projector 2 to obtain a sharp projection result after defocus compensation.
  • In this embodiment, a DLP projector is used to project the input image, and an ordinary CMOS camera is used to capture the projection result of the input image on the projection surface as the output image. Owing to projector thermal defocus and large height variations of the projection surface, the projection result exhibits defocus blur; to correct it, the mathematical model of projector defocus blur is established as

    I_0(x_p, y_p) = P(x_p, y_p) * f = \sum_{i=-r}^{r}\sum_{j=-r}^{r} P(x_p - i,\, y_p - j)\, f(i, j)

  • in which:
  • * is the convolution operator
  • (x p , y p ) and (x p -i, y p -j) are the coordinates in the projector's imaging plane coordinate system
  • P is the input image
  • P(x p , y p ) represents the pixel point of the coordinate (x p , y p ) in the input image
  • I 0 is the output image after pixel matching
  • f is the defocus convolution kernel, which characterises the degree of defocus with which the light source at a given coordinate of the projector imaging plane illuminates the projection surface;
  • r is the radius of the defocus convolution kernel.
  • According to the above formula, the task of projector defocus correction is equivalent to solving for a special input image \tilde{P} satisfying

    P(x_p, y_p) = \tilde{P}(x_p, y_p) * f

so that projecting \tilde{P} through the defocused system reproduces the desired picture P. This special input image \tilde{P} is also called the compensation image. Since the input image P is known, the defocus convolution kernel f must be calibrated in order to solve for the compensation image \tilde{P}.
  • Defocus convolution kernel calibration includes the following steps:
  • Step 1.1: Input the sinusoidal fringe pattern sequence into the projector (2); the projector (2) projects the fringe patterns of the sequence onto the projection surface (3) one by one, and the camera (1) captures the projection result of each fringe pattern in turn as an output image.
  • The output images captured by the camera for the sinusoidal fringe pattern sequence are shown in Figure 3(d).
  • Step 1.2 Solve the spatial transformation relationship according to the output image corresponding to the sine fringe sequence.
  • the solution result of the space transformation is shown in Figure 3(e).
  • the spatial transformation relationship is solved by projecting the sine fringe pattern sequence to achieve pixel matching.
  • the gray value of a certain pixel in the output image corresponding to each sine fringe image is:
  • (x c , y c ) is the coordinate point in the camera imaging plane coordinate system;
  • I_{kn}(x_c, y_c) is the grey value at (x_c, y_c) in the image captured by the camera after the fringe sequence is projected onto the projection surface, where the subscript kn denotes the n-th projection of the sinusoidal fringe pattern of frequency f_k, and f_k is the frequency of the grey-level variation of the fringes along the horizontal/vertical direction,
  • the subscript k represents the kth frequency
  • the subscript n represents the nth time;
  • A(x c ,y c ) is the background light intensity at (x c ,y c );
  • B(x c ,y c ) is (x c ,y c ) modulation amplitude;
  • After the sinusoidal fringe patterns are projected in turn, the phase value \varphi(x_c, y_c) at each pixel (x_c, y_c) is obtained with the four-step phase-shift method.
  • The phase value \varphi(x_c, y_c) satisfies the following relationship with the coordinates (x_p, y_p) in the projector imaging plane:

    \varphi_v(x_c, y_c) = 2\pi f_k\, x_p,   \varphi_h(x_c, y_c) = 2\pi f_k\, y_p

  • where a horizontal sinusoidal fringe pattern is one whose stripes are distributed along the horizontal direction, a vertical sinusoidal fringe pattern is one whose stripes are distributed along the vertical direction, and f_k is the frequency of the grey-level variation of the stripes along the horizontal/vertical direction. It follows that by projecting the horizontal/vertical fringe patterns and solving for the phase value \varphi(x_c, y_c) at (x_c, y_c), the one-to-one correspondence between the coordinates (x_c, y_c) in the camera coordinate system and the coordinates (x_p, y_p) in the projector coordinate system is determined, which also determines the spatial transformation relationship used for pixel matching:

    T(x_c, y_c) = (x_p, y_p),   x_p = \varphi_v(x_c, y_c) / (2\pi f_k),   y_p = \varphi_h(x_c, y_c) / (2\pi f_k)

  • where T denotes the spatial transformation relationship, i.e. the one-to-one correspondence from the camera imaging-plane coordinate system to the projector imaging-plane coordinate system;
  • T(x_c, y_c) denotes the coordinate point (x_p, y_p) in the projector's imaging-plane coordinate system corresponding to (x_c, y_c) in the camera's imaging-plane coordinate system; here x_p is solved by projecting the vertical fringe patterns and y_p by projecting the horizontal fringe patterns.
  • the vertical sine fringe pattern is shown in Figure 3(d) on the left, and the horizontal sine fringe pattern is shown in Figure 3(d) on the right.
  • the obtained spatial transformation relationship T is shown in Figure 3(e).
  • In Figure 3(e), the spatial transformation relationship obtained from the vertical sinusoidal fringe patterns reveals the one-to-one correspondence between the coordinate points (x_c, y_c) in the output image and x_p in the projector imaging-plane coordinate system, while the relationship obtained from the horizontal fringe patterns reveals the correspondence between (x_c, y_c) and y_p.
  • Step 1.3: Input the sparse dot pattern into the projector (2); the projector (2) projects the sparse dot pattern onto the projection surface (3), and the camera (1) captures its projection result as an output image.
  • the sparse point map is shown in Figure 3(a), and the output image collected by the camera corresponding to the sparse point map is shown in Figure 3(b).
  • The invention uses the luminous pixel dots in the sparse dot pattern (shown at the top right of Figure 3(a)) and the circles of confusion they produce in the output image (shown in Figure 3(c)) to calibrate the defocus convolution kernel. Since the input image lies in the projector's imaging-plane coordinate system and the output image in the camera's, the output image is transformed by the spatial transformation relationship into the same coordinate system as the input pattern so that the one-to-one correspondence between circles of confusion and luminous dots can be obtained conveniently; the transformed output image is called the pixel-matched output image, as shown in Figure 3(c).
  • Step 1.4: For each luminous pixel dot in the sparse dot pattern, the circle of confusion in one-to-one correspondence with that dot is obtained at the same position in the pixel-matched output image, inside a circular neighbourhood centred on that position; the radius of this circular neighbourhood is taken as the radius of the circle of confusion; the circle of confusion consists of several mutually adjacent pixels of different grey values; the normalised circle of confusion is the result of dividing each grey value in the circle of confusion by the maximum grey value in the circle.
  • Let the pixel-matched output image corresponding to the sparse dot pattern be I_M and the radius of the circle of confusion be r; the circle of confusion can then be written as the matrix

    M(x_p, y_p) = [\, I_M(x_p - i,\, y_p - j) \,],   i, j = -r, \dots, r

  • where M(x_p, y_p) is the circle of confusion located at (x_p, y_p); I_M(x_p - i, y_p - j) is the grey value of the pixel-matched output image at (x_p - i, y_p - j); and r is the radius of the circle of confusion. Since I_M is known, M(x_p, y_p) depends only on the radius r of the circle of confusion.
  • The radius r of the circle of confusion is determined by the following iterative steps:
  • Step 1.4.1: In a circular neighbourhood centred at the position of the luminous dot in the sparse dot pattern, with an initial radius of 1 pixel, compute the sum of the grey values of all pixels of the pixel-matched output image lying inside the neighbourhood:

    sum_k = \sum_{i^2 + j^2 \le r_k^2} I_M(x_p - i,\, y_p - j)

  • where sum_k is the sum of the grey values of all pixels inside the circle of confusion when r_k is used as its radius in the k-th iteration; in the first iteration the initial radius is r_1 = 1.
  • Step 1.4.2: Increase the radius of the circular neighbourhood in increments of 1 pixel and repeat the summation of step 1.4.1; compare the sum obtained in the k-th iteration with the sum obtained in the (k+1)-th iteration.
  • Step 1.4.3: Repeat step 1.4.2 until the difference of the sums before and after the radius increase is smaller than a threshold, then stop; the radius of the circular neighbourhood when the iteration stops is taken as the radius of the circle of confusion. The termination condition of the iteration is

    sum_{k+1} - sum_k < \varepsilon

  • where ε is the iteration accuracy; when the (k+1)-th iteration satisfies the condition, the radius r_{k+1} of that iteration is the required radius of the circle of confusion.
  • The normalised circle of confusion is taken as the calibration map of the defocus convolution kernel at the position of the luminous dot in the projector coordinate system:

    f(x_p, y_p) = M(x_p, y_p) / \max[\, M(x_p, y_p) \,]

  • where f(x_p, y_p) is the convolution kernel calibration map at (x_p, y_p), M(x_p, y_p) is the circle of confusion at (x_p, y_p), and max[M(x_p, y_p)] is the maximum element value in the circle of confusion M(x_p, y_p).
  • the right figure of Figure 3(c) shows an example of the calibration map of the out-of-focus convolution kernel.
  • the calibration map of the convolution kernel in Fig. 3(c) shows a distribution characteristic similar to Gaussian distribution with bright center and dark edges.
  • Step 1.5: For each non-luminous pixel in the sparse dot pattern, find the 4 nearest luminous dots that enclose a rectangular area containing that non-luminous pixel; if 4 such luminous dots can be found, combine the defocus convolution kernel calibration maps at those 4 positions obtained in step 1.4 and obtain the calibration map at the non-luminous pixel by bilinear interpolation.
  • Suppose the non-luminous pixel lies at (x_p, y_p) and the 4 qualifying luminous dots have coordinates (x_1, y_1), (x_1, y_2), (x_2, y_1), (x_2, y_2) in the projector imaging-plane coordinate system, with corresponding calibration maps f_1, f_2, f_3, f_4. Before the interpolation, find the maximum radius among these 4 calibration maps and expand any smaller kernel to that radius, filling the added pixels with zero grey values, so that f_1, f_2, f_3, f_4 have the same size (the same number of rows and columns) and the bilinear interpolation can be carried out.
  • By the principle of bilinear interpolation, the calibration map of the defocus convolution kernel at the non-luminous pixel position (x_p, y_p) is

    f(x_p, y_p) = [ (x_2 - x_p)(y_2 - y_p)\, f_1 + (x_2 - x_p)(y_p - y_1)\, f_2 + (x_p - x_1)(y_2 - y_p)\, f_3 + (x_p - x_1)(y_p - y_1)\, f_4 ] / [ (x_2 - x_1)(y_2 - y_1) ]
  • Step 1.6: For each non-luminous pixel in the sparse dot pattern for which the 4 luminous dots required in step 1.5 cannot be found, the pixel lies at the edge of the projector imaging coordinate system; find the nearest pixel for which a defocus convolution kernel calibration map has already been obtained and use that map as the calibration map at the non-luminous pixel's position. At this point every pixel in the projector's imaging-plane coordinate system has a one-to-one corresponding defocus convolution kernel calibration map.
  • Preferably, so as not to disturb the projector's normal projection content during the above kernel calibration and to maximise the user experience, the special projection pattern used for defocus convolution kernel calibration can be projected by block scanning, as shown in Figure 4.
  • Block scanning divides the special projection pattern into a number of small rectangular areas of equal size that do not overlap; each time, the content of the special projection pattern within one small rectangular area is projected and captured, yielding the partial defocus convolution kernel calibration maps inside that area; traversing all the small rectangular areas yields the full calibration map.
  • Figure 4 takes the division of the special input pattern into 16 equal, non-overlapping small rectangular areas as an example to illustrate the implementation of block scanning.
  • In each frame of normal projection content, one small rectangular area is superimposed at its corresponding position in numbering order; the 16 small rectangular areas are fed to the projector over 16 superimposed frames, projected onto the projection surface and captured by the camera, and each superimposed frame accomplishes the calibration of the defocus convolution kernel within its small rectangular area. Integrating the calibration maps of all the small rectangular areas yields the global defocus convolution kernel calibration map.
  • In each of these frames only a small part of the normal projection pattern is occluded, which keeps the interference with the normal projection content to a minimum.
  • Step 2.1: According to the mathematical model established for the projector defocus process, for the input picture P waiting to be projected, convolve it with the defocus convolution kernel calibration maps f obtained in 1) to obtain the pre-blurred input picture

    P_{blur}(x_p, y_p) = (P * f)(x_p, y_p)

  • The pre-blurred input picture is shown in Figure 5(a): compared with the input picture, its details appear blurred and dulled, which is exactly what a blurred picture should look like.
  • Step 2.2: Divide the input picture by the pre-blurred input picture to obtain the edge perception matrix, as shown in Figure 5(a); it can be seen intuitively from Figure 5(a) that the edge perception matrix reflects the contours of the edge regions of the input picture. Specifically, the grey value of each pixel of the input picture is divided by the grey value of the pixel at the same position of the pre-blurred input picture:

    E(x_p, y_p) = P(x_p, y_p) / (P * f)(x_p, y_p)

  • where E is the edge perception matrix and E(x_p, y_p) is its element value at coordinates (x_p, y_p).
  • Step 2.3: Multiply the input picture by the edge perception matrix to obtain the compensation picture, as shown in Figure 5(b).
  • Compared with the input picture, the details of the compensation picture appear sharpened and reinforced, i.e. the compensation picture strengthens the high-frequency information located in the edge regions of the input picture. This is exactly what is desired: by strengthening the high-frequency features, the compensation picture reduces the loss of high-frequency information during the defocus process, so that the projection result becomes sharp.
  • Specifically, the above multiplication multiplies the grey value of each pixel of the input picture by the element value of the edge perception matrix at the same position; the compensation picture is computed as

    \tilde{P}(x_p, y_p) = P(x_p, y_p) \cdot E(x_p, y_p)

  • where \tilde{P} is the compensation picture. The above formula computes the compensation picture through the edge perception matrix, and the process of computing the compensation picture is called edge perception.
  • The compensation picture \tilde{P} replaces the input picture P that was originally fed to the projector and would produce a blurred projection; projecting the compensation picture \tilde{P} with the projector yields a sharp projection result after defocus compensation.
  • Before defocus correction, the projection result in Fig. 6 shows obvious defocus: the details of the lion's whiskers, the fur over the lion's body and the lion's right eye are all clearly blurred, and the edges of these details show no sharp, distinct boundaries. Compared with Fig. 6, after defocus correction the details of the whiskers, fur and right eye in Fig. 7 can be distinguished well, their edges show sharp and distinct boundaries, and the projection result is markedly improved and noticeably sharper.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Projection Apparatus (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

A projector defocus correction method based on edge perception. The method uses a projector-camera system in which the projector lens and the camera lens both face the projection surface; a special input image is fed to the projector and projected onto the projection surface, and the camera captures the projection result on the projection surface as the output image; the special input image and the output image are used for steps such as defocus convolution kernel calibration to obtain a compensation picture, which is used as the projector input to correct and compensate the picture to be projected, yielding a sharp projection result after defocus compensation. The method requires no adjustment of the projector's physical focal length: by processing only the original picture fed to the projector it sharpens the defocused projection result, meets the need to use projectors in complex application scenarios such as thermal defocus or projection surfaces with large height variations, and to a certain extent extends the applicability of projector equipment.

Description

一种基于边缘感知的投影仪失焦校正方法 技术领域
本发明涉及投影设备领域,主要涉及一种提高投影仪成像质量的方法,尤其涉及一种基于边缘感知的投影仪失焦校正方法。
背景技术
投影设备在视频放映、幻灯片演示、虚拟现实等领域具有广泛的应用。在上述应用中,投影仪在投影表面上的成像往往会出现模糊现象,影响用户体验,出现模糊现象的原因主要包括如下两方面:第一,当投影表面为平面且投影表面与投影方向垂直时,模糊现象主要由热失焦引发。具体地,由于投影仪光源属于大功率器件,产热量高,包括投影仪镜头在内的成像元件受热胀冷缩的影响会改变成像系统的光学特性,导致光源在投影表面无法汇聚,导致模糊现象。第二,当投影表面为存在大幅高度变化的非平面时,由于投影仪景深浅且只能具有单一焦距,投影仪不可能同时在存在大幅高度变化的投影表面处处聚焦,因此,此时投影仪不可避免地会出现成像模糊。随着虚拟现实技术的发展,投影表面为非平面的应用场景越来越普遍。综合以上,解决投影仪失焦导致的成像模糊问题,对提高投影仪的应用范围和适应性具有重要意义。
在现有研究中,对投影失焦问题主要有以下两种解决方案:第一,利用电机实现投影仪焦距的自动调节。王全强以投影仪成像的清晰度作为反馈,利用爬山算法确定电机的运动规律;那庆林等人以投影镜头的温度作为反馈,根据温度判断投影仪是否出现失焦现象并确定失焦的程度,并电机发出驱动命令。因此,第二,对输入投影仪的输入图片进行处理,重新生成一幅用于投影的补偿图片,用于抑制投影仪成像时的模糊现象。Zhang和Nayar根据投影仪失焦模型,对失焦模型中的失焦卷积核进行参数标定,并将补偿图片的计算处理为优化问题,采用最速下降法寻找优化问题的最优解。
上述方法主要存在如下问题:第一种调节焦距的方法主要用于解决投影仪 的热失焦现象,由于投影仪景深前且只能具有单一焦距,仅通过简单地焦距调节无法解决投影表面为存在大幅高度变化的非平面时的模糊现象。第二种基于补偿图片的方法不仅适用于投影仪热失焦的校正,也适用于由非平面的投影表面所导致的投影仪失焦。然而,补偿图片的计算通过迭代算法实现,效率较低,远不能满足失焦实时校正的需求,限制了这种方法的应用。此外,作为计算补偿图片的前置步骤,现有失焦卷积核标定的方法只能适用于平滑曲面等简单的、特定形状的非平面投影表面,对于复杂的非平面投影表面,这种方法会产生较大的失焦卷积核标定误差,降低失焦补偿的效果,进一步限制了基于补偿图片的投影仪失焦校正方法应用范围。
发明内容
针对上述问题,基于此,本发明提出了一种基于畸变全局修正的数字投影光栅图像拟合校正方法。
本发明所采用的技术方案是:
方法采用投影仪-相机系统,采集投影仪的投影结果,投影仪-相机系统包括:相机、投影仪和投影表面,投影仪的镜头和相机的镜头均朝向投影表面;将特殊输入图像输入投影仪投影照射至投影表面上,相机采集投影仪投影照射至投影表面后的投影结果作为输出图像;利用特殊输入图像和输出图像进行失焦卷积核标定等步骤获得补偿图片,将补偿图片作为投影仪的输入对投影仪待投影图片进行校正补偿,进行投影照射获得失焦补偿后清晰的投影结果。
所述特殊输入图像,包括稀疏点图和正弦条纹图序列;所述稀疏点图由若干长度和宽度均相等的正方形的发光像素点构成,每个发光像素点在稀疏点图中沿水平方向和竖直方向等距离矩形阵列分布,并且发光像素点以矩形阵列方式充满整个稀疏点图;所述正弦条纹图序列由若干沿水平方向/竖直方向偏移错位的正弦条纹图组成,每个正弦条纹图由沿竖直方向/水平方向分布的条纹构成,各条条纹沿水平方向/竖直方向的灰度呈现正弦性周期分布。
作为优选,本发明对投影仪的失焦过程建立如下数学模型:
Figure PCTCN2020128625-appb-000001
其中,*为卷积操作符;(x p,y p)和(x p-i,y p-j)均为投影仪成像平面坐标系中的坐标;P为输入图像;I 0为像素匹配后的输出图像;f为失焦卷积核,表征投影仪成像平面上某坐标处的光源照射至投影表面的失焦程度;r为失焦卷积核的半径。
上式中,I 0为像素匹配后的输出图像,区别于输入图像。所述输入图像,是输入投影仪的、位于投影仪成像平面坐标系下的图像;所述输出图像,是通过相机直接采集投影仪所产生的光源照射至投影表面后的投影结果所获得的。因此,输出图像,是位于相机成像平面坐标系下的图像反馈;像素匹配后的输出图像,是通过像素匹配,将位于相机成像平面坐标系下的输出图像,通过空间变换关系,变换至投影仪成像平面坐标系下所得的图像。所述像素匹配,通过投影仪投影正弦条纹图序列实现。
本发明为了实现失焦卷积核标定,将稀疏点图输入投影仪中并进行投影。对于稀疏点图中的每一个发光像素点,在投影仪成像平面中均有唯一的光源与之对应,该光源发出光束照射至投影表面。由于失焦现象的存在,该光源所发出的光束在投影表面无法汇聚于一点,而是在投影表面形成了具有一定直径的圆盘区域,该圆盘区域又称为弥散圆。根据上述分析,该弥散圆反映了投影仪光源在投影表面的失焦程度,可以作为失焦卷积核标定的依据。
作为优选,本发明通过相机采集投影仪光源在投影表面所产生的弥散圆的信息。由于相机采集的弥散圆位于相机成像平面坐标系中,为了获得相机采集的弥散圆与稀疏点图中的每一个发光像素点之间的一一对应关系,将像素匹配后的输出图像中的弥散圆作为失焦卷积核标定的依据。
所述失焦卷积核标定,包括如下步骤:
步骤1.1:将正弦条纹图序列的特殊输入图像输入投影仪(2),投影仪(2)将正弦条纹图序列中各个正弦条纹图依次投影照射至投影表面(3),相机(1) 依次采集各个正弦条纹图的投影结果作为输出图像;
步骤1.2:根据正弦条纹图序列对应获得的各个输出图像,求解空间变换关系;
步骤1.3:将稀疏点图的特殊输入图像输入投影仪(2),投影仪(2)将稀疏点图投影照射至投影表面(3),相机(1)依次采集稀疏点图的投影结果作为输出图像;
然后利用空间变换关系将稀疏点图的输出图像进行变换获得像素匹配后的输出图像,与输入投影仪的稀疏点图进行像素匹配;
所述空间变换关系,是位于相机成像平面坐标系下的输出图像,与位于投影仪成像平面坐标系下的输入图像之间的一一对应关系。像素匹配,是将位于相机成像平面坐标系下的输出图像,根据空间变换关系,变换至位于投影仪成像平面坐标系下的像素匹配后的输出图像的过程。
步骤1.4:对于稀疏点图中的每一个发光像素点,在像素匹配后的输出图像中以与该发光像素点的相同位置的点为圆心建立圆形邻域,由此获得与该发光像素点一一对应的弥散圆并进行归一化,归一化后的弥散圆作为投影仪坐标系中发光像素点所在位置的失焦卷积核的标定图;
步骤1.5:对于位于稀疏点图中的每一个不发光的像素点,寻找与其距离最近、且围成矩形区域的4个发光像素点,该矩形区域将该不发光的像素点包围在内;若找到满足上述要求的4个发光像素点,则结合步骤1.4中所获得的上述4个发光像素点所在位置的失焦卷积核标定图,通过双线性插值的方式,获得不发光的像素点所在位置的失焦卷积核的标定图;在进行上述双线性插值之前,寻找上述4个发光像素点所在位置的失焦卷积核的标定图中的最大半径;并将上述4个失焦卷积核中小于该最大半径的失焦卷积核扩充至该最大半径,扩充的部分的像素点补以零灰度值,使得上述4个失焦卷积核具有相同大小,即具有相同的行数和列数,这样能很好地进行上述双线性插值的操作;
步骤1.6:对于位于稀疏点图中的每一个不发光的像素点,若不能找到满足步骤1.5中要求的4个发光像素点,则说明该像素点位于投影仪成像坐标系的边 缘处,则找到与该不发光像素点距离最近的、已获得失焦卷积核标定图的像素点,以该失焦卷积核标定图作为该不发光的像素点所在位置的失焦卷积核的标定图;至此,对于投影仪成像平面坐标系中的每一个像素点,均有与之一一对应的失焦卷积核的标定图,完成失焦卷积核标定步骤。
所述弥散圆,由若干不同灰度值的两两相邻的像素点组成。
所述归一化后的弥散圆,是将弥散圆中的每一个灰度值,与该弥散圆中的最大灰度值相除后所得的结果。
采用分块扫描的方式对用于失焦卷积核标定的特殊输入图像进行投影;分块扫描是将特殊输入图像分割为若干大小相等且相互不重叠的小矩形区域,每次逐一投影和采集特殊输入图像在一个小矩形区域的内容,并获得该小矩形区域内的部分失焦卷积核的标定图;对小矩形区域进行遍历,获得失焦卷积核的标定图。这样不影响投影仪的正常投影内容,最大程度地提高用户体验。
所述弥散圆的圆形邻域半径r,按照以下方式处理获得:
步骤1.4.1:在以稀疏点图中发光像素点的所在位置为圆心、以1个像素为初始半径的圆形邻域内,计算稀疏点图所对应的像素匹配后的输出图像中位于该圆形邻域内所有像素点的灰度值之和,计算公式如下:
Figure PCTCN2020128625-appb-000002
其中,sum k表示第k次迭代中、以r k作为弥散圆的半径时,位于该弥散圆中所有像素点的灰度值之和为sum k;根据步骤1.4.1中的分析,第一次迭代中,弥散圆的初始半径为r 1=1;
步骤1.4.2:将圆形邻域的半径以1个像素为增量进行递增,重复步骤1.4.1中计算圆形邻域内所有像素点的灰度值之和的过程;比较第k次迭代所得弥散圆内所有像素点的灰度值之和,与第k+1次迭代所得弥散圆内所有像素点的灰度值之和;
步骤1.4.3:重复步骤1.4.2不断迭代处理,直到半径递增前后灰度值之和的差值小于某一阈值,停止迭代,将停止迭代时的圆形邻域的半径作为弥散圆的 半径;迭代的终止条件如下:
sum k+1-sum k<ε
其中ε表示迭代精度,当第k+1次迭代满足上述终止条件时,第k+1次迭代中的半径r k+1即为所求弥散圆的半径。
所述将补偿图片作为投影仪(2)的输入对投影仪(2)待投影图片进行校正补偿,具体为:
步骤2.1:对于输入投影仪中的、等待被投影的输入图片,将输入图片与失焦卷积核的标定图进行卷积,获得预模糊的输入图片;表示如下:
Figure PCTCN2020128625-appb-000003
步骤2.2:将输入图片与预模糊的输入图片相除,获得边缘感知矩阵;具体地,上述相除操作是将输入图片中每一个像素点的灰度值,除以预模糊的输入图片中相同位置的像素点的灰度值;计算方式如下:
Figure PCTCN2020128625-appb-000004
其中,E是边缘感知矩阵,E(x p,y p)是边缘感知矩阵在坐标(x p,y p)处的元素值。
步骤2.3:将输入图片与边缘感知矩阵相乘,获得补偿图片。具体地,上述相乘操作是将输入图片中每一个像素点的灰度值,乘以预模糊的输入图片中相同位置的像素点的灰度值。
补偿图片的计算方式如下:
Figure PCTCN2020128625-appb-000005
其中,
Figure PCTCN2020128625-appb-000006
为补偿图片。上式通过边缘感知矩阵计算补偿图片,计算补偿图片的过程称为边缘感知。将上式计算所得的补偿图片
Figure PCTCN2020128625-appb-000007
替换原本输入投影仪中的、会导致投影结果模糊的输入图片P,通过投影仪投影补偿图片
Figure PCTCN2020128625-appb-000008
即可获得失焦 补偿后的清晰的投影结果。
所述步骤1.2中,根据正弦条纹图序列对应获得的各个输出图像,求解空间变换关系,具体为:
每个正弦条纹图对应的输出图像中某一像素点处的灰度值为:
Figure PCTCN2020128625-appb-000009
其中,(x c,y c)为相机成像平面坐标系中的坐标点;I kn(x c,y c)是正弦条纹序列投影到投影表面后被相机采集的图像中位于像素点(x c,y c)处的灰度值,其中,下标kn表示以为f k频率的正弦条纹图的第n次投影,A(x c,y c)是像素点(x c,y c)处的背景光强;B(x c,y c)是像素点(x c,y c)处的调制幅度;
Figure PCTCN2020128625-appb-000010
是像素点(x c,y c)处所对应的相位值;
各个正弦条纹图依次投影后通过四步相移法求解获得位于每个像素点(x c,y c)处的相位值
Figure PCTCN2020128625-appb-000011
然后采用以下公式获得投影仪坐标系中和像素点(x c,y c)对应的坐标(x p,y p),计算方式如下:
Figure PCTCN2020128625-appb-000012
上式中,T表示空间变换关系,即从相机成像平面坐标系到投影仪成像平面坐标系之间的一一对应关系;T(x c,y c)表示与相机成像平面坐标系中(x c,y c)对应的、位于投影仪成像平面坐标系中的坐标点是(x p,y p);f k是正弦条纹图中条纹沿水平方向/竖直方向的灰度变化的频率,下标k表示第k个频率。
本发明根据失焦卷积核标定图对原始图片进行边缘感知,强化原始图片中的高频细节,并生成补偿图片;将补偿图片输入投影仪并进行投影,即可获得失焦补偿后的清晰的投影结果。
本发明的技术效果如下:
本发明仅通过对输入投影仪的原始图片进行处理,即可实现投影仪失焦校正,无需调节投影仪物理焦距,解决了传统的投影仪失焦校正方法需要通过安 装电机调节投影仪镜头改变物理焦距的不便,降低了对投影仪实现失焦校正的硬件要求,提高了投影仪失焦校正的便利性和可操作性。
本发明通过边缘感知计算补偿图片,解决了现有的基于补偿图片的失焦校正技术中普遍未考虑的计算效率问题,避免了传统的补偿图片计算方法所需的复杂、耗时的迭代过程,既提高了计算补偿图片的效率,又保证了补偿图片计算的质量,使实时的投影仪失焦校正成为了可能,扩大了投影仪失焦校正方法的适用性。
本发明通过投影稀疏点图和正弦条纹图序列实现失焦卷积核标定,可实现失焦卷积核在任意形状的投影表面的高精度标定,解决了传统失焦卷积核标定方法只能用于特定形状的投影表面、在复杂投影表面存在较大标定误差的局限性,保证了后续补偿图片计算以及失焦校正过程的准确性。
本发明可以通过将特殊输入图案分割为小矩形区域的并进行逐一扫描式投影的方式,实现失焦卷积核的分块标定,并通过整合所有分块标定的结果获得所需的失焦卷积核的全局标定图,既避免了失焦卷积核标定对用户正常投影内容的干扰、提高用户了使用体验,又扩大了失焦卷积核的取样范围、实现全局标定,避免了采用某一局部区域作为失焦校正的整体依据,提高了失焦校正的效果。
本发明方法可有效地改善投影仪失焦模糊的现象,满足投影仪在存在热失焦、投影表面存在大幅高度变化等复杂应用场景下使用投影仪的需求,一定程度地扩大了投影仪设备的适应性。
附图说明
图1为本发明投影仪-相机系统布置连接以及本发明所实现效果的示意图;
图2为本发明方法流程图;
图3为图2所述方法中卷积核标定步骤的示意图;
图4为实施例中采用分块扫描的方式实现图3中卷积核标定步骤的示意图;
图5为图2所述方法中补偿图片计算步骤的示意图;
图6为实施例中失焦校正前的投影仪的投影结果;
图7为实施例中采用本发明方法进行失焦校正后的投影仪的投影结果。
图中:相机1、投影仪2、投影表面3。
具体实施方式
下面结合图及具体实例对本发明作进一步说明。
具体实施采用投影仪-相机系统采集投影仪的投影结果,如图1所示,投影仪-相机系统包括:相机1、投影仪2、投影表面3;投影仪2的镜头和相机1的镜头均朝向投影表面3;将特殊输入图像输入投影仪2产生光源照射至投影表面3上,相机1采集投影仪2所产生的光源照射至投影表面3后的投影结果作为输出图像;结合特殊输入图像和输出图像依次进行失焦卷积核标定、补偿图片计算两个步骤,如图2所示;将计算所得的补偿图片作为投影仪2的输入,获得失焦补偿后的清晰的投影结果。
本发明实施例如下:
在本实施例中,采用DLP投影仪投影输入图像,采用普通CMOS相机采集输入图像在投影表面的投影结果作为输出图像。受到投影仪热失焦,以及投影表面存在大幅的高度变化等影响,投影结果中存在失焦模糊的现象。为了对上述失焦模糊的现象进行校正,将投影仪失焦模糊的数学模型建立如下:
Figure PCTCN2020128625-appb-000013
其中,*为卷积操作符;(x p,y p)和(x p-i,y p-j)均为投影仪成像平面坐标系中的坐标;P为输入图像,P(x p,y p)表示输入图像中的坐标(x p,y p)的像素点;I 0为像素匹配后的输出图像;f为失焦卷积核,表征投影仪成像平面上某坐标处的光源照射至投影表面的失焦程度;r为失焦卷积核的半径。
根据上式,投影仪失焦校正的任务,等同于求解一个特殊的输入图像
Figure PCTCN2020128625-appb-000014
该特殊的输入图像
Figure PCTCN2020128625-appb-000015
应满足如下条件:
Figure PCTCN2020128625-appb-000016
上述特殊的输入图像
Figure PCTCN2020128625-appb-000017
又称为补偿图像。由于输入图像P是已知的,为了求解补偿图片
Figure PCTCN2020128625-appb-000018
需要对失焦卷积核f进行标定。
1)失焦卷积核标定包括如下步骤:
步骤1.1:将正弦条纹图序列的特殊输入图像输入投影仪(2),投影仪(2)将正弦条纹图序列中各个正弦条纹图依次投影照射至投影表面(3),相机(1)依次采集各个正弦条纹图的投影结果作为输出图像。正弦条纹图序列所读应的由相机采集的输出图像如图3(d)所示。
步骤1.2:根据正弦条纹图序列所对应的输出图像,求解空间变换关系。空间变换的求解结果如图3(e)所示。
通过投影正弦条纹图序列求解空间变换关系,实现像素匹配。每个正弦条纹图对应的输出图像中某一像素点处的灰度值为:
Figure PCTCN2020128625-appb-000019
其中,(x c,y c)为相机成像平面坐标系中的坐标点;I kn(x c,y c)是正弦条纹序列投影到投影表面后被相机采集的图像中位于(x c,y c)处的灰度值,其中,下标kn表示以为f k频率的正弦条纹图的第n次投影,f k是正弦条纹图中条纹沿水平方向/竖直方向的灰度变化的频率,下标k表示第k个频率,下标n表示第n次;A(x c,y c)是(x c,y c)处的背景光强;B(x c,y c)是(x c,y c)处的调制幅度;
Figure PCTCN2020128625-appb-000020
是(x c,y c)处所对应的相位值。
各个正弦条纹图依次投影后通过四步相移法求解获得位于每个像素点(x c,y c)处的相位值
Figure PCTCN2020128625-appb-000021
所述相位值
Figure PCTCN2020128625-appb-000022
与投影仪成像平面中的坐标(x p,y p)满足如下关系:
Figure PCTCN2020128625-appb-000023
其中,水平正弦条纹图是条纹沿水平方向分布的正弦条纹图;竖直正弦条 纹图是条纹沿竖直方向分布的正弦条纹图;f k是正弦条纹图中条纹沿水平方向/竖直方向的灰度变化的频率。由上式可以看出,通过投影水平/竖直正弦条纹图,并求解位于(x c,y c)处的相位值
Figure PCTCN2020128625-appb-000024
确定位于相机坐标系中的坐标(x c,y c)与投影仪坐标系中的坐标(x p,y p)之间的一一对应关系,也确定了用于像素匹配的空间变换关系,计算方式如下:
T(x c,y c)=(x p,y p),其中,
Figure PCTCN2020128625-appb-000025
上式中,T表示空间变换关系,即从相机成像平面坐标系到投影仪成像平面坐标系之间的一一对应关系;T(x c,y c)表示与相机成像平面坐标系中(x c,y c)对应的、位于投影仪成像平面坐标系中的坐标点是(x p,y p);其中,x p通过投影竖直正弦条纹图求解,y p通过投影水平正弦条纹图求解。竖直正弦条纹图如图3(d)左图所示,水平正弦条纹图如图3(d)右图所示。求解所得的空间变换关系T如图3(e)所示。
在图3(e)中,对于竖直正弦条纹图所得的空间变换关系,揭示输出图像中的坐标点(x c,y c)与投影仪成像平面坐标系中y p之间的一一对应关系;对于水平正弦条纹图所得的空间变换关系,揭示输出图像中的坐标点(x c,y c)与投影仪成像平面坐标系中x p之间的一一对应关系。
对于输出图像中的每一个坐标点(x c,y c),根据空间变换关系T,将输出图像中位于(x c,y c)处的内容,变换至(x p,y p)处,即可获得像素匹配后的输出图像,如图3(c)所示。
步骤1.3:将稀疏点图的特殊输入图像输入投影仪(2),投影仪(2)将稀疏点图投影照射至投影表面(3),相机(1)依次采集稀疏点图的投影结果作为输出图像。稀疏点图如图3(a)所示,稀疏点图所对应的由相机采集的输出图像如图3(b)所示。
然后利用空间变换关系将稀疏点图的输出图像进行变换获得像素匹配后的 输出图像,与输入投影仪的稀疏点图进行像素匹配,如图3(c)所示。
本发明利用稀疏点图中的发光像素点(如图3(a)右上方所示),在输出图案中所呈现的弥散圆(如图3(c)所示),实现失焦卷积核的标定。由于输入图像位于投影仪成像平面坐标系,输出图像位于相机成像平面坐标系,为了方便地获得弥散圆与发光像素点之间的一一对应关系,本发明利用空间变换关系,将输出图案变换至与输入图案相同的坐标系下,并将变换后的输出图案称为像素匹配后的输出图像,如图3(c)所示。
步骤1.4:对于位于稀疏点图中的每一个发光像素点,在稀疏点图所对应的像素匹配后的输出图像中的相同位置、以及以该位置为圆心的圆形邻域内,可以获得与该发光像素点一一对应的弥散圆;上述圆形邻域的半径作为圆形邻域的半径;所述弥散圆,由若干不同灰度值的两两相邻的像素点组成;所述归一化后的弥散圆,是将弥散圆中的每一个灰度值,与该弥散圆中的最大灰度值相除后所得的结果。
不妨设稀疏点图所对应地像素匹配后的输出图像为I M,弥散圆的半径为r,弥散圆可以通过矩阵表示如下:
Figure PCTCN2020128625-appb-000026
其中,M(x p,y p)是位于(x p,y p)处的弥散圆;I M(x p-i,y p-j)是像素匹配后的输出图像中位于(x p-i,y p-j)处的灰度值;r为弥散圆的半径。由上式可以看出,由于I M是已知的,M(x p,y p)取决于弥散圆的半径r。
所述弥散圆的半径r,根据如下迭代步骤进行确定:
步骤1.4.1:在以稀疏点图中发光像素点的所在位置为圆心、以1个像素为初始半径的圆形邻域内,计算稀疏点图所对应的像素匹配后的输出图像中位于该圆形邻域内所有像素点的灰度值之和,计算公式如下:
Figure PCTCN2020128625-appb-000027
其中,sum k表示第k次迭代中、以r k作为弥散圆的半径时,位于该弥散圆中所有像素点的灰度值之和为sum k。根据步骤1.4.1中的分析,第一次迭代中,弥散圆的初始半径为r 1=1。
步骤1.4.2:将圆形邻域的半径以1个像素为增量进行递增,重复步骤1.4.1中计算圆形邻域内所有像素点的灰度值之和的过程;比较第k次迭代所得弥散圆内所有像素点的灰度值之和,与第k+1次迭代所得弥散圆内所有像素点的灰度值之和;
步骤1.4.3:重复步骤1.4.2不断迭代处理,直到半径递增前后灰度值之和的差值小于某一阈值,停止迭代,将停止迭代时的圆形邻域的半径作为弥散圆的半径。迭代的终止条件如下:
sum k+1-sum k<ε
其中ε表示迭代精度,当第k+1次迭代满足上述终止条件时,第k+1次迭代中的半径r k+1即为所求弥散圆的半径。
将归一化后的弥散圆作为投影仪坐标系中发光像素点所在位置的失焦卷积核的标定图,计算公式如下:
Figure PCTCN2020128625-appb-000028
其中f(x p,y p)是位于(x p,y p)处的卷积核标定图,M(x p,y p)是位于(x p,y p)处的弥散圆,max[M(x p,y p)]是弥散圆M(x p,y p)中的最大元素值。
图3(c)右图所示为失焦卷积核标定图的示例。图3(c)中的卷积核标定图表现出中心亮、边缘暗的类似于高斯分布的分布特征。
步骤1.5:对于位于稀疏点图中的每一个不发光的像素点,寻找与其距离最近、且可以围成矩形区域的4个发光像素点,该矩形区域将该不发光的像素点包围在内;若可以找到满足上述要求的4个发光像素点,则结合步骤1.4中所获得的上述4个发光像素点所在位置的失焦卷积核标定图,通过双线性插值的方式,获得不发光的像素点所在位置的失焦卷积核的标定图。
不妨设稀疏点图中某一个不发光的像素点位于(x p,y p),满足步骤1.5中条件 的4个发光像素点在投影仪成像平面坐标系中由左至右、由上至下的坐标分别为(x 1,y 1)、(x 1,y 2)、(x 2,y 1)、(x 2,y 2),且上述4个发光像素点处所对应的失焦卷积核的标定图分别为f 1、f 2、f 3、f 4
在进行上述双线性插值之前,寻找上述4个发光像素点所在位置的失焦卷积核的标定图中的最大半径;并将上述4个失焦卷积核中小于该最大半径的失焦卷积核扩充至该最大半径,扩充的部分的像素点补以零灰度值,保证失焦卷积核f 1、f、f、fj具有相同大小,即具有相同的行数和列数,以便于进行上述双线性插值的操作。
根据双线性差值的原理,上述不发光的像素点所在位置(x p,y p)处的失焦卷积核的标定图为:
Figure PCTCN2020128625-appb-000029
步骤1.6:对于位于稀疏点图中的每一个不发光的像素点,若不能找到满足步骤1.5中要求的4个发光像素点,则说明该像素点位于投影仪成像坐标系的边缘处,则找到与该不发光像素点距离最近的、已获得失焦卷积核标定图的像素点,以该失焦卷积核标定图作为该不发光的像素点所在位置的失焦卷积核的标定图;至此,对于投影仪成像平面坐标系中的每一个像素点,均有与之一一对应的失焦卷积核的标定图。
作为优选,本发明在上述卷积核标定过程中,为了不影响投影仪的正常投影内容,最大程度地提高用户体验,可以采用分块扫描的方式对用于失焦卷积核标定的特殊投影图案进行投影,如图4所示。所述分块扫描是将特殊投影图案分割为若干大小相等且相互不重叠的小矩形区域,每次逐一投影和采集特殊投影图案在一个小矩形区域的内容,并获得该小矩形区域内的部分失焦卷积核的标定图;对所有小矩形区域进行遍历,即可获得失焦卷积核的标定图。
图4中以将特殊输入图案划分成16个大小相等且互不重叠的小矩形区域为 例,对分块扫描的具体实现方式进行揭示。在每帧正常投影图案中,按照编号顺序依次将小矩形区域叠加在正常投影图案的相应位置,16个小矩形区域依托16帧被叠加的正常投影图案被输入投影仪并投影至投影表面、并被相机采集,每帧被叠加的正常投影图案实现失焦卷积核在对应小矩形区域内的标定。将所有小矩形区域内的失焦卷积核的标定图进行整合,即可获得全局的失焦卷积核标定图。在上述每帧输入投影仪的图像中,正常投影图案仅有小部分区域被遮挡,保证最大程度地降低对正常投影图案地干扰。
1)补偿图片计算包括如下步骤:
步骤2.1:根据本发明对投影仪的失焦过程所建立的数学模型,对于输入投影仪中的、等待被投影的输入图片P,将其1)中所获得的失焦卷积核的标定图f进行卷积,获得预模糊的输入图片如下。预模糊的输入图片如图5(a)所示。在图5(a)中,与输入图片相比,预模糊的输入图片中细节处表现出模糊和钝化,符合模糊图片应该具有的特征。
Figure PCTCN2020128625-appb-000030
步骤2.2:将输入图片与预模糊的输入图片相除,获得边缘感知矩阵,如图5(a)所示。从图5(a)可以直观地观察到,边缘感知矩阵反映输入图片中边缘区域的轮廓。具体地,上述相除操作是将输入图片中每一个像素点的灰度值,除以预模糊的输入图片中相同位置的像素点的灰度值,计算方式如下:
Figure PCTCN2020128625-appb-000031
其中,E是边缘感知矩阵,E(x p,y p)是边缘感知矩阵在坐标(x p,y p)处的元素值。
步骤2.3:将输入图片与边缘感知矩阵相乘,获得补偿图片,如图5(b)所示。在图5(b)中,与输入图片相比,补偿图片中细节处表现出锐化和加强的特点,即补偿图片加强了位于输入图片中边缘区域的高频信息,这也是所希望 的,因为补偿图片通过加强高频特征来减少失焦过程中图片中高频信息的损失,使得投影结果变得清晰。具体地,上述相乘操作是将输入图片中每一个像素点的灰度值,乘以预模糊的输入图片中相同位置的像素点的灰度值。补偿图片的计算方式如下:
Figure PCTCN2020128625-appb-000032
其中,
Figure PCTCN2020128625-appb-000033
为补偿图片。上式通过边缘感知矩阵计算补偿图片,计算补偿图片的过程称为边缘感知。
将上式计算所得的补偿图片
Figure PCTCN2020128625-appb-000034
替换原本输入投影仪中的、会导致投影结果模糊的输入图片P,通过投影仪投影补偿图片
Figure PCTCN2020128625-appb-000035
即可获得失焦补偿后的清晰的投影结果。
为表明本发明所实现的投影仪失焦校正的效果,在根据上述所发所进行的一个实例中,失焦校正前的投影结果如图6所示,失焦校正后的投影结果如图7所示。
失焦校正前,图6中,投影结果出现明显的失焦现象,具体地,在狮子的胡须、遍布狮子全身的绒毛、狮子右眼睛等部位的细节特征均表现出明显的模糊,上述细节特征的边缘未表现出清晰、明显的分界;与图6相比,失焦校正后,在图7中,狮子的胡须、绒毛、右眼睛等部位的细节特征均可以被很好地分辨,上述细节特征的边缘也表现出清晰、明显的分界,投影结果有明显的改善,投影结果更加清晰。

Claims (8)

  1. 一种基于边缘感知的投影仪失焦校正方法,其特征在于:方法采用投影仪-相机系统,投影仪-相机系统包括:相机(1)、投影仪(2)和投影表面(3),投影仪(2)的镜头和相机(1)的镜头均朝向投影表面(3);将特殊输入图像输入投影仪(2)投影照射至投影表面(3)上,相机(1)采集投影仪(2)投影照射至投影表面(3)后的投影结果作为输出图像;利用特殊输入图像和输出图像进行失焦卷积核标定等步骤获得补偿图片,将补偿图片作为投影仪(2)的输入对投影仪(2)待投影图片进行校正补偿,进行投影照射获得失焦补偿后清晰的投影结果。
  2. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述特殊输入图像,包括稀疏点图和正弦条纹图序列;所述稀疏点图由若干长度和宽度均相等的正方形的发光像素点构成,每个发光像素点在稀疏点图中沿水平方向和竖直方向等距离矩形阵列分布;所述正弦条纹图序列由若干沿水平方向/竖直方向偏移错位的正弦条纹图组成,每个正弦条纹图由沿竖直方向/水平方向分布的条纹构成。
  3. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述失焦卷积核标定,包括如下步骤:
    步骤1.1:将正弦条纹图序列的特殊输入图像输入投影仪(2),投影仪(2)将正弦条纹图序列中各个正弦条纹图依次投影照射至投影表面(3),相机(1)依次采集各个正弦条纹图的投影结果作为输出图像;
    步骤1.2:根据正弦条纹图序列对应获得的各个输出图像,求解空间变换关系;
    步骤1.3:将稀疏点图的特殊输入图像输入投影仪(2),投影仪(2)将稀疏点图投影照射至投影表面(3),相机(1)依次采集稀疏点图的投影结果作为输出图像;
    然后利用空间变换关系将稀疏点图的输出图像进行变换获得像素匹配后的 输出图像;
    步骤1.4:对于稀疏点图中的每一个发光像素点,在像素匹配后的输出图像中以与该发光像素点的相同位置的点为圆心建立圆形邻域,由此获得与该发光像素点一一对应的弥散圆并进行归一化,归一化后的弥散圆作为投影仪坐标系中发光像素点所在位置的失焦卷积核的标定图;
    步骤1.5:对于位于稀疏点图中的每一个不发光的像素点,寻找与其距离最近、且围成矩形区域的4个发光像素点,该矩形区域将该不发光的像素点包围在内;若找到满足上述要求的4个发光像素点,则结合步骤1.4中所获得的上述4个发光像素点所在位置的失焦卷积核标定图,通过双线性插值的方式,获得不发光的像素点所在位置的失焦卷积核的标定图;
    在进行上述双线性插值之前,寻找上述4个发光像素点所在位置的失焦卷积核的标定图中的最大半径;并将上述4个失焦卷积核中小于该最大半径的失焦卷积核扩充至该最大半径,扩充的部分的像素点补以零灰度值,使得上述4个失焦卷积核具有相同大小;
    步骤1.6:对于位于稀疏点图中的每一个不发光的像素点,若不能找到满足步骤1.5中要求的4个发光像素点,则说明该像素点位于投影仪成像坐标系的边缘处,则找到与该不发光像素点距离最近的、已获得失焦卷积核标定图的像素点,以该失焦卷积核标定图作为该不发光的像素点所在位置的失焦卷积核的标定图;至此,对于投影仪成像平面坐标系中的每一个像素点,均有与之一一对应的失焦卷积核的标定图。
  4. 如权利要求3所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述归一化后的弥散圆,是将弥散圆中的每一个灰度值,与该弥散圆中的最大灰度值相除后所得的结果。
  5. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:采用分块扫描的方式对用于失焦卷积核标定的特殊输入图像进行投影;分 块扫描是将特殊输入图像分割为若干大小相等且相互不重叠的小矩形区域,每次逐一投影和采集特殊输入图像在一个小矩形区域的内容,并获得该小矩形区域内的部分失焦卷积核的标定图;对小矩形区域进行遍历,获得失焦卷积核的标定图。
  6. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述弥散圆的圆形邻域半径r,按照以下方式处理获得:
    步骤1.4.1:在以稀疏点图中发光像素点的所在位置为圆心、以1个像素为初始半径的圆形邻域内,计算稀疏点图所对应的像素匹配后的输出图像中位于该圆形邻域内所有像素点的灰度值之和,计算公式如下:
    Figure PCTCN2020128625-appb-100001
    其中,sum k表示第k次迭代中、以r k作为弥散圆的半径时,位于该弥散圆中所有像素点的灰度值之和为sum k
    步骤1.4.2:将圆形邻域的半径以1个像素为增量进行递增,重复步骤1.4.1中计算圆形邻域内所有像素点的灰度值之和的过程;比较第k次迭代所得弥散圆内所有像素点的灰度值之和,与第k+1次迭代所得弥散圆内所有像素点的灰度值之和;
    步骤1.4.3:重复步骤1.4.2不断迭代处理,直到半径递增前后灰度值之和的差值小于某一阈值,停止迭代,将停止迭代时的圆形邻域的半径作为弥散圆的半径;迭代的终止条件如下:
    sum k+1-sum k<ε
    其中ε表示迭代精度,当第k+1次迭代满足上述终止条件时,第k+1次迭代中的半径r k+1即为所求弥散圆的半径。
  7. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述将补偿图片作为投影仪(2)的输入对投影仪(2)待投影图片进行校正补偿,具体为:
    步骤2.1:对于输入投影仪中的、等待被投影的输入图片,将输入图片与失焦卷积核的标定图进行卷积,获得预模糊的输入图片;
    步骤2.2:将输入图片与预模糊的输入图片相除,获得边缘感知矩阵;具体地,上述相除操作是将输入图片中每一个像素点的灰度值,除以预模糊的输入图片中相同位置的像素点的灰度值;
    步骤2.3:将输入图片与边缘感知矩阵相乘,获得补偿图片。
  8. 如权利要求1所述的一种基于边缘感知的投影仪失焦校正方法,其特征在于:所述步骤1.2中,根据正弦条纹图序列对应获得的各个输出图像,求解空间变换关系,具体为:
    每个正弦条纹图对应的输出图像中某一像素点处的灰度值为:
    Figure PCTCN2020128625-appb-100002
    其中,(x c,y c)为相机成像平面坐标系中的坐标点;I kn(x c,y c)是正弦条纹序列投影到投影表面后被相机采集的图像中位于像素点(x c,y c)处的灰度值,其中,下标kn表示以为f k频率的正弦条纹图的第n次投影,A(x c,y c)是像素点(x c,y c)处的背景光强;B(x c,y c)是像素点(x c,y c)处的调制幅度;φ(x c,y c)是像素点(x c,y c)处所对应的相位值;
    各个正弦条纹图依次投影后通过四步相移法求解获得位于每个像素点(x c,y c)处的相位值φ(x c,y c),然后采用以下公式获得投影仪坐标系中和像素点(x c,y c)对应的坐标(x p,y p),计算方式如下:
    T(x c,y c)=(x p,y p),其中,
    Figure PCTCN2020128625-appb-100003
    上式中,T表示空间变换关系,即从相机成像平面坐标系到投影仪成像平面坐标系之间的一一对应关系;T(x c,y c)表示与相机成像平面坐标系中(x c,y c)对应的、位于投影仪成像平面坐标系中的坐标点是(x p,y p);f k是正弦条 纹图中条纹沿水平方向/竖直方向的灰度变化的频率,下标k表示第k个频率。
PCT/CN2020/128625 2020-01-15 2020-11-13 一种基于边缘感知的投影仪失焦校正方法 WO2021143330A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010043774.8A CN111311686B (zh) 2020-01-15 2020-01-15 一种基于边缘感知的投影仪失焦校正方法
CN202010043774.8 2020-01-15

Publications (1)

Publication Number Publication Date
WO2021143330A1 true WO2021143330A1 (zh) 2021-07-22

Family

ID=71148762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128625 WO2021143330A1 (zh) 2020-01-15 2020-11-13 一种基于边缘感知的投影仪失焦校正方法

Country Status (2)

Country Link
CN (1) CN111311686B (zh)
WO (1) WO2021143330A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740681A (zh) * 2022-04-19 2022-07-12 深圳市和天创科技有限公司 一种配置旋转镜头的单片液晶投影仪的智能测距调节系统
CN114885136A (zh) * 2021-11-16 2022-08-09 海信视像科技股份有限公司 投影设备和图像校正方法
CN114885136B (zh) * 2021-11-16 2024-05-28 海信视像科技股份有限公司 投影设备和图像校正方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311686B (zh) * 2020-01-15 2023-05-02 浙江大学 一种基于边缘感知的投影仪失焦校正方法
CN111986144A (zh) * 2020-07-08 2020-11-24 深圳市景阳科技股份有限公司 一种图像模糊判断方法、装置、终端设备及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110298960A1 (en) * 2008-12-09 2011-12-08 Seiko Epson Corporation View Projection Matrix Based High Performance Low Latency Display Pipeline
CN103974011A (zh) * 2013-10-21 2014-08-06 浙江大学 一种投影图像模糊消除方法
US20150189267A1 (en) * 2013-12-27 2015-07-02 Sony Corporation Image projection device and calibration method thereof
CN107786816A (zh) * 2017-09-14 2018-03-09 天津大学 基于曝光补偿的自适应投影方法
CN108981611A (zh) * 2018-07-25 2018-12-11 浙江大学 一种基于畸变全局修正的数字投影光栅图像拟合校正方法
TWI678926B (zh) * 2018-09-25 2019-12-01 華碩電腦股份有限公司 投影方法及投影系統
CN111311686A (zh) * 2020-01-15 2020-06-19 浙江大学 一种基于边缘感知的投影仪失焦校正方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011086456A1 (de) * 2011-11-16 2013-05-16 Siemens Aktiengesellschaft Rekonstruktion von Bilddaten
US10080004B2 (en) * 2014-11-06 2018-09-18 Disney Enterprises, Inc. Method and system for projector calibration
CN106408556B (zh) * 2016-05-23 2019-12-03 东南大学 一种基于一般成像模型的微小物体测量系统标定方法
CN108168464B (zh) * 2018-02-09 2019-12-13 东南大学 针对条纹投影三维测量系统离焦现象的相位误差校正方法
CN108259869B (zh) * 2018-02-26 2020-08-04 神画科技(深圳)有限公司 一种投影机及其梯形校正的温度补偿方法
CN108592824B (zh) * 2018-07-16 2020-06-30 清华大学 一种基于景深反馈的变频条纹投影结构光测量方法
CN109827502B (zh) * 2018-12-28 2020-03-17 北京航空航天大学 一种标定点图像补偿的线结构光视觉传感器高精度标定方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110298960A1 (en) * 2008-12-09 2011-12-08 Seiko Epson Corporation View Projection Matrix Based High Performance Low Latency Display Pipeline
CN103974011A (zh) * 2013-10-21 2014-08-06 浙江大学 一种投影图像模糊消除方法
US20150189267A1 (en) * 2013-12-27 2015-07-02 Sony Corporation Image projection device and calibration method thereof
CN107786816A (zh) * 2017-09-14 2018-03-09 天津大学 基于曝光补偿的自适应投影方法
CN108981611A (zh) * 2018-07-25 2018-12-11 浙江大学 一种基于畸变全局修正的数字投影光栅图像拟合校正方法
TWI678926B (zh) * 2018-09-25 2019-12-01 華碩電腦股份有限公司 投影方法及投影系統
CN111311686A (zh) * 2020-01-15 2020-06-19 浙江大学 一种基于边缘感知的投影仪失焦校正方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885136A (zh) * 2021-11-16 2022-08-09 海信视像科技股份有限公司 投影设备和图像校正方法
CN114885136B (zh) * 2021-11-16 2024-05-28 海信视像科技股份有限公司 投影设备和图像校正方法
CN114740681A (zh) * 2022-04-19 2022-07-12 深圳市和天创科技有限公司 一种配置旋转镜头的单片液晶投影仪的智能测距调节系统
CN114740681B (zh) * 2022-04-19 2023-10-03 深圳市和天创科技有限公司 一种配置旋转镜头的单片液晶投影仪的智能测距调节系统

Also Published As

Publication number Publication date
CN111311686A (zh) 2020-06-19
CN111311686B (zh) 2023-05-02

Similar Documents

Publication Publication Date Title
WO2021143330A1 (zh) 一种基于边缘感知的投影仪失焦校正方法
JP7291244B2 (ja) プロジェクタの台形補正方法、装置、システム及び読み取り可能な記憶媒体
RU2716843C1 (ru) Цифровая коррекция аберраций оптической системы
JP6484706B2 (ja) 画像を記録するための装置および方法
US10684537B2 (en) Camera-assisted arbitrary surface characterization and correction
Green et al. Multi-aperture photography
US20070286514A1 (en) Minimizing image blur in an image projected onto a display surface by a projector
US20200082557A1 (en) Device and method for producing a three-dimensional image of an object
JP2017169202A (ja) プレノプティック・イメージング・システムのオブジェクト空間較正
JP2008511859A (ja) 球面収差範囲が制御され中央を掩蔽した絞りを有する多焦点距離レンズを使用した拡張焦点深度
JP5615393B2 (ja) 画像処理装置、撮像装置、画像処理方法、画像処理プログラム、および、記憶媒体
JP2017526964A (ja) 画像を記録するための装置および方法
CN111083457A (zh) 多光机投影图像的校正方法、装置和多光机投影仪
JP2016024052A (ja) 3次元計測システム、3次元計測方法及びプログラム
JP2017208641A (ja) 圧縮センシングを用いた撮像装置、撮像方法および撮像プログラム
CN109474814A (zh) 投影仪的二维校准方法、投影仪以及校准系统
Liu et al. Large depth-of-field 3D measurement with a microscopic structured-light system
JP2020036310A (ja) 画像処理方法、画像処理装置、撮像装置、レンズ装置、プログラム、記憶媒体、および、画像処理システム
JP2013254097A (ja) 画像処理装置及びその制御方法、並びにプログラム
US10656406B2 (en) Image processing device, imaging device, microscope system, image processing method, and computer-readable recording medium
TW202206888A (zh) 振鏡的參數調節方法、裝置、設備
WO2019104670A1 (zh) 深度值确定方法和装置
US20070206847A1 (en) Correction of vibration-induced and random positioning errors in tomosynthesis
JP2007081611A (ja) 表示画面補正パラメータ設定方法
CN109587463A (zh) 投影仪的校准方法、投影仪及校准系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914061

Country of ref document: EP

Kind code of ref document: A1