CN113592995A - Multiple reflected light separation method based on parallel single-pixel imaging

Multiple reflected light separation method based on parallel single-pixel imaging

Info

Publication number
CN113592995A
CN113592995A (application CN202110849254.0A)
Authority
CN
China
Prior art keywords
reflected light
light
pixel
camera
primary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110849254.0A
Other languages
Chinese (zh)
Other versions
CN113592995B (en)
Inventor
姜宏志
闫雍靖
赵慧洁
李旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110849254.0A priority Critical patent/CN113592995B/en
Publication of CN113592995A publication Critical patent/CN113592995A/en
Application granted granted Critical
Publication of CN113592995B publication Critical patent/CN113592995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a multiple-reflected-light separation method based on parallel single-pixel imaging, which applies parallel single-pixel imaging to the decomposition of aliased rays on multiply reflecting surfaces and can separate the primary, secondary, tertiary and higher-order reflected-light components. First, a projector projects sinusoidal grayscale fringes onto the scene under test, and a camera acquires the modulation information in the scene. Second, phase subtraction is applied to the images acquired by the camera to obtain Fourier coefficients, and an inverse Fourier transform of those coefficients yields the light-transport coefficient image under the projector's viewpoint. The primary and secondary reflected light are then separated using the epipolar constraint and a three-dimensional model, realizing the separation on the multiply reflecting surface. The method is the inverse of conventional illumination rendering: a camera-projector system directly acquires the composite illumination result of the scene and completes the separation of the aliased rays, which plays an important role in promoting the development of optical measurement, computer vision and computer graphics.

Description

Multiple reflected light separation method based on parallel single-pixel imaging
Technical Field
The invention relates to a multiple-reflected-light separation method based on parallel single-pixel imaging, which applies parallel single-pixel imaging to light transport and photogrammetry and, combined with conventional three-dimensional measurement techniques, separates the primary, secondary and higher-order reflected-light components on a multiply reflecting surface. The invention belongs mainly to the fields of computational imaging and computer vision.
Background
Multiple reflection is, in effect, the aliasing of rays in space: reflected rays of different orders mix together, so when an optical detector collects the light, the intensity it receives is the sum of those mixed components. In photogrammetry, decomposing multiply reflected rays helps reconstruct the true three-dimensional scene and improves measurement accuracy. In computer graphics, virtual scenes are typically rendered through a rendering equation that simulates the propagation of light from the source into the scene; illumination effects such as multiple reflection are added during propagation to make the scene look more realistic. Separating multiply reflected light is the inverse of this rendering process: it helps verify the realism of a rendering and provides a theoretical basis and real measurement data for it.
For example, the parallel single-pixel imaging method disclosed in Chinese patent application publication No. CN110264540A studies the light intensity received by a single detector pixel and analyzes it to recover the illumination information corresponding to that pixel. Multiply reflected light is part of that illumination information, so, in principle, parallel single-pixel imaging should be able to identify and separate the various components of multiply reflected light.
Disclosure of Invention
The invention provides a multiple-reflected-light separation method based on parallel single-pixel imaging, which extends the basic principle of parallel single-pixel imaging to the separation of aliased rays and, through the distinctive properties of the parallel single-pixel algorithm, identifies and separates the components of multiply reflected light under complex illumination conditions. The flow chart of the method is shown in Fig. 1.
The basic principle of the invention is to transplant parallel single-pixel imaging onto a conventional array camera: each camera pixel is treated as a single-pixel detector, the intensity information it acquires is processed by an inverse Fourier transform to obtain a light-transport coefficient image, direct reflection points are identified in that image through the epipolar constraint, and secondary reflection points are then identified and separated through a three-dimensional model, completing the separation of the primary, secondary and higher-order reflected-light components.
Unlike other reflected-light separation methods, this method inversely recovers the multiply-reflected-light component images from measured data. Its distinguishing feature is that it does not acquire the surface material or the spatial three-dimensional structure of the scene in advance: the components are identified and separated for a static, unknown scene, which is the method's most distinctive and direct advantage.
The technical solution of the invention is as follows. First, sinusoidal grayscale fringes suitable for the method are generated according to the Fourier-transform parallel single-pixel imaging method; the projector projects the modulated fringes, swept over frequencies, onto the scene under test, and a calibrated camera captures the scene mixed with the modulation fringes. After capture, each camera pixel is processed by phase subtraction and an inverse Fourier transform to obtain the light-transport coefficient image, under the projector's viewpoint, corresponding to that pixel. Epipolar-constraint processing is applied to the light-transport coefficients to identify and separate the primary reflection spots. A three-dimensional point cloud is then reconstructed according to the stereo-vision principle, a light-propagation model is established, and the secondary reflected-light components are identified and separated according to the secondary-reflection criterion. The method mainly comprises the following steps:
(1) before the projector and camera are placed toward the scene under test, the projector generates in advance the sinusoidal grayscale fringes to be projected; the projected rays alias along their propagation paths, and the camera acquires the aliased intensity images;
(2) according to the multi-step phase-shift principle, phase subtraction is performed on the four phase images at each frequency to obtain a Fourier coefficient, and an inverse Fourier transform is applied to the coefficients;
(3) step (2) is applied to every camera pixel, so each pixel yields a light-transport coefficient image;
(4) primary (direct) reflected light is separated from indirect reflected light by the epipolar-constraint principle applied to the light-transport coefficient image;
(5) a three-dimensional point cloud model is reconstructed according to the stereo-vision principle, and the indirect component is decomposed by the secondary-reflection separation model into secondary reflected light and light reflected three or more times.
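Steps (1)-(3) can be sketched as follows. This is a toy numerical simulation, not the patent's implementation: the projector resolution, the random transport image and the fringe model are illustrative assumptions. It shows how four phase-shifted captures per frequency yield one Fourier coefficient, and how the inverse Fourier transform over all frequencies recovers the light-transport image for one camera pixel.

```python
import numpy as np

def demodulate(I0, I90, I180, I270):
    """Four-step phase shift (step 2): one complex Fourier coefficient
    of the light-transport image from four captured intensities."""
    return (I0 - I180) + 1j * (I90 - I270)

# Toy simulation for a single camera pixel (sizes are illustrative).
M = N = 8                                   # assumed projector resolution
rng = np.random.default_rng(0)
h = rng.random((M, N))                      # ground-truth h(m, n) for this pixel
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")

H = np.zeros((M, N), dtype=complex)         # recovered Fourier coefficients
for u in range(M):
    for v in range(N):
        caps = []
        for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
            # sinusoidal grayscale fringe at frequency (u, v), phase phi
            P = 0.5 + 0.5 * np.cos(2 * np.pi * (u * m / M + v * n / N) + phi)
            caps.append(np.sum(h * P))      # scalar intensity at the pixel
        H[u, v] = demodulate(*caps)

h_rec = np.fft.ifft2(H).real                # inverse Fourier transform, step (3)
```

Note that a constant ambient term added to every capture cancels in the phase subtraction, which is why the recovered transport image is insensitive to ambient light.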
The aliased multiply-reflected-light model mentioned in step (1) is expressed as:
I_out(x, y) = Σ_{m,n} h(x, y; m, n) · I_in(m, n) + I_e(x, y)
where I_out(x, y) is the aliased multiple-reflection intensity received by camera pixel (x, y), h(x, y; m, n) is the light-transport coefficient of the light from projector pixel (m, n) to camera pixel (x, y), I_in(m, n) is the intensity emitted from projector pixel (m, n), and I_e(x, y) is the ambient light intensity.
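The forward model above is a weighted sum over projector pixels plus an ambient term. A minimal sketch, with illustrative toy values for h, I_in and I_e:

```python
import numpy as np

def camera_pixel_intensity(h, I_in, I_e):
    """Forward model at one camera pixel (x, y):
    I_out = sum over (m, n) of h(x, y; m, n) * I_in(m, n) + I_e(x, y)."""
    return float(np.sum(h * I_in) + I_e)

# Toy values: two transport spots of different reflection orders.
h = np.array([[0.0, 0.6],
              [0.2, 0.0]])                  # h(x, y; m, n) for this pixel
I_in = np.ones((2, 2))                      # flat projector illumination
I_out = camera_pixel_intensity(h, I_in, I_e=0.1)
```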
The light-transport coefficient image mentioned in step (3) lies under the projector's viewpoint and has the same resolution as the projector. On a multiply reflecting surface it generally contains several light spots: a spot at pixel (m, n) indicates that light emitted from that point of the projector array reaches the current camera pixel (x, y) after one or more reflections in space, and this correspondence is reflected directly in the light-transport coefficient image.
In step (4), the primary reflected light at a camera pixel is the component emitted from some projector pixel and received by the camera after a single reflection, so it satisfies the epipolar constraint of stereo vision. In the light-transport coefficients, the epipolar line of the camera pixel under the projector's viewpoint is computed; only the primary reflected light satisfies this constraint, so the primary component can be separated in this way.
The principle of the three-dimensional model designed in step (5) for separating the secondary reflected light from light reflected three or more times is expressed as:
[Equation image not reproduced in this text: δ(m, n; x, y) is computed from the Lambertian and glossy reflectivities K_L and K_G, the surface roughness s, and the incidence and reflection angles i(m, n; x, y) and r(m, n; x, y) at the first bounce.]
δ(m, n; x, y) ≤ θ_d
Because of how secondary reflected light propagates in space, a second reflection occurs on a multiply reflecting curved surface as shown in the figure: after the second bounce, the propagation path of the secondary light coincides with that of primary light, forming aliased rays, which is why the primary reflected light is separated first. For a point Y on the surface, incident light emitted from projector pixel (m, n) is reflected at Y, reaches another surface point X, and is then reflected to camera pixel (x, y), forming secondary reflected light. Let L be the ray incident on Y and R the ray reflected at Y; i(m, n; x, y) and r(m, n; x, y) are the incidence and reflection angles at Y, and the two equations above express the angular relationship, at the first bounce, of twice-reflected light travelling from projector pixel (m, n) to camera pixel (x, y). Following the steps above, the values acquired at a camera pixel are inverse-Fourier-transformed to obtain the light-transport coefficient image; the primary reflected light is screened out by the epipolar-constraint method, its spots are set to 0 in the image, and the indirect components are retained. Meanwhile, from the match between the primary reflected light and the camera pixels, the three-dimensional coordinates in space corresponding to each camera pixel are computed.
Therefore, for each camera pixel a light-transport coefficient image containing only indirect components can be reconstructed; all points in that image are traversed, and those satisfying the threshold condition of the formula above are judged to be secondary reflected light.
The invention has the advantages that:
(1) For a three-dimensional scene with multiply reflecting surfaces, the invention separates the primary, secondary, and third-or-higher-order components of the multiply reflected light without prior knowledge of the surface material or three-dimensional shape, which distinguishes it from conventional separation research. Decomposing the secondary reflected light helps establish accurate BRDF and BSSRDF models and verify rendering results in the fields of optics and computer graphics.
(2) It provides a visual representation of light from the source to the collector. By introducing parallel single-pixel imaging, the invention directly reconstructs the light-transport coefficients from projector to camera, and the contribution of each reflection order can be observed directly in the two-dimensional light-transport coefficient image. This means the propagation of light can be visualized directly in the transport coefficients, providing a basis for decomposing the individual components.
(3) It separates the secondary reflected light of an unknown scene. With the scene's shape and surface material unknown, decomposing the multiply reflected light is the central difficulty. Starting from parallel single-pixel imaging, the method reconstructs the light-transport coefficient images, and once the three-dimensional model is built, the secondary reflections satisfying the criterion can be screened within the reflection lobe according to the secondary light's propagation path and a Phong model.
In short, the invention provides a multiply-reflecting-surface separation method based on parallel single-pixel imaging: the light-transport coefficients are recovered by parallel single-pixel imaging, and the primary, secondary and third-or-higher-order components are separated one by one in the transport coefficients according to their respective characteristics, realizing the separation of the reflected-light components on multiply reflecting surfaces, with the results displayed as two-dimensional images.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the propagation of light from a projector array to a camera array. In the figure, 1 is a primary reflection light propagation path, 2 is a secondary reflection light path, 3 is a tertiary reflection light path, 4 is a stripe projector, 5 is a certain pixel representation on a projector array, 6 is a measured multiple reflection light scene, 7 is a multiple reflection light aliasing ray received by a camera, 8 is a certain pixel representation on a camera array, and 9 is the camera.
Fig. 3 is a diagram illustrating the light transmission coefficient obtained by the pixel. In the figure, 8 is a pixel on the camera array, 9 is a camera, 10 is an image with optical transmission coefficient, and 11 is an epipolar line corresponding to the pixel on the image.
Fig. 4 is a schematic diagram of secondary reflected light separation. In the figure, 1 is a primary reflection light propagation path, 2 is a secondary reflection light path, 4 is a stripe projector, 7 is a multiple reflection light aliasing ray received by a camera, 8 is a certain pixel representation on a camera array, 9 is the camera, 12 is a computer, 13 is a curved surface representation of a measured multiple reflection light scene, 14 is a gray scale sine stripe emitted by the projector, 15 is a normal line of a reflection surface 13 at a certain point, and 16 and 17 represent an incident angle and a reflection angle of the secondary reflection light at the point respectively.
Detailed Description
The technical solution of the present invention is further explained with reference to the accompanying drawings and the detailed description.
The invention provides a multiple-reflected-light separation method based on parallel single-pixel imaging, implemented according to the flow chart in Fig. 1. Grayscale sinusoidal fringes are projected by a fringe projector; the fringes are reflected by the multiply reflecting scene and received by the camera pixels; an inverse Fourier transform yields the two-dimensional light-transport coefficient image; on that basis the primary reflected light is screened out by the epipolar constraint and matched to projector pixels under the stereo-vision relation to obtain three-dimensional reconstruction data; a secondary-reflection separation model is then built on this reconstruction, and the secondary reflected light is separated from the indirect component by the secondary-reflection criterion, finally completing the separation of the primary, secondary, and third-or-higher-order components. The formation of aliased rays in space by multiple reflections is illustrated in Fig. 2. The specific operations are as follows:
1. Calibrate the fringe projector and the camera to obtain their spatial pose relationship, generate grayscale sinusoidal fringes with the parallel single-pixel property, and ensure the modulation patterns projected by the projector can be collected by the camera. The projector projects the modulation patterns, and the camera is triggered to acquire a series of scene images combining the scene with the fringes.
2. For the two-dimensional images acquired by the camera, the Fourier coefficients are obtained after phase-subtraction processing, and the light-transport coefficient image shown in Fig. 3 is obtained by applying an inverse Fourier transform to them. The light-transport coefficients are the aliased result of the multiply-reflected components, expressed as:
I_out(x, y) = Σ_{m,n} h(x, y; m, n) · I_in(m, n) + I_e(x, y)    (1)
where I_out(x, y) is the mixed light intensity received by the camera pixel and h(x, y; m, n) is the light-transport coefficient linking projector coordinate (m, n) to camera coordinate (x, y), appearing as a bright spot in the transport image of Fig. 3. The separation principle for the multiply reflected light is:
h_n(x, y; m, n) = h(x, y; m, n) · M_n(x, y; m, n)    (2)
where M_n(x, y; m, n) is the mask for the n-th-order reflected light, expressed as:
M_n(x, y; m, n) = 1 if c_n holds, and 0 otherwise    (3)
In formula (3), c_n indicates whether the current reflected light logically belongs to the n-th-order reflection: if so, the mask is set to 1, otherwise to 0. Formulas (2) and (3) express the light-transport coefficient of each order of reflected light in mathematical logic. Fig. 3 illustrates the recovery of the light-transport coefficients from one camera pixel.
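Equations (2)-(3) amount to masking the transport image per reflection order and integrating each masked image. A minimal sketch with an assumed toy 3x3 transport image (the spot positions and order labels are illustrative, not derived from real data):

```python
import numpy as np

def split_orders(h, masks):
    """Eqs. (2)-(3): h_n = h * M_n isolates each reflection order from the
    transport image; summing h_n gives that order's intensity at the pixel."""
    return {n: float(np.sum(h * M)) for n, M in masks.items()}

# Toy transport image with one spot per order.
h = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.3],
              [0.1, 0.0, 0.0]])
masks = {1: (h == 0.8).astype(float),   # spot kept by the epipolar test
         2: (h == 0.3).astype(float),   # spot passing the two-bounce criterion
         3: (h == 0.1).astype(float)}   # remaining indirect spot
orders = split_orders(h, masks)
```

In practice the masks M_n come from the epipolar and secondary-reflection tests described below, not from known spot values as in this toy.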
3. The primary reflected light is separated from the light-transport coefficient image. In Fig. 3, 9 is the camera and 8 is a pixel on the camera array; the inverse Fourier transform yields an image under the projector's viewpoint whose size equals the projector resolution, and the bright spots in it represent the light-transport coefficients of the different reflection orders. To separate the primary reflected light, a characteristic distinguishing it from indirect light must be found. Since the primary reflected light undergoes exactly one reflection at the scene under test before reaching a camera pixel, it satisfies the basic stereo-vision constraint in space, so in this step the primary reflection spots are selected and screened by the epipolar constraint. As shown in the figure, 10 is the light-transport coefficient image and 11 is the pixel's epipolar line on it; the spots within the epipolar threshold are identified from that line, and summing their pixel values gives the primary reflected intensity I_1(x, y) received by the pixel. Processing every pixel in this way yields the two-dimensional separation result of the primary reflected light.
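The epipolar screening of this step can be sketched as follows. This is a simplified stand-in, assuming the epipolar line is given in implicit form a·m + b·n + c = 0 in projector coordinates; the image size, spot positions and threshold are toy values:

```python
import numpy as np

def primary_intensity(h, line, tol=1.0):
    """Sum the transport-image values lying within `tol` pixels of the
    epipolar line a*m + b*n + c = 0 (line = (a, b, c)): this is I_1(x, y)."""
    a, b, c = line
    m, n = np.meshgrid(np.arange(h.shape[0]), np.arange(h.shape[1]),
                       indexing="ij")
    dist = np.abs(a * m + b * n + c) / np.hypot(a, b)  # point-line distance
    return float(np.sum(h[dist <= tol]))

# Toy transport image: one spot on the line m = n, one far off it.
h = np.zeros((5, 5))
h[2, 2] = 0.7          # primary spot, on the epipolar line
h[0, 4] = 0.4          # indirect spot, off the line
I1 = primary_intensity(h, line=(1.0, -1.0, 0.0), tol=1.0)
```

Running this over all camera pixels, each with its own epipolar line, produces the two-dimensional primary-reflection image described above.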
4. With the primary reflection spots and their corresponding camera pixels, the three-dimensional structure of the measured surface is obtained by stereo-vision reconstruction. With this three-dimensional result, a separation model for the secondary reflected light can be established, as shown in Fig. 4.
Because of how secondary reflected light propagates in space, a second reflection occurs on a multiply reflecting curved surface as shown in the figure: after the second bounce, the secondary light's path coincides with that of primary light, forming aliased rays, which is why the primary reflected light is separated first. For a point Y on the surface, incident light emitted from projector pixel (m, n) is reflected at Y, reaches another surface point X, and is then reflected to camera pixel (x, y), forming secondary reflected light. Here, let L be the ray incident on Y and R the ray reflected from Y to X; from the reflection properties of light:
[Equation images (4) and (5) not reproduced in this text: they define the incidence angle i(m, n; x, y) and the reflection angle r(m, n; x, y) at point Y from the incident ray L, the reflected ray R, and the surface normal at Y.]
where i(m, n; x, y) and r(m, n; x, y) are the incidence and reflection angles of the light at point Y; the two equations express the angular relationship, at the first bounce, of twice-reflected light travelling from projector pixel (m, n) to camera pixel (x, y). Following the steps above, the values acquired at a camera pixel are inverse-Fourier-transformed to obtain the light-transport coefficient image; the primary reflected light is screened out by the epipolar-constraint method, its spots are set to 0 in the image, and the indirect components are retained. Meanwhile, from the match between the primary reflected light and the camera pixels, the three-dimensional coordinates in space corresponding to each camera pixel are computed. Thus, for each camera pixel, a light-transport coefficient image containing only the indirect components, i.e. indirect light from certain projector pixels, can be reconstructed, and all spot coordinates are substituted into the following equations:
[Equation image (6) not reproduced in this text: δ(m, n; x, y) is computed from K_L, K_G, the roughness s, and the angles i(m, n; x, y) and r(m, n; x, y).]
δ(m, n; x, y) ≤ θ_d    (7)
where K_L and K_G denote the reflectivity of a Lambertian surface and of a glossy surface respectively, each ranging from 0 to 1, and the parameter s denotes the roughness of the current surface, its value being larger for metallic materials than for others. Once the computed δ(m, n; x, y) falls below the given threshold, the light from (m, n) to (x, y) is judged to be secondary reflected light. Each camera pixel is screened in turn, selecting its secondary reflections from the viewpoint of reverse ray tracing; after screening, the spot values satisfying the condition are retained in each light-transport coefficient image, the rest are set to 0, and the values of all the transport images are summed to give the final secondary-reflection separation result.
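The patent's exact δ (equation (6), which combines K_L, K_G and s) is not reproduced in this text, so the sketch below uses a simplified stand-in criterion: at the first bounce Y, the incidence and reflection angles should agree within θ_d, consistent with equations (4)-(5) and the thresholding of (7). All geometry values are illustrative toy coordinates:

```python
import numpy as np

def angle(v, w):
    """Angle in radians between two 3-D vectors."""
    c = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def is_secondary(P, Y, X, n_Y, theta_d):
    """First-bounce test at surface point Y for a ray from projector point P
    that hits Y and continues to surface point X: accept when the incidence
    and reflection angles agree within theta_d (simplified stand-in for δ)."""
    L = Y - P                        # incident ray L at Y
    R = X - Y                        # reflected ray R from Y toward X
    i = angle(-L, n_Y)               # incidence angle i(m, n; x, y)
    r = angle(R, n_Y)                # reflection angle r(m, n; x, y)
    return abs(i - r) <= theta_d

# Toy mirror geometry (all coordinates hypothetical).
P = np.array([0.0, 0.0, 1.0])        # 3-D point on the projector ray
Y = np.array([1.0, 0.0, 0.0])        # first bounce point
X = np.array([2.0, 0.0, 1.0])        # second bounce point
n_Y = np.array([0.0, 0.0, 1.0])      # surface normal at Y
ok = is_secondary(P, Y, X, n_Y, theta_d=np.deg2rad(5))
```

A Phong-style δ would additionally weight this angular agreement by K_L, K_G and s to widen the acceptance lobe for rough surfaces; the hard angular test above is only the specular limit of that criterion.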

Claims (4)

1. A multiple reflected light separation method based on parallel single-pixel imaging is characterized in that: the separation process comprises the following steps:
(1) before the projector and the camera are placed toward the scene under test, the projector projects the sinusoidal fringes required by parallel single-pixel imaging, the camera acquires images of the fringes mixed with the scene, and the light intensity acquired at each image pixel is a mixture of the primary, secondary and higher-order reflected-light components;
(2) a parallel single-pixel imaging algorithm is run on each camera pixel to obtain the light-transport coefficient image, under the projector's viewpoint, corresponding to that pixel, whose light spots reflect the component information of the primary, secondary and higher-order reflected light;
(3) the primary (direct) reflected light is separated from the mixed illumination information by the epipolar-constraint principle applied to the light-transport coefficient image; the primary intensities of all pixels are summed into the primary-separation result image, a three-dimensional model is established from the pixel points and the primary reflection points, and the three-dimensional point cloud of the scene under test is reconstructed;
(4) according to the secondary-reflection separation model and the characteristics of the propagation path, the secondary reflected light in the indirect component is decomposed by the secondary-reflection separation algorithm, and its intensities are summed into the secondary-separation result image;
(5) in the light-transport coefficient image, the primary and secondary components are removed; the remaining spots represent the components reflected three or more times, and their intensities are summed into the third-or-higher-order separation result image.
2. The method according to claim 1, characterized in that the aliased multiply-reflected-light model mentioned in step (1) is expressed as:
I_out(x, y) = Σ_{m,n} h(x, y; m, n) · I_in(m, n) + I_e(x, y)
where I_out(x, y) is the aliased multiple-reflection intensity received by camera pixel (x, y), h(x, y; m, n) is the light-transport coefficient of the light from projector pixel (m, n) to camera pixel (x, y), I_in(m, n) is the intensity emitted from projector pixel (m, n), and I_e(x, y) is the ambient light intensity.
3. The method of claim 1, wherein: in step (3), because the primary reflected light at a camera pixel is the component emitted from some projector pixel and received by the camera after a single reflection, it satisfies the epipolar constraint of stereo vision; in the light-transport coefficient image, the epipolar line of the camera pixel under the projector's viewpoint is computed and an epipolar threshold is set, so the spots within the threshold, which represent the primary component, can be determined; summing their intensities gives the primary-component pixel value, and processing every pixel in this way yields the primary-component result image.
4. The method of claim 1, wherein: the principle of the secondary-reflection separation algorithm involved in step (4) is expressed as:
[Equation image not reproduced in this text: δ(m, n; x, y) is computed from K_L, K_G, s and the angles i(m, n; x, y) and r(m, n; x, y), with the criterion δ(m, n; x, y) ≤ θ_d.]
where K_L and K_G denote the reflectivity of a Lambertian surface and of a glossy surface respectively, ranging from 0 to 1, and the parameter s describes the roughness of the curved surface.
Because of how secondary reflected light propagates in space, the second reflection occurs on the multiply reflecting surface as shown in the figure. For a point Y on the surface, incident light emitted from projector pixel (m, n) is reflected at Y, reaches another point X, and is finally received by camera pixel (x, y) as secondary reflected light. Let L be the ray incident on Y and R the ray reflected at Y; i(m, n; x, y) and r(m, n; x, y) are the incidence and reflection angles at Y, and the two equations express the angular relationship, at the first bounce, of twice-reflected light from projector pixel (m, n) to camera pixel (x, y). For each camera pixel, after removing the primary component, a light-transport coefficient image containing only the indirect components can be reconstructed; the candidate points in that image are traversed, and those satisfying the threshold condition of the formula above are judged to be secondary reflected light.
CN202110849254.0A 2021-07-27 2021-07-27 Multi-reflection light separation method based on parallel single-pixel imaging Active CN113592995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110849254.0A CN113592995B (en) 2021-07-27 2021-07-27 Multi-reflection light separation method based on parallel single-pixel imaging


Publications (2)

Publication Number Publication Date
CN113592995A true CN113592995A (en) 2021-11-02
CN113592995B CN113592995B (en) 2023-07-18

Family

ID=78250373


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050103854A1 (en) * 2003-11-13 2005-05-19 Metrologic Instruments, Inc. Hand-supportable digital imaging-based bar code symbol reader supporting narrow-area and wide-area modes of illumination and image capture
US20150085136A1 (en) * 2013-09-26 2015-03-26 Xerox Corporation Hybrid single-pixel camera switching mode for spatial and spot/area measurements
CN107870334A (en) * 2017-10-27 2018-04-03 西安电子科技大学昆山创新研究院 Single pixel laser infrared radar imaging device and imaging method based on embedded gpu
CN110264540A (en) * 2019-06-19 2019-09-20 北京航空航天大学 A kind of parallel single pixel imaging method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant