CN109889799B - Monocular structured light depth perception method and device based on RGBIR camera

Monocular structured light depth perception method and device based on RGBIR camera

Info

Publication number
CN109889799B
CN109889799B
Authority
CN
China
Prior art keywords
image
infrared
rgbir
speckle
depth perception
Prior art date
Legal status
Active
Application number
CN201711279413.8A
Other languages
Chinese (zh)
Other versions
CN109889799A (en)
Inventor
葛晨阳 (Ge Chenyang)
邓作为 (Deng Zuowei)
侯晓琦 (Hou Xiaoqi)
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201711279413.8A
Publication of CN109889799A
Application granted
Publication of CN109889799B
Legal status: Active

Abstract

The invention provides a monocular structured light depth perception method and device based on an RGBIR camera. An infrared laser speckle coding projector projects an infrared speckle image, or an infrared flood light source illuminates the scene to form an infrared image. A single RGBIR camera then collects RGBIR RAW data frame by frame, and the RAW data are separated into an RGB image and an infrared speckle image or infrared image. Image signal processing is performed on the separated RGB image, depth perception decoding is performed on the infrared speckle image to obtain depth values or a 3D point cloud, and infrared image enhancement is performed on the infrared image. Finally, the depth map, the RGB image, the infrared image, combined RGBD data or XYZRGB point cloud data can be selected for output.

Description

Monocular structured light depth perception method and device based on RGBIR camera
Technical Field
The invention belongs to the technical field of computer vision, human-computer interaction and integrated circuits, and particularly relates to a monocular structured light depth perception method and device based on an RGBIR camera.
Background
The virtual world is increasingly converging with the real world, and human-computer interaction technology is becoming the bridge between the two. As computer performance continues to improve, people keep exploring simpler and more convenient ways to communicate with machines. Vision is the most direct and important means by which humans observe, recognize and experience the world.
Depth perception based on structured light active vision can obtain relatively accurate depth information, is insensitive to ambient light, and is comparatively simple to implement in both algorithm and hardware. However, in current structured light depth sensing devices the RGB camera and the IR camera are separate: the RGB camera obtains texture information, the IR camera obtains the infrared image and depth information, and the two cameras can only be brought into one-to-one correspondence through synchronization and precise RGBD registration, as disclosed in patent CN204481940U. Because of assembly precision, calibration precision and changes during use, it is often difficult in practice to achieve a one-to-one correspondence between RGB image pixels and the depth map or 3D point cloud at different distances.
Disclosure of Invention
In view of this, the invention provides a monocular structured light depth perception method and device based on an RGBIR camera. An infrared laser speckle coding projector projects an infrared speckle image, or an infrared flood light source illuminates the scene to form an infrared image. A single RGBIR camera collects the RGBIR image, and a depth perception decoding module separates the input RGBIR RAW data into an RGB image and an IR image (an infrared speckle pattern or an infrared image). The RGB image undergoes conventional image signal processing (ISP), the infrared speckle pattern undergoes depth perception decoding to obtain depth values or a 3D point cloud, and the infrared image undergoes infrared image enhancement. Finally, the depth map, the RGB image, the infrared image, combined RGBD data or XYZRGB point cloud data can be selected for output.
In order to achieve the above object, the present invention provides a monocular structured light depth perception method based on an RGBIR camera, specifically comprising the following steps:
Step 1: RGBIR image projection, namely projecting an infrared speckle image with an infrared laser speckle coding projector, or illuminating a target object or projection space with an infrared flood light source to form an infrared image;
Step 2: RGBIR image collection, namely collecting the infrared speckle image or infrared image from step 1 together with the visible light RGB image using a single RGBIR camera, to obtain RGBIR RAW data;
Step 3: RGBIR image separation, namely performing image separation on the RGBIR RAW data acquired in step 2 to form an RGB image and an infrared speckle image or infrared image, in preparation for their respective subsequent image processing;
Step 4: image processing after separation, namely performing conventional image signal processing (ISP) on the separated RGB image, performing infrared enhancement on the separated infrared image, and performing depth perception decoding on the separated infrared speckle pattern, the decoding process comprising: preprocessing, image rotation, block matching, depth calculation, post-processing and three-dimensional point cloud generation;
Step 5: data selection and output, namely selectively outputting the depth map, the RGB image, the infrared image, RGBD data or XYZRGB point cloud data after image processing.
In addition, the invention also provides a monocular structured light depth perception device based on an RGBIR camera, which specifically comprises an RGBIR image generation module, an RGBIR image acquisition module, an RGBIR image separation module, an RGBIR image processing module and a selection output module:
the RGBIR image generation module is used for projecting an infrared speckle image onto a target object or projection space through an infrared laser speckle coding projector, or forming an infrared image by illumination with an infrared flood light source;
the RGBIR image acquisition module is used for collecting the infrared speckle image projected by the infrared laser speckle coding projector, or the infrared image formed under the infrared flood light source, together with the visible light RGB image, using a single RGBIR camera, to obtain RGBIR RAW data; the RGBIR image separation module is used for performing image separation on the collected RGBIR RAW data to form an RGB image and an infrared speckle image or infrared image, and the separated images can be stored or respectively sent to the subsequent modules for processing;
the RGBIR image processing module is used for separately processing the RGB image and the infrared speckle image or infrared image;
and the selection output module is used for selectively outputting the processed depth map, RGB image, infrared image, RGBD data or XYZRGB point cloud data.
Compared with prior-art structured light depth sensing devices that use two separate cameras, the monocular device of the invention reduces the influence of assembly precision, calibration precision and changes during use, and achieves a one-to-one correspondence between RGB image pixels and the depth map or 3D point cloud at different distances.
In addition, the invention applies an improved infrared image enhancement method to the separated infrared image, which removes noise while preserving details such as edges and contours. When processing the separated infrared speckle image, the image can be rotated, which makes it easier to read and store and avoids the problem that, when the camera is rotated in actual use, reading data along the short side of the image would break its data structure. These advantages are embodied in the following embodiments.
Drawings
FIG. 1 is a structural framework diagram of a monocular structured light depth perception device based on an RGBIR camera;
FIG. 2 is a flow chart of a monocular structured light depth perception method based on an RGBIR camera;
FIG. 3 is an image signal processing (ISP) flow chart of an RGB image;
FIG. 4 is a depth-aware decoding framework diagram of an infrared speckle pattern;
FIG. 5 is a frame diagram of a novel fisheye correction module;
FIG. 6(a) and FIG. 6(b) are image rotation reference diagrams.
Detailed Description
The present invention is described in further detail below with reference to figures 1-6.
FIG. 1 is a structural framework diagram of the monocular structured light depth perception device based on an RGBIR camera, which includes an RGBIR image generation module, an RGBIR image acquisition module, an RGBIR image separation module, an RGBIR image processing module, and a selection output module.
The RGBIR image generation module uses the infrared laser speckle coding projector to project an infrared speckle image, or uses an infrared flood light source to illuminate the scene and form an infrared image, while the RGBIR camera captures the visible light RGB image.
The RGBIR image acquisition module collects the infrared speckle image projected by the infrared laser speckle coding projector, or the infrared image formed under the infrared flood light source, together with the visible light RGB image, using a single RGBIR camera, to obtain RGBIR RAW data. The RGBIR camera and the infrared laser speckle coding projector or infrared flood light source are arranged on the same baseline, placed horizontally or vertically.
The RGBIR image separation module is configured to perform image separation on the acquired RGBIR RAW data; specifically, the RGBIR RAW data may be separated, for example by means of optical filters on the sensor, into an RGB image and an infrared speckle image or infrared image, and the separated images may be stored and/or sent to the subsequent modules for processing.
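As an illustration of this separation step, the following minimal sketch assumes a sensor with a 2×2 R-G / IR-B mosaic (the IR photosite replacing one green site of a Bayer pattern) and a nearest-neighbor quarter-resolution demosaic; real RGBIR sensors use a variety of filter layouts, so the index pattern below is an assumption rather than the layout required by the invention.

```python
import numpy as np

def separate_rgbir_raw(raw: np.ndarray):
    """Split a single-channel RGBIR RAW frame (H x W) into an RGB image and an IR image.

    Assumed 2x2 mosaic:   R   G
                          IR  B
    """
    r  = raw[0::2, 0::2].astype(np.float32)   # top-left of each 2x2 cell
    g  = raw[0::2, 1::2].astype(np.float32)   # top-right
    ir = raw[1::2, 0::2].astype(np.float32)   # bottom-left (infrared photosite)
    b  = raw[1::2, 1::2].astype(np.float32)   # bottom-right

    # Quarter-resolution RGB and IR planes (nearest-neighbor "demosaic");
    # a production ISP would interpolate back to full resolution instead.
    rgb = np.stack([r, g, b], axis=-1)
    return rgb, ir

# usage sketch with a synthetic 10-bit RAW frame
raw = np.random.randint(0, 1024, size=(480, 640), dtype=np.uint16)
rgb_img, ir_img = separate_rgbir_raw(raw)
```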
The RGBIR image processing module is used for processing the RGB image, the infrared speckle image and/or the infrared image separated by the RGBIR image separation module. Conventional image signal processing (ISP) is applied to the RGB image, infrared image enhancement is applied to the infrared image, and the infrared speckle image is processed by: preprocessing, image rotation, block matching, depth calculation, post-processing and three-dimensional point cloud generation. The specific infrared image enhancement method and the specific operations on the infrared speckle image are described in detail later.
The selection output module is used for selectively outputting the ISP-processed RGB image, the enhanced infrared image, and the results generated from the infrared speckle image by preprocessing, image rotation, block matching, depth calculation, post-processing and three-dimensional point cloud generation. After ISP processing the RGB image yields a good color image; the infrared speckle image, after preprocessing, rotation and block matching, yields an image depth value D through the structured light monocular calculation formula, and XYZ data are obtained by three-dimensional reconstruction from the depth value D and the camera's intrinsic and extrinsic calibration parameters. The RGB image can be registered to, or output together with, the point cloud. The selection output module can output the depth map, the RGB image, the infrared image, RGBD data or XYZRGB data according to different requirements.
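Because the RGB image and the depth map come from the same sensor and share one pixel grid, assembling the RGBD or XYZRGB output reduces to per-pixel concatenation. The sketch below is illustrative only: it assumes the depth map, enhanced infrared image, RGB image and reconstructed XYZ coordinates (see the point cloud step later) are already available at the same resolution, and the function and mode names are placeholders.

```python
import numpy as np

def assemble_outputs(rgb, ir, depth, xyz, mode="XYZRGB"):
    """Select the output of the depth perception pipeline.

    rgb   : (H, W, 3) color image
    ir    : (H, W)    enhanced infrared image
    depth : (H, W)    depth map
    xyz   : (H, W, 3) reconstructed 3D coordinates
    """
    if mode == "RGB":
        return rgb
    if mode == "IR":
        return ir
    if mode == "DEPTH":
        return depth
    if mode == "RGBD":
        # single sensor, single pixel grid -> no extra registration step needed
        return np.dstack([rgb, depth])
    if mode == "XYZRGB":
        # N x 6 point cloud, one row per pixel
        return np.concatenate([xyz, rgb], axis=-1).reshape(-1, 6)
    raise ValueError(f"unknown output mode: {mode}")
```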
FIG. 2 is a flow chart of the monocular structured light depth perception method based on an RGBIR camera according to the present invention. For a more detailed and clear explanation, the method is described below with reference to FIGS. 2-6.
Step 1: the power supply is switched on and the drive circuitry operates normally, so that the infrared laser speckle coding projector emits infrared light. Specifically, the infrared laser speckle coding projector emits a collimated laser beam, which is scattered by a diffractive optical element (DOE) to produce the required infrared speckle pattern. In another embodiment, an infrared flood light source may instead be used to illuminate the scene and form the infrared image. At the same time, the RGBIR camera is driven to capture the RGB color image.
Step 2: the RGBIR camera collects, frame by frame, the infrared speckle image projected by the infrared laser speckle coding projector or the infrared image formed under the infrared flood light source, together with the RGB color image, to obtain RGBIR RAW data.
Step 3: image separation is performed on the RGBIR RAW data acquired in step 2, and the separated infrared speckle image or infrared image and the RGB color image are read and stored separately. Once the RGB color image and the infrared speckle image or infrared image are separated, they are ready for subsequent image processing.
Step 4: image processing is performed separately on the RGB color image and on the infrared speckle image or infrared image obtained by image separation.
(1) Image signal processing (ISP) is performed on the RGB color image, as shown in FIG. 3. The signal output by the front-end image sensor is post-processed; the image undergoes linear correction, noise removal and dead pixel removal, yielding an RGB color image of good quality.
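The patent does not fix the particular ISP algorithms; purely as a sketch, dead-pixel removal and mild denoising could be approximated with a median-based outlier test and a small box filter, as below (the threshold and kernel sizes are assumptions).

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def simple_isp(rgb: np.ndarray, dead_pixel_thresh: float = 64.0) -> np.ndarray:
    """Toy ISP pass: replace outlier (dead/hot) pixels by the local median, then lightly denoise."""
    out = rgb.astype(np.float32)
    for c in range(out.shape[-1]):
        chan = out[..., c]
        med = median_filter(chan, size=3)
        dead = np.abs(chan - med) > dead_pixel_thresh   # pixel far from its neighbourhood
        chan[dead] = med[dead]
        out[..., c] = uniform_filter(chan, size=3)       # mild box-filter smoothing
    return np.clip(out, 0, 255).astype(np.uint8)
```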
(2) The infrared speckle image undergoes the following image processing operations, as shown in FIG. 4:
Preprocessing. The invention adopts a fisheye correction method to correct distortion in the input image. As shown in FIG. 5, the displacement differences between distorted image coordinates and ideal image coordinates are stored using external DDR and QPI Flash memory, and are then combined with the real-time image coordinates, achieving fisheye distortion correction while meeting real-time processing requirements. The fisheye-corrected image is converted to a grayscale image by the averaging method, then converted to a black-and-white image by a threshold algorithm, and finally gradient sharpening is applied to make the image clearer and easier to analyze.
Image rotation. In the prior art images are read by rows, i.e. along the long side of the image, but in practice the camera is often rotated as needed; if the image is then not rotated, the data would be read along the short side, which breaks the image data structure. Image rotation therefore makes reading and storing the image more convenient. When the camera rotates, the image sensor sends a signal to the image processing module so that the image is rotated during processing; otherwise no rotation is performed. FIG. 6(a) and FIG. 6(b) are image rotation reference diagrams: the conventional way of reading image information starts from the first row 1, 2, 3, 4, as shown in FIG. 6(a); FIG. 6(b) shows the reading order after the image of FIG. 6(a) is rotated 90 degrees clockwise, where the image is read and stored starting from the first row 9, 5, 1.
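The 90-degree clockwise rotation of FIG. 6 corresponds to the index mapping sketched below; numpy's rot90 with k = -1 performs the same clockwise rotation, so the example only makes the read-out order explicit.

```python
import numpy as np

def rotate_cw_90(img: np.ndarray) -> np.ndarray:
    """Rotate an image 90 degrees clockwise, matching FIG. 6:
    a first row of 1, 2, 3, 4 becomes a first read-out row of 9, 5, 1."""
    return np.rot90(img, k=-1)          # k = -1 -> clockwise

img = np.arange(1, 13).reshape(3, 4)    # rows 1..4, 5..8, 9..12 as in FIG. 6(a)
rotated = rotate_cw_90(img)
print(rotated[0])                        # -> [9 5 1], the read-out order of FIG. 6(b)
```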
Block matching. Block-matching motion estimation is performed between the input infrared speckle image and the pre-stored reference speckle image; by comparing and computing block-matching similarity on an image-block basis, the offset between each pixel of the input infrared speckle image and the center of its best-matching block in the reference speckle image, i.e. the motion estimation vector, is obtained. The reference speckle image serves as the comparison standard: it is projected in advance onto a plane perpendicular to the optical axis of the infrared speckle coding pattern projector and acquired through the same preprocessing, and the perpendicular reference distance d between this plane and the projector is known.
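As an illustrative sketch, the block matching can be written as a brute-force SAD (sum of absolute differences) search along the baseline direction; the window size and search range below are assumptions, and a hardware implementation would of course be far more efficient.

```python
import numpy as np

def block_match_offsets(speckle: np.ndarray, reference: np.ndarray,
                        win: int = 11, search: int = 64) -> np.ndarray:
    """For each pixel, find the horizontal offset (in pixels) of the best-matching
    block in the reference speckle image using SAD similarity."""
    h, w = speckle.shape
    half = win // 2
    offsets = np.zeros((h, w), dtype=np.float32)
    spk = speckle.astype(np.float32)
    ref = reference.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            block = spk[y - half:y + half + 1, x - half:x + half + 1]
            best_sad, best_dx = np.inf, 0
            for dx in range(-search, search + 1):       # search along the baseline
                xr = x + dx
                if xr - half < 0 or xr + half >= w:
                    continue
                cand = ref[y - half:y + half + 1, xr - half:xr + half + 1]
                sad = np.abs(block - cand).sum()
                if sad < best_sad:
                    best_sad, best_dx = sad, dx
            offsets[y, x] = best_dx                      # x-component of the motion vector
    return offsets
```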
Depth calculation. Depth value information corresponding to each pixel of the infrared speckle image is obtained from the structured light monocular depth calculation formula:

$$dis = \frac{f \cdot s \cdot d}{f \cdot s + \Delta x \cdot \mu \cdot d}$$

where f is the focal length of the camera, s is the baseline, d is the reference distance, Δx is the offset, μ is the pixel size and dis is the depth value.
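Given the offset map from block matching, the formula above converts offsets to per-pixel depth. The sketch below assumes the reconstructed form dis = f·s·d / (f·s + Δx·μ·d) and uses placeholder calibration values, which are not taken from the patent.

```python
import numpy as np

def offsets_to_depth(offsets: np.ndarray,
                     f_mm: float = 2.8,      # focal length (assumed value)
                     s_mm: float = 40.0,     # projector-camera baseline (assumed value)
                     d_mm: float = 800.0,    # reference-plane distance (assumed value)
                     mu_mm: float = 0.003):  # pixel size (assumed value)
    """Monocular structured-light depth: dis = f*s*d / (f*s + dx*mu*d).

    offsets : per-pixel offset dx (in pixels) from block matching.
    Returns depth in the same unit as the calibration values (here millimetres).
    """
    denom = f_mm * s_mm + offsets * mu_mm * d_mm
    denom = np.where(np.abs(denom) < 1e-6, np.nan, denom)   # guard against division by zero
    return (f_mm * s_mm * d_mm) / denom
```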
Post-processing. The depth image obtained from the depth calculation is post-processed: denoising, smoothing and hole repair. Denoising and smoothing reduce the noise caused by mismatches and edge holes, using spatio-temporal 3D denoising or sliding-window denoising, smoothing and interpolation. Hole repair uses a bilateral filter: as a nonlinear filter it takes both spatial distance and intensity difference into account, so it removes noise while preserving image edges, and at the same time fills in missing depth values.
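A minimal sketch of this post-processing: median smoothing for denoising, then filling holes (zero-depth pixels) with a bilateral-style weighted average of valid neighbours; the kernel size and the two sigmas are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def postprocess_depth(depth: np.ndarray, win: int = 5,
                      sigma_s: float = 2.0, sigma_r: float = 30.0) -> np.ndarray:
    """Denoise the depth map and fill holes (depth == 0 or NaN) with a
    bilateral-style weighted average of nearby valid depth values."""
    out = median_filter(np.nan_to_num(depth, nan=0.0), size=3)   # smoothing / denoising
    filled = out.copy()
    half = win // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    spatial_w = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))      # spatial weights
    h, w = out.shape
    for y, x in np.argwhere(out == 0):                           # hole pixels
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        patch = out[y0:y1, x0:x1]
        wpat = spatial_w[(y0 - y + half):(y1 - y + half), (x0 - x + half):(x1 - x + half)]
        valid = patch > 0
        if not valid.any():
            continue
        center = patch[valid].mean()                             # proxy value for the missing pixel
        range_w = np.exp(-((patch - center) ** 2) / (2 * sigma_r**2))
        wgt = (wpat * range_w)[valid]
        filled[y, x] = (patch[valid] * wgt).sum() / wgt.sum()
    return filled
```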
Point cloud generation. Three-dimensional point cloud generation converts the pixel coordinates of the depth map into spatial coordinates: $P_d = dis \cdot In_d^{-1} \cdot p_d$, where $p_d = [i_d, j_d, 1]^T$ is the homogeneous coordinate of a depth map pixel, $In_d$ is the intrinsic parameter matrix of the depth sensor (DepthSensor), and dis is the depth value of the corresponding point, i.e. its distance to the camera.
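The back-projection $P_d = dis \cdot In_d^{-1} \cdot p_d$ can be sketched as below; the intrinsic matrix values are placeholders, and the pixel coordinate order (column, row, 1) is an assumption.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Convert a depth map (H x W) to per-pixel 3D coordinates using
    P = dis * K^-1 * [u, v, 1]^T, where (u, v) are column and row indices."""
    h, w = depth.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([jj, ii, np.ones_like(jj)], axis=-1).astype(np.float64)  # homogeneous pixel coords
    rays = pix @ np.linalg.inv(K).T        # K^-1 * p for every pixel (z-component = 1)
    return rays * depth[..., None]         # scale each ray by its depth -> XYZ

# illustrative intrinsics (placeholder values, not from the patent)
K = np.array([[570.0,   0.0, 320.0],
              [  0.0, 570.0, 240.0],
              [  0.0,   0.0,   1.0]])
xyz = depth_to_point_cloud(np.full((480, 640), 800.0), K)
```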
(3) Infrared image enhancement processing is performed on the infrared image.
If a conventional infrared image enhancement technique were used, the noise of the infrared image could not be removed well, details such as edges and contours could not be protected, and a good enhancement effect could not be achieved. The invention can apply conventional infrared image enhancement, but also provides an improved histogram equalization algorithm. Compared with prior-art image enhancement techniques, it greatly improves image contrast and better adjusts the target, background and noise signals, thereby highlighting the gray levels of the target, enlarging the target's gray-level range, preserving a large amount of detail and giving the infrared image a good enhancement effect.
The method comprises the following specific steps:
Step s1: analyze the weighted histogram, eliminate the zero units while keeping the relative order of the remaining units unchanged, and form a set {N(i) | 1 ≤ i ≤ K}, where K is the number of nonzero units.
Step s2: find the local maxima of the set. Specifically, for 2 ≤ i ≤ K−1, N(i) is a local maximum if N(i−1) < N(i) and N(i) > N(i+1); the endpoints N(1) and N(K) are then analyzed separately.
Step s3: the local maxima found in step s2 form another set {N(i) | 1 ≤ i ≤ L}, where L is the number of local maxima.
Step s4: the local maxima of the set {N(i) | 1 ≤ i ≤ L} obtained in step s3 are computed in the same way, giving a new set {Z(j) | 1 ≤ j ≤ S}, where S is the number of elements in the new set and S < L.
Step s5: find the median value $Z_{mid}$ of the set {Z(j) | 1 ≤ j ≤ S} obtained in step s4, and determine the plateau threshold T from it.
Step s6: calculate a new weighted histogram $P_t(k)$ from the original histogram $P_r(k)$ using the plateau threshold. The image gray values lie in the interval [0, 255], where 0 denotes black and 255 denotes white, and $n_k$ denotes the number of times gray level k occurs in the digital image, with

$$P_r(k) = n_k, \quad k = 0, 1, 2, \ldots, 255.$$

Step s7: calculate the cumulative histogram and derive the gray-level mapping. The cumulative histogram is

$$F_t(k) = \sum_{j=0}^{k} P_t(j),$$

and the gray-level mapping is

$$R_t(k) = \operatorname{round}\!\left(255 \cdot \frac{F_t(k)}{F_t(255)}\right),$$

where $R_t(k)$ is the new gray level corresponding to the original gray level k.
By means of the improved histogram equalization algorithm, the contrast of the image is greatly improved, the target, background and noise signals are well adjusted, the gray levels of the target are highlighted, the target's gray-level range is enlarged, a large amount of detail is preserved, and the infrared image is well enhanced.
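A sketch of steps s1-s7 is given below. Where the original equations are available only as images, the sketch makes two explicit assumptions: the plateau threshold T is taken as the median of the twice-reduced maxima set {Z(j)}, and the new weighted histogram is the plateau-clipped histogram P_t(k) = min(P_r(k), T).

```python
import numpy as np

def local_maxima(values: np.ndarray) -> np.ndarray:
    """Return the local maxima of a 1-D sequence (an endpoint counts when it
    exceeds its single neighbour); falls back to the global maximum if none."""
    v = np.asarray(values, dtype=np.float64)
    if v.size < 2:
        return v.copy()
    keep = []
    for i in range(v.size):
        left = v[i - 1] if i > 0 else -np.inf
        right = v[i + 1] if i < v.size - 1 else -np.inf
        if v[i] > left and v[i] > right:
            keep.append(v[i])
    return np.array(keep) if keep else v.max(keepdims=True)

def enhance_infrared(ir: np.ndarray) -> np.ndarray:
    """Plateau-style histogram equalization following steps s1-s7."""
    hist = np.bincount(ir.ravel().astype(np.uint8), minlength=256).astype(np.float64)  # P_r(k) = n_k
    nonzero = hist[hist > 0]                    # s1: drop zero units, keep order
    maxima = local_maxima(nonzero)              # s2/s3: local maxima -> set {N(i)}
    maxima2 = local_maxima(maxima)              # s4: maxima of the maxima -> set {Z(j)}
    plateau = np.median(maxima2)                # s5: plateau threshold T (assumed = median of Z)
    p_t = np.minimum(hist, plateau)             # s6: clipped ("weighted") histogram P_t(k)
    f_t = np.cumsum(p_t)                        # s7: cumulative histogram F_t(k)
    mapping = np.round(255.0 * f_t / f_t[-1]).astype(np.uint8)   # gray-level mapping R_t(k)
    return mapping[ir.astype(np.uint8)]
```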
Step 5: after image processing, the depth map, the RGB image, the infrared image, RGBD data or XYZRGB data are selectively output.
The invention supports structured light generated by different laser sources, such as infrared, visible, ultraviolet and other invisible light, and can use projection patterns of different shapes, such as dots, blocks, crosses and stripes. Accordingly, all such modifications and improvements within the spirit and scope of the invention are intended to be included within the scope of the following claims.

Claims (10)

1. A monocular structured light depth perception device based on an RGBIR camera, characterized in that:
the depth perception device comprises:
the RGBIR image generation module, used for projecting an infrared speckle image through an infrared laser speckle coding projector or forming an infrared image by illumination with an infrared flood light source, and for capturing a visible light RGB image through an RGBIR camera;
the RGBIR image acquisition module is used for acquiring the infrared speckle image or the infrared image and the visible light RGB image together through a single RGBIR camera to obtain RGBIR RAW data;
the RGBIR image separation module is used for carrying out image separation on the collected RGBIR RAW data to obtain a separated RGB image, an infrared speckle image or an infrared image;
the RGBIR image processing module is used for processing the RGB image, the infrared speckle image or the infrared image separated by the RGBIR image separation module separately;
the selective output module is used for selectively outputting the RGB images, the infrared speckle images or the infrared images which are processed by the RGBIR image processing module.
2. A monocular structured light depth perception device as described in claim 1, wherein: the image processing of the infrared speckle image comprises preprocessing, image rotation, block matching, depth calculation, post-processing and three-dimensional point cloud generation.
3. A monocular structured light depth perception device as described in claim 1, wherein:
the selection output module further selectively outputs the depth map, RGBD data or XYZRGB data.
4. A monocular structured light depth perception device as described in claim 2, wherein:
the image rotation is specifically: when the camera rotates, the image sensor transmits a signal to the RGBIR image processing module so that the image is rotated during image processing; otherwise the image is not rotated.
5. A monocular structured light depth perception device as described in claim 1, wherein:
the image processing performed on the infrared image is an infrared image enhancement operation, specifically comprising the following steps:
analyzing the weighted histogram, eliminating the zero units while keeping the relative order of the remaining units unchanged, and forming a set {N(i) | 1 ≤ i ≤ K}, where K is the number of nonzero units;
determining the local maxima of the set {N(i)};
forming another set from the values obtained in the previous step, {N(i) | 1 ≤ i ≤ L}, where L is the number of local maxima;
then computing the local maxima of the set {N(i) | 1 ≤ i ≤ L} to obtain a new set {Z(j) | 1 ≤ j ≤ S}, where S is the number of elements in the new set and S < L;
finding the median value $Z_{mid}$ of the set {Z(j) | 1 ≤ j ≤ S} and determining the plateau threshold T from it;
calculating a new weighted histogram $P_t(k)$ from the original histogram $P_r(k)$ using the plateau threshold, where the number of occurrences of gray level k in the digital image is $n_k$ and $P_r(k) = n_k$, k = 0, 1, 2, …, 255;
calculating the cumulative histogram and deriving the gray-level mapping, wherein the cumulative histogram is $F_t(k) = \sum_{j=0}^{k} P_t(j)$, the gray-level mapping is $R_t(k) = \operatorname{round}(255 \cdot F_t(k)/F_t(255))$, and $R_t(k)$ is the new gray level corresponding to the original gray level k.
6. A monocular structured light depth perception method based on an RGBIR camera, characterized in that:
the monocular structured light depth perception method comprises the following steps:
step 1: switching on the power supply and driving the circuitry to work normally, so that the infrared laser speckle coding projector projects an infrared speckle image or an infrared flood light source is used to form an infrared image by illumination, and driving the RGBIR camera to capture an RGB color image;
step 2: collecting the infrared speckle image or the infrared image together with the RGB color image frame by frame with the RGBIR camera to obtain RGBIR RAW data;
step 3: performing image separation on the RGBIR RAW data acquired in step 2;
step 4: respectively performing image processing on the RGB color image and the infrared speckle image or infrared image obtained by image separation;
step 5: selectively outputting the RGB color image, the infrared speckle image or the infrared image after image processing.
7. The monocular structured light depth perception method of claim 6, wherein:
the selective output in step 5 further comprises selecting and outputting a depth map, RGBD data or XYZRGB data.
8. The monocular structured light depth perception method of claim 6, wherein:
the image processing of the infrared speckle image in the step 4 comprises preprocessing, image rotation, block matching, depth calculation, post-processing and three-dimensional point cloud generation.
9. The monocular structured light depth perception method of claim 8, wherein:
the image rotation is specifically: when the camera rotates, the image sensor transmits a signal to the image processing module so that the image is rotated during image processing; otherwise the image is not rotated.
10. The monocular structured light depth perception method of claim 6, wherein:
the image processing performed on the infrared image in step 4 is an infrared image enhancement operation, specifically comprising the following steps:
step s1: analyzing the weighted histogram, eliminating the zero units while keeping the relative order of the remaining units unchanged, and forming a set {N(i) | 1 ≤ i ≤ K}, where K is the number of nonzero units;
step s2: finding the local maxima of the set;
step s3: forming another set from the values obtained in step s2, {N(i) | 1 ≤ i ≤ L}, where L is the number of local maxima;
step s4: computing the local maxima of {N(i) | 1 ≤ i ≤ L} to obtain a new set {Z(j) | 1 ≤ j ≤ S}, where S is the number of elements in the new set and S < L;
step s5: finding the median value $Z_{mid}$ of the set {Z(j) | 1 ≤ j ≤ S} and determining the plateau threshold T from it;
step s6: calculating a new weighted histogram $P_t(k)$ from the original histogram $P_r(k)$ using the plateau threshold, where $n_k$ denotes the number of occurrences of gray level k and $P_r(k) = n_k$, k = 0, 1, 2, …, 255;
step s7: calculating the cumulative histogram and deriving the gray-level mapping, wherein the cumulative histogram is $F_t(k) = \sum_{j=0}^{k} P_t(j)$, the gray-level mapping is $R_t(k) = \operatorname{round}(255 \cdot F_t(k)/F_t(255))$, and $R_t(k)$ is the new gray level corresponding to the original gray level k.
CN201711279413.8A 2017-12-06 2017-12-06 Monocular structure light depth perception method and device based on RGBIR camera Active CN109889799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711279413.8A CN109889799B (en) 2017-12-06 2017-12-06 Monocular structure light depth perception method and device based on RGBIR camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711279413.8A CN109889799B (en) 2017-12-06 2017-12-06 Monocular structure light depth perception method and device based on RGBIR camera

Publications (2)

Publication Number Publication Date
CN109889799A CN109889799A (en) 2019-06-14
CN109889799B true CN109889799B (en) 2020-08-25

Family

ID=66924325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711279413.8A Active CN109889799B (en) 2017-12-06 2017-12-06 Monocular structure light depth perception method and device based on RGBIR camera

Country Status (1)

Country Link
CN (1) CN109889799B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111239729B (en) * 2020-01-17 2022-04-05 西安交通大学 Speckle and floodlight projection fused ToF depth sensor and distance measuring method thereof
CN113034565B (en) * 2021-03-25 2023-07-04 奥比中光科技集团股份有限公司 Depth calculation method and system for monocular structured light
CN114119696A (en) * 2021-11-30 2022-03-01 上海商汤临港智能科技有限公司 Method, device and system for acquiring depth image and computer readable storage medium
CN115567697B (en) * 2022-12-02 2023-05-16 浙江智慧视频安防创新中心有限公司 3D structured light camera system, data acquisition method, device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10268885B2 (en) * 2013-04-15 2019-04-23 Microsoft Technology Licensing, Llc Extracting true color from a color and infrared sensor
US9191643B2 (en) * 2013-04-15 2015-11-17 Microsoft Technology Licensing, Llc Mixing infrared and color component data point clouds
CN103824318B (en) * 2014-02-13 2016-11-23 西安交通大学 A kind of depth perception method of multi-cam array
CN105120257B (en) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 A kind of vertical depth sensing device based on structure light coding
CN106412433B (en) * 2016-10-09 2019-01-29 深圳奥比中光科技有限公司 Atomatic focusing method and system based on RGB-IR depth camera

Also Published As

Publication number Publication date
CN109889799A (en) 2019-06-14

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant