CN108399632B - RGB-D camera depth image restoration method based on color image combination - Google Patents
- Publication number: CN108399632B
- Application number: CN201810174885.5A
- Authority
- CN
- China
- Prior art keywords
- depth
- rgb
- cavity
- camera
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/50—Depth or shape recovery
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/70—Denoising; Smoothing
- G06T7/12—Edge-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10024—Color image (indexing scheme for image analysis or enhancement, image acquisition modality)
Abstract
The invention relates to an RGB-D camera depth image restoration method that combines the color image, and belongs to the field of depth image restoration. The method comprises the following steps: calibrate the color and depth cameras with Zhang's calibration method to obtain the intrinsic and extrinsic camera parameters; align the coordinates of the color camera and the depth camera according to the pinhole imaging principle and a coordinate transformation; acquire a color image and a depth image of the scene, and binarize the depth image with a depth threshold method; judge the size of each hole's connected domain to determine whether a hole exists; perform a dilation operation to obtain the hole neighborhood; compute the variance of the depth values of the pixel points in the hole neighborhood, and divide the holes into occlusion holes and in-plane holes according to this variance; repair the two types of holes using color consistency and neighborhood similarity, respectively; and filter the repaired image with a local filtering method to remove noise. The invention ensures effective depth image restoration while retaining the original depth information as much as possible and improving restoration efficiency.
Description
Technical Field
The invention belongs to the field of depth image restoration, and relates to an RGB-D camera depth image restoration method that combines the color image.
Background
With the development of vision technology, more and more kinds of cameras are applied in real scenes, for example: ordinary color cameras for face recognition, CCD cameras for obstacle detection, and the Kinect for human skeleton tracking. An ordinary two-dimensional camera can only acquire two-dimensional scene information, which is clearly insufficient for practical applications in three-dimensional space. This situation improved with the advent of RGB-D cameras such as Microsoft's Kinect, which conveniently acquires depth information together with the color image of a scene. Since the release of Microsoft's Kinect (Project Natal), depth cameras have attracted growing attention and can be used in human body tracking, three-dimensional reconstruction, human-computer interaction, SLAM, and other fields. However, the depth image obtained by an RGB-D camera generally contains many holes, so the depth information cannot be used directly in real applications, which greatly limits the camera's range of application.
Therefore, an effective depth image restoration method is needed to solve the problem that the depth image obtained by an RGB-D camera contains holes and cannot be used directly.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a depth image restoration method for an RGB-D camera that combines the color image: an effective restoration method solving the problem that the depth image obtained by an RGB-D camera contains holes and cannot be used directly.
In order to achieve the purpose, the invention provides the following technical scheme:
a depth image restoration method of an RGB-D camera combining the color image comprises the following steps:
s1: calibrating the color camera and the depth camera by the classical Zhang calibration method to obtain the intrinsic and extrinsic parameters of the cameras;
s2: aligning the coordinates of the color camera and the depth camera according to the pinhole imaging principle and a coordinate transformation;
s3: acquiring a scene color image and a depth image, and binarizing the depth image according to a depth threshold method;
s4: judging the size of each depth image hole connected domain to determine whether a hole exists;
s5: performing a dilation operation on the depth image to obtain the hole neighborhood;
s6: calculating the variance of the depth values of the pixel points in the hole neighborhood, and dividing the holes into occlusion holes and in-plane holes according to the variance;
s7: repairing the two types of holes by color consistency and neighborhood similarity respectively, according to the hole type;
s8: filtering the repaired image by a local filtering method, removing noise while preserving the original depth values.
Further, the step S2 specifically includes the following steps:
s21: the rotation matrix Rrgb and translation vector Trgb of the color camera, and the rotation matrix Rir and translation vector Tir of the depth camera, are obtained in step S1. Let P be a point in the world coordinate system, and let Prgb and Pir be the coordinates of P in the color-camera and depth-camera coordinate systems respectively; the pinhole imaging model gives:
Prgb=RrgbP+Trgb
Pir=RirP+Tir
s22: let prgb and pir be the projection coordinates of the point on the RGB image plane and the depth image plane respectively; with the camera intrinsic matrices Argb and Air:
prgb=ArgbPrgb
pir=AirPir
s23: the coordinate system of the depth camera differs from that of the RGB camera; Prgb and Pir are related by a rotation matrix R and a translation vector T:
Prgb=RPir+T
s24: eliminating P from the S21 equations yields:
Prgb = RrgbRir⁻¹Pir + Trgb - RrgbRir⁻¹Tir
s25: equating corresponding terms of the S23 and S24 equations gives R and T:
R = RrgbRir⁻¹
T = Trgb - RrgbRir⁻¹Tir
s26: combining equations S22 and S23 finally gives the correspondence between color image pixel points and depth image pixel points:
prgb = ArgbRAir⁻¹pir + ArgbT
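The coordinate alignment of steps S21-S26 can be sketched in NumPy as follows. All calibration values here (intrinsics, baseline, test pixel, depth) are assumed for illustration only, not taken from the patent:

```python
import numpy as np

# Assumed calibration results from step S1 (illustrative values).
R_rgb = np.eye(3)                      # color-camera rotation
T_rgb = np.array([0.0, 0.0, 0.0])      # color-camera translation
R_ir = np.eye(3)                       # depth-camera rotation
T_ir = np.array([-0.025, 0.0, 0.0])    # depth-camera translation (assumed baseline, m)

# S24/S25: eliminate the world point P to get the inter-camera transform.
R = R_rgb @ np.linalg.inv(R_ir)        # R = Rrgb * Rir^-1
T = T_rgb - R @ T_ir                   # T = Trgb - R * Tir

# S26: project a depth-image pixel into the color image.
A_rgb = np.array([[525.0, 0, 320.0],
                  [0, 525.0, 240.0],
                  [0, 0, 1.0]])         # assumed intrinsic matrix
A_ir = A_rgb.copy()                     # assume identical intrinsics

p_ir = np.array([160.0, 120.0, 1.0])    # homogeneous depth-image pixel
Z_ir = 1.5                              # measured depth at that pixel (m)
P_ir = Z_ir * np.linalg.inv(A_ir) @ p_ir   # back-project into depth-camera coords
P_rgb = R @ P_ir + T                    # S23: transform into color-camera coords
p_rgb = A_rgb @ P_rgb
p_rgb = p_rgb / p_rgb[2]                # normalize homogeneous coordinates
```

With a nonzero baseline the projected color-image pixel is shifted horizontally relative to the depth-image pixel, which is exactly the misalignment step S2 corrects before hole repair.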
further, the step S3 specifically includes the following steps:
s31: simultaneously acquiring a scene image according to the aligned color camera and depth camera;
s32: quantizing the original depth data to convert the depth values into grey levels of 0-255; the quantization formula is:
depth = (Dp / Dmax) × 255
where depth is the quantized depth value, Dp is the raw depth value of each pixel point, and Dmax is the maximum depth value;
s33: binarizing the obtained depth image according to a depth threshold method:
I(x, y) = 1 if depth(x, y) ≤ Dthr, else I(x, y) = 0
wherein I(x, y) is the binarized image, depth(x, y) is the quantized depth value of the pixel point, and Dthr is the binarization depth threshold.
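A minimal sketch of the S32 quantization and S33 thresholding, assuming an example raw depth frame in millimetres and an assumed sensor range; hole pixels typically read a raw depth of 0, so they fall below the threshold after quantization:

```python
import numpy as np

D = np.array([[1200, 1210,    0],
              [1190,    0,    0],
              [1185, 1195, 1200]], dtype=np.float64)   # assumed raw depths (mm)
D_max = 4000.0          # assumed maximum sensor depth (mm)

# S32: quantize depth to 0-255 grey levels: depth = (Dp / Dmax) * 255.
depth = np.round(D / D_max * 255).astype(np.uint8)

# S33: binarize with a depth threshold Dthr; 1 marks candidate hole pixels.
D_thr = 10              # assumed threshold value
I = (depth < D_thr).astype(np.uint8)
```

Here `I` is the binary hole mask passed on to the connected-domain check of step S4.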
Further, the step S4 specifically includes: after the binarization processing of S3, judging the size of each hole connected domain to distinguish true holes from noise points; if a connected domain is judged to be a hole, subsequent processing continues; if not, no subsequent processing is performed.
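The connected-domain check of S4 can be sketched as a breadth-first search over the binary mask. The toy mask and the minimum-size threshold are assumptions for illustration; the patent does not specify a threshold value:

```python
import numpy as np
from collections import deque

def keep_holes(I, min_hole_size=4):
    """Keep 4-connected components of at least min_hole_size pixels (assumed)."""
    H, W = I.shape
    seen = np.zeros_like(I, dtype=bool)
    hole_mask = np.zeros_like(I)
    for sy in range(H):
        for sx in range(W):
            if I[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS over one connected domain
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < H and 0 <= nx < W and I[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_hole_size:    # large enough: a real hole
                    for y, x in comp:
                        hole_mask[y, x] = 1
    return hole_mask

I = np.zeros((8, 8), dtype=np.uint8)
I[2:5, 2:5] = 1        # 9-pixel region: treated as a genuine hole
I[6, 7] = 1            # isolated pixel: treated as noise, discarded
hole_mask = keep_holes(I)
```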
Further, the step S5 specifically includes the following steps:
s51: performing a dilation operation on the hole judged at S4 to obtain the dilated region Hep;
s52: obtaining the hole neighborhood Hnp as:
Hnp = Hep - Ho
wherein Ho is the hole region.
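Steps S51-S52 can be sketched with plain NumPy, using the 3 × 3 cross-shaped structuring element the description mentions later: dilation with a cross element is the OR of the image with its four one-pixel cardinal shifts, and subtracting the original hole leaves the one-pixel neighbourhood ring. The toy hole region is an assumption:

```python
import numpy as np

H_o = np.zeros((7, 7), dtype=np.uint8)
H_o[2:5, 2:5] = 1                      # toy 3x3 hole region

# S51: dilation with a 3x3 cross structuring element.
H_ep = H_o.copy()
H_ep[1:, :] |= H_o[:-1, :]             # shift down
H_ep[:-1, :] |= H_o[1:, :]             # shift up
H_ep[:, 1:] |= H_o[:, :-1]             # shift right
H_ep[:, :-1] |= H_o[:, 1:]             # shift left

# S52: Hnp = Hep - Ho, the hole neighbourhood.
H_np = H_ep - H_o
```

For the 3 × 3 hole this yields a 12-pixel neighbourhood ring (the cross element does not add the diagonal corners).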
Further, the step S6 specifically includes the following steps:
s61: according to the hole neighborhood obtained in S5, calculating the variance Δdepth of the quantized depth values of the pixel points in the neighborhood:
Δdepth = (1/N) Σ (Di - D̄)², i = 1, …, N
wherein Di is the depth value of the i-th pixel point in the hole neighborhood, D̄ is the mean depth value over the hole neighborhood, i indexes the pixel points, and N is the total number of pixel points in the hole neighborhood;
s62: comparing the neighborhood variance Δdepth calculated in S61 with a hole-type decision threshold Δth: if Δdepth is less than Δth, the hole is judged to be an in-plane hole; if it is greater than or equal to Δth, it is judged to be an occlusion hole.
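The S61-S62 classification can be sketched as below. The neighbourhood depth values and the threshold Δth are assumed example values, not taken from the patent:

```python
import numpy as np

def classify_hole(neighbourhood_depths, delta_th=100.0):
    """Classify a hole from the variance of its neighbourhood depths (S61-S62).

    delta_th is an assumed threshold; the patent leaves its value open.
    """
    D = np.asarray(neighbourhood_depths, dtype=np.float64)
    delta_depth = np.mean((D - D.mean()) ** 2)   # variance over the N pixels
    return "in-plane" if delta_depth < delta_th else "occlusion"

flat = [76, 77, 76, 75, 76, 77]        # nearly constant depth: in-plane hole
edge = [40, 42, 41, 200, 198, 201]     # two depth layers: occlusion hole
```

A neighbourhood straddling two objects mixes two depth layers, so its variance is large; a neighbourhood inside one plane has variance near zero, which is the intuition behind the threshold test.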
Further, the step S7 specifically includes: a hole judged to be in-plane is caused by a region of the object surface that does not reflect the infrared light; since such a hole lies within a plane, the depth of the hole region is similar to that of the neighboring pixel points, and it is repaired using the depth values of the neighborhood pixel points; a hole judged to be an occlusion hole is generated by objects occluding each other, and it is repaired using the color image, that is, by finding the pixel point in the hole neighborhood whose color in the color image is closest to that of the hole pixel in the hole region and using its depth value for the repair.
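The two repair rules of S7 can be sketched per hole pixel as follows. The helper names, the averaging rule for in-plane holes, and all toy values are assumptions for illustration:

```python
import numpy as np

def repair_in_plane(neighbour_depths):
    # Neighbourhood similarity: inside a plane the neighbours share the
    # hole's depth, so use their mean (assumed aggregation rule).
    return float(np.mean(neighbour_depths))

def repair_occlusion(hole_color, neighbour_colors, neighbour_depths):
    # Color consistency: copy the depth of the neighbour whose color in
    # the aligned color image is closest to the hole pixel's color.
    colors = np.asarray(neighbour_colors, dtype=np.float64)
    dist = np.linalg.norm(colors - np.asarray(hole_color, dtype=np.float64), axis=1)
    return float(np.asarray(neighbour_depths)[int(np.argmin(dist))])

d_plane = repair_in_plane([76, 77, 76, 75])
d_occ = repair_occlusion(
    hole_color=[200, 30, 30],                          # reddish hole pixel
    neighbour_colors=[[198, 28, 32], [20, 200, 20]],   # red vs. green neighbours
    neighbour_depths=[41.0, 200.0])                    # foreground vs. background
```

The occlusion case shows why the color image helps: the red neighbour belongs to the same (foreground) object as the hole pixel, so its depth, not the background's, is copied in.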
Further, the step S8 specifically includes: because the holes repaired in S7 still contain fine noise, a local bilateral filtering method is adopted for image filtering, in order to preserve the original depth data as much as possible while removing the noise.
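A minimal sketch of the local filtering idea of S8: a plain NumPy bilateral filter applied only inside the repaired region, so pixels outside the mask keep their original depth values untouched. The filter parameters and toy data are assumptions:

```python
import numpy as np

def local_bilateral(depth, mask, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Bilateral-filter only the pixels where mask is True (assumed params)."""
    out = depth.astype(np.float64).copy()
    H, W = depth.shape
    for y, x in zip(*np.nonzero(mask)):          # repaired pixels only
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        patch = depth[y0:y1, x0:x1].astype(np.float64)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # spatial weight * range (depth-difference) weight
        w = (np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-(patch - depth[y, x]) ** 2 / (2 * sigma_r ** 2)))
        out[y, x] = (w * patch).sum() / w.sum()
    return out

depth = np.full((5, 5), 80.0)
depth[2, 2] = 90.0                  # residual noise inside the repaired hole
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                   # filter only the repaired pixel
smoothed = local_bilateral(depth, mask)
```

Restricting the filter to the mask is what "preserving the originality of the depth values" amounts to: untouched measurements pass through unchanged.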
The invention has the following beneficial effects: the holes are divided into two types according to a hole-neighborhood variance threshold method; the combined color image is used to repair the two types of holes with neighborhood similarity and color consistency respectively; and a local filtering method removes noise from the repaired image. Compared with traditional coarse restoration using a bilateral filtering algorithm alone, the method ensures effective depth image restoration while preserving the original depth image information as much as possible.
Drawings
In order to make the object, technical scheme and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a system flow diagram of the present invention;
FIG. 2 is a schematic diagram of hole neighborhood calculation;
FIG. 3 is a schematic diagram of the formation of an occlusion hole.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a system flow chart of the present invention. As shown in FIG. 1, the RGB-D camera depth image restoration method combining the color image first aligns the coordinates of the color camera and the depth camera, binarizes the quantized depth image according to a depth threshold method, obtains the hole neighborhood through a dilation operation, calculates the neighborhood variance to judge the hole type, repairs the different holes with neighborhood similarity and color consistency respectively, and finally performs image filtering with a local bilateral filtering method.
The method specifically comprises the following steps:
s1: calibrating the color camera and the depth camera by adopting a classical Zhang calibration method to obtain internal and external parameters of the camera;
s2: realizing the coordinate alignment of the color camera and the depth camera according to the pinhole imaging principle and the coordinate transformation;
s3: respectively acquiring a scene color image and a depth image, and binarizing the depth image according to a depth threshold method;
s4: judging the size of each depth image hole connected domain to determine whether a hole exists: after the binarization processing of S3, the size of each hole connected domain is judged to distinguish true holes from noise points; if a connected domain is judged to be a hole, subsequent processing continues; if not, no subsequent processing is performed.
S5: performing a dilation operation on the depth image to obtain the hole neighborhood;
s6: calculating the variance of the depth values of the pixel points in the hole neighborhood, and dividing the holes into occlusion holes and in-plane holes according to the variance;
s7: repairing the two types of holes by color consistency and neighborhood similarity respectively, according to the hole type;
a hole judged to be in-plane is caused by a region of the object surface that does not reflect the infrared light; since such a hole lies within a plane, the depth of the hole region is similar to that of the neighboring pixel points, and it is repaired using the depth values of the neighborhood pixel points; a hole judged to be an occlusion hole is generated by objects occluding each other, as shown in fig. 3, and is repaired using the color image, that is, by finding the pixel point in the hole neighborhood whose color in the color image is closest to that of the hole pixel and using its depth value for the repair.
S8: filtering the repaired image by a local filtering method, removing noise while preserving the original depth values;
because the holes repaired in S7 still contain fine noise, a local bilateral filtering method is adopted for image filtering, in order to preserve the original depth data as much as possible while removing the noise.
Step S2 specifically includes the following steps:
s21: the rotation matrix Rrgb and translation vector Trgb of the color camera, and the rotation matrix Rir and translation vector Tir of the depth camera, are obtained in step S1. Let P be a point in the world coordinate system, and let Prgb and Pir be the coordinates of P in the color-camera and depth-camera coordinate systems respectively; the pinhole imaging model gives:
Prgb=RrgbP+Trgb
Pir=RirP+Tir
s22: let prgb and pir be the projection coordinates of the point on the RGB image plane and the depth image plane respectively; with the camera intrinsic matrices Argb and Air:
prgb=ArgbPrgb
pir=AirPir
s23: the coordinate system of the depth camera differs from that of the RGB camera; Prgb and Pir are related by a rotation matrix R and a translation vector T:
Prgb=RPir+T
s24: eliminating P from the S21 equations yields:
Prgb = RrgbRir⁻¹Pir + Trgb - RrgbRir⁻¹Tir
s25: equating corresponding terms of the S23 and S24 equations gives R and T:
R = RrgbRir⁻¹
T = Trgb - RrgbRir⁻¹Tir
s26: combining equations S22 and S23 finally gives the correspondence between color image pixel points and depth image pixel points:
prgb = ArgbRAir⁻¹pir + ArgbT
step S3 specifically includes the following steps:
s31: simultaneously acquiring a scene image according to the aligned color camera and depth camera;
s32: quantizing the original depth data to convert the depth values into grey levels of 0-255; the quantization formula is:
depth = (Dp / Dmax) × 255
where depth is the quantized depth value, Dp is the raw depth value of each pixel point, and Dmax is the maximum depth value;
s33: binarizing the obtained depth image according to a depth threshold method:
I(x, y) = 1 if depth(x, y) ≤ Dthr, else I(x, y) = 0
wherein I(x, y) is the binarized image, depth(x, y) is the quantized depth value of the pixel point, and Dthr is the binarization depth threshold.
Step S5 specifically includes the following steps:
s51: performing a dilation operation on the hole judged at S4 to obtain the dilated region Hep;
s52: obtaining the hole neighborhood Hnp as:
Hnp = Hep - Ho
wherein Ho is the hole region. As shown in fig. 2, a 3 × 3 cross-shaped structuring element is used to perform the dilation operation on the hole, and the original hole image is subtracted from the dilated image to obtain the hole neighborhood.
Step S6 specifically includes the following steps:
s61: according to the hole neighborhood obtained in S5, calculating the variance Δdepth of the quantized depth values of the pixel points in the neighborhood:
Δdepth = (1/N) Σ (Di - D̄)², i = 1, …, N
wherein Di is the depth value of the i-th pixel point in the hole neighborhood, D̄ is the mean depth value over the hole neighborhood, i indexes the pixel points, and N is the total number of pixel points in the hole neighborhood;
s62: comparing the neighborhood variance Δdepth calculated in S61 with a hole-type decision threshold Δth: if Δdepth is less than Δth, the hole is judged to be an in-plane hole; if it is greater than or equal to Δth, it is judged to be an occlusion hole.
Finally, it is noted that the above-mentioned preferred embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims.
Claims (5)
1. A depth image restoration method of an RGB-D camera combined with a color image is characterized by comprising the following steps:
s1: calibrating the color camera and the depth camera by adopting a classical Zhang calibration method to obtain internal and external parameters of the camera;
s2: realizing the coordinate alignment of the color camera and the depth camera according to the pinhole imaging principle and the coordinate transformation; the method specifically comprises the following steps:
s21: the rotation matrix Rrgb and translation vector Trgb of the color camera, and the rotation matrix Rir and translation vector Tir of the depth camera, are obtained in step S1. Let P be a point in the world coordinate system, and let Prgb and Pir be the coordinates of P in the color-camera and depth-camera coordinate systems respectively; the pinhole imaging model gives:
Prgb=RrgbP+Trgb
Pir=RirP+Tir
s22: let prgb and pir be the projection coordinates of the point on the RGB image plane and the depth image plane respectively; with the camera intrinsic matrices Argb and Air:
prgb=ArgbPrgb
pir=AirPir
s23: the coordinate system of the depth camera differs from that of the RGB camera; Prgb and Pir are related by a rotation matrix R and a translation vector T:
Prgb=RPir+T
s24: eliminating P from the S21 equations yields:
Prgb = RrgbRir⁻¹Pir + Trgb - RrgbRir⁻¹Tir
s25: equating corresponding terms of the S23 and S24 equations gives R and T:
R = RrgbRir⁻¹
T = Trgb - RrgbRir⁻¹Tir
s26: combining equations S22 and S23 finally gives the correspondence between color image pixel points and depth image pixel points:
prgb = ArgbRAir⁻¹pir + ArgbT
s3: respectively acquiring a scene color image and a depth image, and binarizing the depth image according to a depth threshold method;
s4: judging the size of each depth image hole connected domain to determine whether a hole exists;
s5: performing a dilation operation on the depth image to obtain the hole neighborhood;
s6: calculating the variance of the depth values of the pixel points in the hole neighborhood, and dividing the holes into occlusion holes and in-plane holes according to the variance; specifically comprising:
s61: according to the hole neighborhood Hnp obtained in S5, calculating the variance Δdepth of the quantized depth values of the pixel points in the neighborhood:
Δdepth = (1/N) Σ (Di - D̄)², i = 1, …, N
wherein Di is the depth value of the i-th pixel point in the hole neighborhood Hnp, D̄ is the mean depth value over the hole neighborhood, i indexes the pixel points, and N is the total number of pixel points in the hole neighborhood;
s62: comparing the neighborhood variance Δdepth calculated in S61 with a hole-type decision threshold Δth: if Δdepth is less than Δth, the hole is judged to be an in-plane hole; if it is greater than or equal to Δth, it is judged to be an occlusion hole;
s7: repairing the two types of holes by color consistency and neighborhood similarity respectively, according to the hole type; specifically: a hole judged to be in-plane is caused by a region of the object surface that does not reflect the infrared light; since such a hole lies within a plane, the depth of the hole region is similar to that of the neighboring pixel points, and it is repaired using the depth values of the neighborhood pixel points; a hole judged to be an occlusion hole is generated by objects occluding each other, and it is repaired using the color image, that is, by finding the pixel point in the hole neighborhood whose color in the color image is closest to that of the hole pixel in the hole region and using its depth value for the repair;
s8: and filtering the repaired image by adopting a local filtering method, and removing noise while ensuring the originality of the depth value.
2. The method for RGB-D camera depth image restoration combined with color image according to claim 1, wherein the step S3 specifically includes the following steps:
s31: simultaneously acquiring a scene image according to the aligned color camera and depth camera;
s32: quantizing the original depth data to convert the depth values into grey levels of 0-255; the quantization formula is:
depth = (Dp / Dmax) × 255
where depth is the quantized depth value, Dp is the raw depth value of each pixel point, and Dmax is the maximum depth value;
s33: binarizing the obtained depth image according to a depth threshold method:
I(x, y) = 1 if depth(x, y) ≤ Dthr, else I(x, y) = 0
wherein I(x, y) is the binarized image, depth(x, y) is the quantized depth value of the pixel point, and Dthr is the binarization depth threshold.
3. The method for RGB-D camera depth image restoration combined with color image according to claim 1, wherein the step S4 specifically includes: after the binarization processing of S3, judging the size of each hole connected domain to distinguish true holes from noise points; if a connected domain is judged to be a hole, subsequent processing continues; if not, no subsequent processing is performed.
4. The method for RGB-D camera depth image restoration combined with color image according to claim 1, wherein the step S5 specifically includes the following steps:
s51: performing a dilation operation on the hole judged at S4 to obtain the dilated region Hep;
s52: obtaining the hole neighborhood Hnp as:
Hnp = Hep - Ho
wherein Ho is the hole region.
5. The method for RGB-D camera depth image restoration combined with color image according to claim 1, wherein the step S8 specifically includes: because the holes repaired in S7 still contain fine noise, a local filtering method is adopted for image filtering, in order to preserve the original depth data as much as possible while removing the noise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810174885.5A CN108399632B (en) | 2018-03-02 | 2018-03-02 | RGB-D camera depth image restoration method based on color image combination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399632A CN108399632A (en) | 2018-08-14 |
CN108399632B true CN108399632B (en) | 2021-06-15 |
Family
ID=63091760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810174885.5A Active CN108399632B (en) | 2018-03-02 | 2018-03-02 | RGB-D camera depth image restoration method based on color image combination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399632B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101067557A (en) * | 2007-07-03 | 2007-11-07 | 北京控制工程研究所 | Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle |
CN101374242A (en) * | 2008-07-29 | 2009-02-25 | 宁波大学 | Depth map encoding compression method for 3DTV and FTV system |
CN101609508A (en) * | 2008-06-18 | 2009-12-23 | 中国科学院自动化研究所 | Sign structure and recognition methods to object identification and orientation information calculating |
CN101662694A (en) * | 2008-08-29 | 2010-03-03 | 深圳华为通信技术有限公司 | Method and device for presenting, sending and receiving video and communication system |
CN103217111A (en) * | 2012-11-28 | 2013-07-24 | 西南交通大学 | Non-contact contact line geometrical parameter detecting method |
CN103561258A (en) * | 2013-09-25 | 2014-02-05 | 同济大学 | Kinect depth video spatio-temporal union restoration method |
CN104571482A (en) * | 2013-10-22 | 2015-04-29 | 中国传媒大学 | Digital device control method based on somatosensory recognition |
CN107622480A (en) * | 2017-09-25 | 2018-01-23 | 长春理工大学 | A kind of Kinect depth image Enhancement Method |
Non-Patent Citations (3)
Title |
---|
"Research on a Depth Image Inpainting Algorithm"; Liu Tianjian et al.; Information Technology; No. 6, 2017; full text *
"Depth Image Inpainting Method Based on Kinect"; Lv Chaohui et al.; Journal of Jilin University; Vol. 46, No. 5, September 2016; full text *
"Robot Hand-Eye Calibration and Object Localization for Industrial Applications"; Cheng Yuli et al.; China Master's Theses Full-text Database, Information Science and Technology; No. 08, August 2016; full text *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |