CN109615601A - A method for fusing color and grayscale depth images - Google Patents

A method for fusing color and grayscale depth images

Info

Publication number
CN109615601A
CN109615601A (application CN201811281646.6A)
Authority
CN
China
Prior art keywords
depth
depth image
image
rgb
yuv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811281646.6A
Other languages
Chinese (zh)
Other versions
CN109615601B (en)
Inventor
姚慧敏
葛晨阳
王佳宁
邓作为
刘欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201811281646.6A priority Critical patent/CN109615601B/en
Publication of CN109615601A publication Critical patent/CN109615601A/en
Application granted granted Critical
Publication of CN109615601B publication Critical patent/CN109615601B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The disclosure provides a method for fusing color and grayscale depth images, comprising: calibrating the intrinsic and extrinsic parameters of the depth sensor and the RGB sensor of a three-dimensional depth sensing device; acquiring a grayscale depth image with the depth sensor and generating a corresponding YUV depth image, while acquiring an RGB color image with the RGB sensor; converting the YUV depth image into an RGB depth image and outputting it together with the RGB color image after compression encoding; decompressing the compression-encoded RGB depth image and RGB color image, converting the decompressed RGB depth image back into a YUV depth image, and recovering the depth value of each pixel of the grayscale depth image that was mapped into the Y-U or Y-V channels; registering the YUV depth image with the RGB color image; and fusing the recovered depth values with the texture information of the registered RGB color image to generate a depth image that fuses color and grayscale.

Description

A method for fusing color and grayscale depth images
Technical field
The disclosure belongs to the technical fields of image processing, computer vision, and human-computer interaction, and in particular relates to a method for fusing color and grayscale depth images.
Background technique
Vision is the most direct and important way for humans to observe and understand the world. We live in a three-dimensional world; human vision not only perceives the brightness, color, texture, and motion of object surfaces, but can also judge their shape and spatial position (depth, distance). Real-time acquisition of depth information facilitates interaction between the real physical world and the virtual network world, strengthens communication and learning between people, between people and machines, and between machines, and raises the intelligence level of machines.
A three-dimensional depth sensing device can acquire high-accuracy depth information (distance) within the projection space in real time, with accuracy reaching the millimeter level or better. A depth image fusing color and grayscale can intuitively and accurately represent both texture information and range information, where each pixel value of the grayscale depth image corresponds to the depth (distance) of one point in physical space. For example, if a target object is 5 meters from the camera of the three-dimensional depth sensing device and the depth accuracy is to reach the millimeter level, then 2^13 distinct values are required, i.e., a high-gray-scale depth image of 13 bits is needed to represent the depth information of the object space. The human eye's ability to resolve color information is far greater than its ability to resolve grayscale information; a grayscale depth image is achromatic, and even a pseudo-color depth map provides limited scene information and can hardly reflect the depth information of physical space completely. A color image suits the viewing habits of the human eye, but contains no depth information of physical space. How to fuse a color image captured under natural lighting with a grayscale depth image, so as to reflect scene information more truthfully, has therefore become an important topic in three-dimensional depth acquisition.
Summary of the invention
In view of the above problems, the disclosure aims to provide a method for fusing color and grayscale depth images. The disclosure fuses the color and grayscale depth images acquired by a three-dimensional depth sensing device; compared with a plain grayscale depth image, the fused result better matches human observation habits and reflects scene information more clearly.
A method for fusing color and grayscale depth images, comprising the following steps:
S100: calibrating the intrinsic and extrinsic parameters of the depth sensor and the RGB sensor of a three-dimensional depth sensing device;
S200: acquiring a grayscale depth image with the depth sensor, and mapping the depth value of each pixel of the grayscale depth image into the Y-U or Y-V channels of a YUV image to generate a corresponding YUV depth image, while acquiring an RGB color image with the RGB sensor;
S300: converting the YUV depth image of step S200 into an RGB depth image, and outputting it together with the RGB color image after compression encoding;
S400: decompressing the compression-encoded RGB depth image and RGB color image of step S300, converting the decompressed RGB depth image into a YUV depth image, and recovering the depth value of each pixel of the grayscale depth image that was mapped into the Y-U or Y-V channels in step S200;
S500: registering the YUV depth image of step S400 with the RGB color image, and fusing the depth value of each pixel of the grayscale depth image recovered in step S400 with the texture information of the registered RGB color image to generate a depth image fusing color and grayscale.
Preferably, in step S100, the intrinsic parameters of the depth sensor and the RGB sensor are calibrated as {f_i, c_i} and {f_r, c_r} respectively, and their extrinsic parameters as R and T, where f_i is the focal length of the depth sensor and c_i its principal point, f_r is the focal length of the RGB sensor and c_r its principal point, R is the rotation matrix, and T is the translation matrix.
Preferably, in step S200, generating the YUV depth image from the grayscale depth image comprises: using the high 8 bits of each pixel's depth value as the Y channel of the YUV depth image and the low bits as its U channel or V channel, the remaining V or U channel being set to the constant 128.
Preferably, in step S300, the YUV depth image is converted into an RGB depth image as follows:
where Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image, and R, G, and B denote the values of the R, G, and B channels of the RGB depth image.
Preferably, in step S300, after the YUV depth image is converted into the RGB depth image, it is output together with the RGB color image of step S200 after compression encoding.
Preferably, in step S400, the decompression is performed as follows:
where R, G, and B denote the values of the R, G, and B channels of the RGB depth image, and Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image.
Preferably, in step S500, registering the YUV depth image with the RGB color image comprises the following steps:
S501: using the intrinsic and extrinsic parameters calibrated in step S100, compute from the pixel coordinate (i_d, j_d) of the YUV depth image the corresponding pixel coordinate (i_r, j_r) of the RGB color image; the pixel coordinate (i_d, j_d) of the YUV depth image is expressed in the spatial coordinate system with the depth sensor as origin as:
where (X_d, Y_d, Z_d) is the coordinate corresponding to pixel (i_d, j_d) in the depth sensor's coordinate system, dis is the depth value at (i_d, j_d), f_ix and f_iy are the X- and Y-direction focal length values of the depth sensor focal length f_i, and c_ix and c_iy are the pixel coordinates of the depth sensor principal point c_i in the depth image;
S502: convert the coordinate in the depth sensor's coordinate system into a coordinate in the spatial coordinate system with the RGB sensor as origin, expressed as follows:
where (X_r, Y_r, Z_r) is the coordinate corresponding to (X_d, Y_d, Z_d) in the RGB sensor's coordinate system;
S503: convert the coordinate in the RGB sensor's coordinate system into the pixel coordinate of the RGB depth image, expressed as follows:
where f_rx and f_ry are the X- and Y-direction focal length values of the RGB sensor focal length f_r, c_rx and c_ry are the pixel coordinates of the RGB sensor principal point c_r in the color image, and (i_r, j_r) is the pixel coordinate in the color image corresponding to (X_r, Y_r, Z_r).
Preferably, in step S500, the depth value of each pixel of the grayscale depth image recovered in step S400 is fused with the texture information of the RGB color image as follows:
where R(i_r, j_r), G(i_r, j_r), and B(i_r, j_r) are the values of the R, G, and B channels of the fused depth image at pixel (i_r, j_r); r(i_r, j_r), g(i_r, j_r), and b(i_r, j_r) are the values of the R, G, and B channels of the RGB color image at (i_r, j_r); gray(i_d, j_d) is the gray value of the corresponding depth image pixel (i_d, j_d); and α and β are constant factors.
Compared with the prior art, the disclosure brings the following beneficial effects:
The disclosure fuses the grayscale of the depth map with the color information of the RGB image, i.e., adds color information to the depth map, making the displayed depth map information richer and scene details more prominent, and thus easier for the human eye to observe.
Detailed description of the invention
Fig. 1 is a flowchart of the method for fusing color and grayscale depth images of the disclosure.
Specific embodiment
The technical solution of the disclosure is described in detail below with reference to the accompanying drawings and embodiments.
Referring to Fig. 1, a method for fusing color and grayscale depth images comprises the following steps:
S100: calibrating the intrinsic and extrinsic parameters of the depth sensor and the RGB sensor of a three-dimensional depth sensing device;
S200: acquiring a grayscale depth image with the depth sensor, and mapping the depth value of each pixel of the grayscale depth image into the Y-U or Y-V channels of a YUV image to generate a corresponding YUV depth image, while acquiring an RGB color image with the RGB sensor;
S300: converting the YUV depth image of step S200 into an RGB depth image, and outputting it together with the RGB color image after compression encoding;
S400: decompressing the compression-encoded RGB depth image and RGB color image of step S300, converting the decompressed RGB depth image into a YUV depth image, and recovering the depth value of each pixel of the grayscale depth image that was mapped into the Y-U or Y-V channels in step S200;
S500: registering the YUV depth image of step S400 with the RGB color image, and fusing the depth value of each pixel of the grayscale depth image recovered in step S400 with the texture information of the registered RGB color image to generate a depth image fusing color and grayscale.
At this point, the present embodiment has fully disclosed the technical solution of the invention: by fusing the grayscale of the depth map with the color information of the RGB image, the displayed depth map information becomes richer and scene details more prominent, which facilitates observation by the human eye.
In another embodiment, in step S100, the intrinsic parameters of the depth sensor and the RGB sensor are calibrated as {f_i, c_i} and {f_r, c_r} respectively, and their extrinsic parameters as R and T, where f_i is the focal length of the depth sensor and c_i its principal point, f_r is the focal length of the RGB sensor and c_r its principal point, R is the rotation matrix, and T is the translation matrix.
In the specific implementation of step S100, the depth sensor and the RGB sensor of the three-dimensional depth sensing device are set to capture synchronously. Checkerboard depth images and RGB color images are captured synchronously by the two sensors, and Zhang Zhengyou's calibration method is used to calibrate the intrinsic and extrinsic parameters of both sensors.
More specifically, the depth sensor and the RGB sensor simultaneously photograph a calibration object, such as a checkerboard calibration board; the position or angle of the calibration object is adjusted and multiple sets of images are captured. The corner positions in each image are then detected, the respective intrinsic parameters {f_i, c_i} and {f_r, c_r} of the two sensors are solved according to Zhang Zhengyou's calibration method, and finally the extrinsic parameters R and T between the two cameras are obtained from the geometric relations between the coordinate systems in that method.
In another embodiment, in step S200, generating the YUV depth image from the grayscale depth image comprises: using the high 8 bits of each pixel's depth value as the Y channel of the YUV depth image and the low bits as its U channel or V channel, the remaining V or U channel being set to the constant 128.
In the specific implementation of step S200, after the depth sensor acquires the grayscale depth image, the depth values are mapped into the Y-U or Y-V channels and output in YUV422 format. Each pixel of the grayscale depth image has a depth value of more than 8 bits; the high 8 bits serve as the Y channel of the YUV depth image and the low bits as its U channel or V channel, with the corresponding V or U channel set to the constant 128. While the depth sensor acquires the grayscale depth image, the RGB sensor acquires the RGB color image.
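Using the 13-bit depth example from the background section, the bit-split described above can be sketched as follows. This is a minimal numpy sketch: the function name, the 13-bit width, and the choice of the U channel for the low 5 bits are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def pack_depth_to_yuv(depth):
    """Pack a 13-bit grayscale depth image into YUV planes, per step S200.

    High 8 bits of each depth value -> Y channel; remaining low 5 bits
    -> U channel; the unused V channel is set to the constant 128.
    """
    depth = depth.astype(np.uint16)
    y = (depth >> 5).astype(np.uint8)      # high 8 bits of a 13-bit value
    u = (depth & 0x1F).astype(np.uint8)    # low 5 bits
    v = np.full_like(y, 128)               # unused chroma channel held constant
    return y, u, v
```

The exact inverse, `(y << 5) | u`, recovers the original depth value losslessly, which is what the recovery in step S400 relies on.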
In another embodiment, in step S300, the YUV depth image is converted into an RGB depth image as follows:
where Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image, and R, G, and B denote the values of the R, G, and B channels of the RGB depth image.
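The conversion matrix itself appears only as an image in the patent and is not reproduced in this text. As a stand-in, the step can be sketched with the common BT.601 full-range relations; these coefficients are an assumption and may differ from the patent's exact matrix.

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert YUV planes to RGB planes (BT.601 full-range coefficients,
    used here as a stand-in for the patent's un-reproduced matrix)."""
    y = y.astype(np.float64)
    u = u.astype(np.float64) - 128.0
    v = v.astype(np.float64) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    # clamp to the displayable 8-bit range
    return tuple(np.clip(c, 0, 255).astype(np.uint8) for c in (r, g, b))
```

Note that with U = V = 128 (zero chroma) the output is a neutral gray equal to Y, so a depth image with one constant chroma channel survives the round trip with little distortion.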
In another embodiment, in step S300, after the YUV depth image is converted into the RGB depth image, it is output together with the RGB color image of step S200 after compression encoding.
In the specific implementation of step S300, after the YUV depth image is converted into an RGB depth image, it is compression-encoded together with the RGB color image obtained in step S200 and output to a display device through a USB interface.
In another embodiment, in step S400, the decompression is performed as follows:
where R, G, and B denote the values of the R, G, and B channels of the RGB depth image, and Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image.
In the specific implementation of step S400, the display device receives the compression-encoded RGB depth image and RGB color image of step S300 and decompresses them. After decompression, the RGB depth image is converted into a YUV depth image; the high 8 bits of each pixel's depth value are recovered from the Y channel of the YUV depth image and the low bits from its U channel, thereby recovering the depth value of each pixel of the grayscale depth image.
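A sketch of this recovery step, again using BT.601 full-range coefficients as a stand-in for the patent's un-reproduced inverse matrix, and assuming the 5-low-bit layout from the earlier 13-bit example:

```python
import numpy as np

def rgb_to_yuv(r, g, b):
    """Inverse of the YUV->RGB step (BT.601 full-range, a stand-in for
    the patent's exact matrix)."""
    r, g, b = (c.astype(np.float64) for c in (r, g, b))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return tuple(np.clip(c, 0, 255).round().astype(np.uint8) for c in (y, u, v))

def recover_depth(y, u, bits_low=5):
    """Reassemble the depth value: Y holds the high 8 bits, U the low bits."""
    low_mask = (1 << bits_low) - 1
    return (y.astype(np.uint16) << bits_low) | (u.astype(np.uint16) & low_mask)
```

The `bits_low` parameter is an assumption tied to the 13-bit example; a device with a different depth bit width would use a different split.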
In another embodiment, in step S500, registering the YUV depth image with the RGB color image comprises the following steps:
S501: using the intrinsic and extrinsic parameters calibrated in step S100, compute from the pixel coordinate (i_d, j_d) of the YUV depth image the corresponding pixel coordinate (i_r, j_r) of the RGB color image; the pixel coordinate (i_d, j_d) of the YUV depth image is expressed in the spatial coordinate system with the depth sensor as origin as:
where (X_d, Y_d, Z_d) is the coordinate corresponding to pixel (i_d, j_d) in the depth sensor's coordinate system, dis is the depth value at (i_d, j_d), f_ix and f_iy are the X- and Y-direction focal length values of the depth sensor focal length f_i, and c_ix and c_iy are the pixel coordinates of the depth sensor principal point c_i in the depth image;
S502: convert the coordinate in the depth sensor's coordinate system into a coordinate in the spatial coordinate system with the RGB sensor as origin, expressed as follows:
where (X_r, Y_r, Z_r) is the coordinate corresponding to (X_d, Y_d, Z_d) in the RGB sensor's coordinate system;
S503: convert the coordinate in the RGB sensor's coordinate system into the pixel coordinate of the RGB depth image, expressed as follows:
where f_rx and f_ry are the X- and Y-direction focal length values of the RGB sensor focal length f_r, c_rx and c_ry are the pixel coordinates of the RGB sensor principal point c_r in the color image, and (i_r, j_r) is the pixel coordinate in the color image corresponding to (X_r, Y_r, Z_r).
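Steps S501-S503 describe the standard pinhole back-projection / re-projection chain; the equations themselves appear only as images in the patent, so the sketch below assumes the usual pinhole model consistent with the stated symbols. The tuple-based intrinsics packaging is an illustrative choice.

```python
import numpy as np

def register_depth_pixel(i_d, j_d, dis, K_d, K_r, R, T):
    """Map a depth-image pixel (i_d, j_d) with depth `dis` to the
    corresponding RGB-image pixel, following steps S501-S503.

    K_d = (f_ix, f_iy, c_ix, c_iy) and K_r = (f_rx, f_ry, c_rx, c_ry)
    are the calibrated intrinsics; R (3x3) and T (3,) the extrinsics.
    """
    f_ix, f_iy, c_ix, c_iy = K_d
    f_rx, f_ry, c_rx, c_ry = K_r
    # S501: back-project into the depth sensor's coordinate frame
    Z_d = dis
    X_d = (i_d - c_ix) * Z_d / f_ix
    Y_d = (j_d - c_iy) * Z_d / f_iy
    # S502: rotate/translate into the RGB sensor's coordinate frame
    X_r, Y_r, Z_r = R @ np.array([X_d, Y_d, Z_d]) + T
    # S503: project onto the RGB image plane
    i_r = f_rx * X_r / Z_r + c_rx
    j_r = f_ry * Y_r / Z_r + c_ry
    return i_r, j_r
```

As a sanity check, with identical intrinsics, R = I, and T = 0, every pixel should map to itself regardless of depth.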
In another embodiment, in step S500, the depth value of each pixel of the grayscale depth image recovered in step S400 is fused with the texture information of the registered RGB color image as follows:
where R(i_r, j_r), G(i_r, j_r), and B(i_r, j_r) are the values of the R, G, and B channels of the fused depth image at pixel (i_r, j_r); r(i_r, j_r), g(i_r, j_r), and b(i_r, j_r) are the values of the R, G, and B channels of the RGB color image at (i_r, j_r); gray(i_d, j_d) is the gray value of the corresponding depth image pixel (i_d, j_d); and α and β are constant factors.
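The fusion formula is likewise given only as an image. One plausible reading consistent with the stated symbols is a per-channel weighted sum of the color texture and the depth gray value, with α and β the mentioned constant factors; the sketch below is that assumed form, not a confirmed reproduction of the patent's equation.

```python
import numpy as np

def fuse(rgb, gray, alpha=0.5, beta=0.5):
    """Blend registered color texture with depth gray values, per step S500,
    assuming R = alpha*r + beta*gray (and likewise for G and B)."""
    rgb = rgb.astype(np.float64)                 # (H, W, 3) color texture
    g = gray.astype(np.float64)[..., None]       # (H, W, 1), broadcast over channels
    out = alpha * rgb + beta * g
    return np.clip(out, 0, 255).astype(np.uint8)
```

With α = β = 0.5 the result is a simple average, which already yields a depth map tinted by the scene's colors; the patent leaves the constants as tunable factors.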
The above embodiments are only intended to help understand the disclosure and its core concept. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made to the disclosure without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the disclosure.

Claims (8)

1. A method for fusing color and grayscale depth images, comprising the following steps:
S100: calibrating the intrinsic and extrinsic parameters of the depth sensor and the RGB sensor of a three-dimensional depth sensing device;
S200: acquiring a grayscale depth image with the depth sensor, and mapping the depth value of each pixel of the grayscale depth image into the Y-U or Y-V channels of a YUV image to generate a corresponding YUV depth image, while acquiring an RGB color image with the RGB sensor;
S300: converting the YUV depth image of step S200 into an RGB depth image, and outputting it together with the RGB color image after compression encoding;
S400: decompressing the compression-encoded RGB depth image and RGB color image of step S300, converting the decompressed RGB depth image into a YUV depth image, and recovering the depth value of each pixel of the grayscale depth image that was mapped into the Y-U or Y-V channels in step S200;
S500: registering the YUV depth image of step S400 with the RGB color image, and fusing the depth value of each pixel of the grayscale depth image recovered in step S400 with the texture information of the registered RGB color image to generate a depth image fusing color and grayscale.
2. The method according to claim 1, wherein in step S100 the intrinsic parameters of the depth sensor and the RGB sensor are calibrated as {f_i, c_i} and {f_r, c_r} respectively, and their extrinsic parameters as R and T, where f_i is the focal length of the depth sensor and c_i its principal point, f_r is the focal length of the RGB sensor and c_r its principal point, R is the rotation matrix, and T is the translation matrix.
3. The method according to claim 1, wherein in step S200 generating the YUV depth image from the grayscale depth image comprises: using the high 8 bits of each pixel's depth value as the Y channel of the YUV depth image and the low bits as its U channel or V channel, the remaining V or U channel being set to the constant 128.
4. The method according to claim 1, wherein in step S300 the YUV depth image is converted into an RGB depth image as follows:
where Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image, and R, G, and B denote the values of the R, G, and B channels of the RGB depth image.
5. The method according to claim 1, wherein in step S300, after the YUV depth image is converted into the RGB depth image, it is output together with the RGB color image of step S200 after compression encoding.
6. The method according to claim 1, wherein in step S400 the decompression is performed as follows:
where R, G, and B denote the values of the R, G, and B channels of the RGB depth image, and Y, U, and V denote the values of the Y, U, and V channels of the YUV depth image.
7. The method according to claim 1, wherein in step S500 registering the YUV depth image with the RGB color image comprises the following steps:
S501: using the intrinsic and extrinsic parameters calibrated in step S100, computing from the pixel coordinate (i_d, j_d) of the YUV depth image the corresponding pixel coordinate (i_r, j_r) of the RGB color image, the pixel coordinate (i_d, j_d) being expressed in the spatial coordinate system with the depth sensor as origin as:
where (X_d, Y_d, Z_d) is the coordinate corresponding to pixel (i_d, j_d) in the depth sensor's coordinate system, dis is the depth value at (i_d, j_d), f_ix and f_iy are the X- and Y-direction focal length values of the depth sensor focal length f_i, and c_ix and c_iy are the pixel coordinates of the depth sensor principal point c_i in the depth image;
S502: converting the coordinate in the depth sensor's coordinate system into a spatial coordinate in the coordinate system with the RGB sensor as origin, expressed as follows:
where (X_r, Y_r, Z_r) is the coordinate corresponding to (X_d, Y_d, Z_d) in the RGB sensor's coordinate system;
S503: converting the coordinate in the RGB sensor's coordinate system into the pixel coordinate of the RGB depth image, expressed as follows:
where f_rx and f_ry are the X- and Y-direction focal length values of the RGB sensor focal length f_r, c_rx and c_ry are the pixel coordinates of the RGB sensor principal point c_r in the color image, and (i_r, j_r) is the pixel coordinate in the color image corresponding to (X_r, Y_r, Z_r).
8. The method according to claim 1, wherein in step S500 the depth value of each pixel of the grayscale depth image recovered in step S400 is fused with the texture information of the registered RGB color image as follows:
where R(i_r, j_r), G(i_r, j_r), and B(i_r, j_r) are the values of the R, G, and B channels of the fused depth image at pixel (i_r, j_r); r(i_r, j_r), g(i_r, j_r), and b(i_r, j_r) are the values of the R, G, and B channels of the RGB color image at (i_r, j_r); gray(i_d, j_d) is the gray value of the corresponding depth image pixel (i_d, j_d); and α and β are constant factors.
CN201811281646.6A 2018-10-23 2018-10-23 Method for fusing color and gray scale depth image Active CN109615601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811281646.6A CN109615601B (en) 2018-10-23 2018-10-23 Method for fusing color and gray scale depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811281646.6A CN109615601B (en) 2018-10-23 2018-10-23 Method for fusing color and gray scale depth image

Publications (2)

Publication Number Publication Date
CN109615601A true CN109615601A (en) 2019-04-12
CN109615601B CN109615601B (en) 2020-12-25

Family

ID=66002557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811281646.6A Active CN109615601B (en) 2018-10-23 2018-10-23 Method for fusing color and gray scale depth image

Country Status (1)

Country Link
CN (1) CN109615601B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510729A (en) * 2020-03-25 2020-08-07 西安电子科技大学 RGBD data compression transmission method based on video coding and decoding technology
CN112215172A (en) * 2020-10-17 2021-01-12 西安交通大学 Human body prone position three-dimensional posture estimation method fusing color image and depth information
CN112233191A (en) * 2020-09-18 2021-01-15 南京理工大学 Depth map colorizing method
CN112291479A (en) * 2020-11-23 2021-01-29 Oppo(重庆)智能科技有限公司 Image processing module, image processing method, camera assembly and mobile terminal
CN112734862A (en) * 2021-02-10 2021-04-30 北京华捷艾米科技有限公司 Depth image processing method and device, computer readable medium and equipment
CN114022871A (en) * 2021-11-10 2022-02-08 中国民用航空飞行学院 Unmanned aerial vehicle driver fatigue detection method and system based on depth perception technology
CN115457099A (en) * 2022-09-09 2022-12-09 梅卡曼德(北京)机器人科技有限公司 Deep completion method, device, equipment, medium and product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004260393A (en) * 2003-02-25 2004-09-16 Ai Technology:Kk Method for encoding and decoding digital color still image
CN104616284A (en) * 2014-12-09 2015-05-13 中国科学院上海技术物理研究所 Pixel-level alignment algorithm for color images to depth images of color depth camera
CN104851109A (en) * 2015-06-10 2015-08-19 宁波盈芯信息科技有限公司 Representing method for high-gray-scale depth image
CN106778506A (en) * 2016-11-24 2017-05-31 重庆邮电大学 A kind of expression recognition method for merging depth image and multi-channel feature
CN107917701A (en) * 2017-12-28 2018-04-17 人加智能机器人技术(北京)有限公司 Measuring method and RGBD camera systems based on active binocular stereo vision
CN108470339A (en) * 2018-03-21 2018-08-31 华南理工大学 A kind of visual identity of overlapping apple and localization method based on information fusion

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004260393A (en) * 2003-02-25 2004-09-16 Ai Technology:Kk Method for encoding and decoding digital color still image
CN104616284A (en) * 2014-12-09 2015-05-13 中国科学院上海技术物理研究所 Pixel-level alignment algorithm for color images to depth images of color depth camera
CN104851109A (en) * 2015-06-10 2015-08-19 宁波盈芯信息科技有限公司 Representing method for high-gray-scale depth image
CN106778506A (en) * 2016-11-24 2017-05-31 重庆邮电大学 A kind of expression recognition method for merging depth image and multi-channel feature
CN107917701A (en) * 2017-12-28 2018-04-17 人加智能机器人技术(北京)有限公司 Measuring method and RGBD camera systems based on active binocular stereo vision
CN108470339A (en) * 2018-03-21 2018-08-31 华南理工大学 A kind of visual identity of overlapping apple and localization method based on information fusion

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510729A (en) * 2020-03-25 2020-08-07 西安电子科技大学 RGBD data compression transmission method based on video coding and decoding technology
CN112233191A (en) * 2020-09-18 2021-01-15 南京理工大学 Depth map colorizing method
CN112215172A (en) * 2020-10-17 2021-01-12 西安交通大学 Human body prone position three-dimensional posture estimation method fusing color image and depth information
CN112291479A (en) * 2020-11-23 2021-01-29 Oppo(重庆)智能科技有限公司 Image processing module, image processing method, camera assembly and mobile terminal
CN112291479B (en) * 2020-11-23 2022-03-22 Oppo(重庆)智能科技有限公司 Image processing module, image processing method, camera assembly and mobile terminal
CN112734862A (en) * 2021-02-10 2021-04-30 北京华捷艾米科技有限公司 Depth image processing method and device, computer readable medium and equipment
CN114022871A (en) * 2021-11-10 2022-02-08 中国民用航空飞行学院 Unmanned aerial vehicle driver fatigue detection method and system based on depth perception technology
CN115457099A (en) * 2022-09-09 2022-12-09 梅卡曼德(北京)机器人科技有限公司 Deep completion method, device, equipment, medium and product

Also Published As

Publication number Publication date
CN109615601B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109615601A (en) A method of fusion colour and gray scale depth image
US10194135B2 (en) Three-dimensional depth perception apparatus and method
US11003897B2 (en) Three-dimensional real face modeling method and three-dimensional real face camera system
CN101322589B (en) Non-contact type human body measuring method for clothing design
CN104463880B (en) A kind of RGB D image acquiring methods
CN106203390B (en) A kind of intelligent blind auxiliary system
Asayama et al. Fabricating diminishable visual markers for geometric registration in projection mapping
CN109816731B (en) Method for accurately registering RGB (Red Green blue) and depth information
CN106456292B (en) Systems, methods, devices for collecting color information related to an object undergoing a 3D scan
CN108388341B (en) Man-machine interaction system and device based on infrared camera-visible light projector
CN110879080A (en) High-precision intelligent measuring instrument and measuring method for high-temperature forge piece
CN109471533B (en) Student end system in VR/AR classroom and use method thereof
CN110074788B (en) Body data acquisition method and device based on machine learning
CN108154514A (en) Image processing method, device and equipment
CN108038885A (en) More depth camera scaling methods
CN105869115B (en) A kind of depth image super-resolution method based on kinect2.0
CN103063193A (en) Method and device for ranging by camera and television
CN109889799B (en) Monocular structure light depth perception method and device based on RGBIR camera
CN109903377A (en) A kind of three-dimensional face modeling method and system without phase unwrapping
CN106408664A (en) Three-dimensional model curved surface reconstruction method based on three-dimensional scanning device
CN105513074B (en) A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system
CN105739106A (en) Somatosensory multi-view point large-size light field real three-dimensional display device and method
CN113160421A (en) Space type real object interaction virtual experiment method based on projection
WO2023280082A1 (en) Handle inside-out visual six-degree-of-freedom positioning method and system
RU2735066C1 (en) Method for displaying augmented reality wide-format object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant