CN103400351B - Low-light image enhancement method and system based on KINECT depth map - Google Patents


Publication number
CN103400351B
CN103400351B CN201310325623.1A
Authority
CN
China
Prior art keywords: depth, value, contrast stretching, texture map, depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310325623.1A
Other languages
Chinese (zh)
Other versions
CN103400351A (en)
Inventor
胡瑞敏
胡金晖
王中元
郭春辉
侯志强
谭晓琼
杨阿慧
石艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201310325623.1A priority Critical patent/CN103400351B/en
Publication of CN103400351A publication Critical patent/CN103400351A/en
Application granted granted Critical
Publication of CN103400351B publication Critical patent/CN103400351B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a low-light image enhancement method and system based on the KINECT depth map, comprising: Step 1, acquiring the depth map and texture map of a low-light surveillance image with a KINECT and preprocessing the depth map; Step 2, building a depth-information Gaussian kernel from the depth values of each pixel and its neighborhood pixels in the depth map obtained in Step 1, and introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm to denoise the texture map; Step 3, obtaining normalized depth information from the depth information of the depth map obtained in Step 1, and introducing the normalized depth information into a contrast-stretching algorithm to perform contrast stretching on the denoised texture map. The present invention exploits the color information and the depth information of the image simultaneously to realize low-light image enhancement, further improving the contour sharpness, sense of depth, and visual quality of low-light images.

Description

Low-light image enhancement method and system based on KINECT depth map
Technical Field
The present invention relates to low-light surveillance image enhancement methods, and in particular to a low-light image enhancement method and system based on the KINECT depth map.
Background Art
At present, video surveillance systems are widely used in urban security. In the daytime, the images captured by cameras have good quality; at night or under low-light conditions, however, the firmware characteristics of the CCD sensor cause the captured images not only to contain more noise but also to have a narrower dynamic range of contrast. If a security system is required to work under low-light conditions, enhancing the visual quality of low-light surveillance images is essential.
The key techniques of low-light surveillance image enhancement are denoising and contrast stretching, which are undoubtedly a great challenge for images with low dynamic range and heavy noise. In 2007, Henrik et al. [1] achieved image denoising with filtering based on the anisotropic structure tensor, and image contrast enhancement with limited histogram equalization. In 2010, Qing Xu et al. [2] first denoised the image with a temporal-spatial non-local means algorithm, then performed contrast stretching with a tone-mapping algorithm, and finally filtered in YCbCr space; they also adjusted the weights according to motion information, improving the temporal filtering effect of 3D non-local means. In 2011, Xuan Dong et al. [3] inverted the low-light image and then enhanced the inverted image with a dehazing algorithm. Although these methods achieve some effect, two problems remain: (1) when enhancing low-light images, they use only 2D information such as color and brightness in the image, not the depth information of the scene, so the enhanced image lacks a sense of depth; (2) after enhancement, the edge contours of foreground objects are still not clear enough.
The following references are cited herein:
[1] H. Malm, M. Oskarsson et al., "Adaptive enhancement and noise reduction in very low light-level video," in Proc. 11th International Conference on Computer Vision, pp. 1-8, 2007.
[2] Qing Xu, Hailin Jiang, Riccardo Scopigno, and Mateu Sbert, "A new approach for very dark video denoising and enhancement," in Proc. 17th International Conference on Image Processing, pp. 1185-1188, 2010.
[3] Xuan Dong, Guan Wang, Yi (Amy) Pang et al., "Fast efficient algorithm for enhancement of low lighting video," in Proc. IEEE International Conference on Multimedia and Expo, pp. 1-6, 2011.
Summary of the invention
To address the deficiencies of the prior art, the present invention proposes a low-light image enhancement method that exploits the color information and the depth information of the image simultaneously; the method can further improve the contour sharpness, sense of depth, and visual quality of low-light images.
To solve the above technical problem, the present invention adopts the following technical scheme:
1. A low-light image enhancement method based on the KINECT depth map, comprising the steps of:
Step 1: acquiring the depth map and texture map of a low-light surveillance image with a KINECT, and preprocessing the depth map;
Step 2: building a depth-information Gaussian kernel from the depth values of each pixel and its neighborhood pixels in the depth map obtained in Step 1, and introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm to denoise the texture map;
Step 3: obtaining normalized depth information from the depth information of the depth map obtained in Step 1, and introducing the normalized depth information into a contrast-stretching algorithm to perform contrast stretching on the denoised texture map; the normalized depth information comprises the normalized depth value and the normalized modulus of the depth-value gradient.
The depth-information Gaussian kernel in Step 2 is $e^{-\|D(y)-D(x)\|_2^2 / h_d^2}$, where y is a pixel in the depth map obtained in Step 1; D(y) is the depth value of pixel y; x is a neighborhood pixel of y in the depth map obtained in Step 1; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, whose value is chosen according to the noise intensity of the texture map; and ‖·‖₂ is the two-norm.
In Step 3, introducing the normalized depth information into the contrast-stretching algorithm to perform contrast stretching on the denoised texture map comprises performing global contrast stretching and local contrast stretching on the denoised texture map.
Introducing the normalized depth information into the contrast-stretching algorithm to perform global contrast stretching on the denoised texture map further comprises the sub-steps of:
3-1a: obtaining the normalized depth value Norm(D(y)) from the depth values of the depth map obtained in Step 1, and obtaining the adaptive parameter $b(y) = \lambda e^{-\text{Norm}(D(y))}$, where λ is a constant chosen according to the desired enhancement strength;
3-2a: introducing the adaptive parameter b(y) into the global contrast-stretching algorithm to perform global contrast stretching on the denoised texture map.
Introducing the normalized depth information into the contrast-stretching algorithm to perform local contrast stretching on the denoised texture map is specifically:
according to the magnitude of the normalized modulus of the depth-value gradient of the depth map obtained in Step 1, introducing either the normalized depth value or the normalized modulus of the depth-value gradient into the local contrast-stretching algorithm to perform local contrast stretching on the denoised texture map.
2. A low-light image enhancement system based on the KINECT depth map, comprising:
a preprocessing module for acquiring the depth map and texture map of a low-light surveillance image with a KINECT and preprocessing the depth map;
a denoising module for building a depth-information Gaussian kernel from the depth values of each pixel and its neighborhood pixels in the depth map, and introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm to denoise the texture map;
a contrast-stretching module for obtaining normalized depth information from the depth information of the depth map and introducing the normalized depth information into a contrast-stretching algorithm to perform contrast stretching on the denoised texture map; the normalized depth information comprises the normalized depth value and the normalized modulus of the depth-value gradient.
The above depth-information Gaussian kernel is $e^{-\|D(y)-D(x)\|_2^2 / h_d^2}$, where y is a pixel in the depth map; D(y) is the depth value of pixel y; x is a neighborhood pixel of y in the depth map; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, chosen according to the noise intensity of the texture map; and ‖·‖₂ is the two-norm.
The above contrast-stretching module further comprises a global contrast-stretching module and a local contrast-stretching module, wherein:
the global contrast-stretching module further comprises the sub-modules:
an adaptive-parameter acquisition module, used to obtain the normalized depth value Norm(D(y)) from the depth values of the depth map and to obtain the adaptive parameter $b(y) = \lambda e^{-\text{Norm}(D(y))}$, where λ is a constant chosen according to the desired enhancement strength;
a stretching module, used to introduce the adaptive parameter b(y) into the global contrast-stretching algorithm and perform global contrast stretching on the denoised texture map;
the local contrast-stretching module is used to, according to the magnitude of the normalized modulus of the depth-value gradient of the depth map, introduce either the normalized depth value or the normalized modulus of the depth-value gradient into the local contrast-stretching algorithm and perform local contrast stretching on the denoised texture map.
Compared with the prior art, the present invention has the following advantage:
the present invention uses the depth information of the scene to enhance the low-light image, so the contours of the enhanced image are clearer and the image has a better sense of depth and visual quality.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 shows the depth map and texture map of a low-light surveillance image acquired with a KINECT, where (a) is the depth map and (b) is the texture map;
Fig. 3 shows the depth map and texture map of another low-light surveillance image acquired with a KINECT, where (a) is the depth map and (b) is the texture map;
Fig. 4 compares the enhancement of Fig. 2 by the present invention and by the prior art, where (a) is the result of the present invention and (b) is the result of the prior art;
Fig. 5 compares the enhancement of Fig. 3 by the present invention and by the prior art, where (a) is the result of the present invention and (b) is the result of the prior art.
Embodiment
Specific embodiments of the present invention are described in detail below.
The low-light surveillance image enhancement method based on the KINECT depth map of the present invention comprises the steps of:
Step 1: acquire the depth map and texture map of a low-light surveillance image with a KINECT, and process the depth map to obtain a smooth depth map.
This step further comprises the sub-steps:
Step 1.1: acquire the depth map and texture map of the low-light surveillance image with a KINECT, and perform view registration on the depth map. In this embodiment, view registration can be implemented with the OpenNI toolkit from PrimeSense.
Step 1.2: perform hole filling and then median filtering on the registered depth map to obtain a smooth depth map. In this embodiment, hole filling is implemented with an in-painting algorithm, and the filtering with a median filter.
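As an illustration of Step 1.2, the sketch below fills depth holes and median-filters the result using only numpy. The nearest-valid-value scan is a simplified stand-in for the in-painting algorithm, and the 3x3 median window is an assumed size; the patent fixes neither, and a production implementation would typically use a library in-painting routine.

```python
import numpy as np

def fill_holes(depth):
    """Fill zero-valued holes with the nearest valid value scanning left-to-right,
    then right-to-left (a crude stand-in for the in-painting step)."""
    d = depth.astype(float).copy()
    for row in d:
        valid = None
        for j in range(len(row)):
            if row[j] > 0:
                valid = row[j]
            elif valid is not None:
                row[j] = valid
        valid = None
        for j in range(len(row) - 1, -1, -1):
            if row[j] > 0:
                valid = row[j]
            elif valid is not None:
                row[j] = valid
    return d

def median3(depth):
    """3x3 median filter with edge replication, producing the smoothed depth map."""
    p = np.pad(depth, 1, mode='edge')
    stack = [p[i:i + depth.shape[0], j:j + depth.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

raw = np.array([[5, 0, 5],
                [5, 9, 5],
                [0, 5, 5]])
smooth = median3(fill_holes(raw))
```

The zeros (holes) are replaced by their nearest valid neighbor, and the median filter then suppresses the isolated outlier in the center.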
Step 2: based on the depth information of the depth map, denoise the texture map with a non-local means denoising algorithm.
In this step, the depth map is the smooth depth map obtained in Step 1, and the texture map is the texture map of the low-light surveillance image acquired with the KINECT.
This step further comprises the sub-steps:
Step 2.1: build a Gaussian kernel from the depth information of the depth map:
$e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}}$   Formula (1)
In Formula (1):
y is a pixel in the depth map;
D(y) is the depth value of pixel y in the depth map;
x is a neighborhood pixel of y;
D(x) is the depth value of pixel x;
h_d is the decay factor of the Gaussian kernel, chosen according to the noise intensity of the texture map: the heavier the noise, the larger the value;
‖·‖₂ is the two-norm.
Based on the depth values of each pixel and its neighborhood in the depth map, Formula (1) is used to build the depth-information Gaussian kernel corresponding to each pixel of the depth map.
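For a single pixel pair, Formula (1) reduces to the scalar sketch below (h_d = 15 as in the embodiment; the function name is illustrative):

```python
import numpy as np

def depth_kernel(d_y, d_x, h_d=15.0):
    """Formula (1): Gaussian weight from the depth difference of pixels y and x."""
    return np.exp(-((d_y - d_x) ** 2) / h_d ** 2)

# identical depths give weight 1; a large depth gap suppresses the weight
w_same = depth_kernel(100.0, 100.0)
w_far = depth_kernel(100.0, 160.0)
```

Neighbors at the same depth as the center pixel thus contribute fully, while pixels across a depth discontinuity are strongly down-weighted, which is what preserves object contours during denoising.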
Step 2.2: add the depth-information Gaussian kernel to the non-local means denoising algorithm as a kernel factor, and denoise the texture map; the denoising is realized with Formula (2):
$I'(y) = \sum_{x \in S_y} \frac{1}{Z(y)} \, e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}} \, e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}} \, I(x)$   Formula (2)
In Formula (2):
$Z(y) = \sum_{x \in S_y} e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}} \, e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}}$ is the normalization factor;
y is the pixel of the texture map currently being denoised, P_y is the image block of the texture map centered on y, and I'(y) is the denoised pixel value of y;
x is a neighborhood pixel of y in the texture map, P_x is the image block of the texture map centered on x, and I(x) is the pixel value of x;
S_y is the neighborhood (search window) of the texture map centered on y;
h is the decay factor for the texture-map pixels and α is the standard deviation of the Gaussian kernel; both are chosen according to the noise intensity of the texture map, the heavier the noise the larger h and α;
‖·‖_{2,α} is the Gaussian-weighted Euclidean distance;
v(P_x) and v(P_y) are the column vectors formed by the pixel values of blocks P_x and P_y, respectively.
In this embodiment, pixels at the same position in the depth map and the texture map are denoted by the same letter.
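The denoising of Formula (2) can be sketched as follows. For brevity, the sketch uses small half-widths instead of the 21x21 search window and 7x7 blocks of the embodiment, and it replaces the Gaussian-weighted distance ‖·‖_{2,α} with a plain sum of squared differences; both simplifications are assumptions, and the function name is illustrative.

```python
import numpy as np

def depth_nlm(img, depth, patch=1, search=2, h=15.0, h_d=15.0):
    """Depth-weighted non-local means (Formula 2): the usual patch-similarity
    weight is multiplied by the depth kernel of Formula (1).
    patch/search are half-widths (patch=1 -> 3x3 blocks)."""
    H, W = img.shape
    ip = np.pad(img.astype(float), patch, mode='reflect')
    out = np.zeros((H, W))
    for r in range(H):
        for c in range(W):
            Py = ip[r:r + 2 * patch + 1, c:c + 2 * patch + 1]
            wsum = acc = 0.0
            for rr in range(max(0, r - search), min(H, r + search + 1)):
                for cc in range(max(0, c - search), min(W, c + search + 1)):
                    Px = ip[rr:rr + 2 * patch + 1, cc:cc + 2 * patch + 1]
                    w = np.exp(-np.sum((Py - Px) ** 2) / h ** 2)                  # patch term
                    w *= np.exp(-(depth[r, c] - depth[rr, cc]) ** 2 / h_d ** 2)   # Formula (1)
                    wsum += w
                    acc += w * img[rr, cc]
            out[r, c] = acc / wsum  # the 1/Z(y) normalization
    return out

img = np.full((6, 6), 10.0)
den = depth_nlm(img, np.zeros((6, 6)))
```

Denoising a constant image returns it unchanged, since every weight contributes the same value; on real data, neighbors at a different depth are down-weighted even when their patches look similar.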
Step 3: obtain normalized depth information from the depth information of the depth map, and introduce the normalized depth information into the contrast-stretching algorithm to perform contrast stretching on the denoised texture map.
The normalized depth information in the present invention comprises the normalized depth value and the normalized modulus of the depth-value gradient.
In this embodiment, contrast stretching comprises performing global contrast stretching and local contrast stretching on the texture map. In this step, the depth map is the smooth depth map obtained in Step 1.
This step further comprises the sub-steps:
Step 3.1: perform global contrast stretching on the denoised texture map based on the depth information of the depth map; this enhances the sense of depth of the image while stretching its dynamic range.
First, obtain the normalized depth value Norm(D(y)) from the depth values of the depth map:
$\text{Norm}(D(y)) = \frac{D(y) - D_{min}}{D_{max} - D_{min}}$   Formula (3)
In Formula (3):
y is the current pixel of the denoised texture map, and D(y) is the depth value in the depth map corresponding to y;
D_min and D_max are the minimum and maximum of the depth values of all pixels in the depth map.
Then obtain the adaptive parameter b(y) from the normalized depth value Norm(D(y)):
$b(y) = \lambda e^{-\text{Norm}(D(y))}$   Formula (4)
In Formula (4), λ is a constant chosen according to the desired enhancement strength: to obtain stronger enhancement, choose a smaller value, and vice versa.
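Formulas (3) and (4) together amount to a min-max normalization followed by an exponential weighting; a sketch with λ = 0.6 as in the embodiment (the function name is illustrative):

```python
import numpy as np

def adaptive_b(depth, lam=0.6):
    """Formula (3): min-max normalize the depth map;
    Formula (4): b(y) = lam * exp(-Norm(D(y)))."""
    norm = (depth - depth.min()) / (depth.max() - depth.min())
    return lam * np.exp(-norm)

d = np.array([[0.0, 50.0],
              [100.0, 100.0]])
b = adaptive_b(d)
```

b(y) decreases from λ at the nearest depth to λ·e⁻¹ at the farthest, so the per-pixel stretching parameter varies smoothly with scene depth.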
Finally, perform global contrast stretching on the denoised texture map based on the adaptive parameter b(y). In this embodiment, a tone-mapping enhancement algorithm is used to stretch the texture map, as follows:
$I_{en\_g}(y) = \frac{\log(I(y)+1)}{\log(I_{max}+1)} \cdot \frac{\log 10}{\log\!\left(2 + 8\left(\frac{I(y)}{I_{max}}\right)^{\frac{\log b}{\log 0.5}}\right)} \cdot I_{max}$   Formula (5)
In Formula (5):
I(y) is the pixel value of the current pixel y in the R, G, or B channel before global contrast stretching;
I_max is the maximum pixel value of all pixels in the R, G, or B channel;
I_en_g(y) is the pixel value of the current pixel y after global contrast stretching.
The present invention introduces the adaptive parameter into the global contrast stretching, so that the stretching adapts itself to the depth-map information.
In this step, global contrast stretching of the denoised texture map is performed by applying Formula (5) to the R, G, and B channels of the texture map respectively.
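A one-channel sketch of the global stretch, reading Formula (5) as a Drago-style adaptive logarithmic mapping with a per-pixel base parameter b(y); this reading, and the final clipping to the channel range, are assumptions:

```python
import numpy as np

def global_stretch(I, b):
    """Per-pixel adaptive logarithmic tone mapping (Formula 5).
    I: one color channel (float array); b: per-pixel parameter from Formula (4)."""
    Imax = I.max()
    num = np.log(I + 1.0) / np.log(Imax + 1.0)
    base = np.log(2.0 + 8.0 * (I / Imax) ** (np.log(b) / np.log(0.5)))
    out = num * (np.log(10.0) / base) * Imax
    return np.clip(out, 0.0, Imax)  # keep the result in the original range

chan = np.array([[10.0, 60.0],
                 [120.0, 255.0]])
b = 0.6 * np.exp(-np.zeros_like(chan))  # toy case: b(y) = 0.6 everywhere
out = global_stretch(chan, b)
```

The brightest pixel maps to itself (the two logarithmic factors both reduce to 1 at I = I_max), while dark pixels are lifted strongly; with a small b such as 0.6 the lift is aggressive, which is why the clip is needed in this sketch.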
Step 3.2: perform local contrast stretching on the denoised texture map based on the depth information of the depth map, to enhance the local layering of the image.
Because the boundary between foreground and background in the texture image exhibits strong depth changes, local contrast stretching is applied to the image to enhance the layering between foreground and background. In this embodiment, the local contrast stretching of the texture map uses the gradient values of the depth information of the depth map, as follows:
$I_{en\_l}(y) = I_{en\_g}(y) + \text{Norm}(\alpha(y)) \cdot \left(I_{en\_g}(y) - \bar{I}_{en\_g}(y)\right)$   Formula (6)
In Formula (6):
I_en_g(y) is the pixel value of the current pixel y after global contrast stretching;
I_en_l(y) is the pixel value of the current pixel y after local contrast stretching;
$\bar{I}_{en\_g}(y)$ is the average pixel value of the neighborhood pixels of the current pixel y;
α(y) is the depth-information weight discriminant:
$\alpha(y) = \begin{cases} \|\nabla D(y)\| & \text{if } \|\nabla D(y)\| \neq 0 \\ D(y) & \text{otherwise} \end{cases}$   Formula (7)
where ‖∇D(y)‖ is the modulus of the gradient of the depth value D(y) of pixel y in the depth map.
When α(y) = ‖∇D(y)‖, $\text{Norm}(\alpha(y)) = \frac{\|\nabla D(y)\| - \|\nabla D\|_{min}}{\|\nabla D\|_{max} - \|\nabla D\|_{min}}$, where ‖∇D‖_min and ‖∇D‖_max are the minimum and maximum of the moduli of the gradients of the depth values of all pixels in the depth map.
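Formulas (6) and (7) can be sketched as below. The 3x3 neighborhood mean and the min-max normalization of the depth values in the α(y) = D(y) branch are assumptions (the patent only spells out the normalization for the gradient branch):

```python
import numpy as np

def minmax(a):
    """Min-max normalization; returns zeros for a constant array."""
    rng = a.max() - a.min()
    return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

def local_stretch(Ig, depth):
    """Formula (6): I_en_l = I_en_g + Norm(alpha) * (I_en_g - local mean),
    with alpha from Formula (7): depth-gradient modulus where nonzero, else depth."""
    gy, gx = np.gradient(depth.astype(float))
    grad = np.hypot(gy, gx)  # ||grad D(y)||
    norm = np.where(grad != 0, minmax(grad), minmax(depth))
    p = np.pad(Ig.astype(float), 1, mode='edge')  # 3x3 neighborhood mean
    mean = sum(p[i:i + Ig.shape[0], j:j + Ig.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return Ig + norm * (Ig - mean)

Ig = np.array([[10.0, 20.0, 30.0],
               [10.0, 50.0, 30.0],
               [10.0, 20.0, 30.0]])
flat = local_stretch(Ig, np.full((3, 3), 5.0))  # flat depth: no change
```

With a flat depth map the weight is zero everywhere and the image passes through unchanged; at depth discontinuities the weight approaches 1 and the local detail (I_en_g minus its neighborhood mean) is amplified.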
The technical scheme of the present invention is further illustrated below with a concrete application example.
First, the depth map and texture map of a low-light surveillance image are acquired with a KINECT, and the depth map is preprocessed; then, low-light image enhancement is realized by adding the depth information of the depth map to the non-local means denoising algorithm and the tone-mapping enhancement algorithm.
In this embodiment, the decay factor h of the texture-map pixels and the decay factor h_d of the Gaussian kernel are both set to 15; the search window and block size of the non-local means algorithm are 21x21 and 7x7 respectively; and the Gaussian-kernel standard deviation α = 0.15. Fig. 2 and Fig. 3 show depth maps and texture maps of low-light surveillance images acquired with a KINECT, where Fig. 2(a) and Fig. 3(a) are depth maps and Fig. 2(b) and Fig. 3(b) are texture maps. Taking Figs. 2 and 3 as examples, the steps of the method of the present invention are described in detail below:
Step 1: perform view registration on the depth maps in Figs. 2 and 3 with the OpenNI toolkit from PrimeSense; then perform hole filling and filtering on the registered depth maps, specifically: fill the holes of the depth map with an in-painting algorithm, and filter with a median filter to obtain a smooth depth map.
Step 2: based on the depth-map information obtained in Step 1, denoise the texture map with the non-local means denoising algorithm.
Step 2.1: for a pixel y of the texture map to be denoised, compute, within the 21x21 search window of the depth map centered on y, the Euclidean distance between the center pixel y and each neighborhood pixel x, and generate the Gaussian kernel $e^{-\|D(y)-D(x)\|_2^2 / h_d^2}$, where D(y) is the depth value of y in the depth map, D(x) is the depth value of the neighborhood pixel x, h_d is the decay factor of the Gaussian kernel (h_d = 15 in this embodiment), and ‖·‖₂ is the two-norm.
Step 2.2: add the Gaussian kernel of the depth map to the non-local means denoising algorithm as a kernel factor to obtain the above Formula (2), $I'(y) = \sum_{x \in S_y} \frac{1}{Z(y)} e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}} e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}} I(x)$, and denoise the texture map with it, where $Z(y) = \sum_{x \in S_y} e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}} e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}}$ is the normalization factor; P_y is the image block of the texture map centered on the pixel y being denoised, with block size 7x7; I'(y) is the denoised pixel value of y; P_x is the image block of the texture map centered on the neighborhood pixel x, with block size 7x7; I(x) is the pixel value of x; ‖·‖_{2,α} is the Gaussian-weighted Euclidean distance; v(P_x) and v(P_y) are the column vectors formed by the pixel values of blocks P_x and P_y; S_y is the neighborhood of the texture map centered on y, whose size equals the 21x21 search window of the non-local means algorithm; h is the decay factor of the texture-map pixels and α is the standard deviation of the Gaussian kernel; in this embodiment, h is set to 15 and α to 0.15.
Step 3: perform contrast stretching on the denoised texture map using the depth information of the depth map.
Contrast stretching in the present invention comprises global contrast stretching and local contrast stretching.
Step 3.1: perform global contrast stretching on the denoised texture map using the depth information of the depth map.
First, compute the normalized depth value Norm(D(y)) from the depth information of the depth map;
then obtain the adaptive parameter $b(y) = \lambda e^{-\text{Norm}(D(y))}$ from the normalized depth value Norm(D(y)), where λ is a constant; in this embodiment, λ is set to 0.6.
Finally, perform global contrast stretching on the denoised texture map with the above Formula (5), based on the adaptive parameter b(y).
Step 3.2: perform local contrast stretching on the denoised texture map based on the depth information of the depth map.
Because the boundary between foreground and background in the image exhibits strong depth changes, local contrast stretching is applied to the image to enhance the layering between foreground and background.
In this embodiment, local contrast stretching is performed on the denoised texture map with Formula (6), based on the depth information of the depth map.
This embodiment enhances Figs. 2 and 3; the results are shown in Figs. 4(a) and 5(a) respectively. For comparison, the traditional non-local means denoising algorithm and tone-mapping enhancement algorithm were also applied to Figs. 2 and 3 for denoising and contrast stretching; those results are shown in Figs. 4(b) and 5(b). Figs. 4 and 5 show that the images enhanced by the present invention have clearer contours and a better sense of depth and visual quality.
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art can make various modifications or supplements to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (2)

1. A low-light image enhancement method based on the KINECT depth map, characterized by comprising the steps of:
Step 1: acquiring the depth map and texture map of a low-light surveillance image with a KINECT, and performing view registration, hole filling, and median filtering on the depth map;
Step 2: building a depth-information Gaussian kernel $e^{-\|D(y)-D(x)\|_2^2 / h_d^2}$ from the depth values of each pixel and its neighborhood pixels in the depth map obtained in Step 1, and introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm to denoise the texture map, where y is a pixel in the depth map obtained in Step 1; D(y) is the depth value of pixel y; x is a neighborhood pixel of y in the depth map obtained in Step 1; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, chosen according to the noise intensity of the texture map; and ‖·‖₂ is the two-norm;
Step 3: obtaining normalized depth information from the depth information of the depth map obtained in Step 1, and introducing the normalized depth information into a contrast-stretching algorithm to perform contrast stretching on the denoised texture map, the normalized depth information comprising the normalized depth value and the normalized modulus of the depth-value gradient; said contrast stretching of the denoised texture map comprising global contrast stretching and local contrast stretching, wherein:
introducing the normalized depth information into the contrast-stretching algorithm to perform global contrast stretching on the denoised texture map further comprises the sub-steps of:
3-1a: obtaining the normalized depth value Norm(D(y)) from the depth values of the depth map obtained in Step 1, and obtaining the adaptive parameter $b(y) = \lambda e^{-\text{Norm}(D(y))}$, where λ is a constant chosen according to the desired enhancement strength;
3-2a: introducing the adaptive parameter b(y) into the global contrast-stretching algorithm to perform global contrast stretching on the denoised texture map;
introducing the normalized depth information into the contrast-stretching algorithm to perform local contrast stretching on the denoised texture map is specifically:
according to the magnitude of the normalized modulus of the depth-value gradient of the depth map obtained in Step 1, introducing either the normalized depth value or the normalized modulus of the depth-value gradient into the local contrast-stretching algorithm to perform local contrast stretching on the denoised texture map.
2. A low-light image enhancement system based on the KINECT depth map, characterized by comprising:
a preprocessing module for acquiring the depth map and texture map of a low-light surveillance image with a KINECT, and performing view registration, hole filling, and median filtering on the depth map;
a denoising module for building a depth-information Gaussian kernel $e^{-\|D(y)-D(x)\|_2^2 / h_d^2}$ from the depth values of each pixel and its neighborhood pixels in the depth map, and introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm to denoise the texture map, where y is a pixel in the depth map; D(y) is the depth value of pixel y; x is a neighborhood pixel of y in the depth map; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, chosen according to the noise intensity of the texture map; and ‖·‖₂ is the two-norm;
a contrast-stretching module for obtaining normalized depth information from the depth information of the depth map, and introducing the normalized depth information into a contrast-stretching algorithm to perform contrast stretching on the denoised texture map, the normalized depth information comprising the normalized depth value and the normalized modulus of the depth-value gradient;
the contrast-stretching module further comprises a global contrast-stretching module and a local contrast-stretching module, wherein the global contrast-stretching module further comprises the sub-modules:
an adaptive-parameter acquisition module, used to obtain the normalized depth value Norm(D(y)) from the depth values of the depth map and to obtain the adaptive parameter $b(y) = \lambda e^{-\text{Norm}(D(y))}$, where λ is a constant chosen according to the desired enhancement strength;
a stretching module, used to introduce the adaptive parameter b(y) into the global contrast-stretching algorithm and perform global contrast stretching on the denoised texture map;
the local contrast-stretching module is used to, according to the magnitude of the normalized modulus of the depth-value gradient of the depth map, introduce either the normalized depth value or the normalized modulus of the depth-value gradient into the local contrast-stretching algorithm and perform local contrast stretching on the denoised texture map.
CN201310325623.1A 2013-07-30 2013-07-30 Low-light image enhancement method and system based on KINECT depth map Expired - Fee Related CN103400351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310325623.1A CN103400351B (en) 2013-07-30 2013-07-30 Low-light image enhancement method and system based on KINECT depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310325623.1A CN103400351B (en) 2013-07-30 2013-07-30 Low-light image enhancement method and system based on KINECT depth map

Publications (2)

Publication Number Publication Date
CN103400351A CN103400351A (en) 2013-11-20
CN103400351B true CN103400351B (en) 2015-12-23

Family

ID=49563963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310325623.1A Expired - Fee Related CN103400351B (en) 2013-07-30 2013-07-30 Low-light image enhancement method and system based on KINECT depth map

Country Status (1)

Country Link
CN (1) CN103400351B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713525B (en) * 2014-01-02 2016-04-27 南京航空航天大学 Adopt Spacecraft Rendezvous docking ground demonstration verification system and the method for Kinect
ES2563098B1 (en) 2015-06-15 2016-11-29 Davantis Technologies Sl IR image enhancement procedure based on scene information for video analysis
US10325351B2 (en) * 2016-03-11 2019-06-18 Qualcomm Technologies, Inc. Systems and methods for normalizing an image
CN108734670B (en) * 2017-04-20 2021-05-18 天津工业大学 Method for restoring single night weak-illumination haze image
CN111275642B (en) * 2020-01-16 2022-05-20 西安交通大学 Low-illumination image enhancement method based on significant foreground content
CN113126944B (en) * 2021-05-17 2021-11-09 北京的卢深视科技有限公司 Depth map display method, display device, electronic device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005036371A2 (en) * 2003-10-09 2005-04-21 Honda Motor Co., Ltd. Moving object detection using low illumination depth capable computer vision
US7234641B2 (en) * 2004-12-01 2007-06-26 Datalogic Scanning, Inc. Illumination pulsing method for a data reader
JP5397190B2 (en) * 2009-11-27 2014-01-22 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102663741B (en) * 2012-03-22 2014-09-24 侯克杰 Method for carrying out visual stereo perception enhancement on color digit image and system thereof

Also Published As

Publication number Publication date
CN103400351A (en) 2013-11-20

Similar Documents

Publication Publication Date Title
CN103400351B (en) Low-light image enhancement method and system based on KINECT depth map
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
Wang et al. Dehazing for images with large sky region
CN102750674B (en) Video image defogging method based on self-adapting allowance
Lv et al. Real-time dehazing for image and video
CN103914813B (en) The restored method of colored haze image defogging and illumination compensation
CN104537634B (en) The method and system of raindrop influence is removed in dynamic image
CN104318524A (en) Method, device and system for image enhancement based on YCbCr color space
CN108765336A (en) Image defogging method based on dark bright primary colors priori with auto-adaptive parameter optimization
Yeh et al. Efficient image/video dehazing through haze density analysis based on pixel-based dark channel prior
CN105931208A (en) Physical model-based low-illuminance image enhancement algorithm
CN105046658A (en) Low-illumination image processing method and device
CN103679157A (en) Human face image illumination processing method based on retina model
CN103500440A (en) Method for eliminating cloud and haze of atmospheric degraded image
CN111311503A (en) Night low-brightness image enhancement system
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
Zhen et al. Single Image Defogging Algorithm based on Dark Channel Priority.
CN104751138B (en) A kind of vehicle mounted infrared image colorization DAS (Driver Assistant System)
Hu et al. A low illumination video enhancement algorithm based on the atmospheric physical model
Das et al. Histogram equalization techniques for contrast enhancement: a review
Shi et al. Image haze removal using dark channel prior and minimizing energy function
CN105528772B (en) A kind of image interfusion method based on directiveness filtering
CN110992287A (en) Method for clarifying non-uniform illumination video
Xiaoxu et al. Image dehazing base on two-peak channel prior
Goswami et al. A hybrid approach for visibility enhancement in foggy image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151223

Termination date: 20190730

CF01 Termination of patent right due to non-payment of annual fee