CN101873506B - Image processing method for providing depth information and image processing system thereof - Google Patents

Image processing method for providing depth information and image processing system thereof

Info

Publication number
CN101873506B
CN101873506B CN200910136819XA CN200910136819A
Authority
CN
China
Prior art keywords
variability
amount
image block
input image
intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200910136819XA
Other languages
Chinese (zh)
Other versions
CN101873506A (en)
Inventor
邵得晋
陈文昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CN200910136819XA priority Critical patent/CN101873506B/en
Publication of CN101873506A publication Critical patent/CN101873506A/en
Application granted granted Critical
Publication of CN101873506B publication Critical patent/CN101873506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image processing method for producing corresponding depth information from an input image. The method comprises the following steps. First, blur processing is performed on the input image to produce a reference image. Next, the input image and the reference image are each divided into a plurality of mutually corresponding input image blocks and reference image blocks. Then, a variation intensity corresponding to each input image block is obtained from the input pixel data and the reference pixel data contained in each pair of corresponding input and reference image blocks. Afterwards, the input image is segmented to obtain a plurality of segmented regions. Finally, the depth information is produced from the variation intensities of the input image blocks substantially covered by each segmented region.

Description

Image processing method for providing depth information and image processing system thereof
Technical field
The invention relates to an image processing method and an image processing system thereof, and more particularly to an image processing method for providing depth information.
Background technology
In the field of computer vision, three-dimensional content (3D content) is normally provided to an autostereoscopic display in order to present stereoscopic images with a 3D effect.
The three-dimensional content comprises image-plus-depth (2D plus Z) information, that is, a two-dimensional image together with depth information. The depth information is, for example, a depth map corresponding to the two-dimensional image; in other words, it contains a depth value for each pixel of the two-dimensional image. From the two-dimensional image and the corresponding depth information, a three-dimensional display can present a stereoscopic image and give the viewer a 3D viewing effect.
For a three-dimensional display to show stereoscopic images, depth estimation must be performed on the scene in the image. Traditionally, stereo-vision techniques use two images captured for the left and right eyes to estimate depth. In recent years, depth has also been estimated from multiple images captured at various viewing angles. Moreover, to reduce cost and ease operation, there are also approaches that estimate depth from an input image provided by a single-lens camera.
One conventional approach to estimating the depth information of an input image analyzes the image feature information of the input image and performs classification. In this way, scene features in the input image, such as ground, buildings, human bodies, and vehicles, can be identified and used as the basis for judging image depth. However, this approach requires a great deal of time for training the classifier on input images. How to produce the depth information corresponding to a single input image therefore remains one of the problems the industry strives to solve.
Summary of the invention
It is an object of the present invention to provide an image processing method and an image processing system thereof that can produce the depth information corresponding to an input image from that single input image, without spending a great deal of time training a classifier on input images. The depth information faithfully reflects how near or far the objects in the input image were from the camera, and accurately presents the stereoscopic appearance of the objects in the image.
To achieve the above object, according to one aspect of the present invention, an image processing method is provided for producing corresponding depth information from an input image. The method comprises the following steps. First, blur processing is performed on the input image to produce a reference image. Next, the input image and the reference image are each divided into a plurality of mutually corresponding input image blocks and reference image blocks. Then, a variation intensity corresponding to each input image block is obtained from the input pixel data and the reference pixel data contained in each pair of corresponding input and reference image blocks. Afterwards, image segmentation is performed on the input image to obtain a plurality of segmented regions. Finally, the depth information is produced from the variation intensities of the input image blocks substantially covered by each segmented region.
According to a further aspect of the invention, an image processing system is provided for producing corresponding depth information from an input image. The image processing system comprises an input unit, a reference image generation unit, a variation intensity generation unit, an image segmentation unit, and an output unit. The input unit obtains the input image. The reference image generation unit performs blur processing on the input image to produce a reference image. The variation intensity generation unit divides the input image and the reference image into a plurality of mutually corresponding input image blocks and reference image blocks, and obtains the variation intensity corresponding to each input image block from the input pixel data and reference pixel data contained in each pair of corresponding blocks. The image segmentation unit performs image segmentation on the input image to obtain a plurality of segmented regions. The output unit produces the depth information from the variation intensities of the input image blocks substantially covered by each segmented region.
Description of drawings
Fig. 1A is a flow chart of the image processing method according to one embodiment of the invention.
Fig. 1B is a block diagram of the image processing system according to one embodiment of the invention.
Fig. 2 is a flow chart of the steps S132~S136 comprised in step S130.
Fig. 3 is a schematic diagram of an example of an input image.
Fig. 4 is a schematic diagram of an example of a reference image.
Fig. 5 and Fig. 6 respectively illustrate an example of dividing the input image of Fig. 3 and the reference image of Fig. 4 into a plurality of input image blocks and reference image blocks.
Fig. 7 illustrates an example of the variation intensities corresponding to the input image blocks of Fig. 5.
Fig. 8 illustrates an example of a plurality of segmented regions obtained after image segmentation is performed on the input image of Fig. 3.
Fig. 9 is an example of the input image IY of Fig. 3.
Fig. 10 is an example of the reference image IR of Fig. 4.
Fig. 11 is an example of the input image blocks YB1~YBk of Fig. 5.
Fig. 12 is an example of the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk of Fig. 7.
Fig. 13 is an example of the segmented regions DA1~DAx of Fig. 8.
Fig. 14 is an example of the depth map DM produced by the image processing method of one embodiment of the invention from the variation intensities VM1~VMk of Fig. 12 and the segmented regions DA1~DAx of Fig. 13.
Fig. 15 is a depth map DM2 generated by the two-dimensional-image-to-stereoscopic-image conversion technique provided by Dynamic Digital Depth (DDD), a company located in California, USA.
Description of the reference numerals in the accompanying drawings
100: image processing system
110: input unit
120: reference image generation unit
130: variation intensity generation unit
140: image segmentation unit
150: output unit
160: three-dimensional display
DA1~DAx: segmented regions
DM, DM2: depth maps
Im: original image
IR: reference image
IY: input image
RB1~RBk: reference image blocks
S110~S150, S132~S136: process steps
YB1~YBk, YB(a1)~YB(an): input image blocks
VM1~VMk, VM(a1)~VM(an): variation intensities
Embodiment
For the foregoing content of the present invention to be more readily understood, preferred embodiments are given below and described in detail in conjunction with the accompanying drawings.
The embodiments of the invention disclose an image processing method and an image processing system for performing image processing on an input image to provide its corresponding depth information. The input image of the present embodiment is obtained, for example, by capturing with a single image-capture device. The scene objects in the input image to which the present embodiment applies may be real objects, persons, or scenery captured by a single image-capture device, or may be solid objects produced by computer animation.
Please refer to Fig. 1A, a flow chart of the image processing method according to one embodiment of the invention. The image processing method produces corresponding depth information, for example a depth map, from an input image. The method comprises the following steps.
First, as shown in step S110, a reference image is produced from the input image. Next, as shown in step S120, the input image and the reference image are each divided into a plurality of mutually corresponding input image blocks and reference image blocks.
Afterwards, as shown in step S130, the variation intensity corresponding to each input image block is obtained from the input pixel data and the reference pixel data contained in each pair of corresponding input and reference image blocks.
Then, as shown in step S140, image segmentation is performed on the input image to obtain a plurality of segmented regions. Finally, as shown in step S150, the depth information is produced from the variation intensities of the input image blocks substantially covered by each segmented region.
The detailed practice of the image processing method of Fig. 1A is now explained with an image processing system. Please continue to refer to Fig. 1A, and also to Fig. 1B, a block diagram of the image processing system according to one embodiment of the invention. The image processing system 100 comprises an input unit 110, a reference image generation unit 120, a variation intensity generation unit 130, an image segmentation unit 140, and an output unit 150. The image processing system 100 of the present embodiment carries out the image processing method of Fig. 1A.
First, before the present embodiment enters step S110, an original image Im, for example a color original image (not illustrated), can be captured by the input unit 110. The pixel data of the original image Im are, for example, defined in the YCbCr color space. Because the human eye is more sensitive to changes in luminance, the present embodiment takes the luminance component of the original image Im as the input image IY.
Please refer to Fig. 3, a schematic diagram of an example of an input image IY. The input image used in the present embodiment is, for example, the input image IY, which keeps the Y-channel component of the YCbCr color space of the color original image Im.
Then, the present embodiment enters step S110. In step S110, the reference image generation unit 120 produces a reference image IR from the input image IY.
Please refer to Fig. 4, a schematic diagram of an example of a reference image IR. The reference image IR can be produced by performing blur processing on the input image IY. For instance, when producing the reference image IR, the reference image generation unit 120 can apply a low-pass filter (for example a Gaussian filter) or an average mask to blur the input image IY.
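By way of illustration only, a minimal sketch of this blur step in Python/NumPy might look as follows; the function name, the use of SciPy's Gaussian filter, and the sigma value are assumptions of this sketch, not details taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_reference_image(input_y: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Blur the input luminance image IY to obtain the reference image IR.
    A Gaussian low-pass filter is used here; an average (box) mask would
    equally satisfy the embodiment. sigma is an assumed parameter."""
    return gaussian_filter(input_y.astype(np.float64), sigma=sigma)
```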
Then, in step S120, the variation intensity generation unit 130 divides the input image IY and the reference image IR into a plurality of mutually corresponding input image blocks and reference image blocks.
Please refer to Fig. 5 and Fig. 6, which respectively illustrate an example of dividing the input image IY of Fig. 3 and the reference image IR of Fig. 4 into a plurality of input image blocks YB1~YBk and reference image blocks RB1~RBk. In the present embodiment, the input image IY is divided into a plurality of input image blocks, for example the input image blocks YB1~YBk, and the reference image IR is divided in the same way into a plurality of reference image blocks, for example the reference image blocks RB1~RBk. Among these blocks, a corresponding pair of one input image block (such as the input image block YB1) and one reference image block (such as the reference image block RB1) has the same image resolution and pixel count.
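For illustration, the block division could be sketched as follows; the 16 × 16 block size is an assumption (the embodiment only requires that corresponding blocks of IY and IR have equal resolution and pixel count), and edge remainders are dropped for brevity:

```python
import numpy as np

def split_into_blocks(image: np.ndarray, bh: int = 16, bw: int = 16):
    """Divide an image into a raster-ordered list of bh x bw blocks.
    Applied to IY and IR in turn, this yields the corresponding pairs
    (YB1, RB1), ..., (YBk, RBk)."""
    h, w = image.shape
    return [image[r:r + bh, c:c + bw]
            for r in range(0, h - bh + 1, bh)
            for c in range(0, w - bw + 1, bw)]
```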
Afterwards, in step S130, the variation intensity generation unit 130 obtains the variation intensities VM1~VMk corresponding to the respective input image blocks from the input pixel data and the reference pixel data contained in each pair of corresponding input and reference image blocks.
Please refer to Fig. 7, which illustrates an example of the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk of Fig. 5. The blocks of Fig. 7 correspond to the input image blocks YB1~YBk of Fig. 5, and the numerical value indicated in each block expresses the variation intensity VM1~VMk obtained by the present embodiment. In other words, each input image block (such as the input image block YB1) corresponds to one variation intensity (such as the variation intensity VM1).
In detail, step S130 can comprise steps S132~S136. Please refer to Fig. 2, a flow chart of the steps S132~S136 comprised in step S130.
In step S132, the variation intensity generation unit 130 calculates the horizontal and vertical variations of each input pixel datum and each reference pixel datum contained in an input image block and its corresponding reference image block. From the calculation results, the variation intensity generation unit 130 then produces a horizontal overall variation and a vertical overall variation corresponding to the input image block.
Taking the input image block YB1 as an example, how the horizontal overall variation and the vertical overall variation corresponding to the input image block YB1 are produced is explained as follows. Please refer to Fig. 5 and Fig. 6, and suppose that the input image block YB1 and the corresponding reference image block RB1 each comprise m × n pixel data, so that I(i, j) denotes the (i, j)-th pixel datum, where i is an integer from 0 to (m-1) and j is an integer from 0 to (n-1).
In step S132, when producing the horizontal overall variation corresponding to the input image block YB1, the horizontal overall variation is, for example, produced with the following formulas:
D_Ihor(i,j) = Abs(I(i,j) - I(i-1,j)), i=1…m-1, j=0…n-1 (formula 1)
D_Rhor(i,j) = Abs(R(i,j) - R(i-1,j)), i=1…m-1, j=0…n-1 (formula 2)
D_Vhor(i,j) = Max(0, D_Ihor(i,j) - D_Rhor(i,j)), i=1…m-1, j=1…n-1 (formula 3)
s_Vhor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vhor(i,j) (formula 4)
wherein I(i,j) denotes the (i,j)-th input pixel datum of the input image block YB1; R(i,j) denotes the (i,j)-th reference pixel datum of the reference image block RB1; Abs(.) denotes taking the absolute value; Max(.) denotes taking the maximum; D_Ihor(i,j) denotes the horizontal variation of the (i,j)-th input pixel datum of the input image block YB1; D_Rhor(i,j) denotes the horizontal variation of the (i,j)-th reference pixel datum of the reference image block RB1; D_Vhor(i,j) denotes the horizontal variation absolute difference of the (i,j)-th input pixel datum of the input image block YB1; and s_Vhor denotes the horizontal overall variation of all input pixel data of the input image block YB1.
Moreover, in step S132, when producing the vertical overall variation corresponding to the input image block YB1, the vertical overall variation is, for example, produced with the following formulas:
D_Iver(i,j) = Abs(I(i,j) - I(i,j-1)), j=1…n-1, i=0…m-1 (formula 5)
D_Rver(i,j) = Abs(R(i,j) - R(i,j-1)), j=1…n-1, i=0…m-1 (formula 6)
D_Vver(i,j) = Max(0, D_Iver(i,j) - D_Rver(i,j)), i=1…m-1, j=1…n-1 (formula 7)
s_Vver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vver(i,j) (formula 8)
wherein D_Iver(i,j) denotes the vertical variation of the (i,j)-th input pixel datum of the input image block YB1; D_Rver(i,j) denotes the vertical variation of the (i,j)-th reference pixel datum of the reference image block RB1; D_Vver(i,j) denotes the vertical variation absolute difference of the (i,j)-th input pixel datum of the input image block YB1; and s_Vver denotes the vertical overall variation of all input pixel data of the input image block YB1.
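A compact sketch of formulas 1-8 (and, ahead of step S134, the normalization references of formulas 9-10) for one block pair might read as follows; mapping the patent's i index to array axis 1 is an assumption of this sketch:

```python
import numpy as np

def overall_variations(I: np.ndarray, R: np.ndarray):
    """Horizontal/vertical overall variations s_Vhor, s_Vver of one
    input/reference block pair, plus the normalization references
    s_Ihor, s_Iver used later in step S134."""
    I = I.astype(np.float64)
    R = R.astype(np.float64)
    # Formulas 1-2 and 5-6: absolute first differences along each axis.
    d_ihor = np.abs(np.diff(I, axis=1))
    d_rhor = np.abs(np.diff(R, axis=1))
    d_iver = np.abs(np.diff(I, axis=0))
    d_rver = np.abs(np.diff(R, axis=0))
    # Formulas 3 and 7: variation lost to blurring, clipped at zero.
    d_vhor = np.maximum(0.0, d_ihor - d_rhor)
    d_vver = np.maximum(0.0, d_iver - d_rver)
    # Formulas 4, 8, 9, 10: sums restricted to i = 1..m-1 and j = 1..n-1.
    s_vhor = d_vhor[1:, :].sum()
    s_vver = d_vver[:, 1:].sum()
    s_ihor = d_ihor[1:, :].sum()
    s_iver = d_iver[:, 1:].sum()
    return s_vhor, s_vver, s_ihor, s_iver
```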
After step S132, as shown in step S134, the variation intensity generation unit 130 normalizes the horizontal overall variation and the vertical overall variation.
When normalizing the horizontal overall variation and the vertical overall variation, the variation intensity generation unit 130, for example, normalizes the horizontal overall variation with a horizontal normalization reference value and normalizes the vertical overall variation with a vertical normalization reference value. The horizontal and vertical normalization reference values are, for example, obtained with the following formulas:
s_Ihor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Ihor(i,j) (formula 9)
s_Iver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Iver(i,j) (formula 10)
wherein s_Ihor denotes the horizontal normalization reference value and s_Iver denotes the vertical normalization reference value.
In the present embodiment, the horizontal overall variation and the vertical overall variation are normalized as follows:
c_Ihor = s_Vhor / s_Ihor (formula 11)
c_Iver = s_Vver / s_Iver (formula 12)
wherein c_Ihor denotes the normalized horizontal overall variation and c_Iver denotes the normalized vertical overall variation; both lie between 0 and 1.
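A sketch of the normalization of formulas 11-12, using the sums returned by overall_variations above; the guard against zero-variation (flat) blocks is an assumption, since the patent does not discuss that degenerate case:

```python
def normalized_variations(s_vhor, s_vver, s_ihor, s_iver, eps=1e-12):
    """Formulas 11-12: c_Ihor = s_Vhor/s_Ihor, c_Iver = s_Vver/s_Iver.
    Both results lie between 0 and 1 because the clipped differences
    never exceed the unclipped ones."""
    c_ihor = s_vhor / max(s_ihor, eps)
    c_iver = s_vver / max(s_iver, eps)
    return c_ihor, c_iver
```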
Afterwards, as shown in step S136, the variation intensity generation unit 130 obtains the variation intensity corresponding to the input image block from the normalized horizontal overall variation and the normalized vertical overall variation.
For instance, in one embodiment, when obtaining the variation intensity VM1 corresponding to the input image block YB1, the variation intensity generation unit 130 takes the greater of the normalized horizontal and vertical overall variations as the variation intensity VM1 corresponding to the input image block YB1. In detail, the variation intensity VM1 corresponding to the input image block YB1 can, for example, be obtained with the following formula:
cVar = Max(c_Iver, c_Ihor) (formula 13)
wherein cVar denotes the variation intensity VM1 corresponding to the input image block YB1.
In another embodiment, when obtaining the variation intensity VM1 corresponding to the input image block YB1, the variation intensity generation unit 130 calculates the geometric mean of the normalized horizontal and vertical overall variations as the variation intensity VM1 corresponding to the input image block YB1. In detail, the variation intensity VM1 corresponding to the input image block YB1 can, for example, also be obtained with the following formula:
cVar = sqrt((c_Iver)^2 + (c_Ihor)^2) (formula 14)
The above applications of formula 13 or formula 14 are index calculation methods for obtaining the variation intensity VM1 corresponding to the input image block YB1 from the normalized horizontal and vertical overall variations; they serve to illustrate the invention rather than to limit it, and the present invention may also use other index calculation methods to obtain the variation intensity VM1 corresponding to the input image block YB1.
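Both index calculations can be sketched in one helper; the mode names are illustrative only:

```python
import math

def variation_intensity(c_ihor: float, c_iver: float, mode: str = "max") -> float:
    """cVar for one input image block. 'max' follows formula 13;
    'combined' follows formula 14, i.e. sqrt(c_Iver**2 + c_Ihor**2)."""
    if mode == "max":
        return max(c_ihor, c_iver)
    return math.hypot(c_iver, c_ihor)
```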
In this manner, by repeatedly executing steps S132~S136, the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk, as shown in Fig. 7, can be obtained.
Then, the present embodiment enters step S140. In step S140, the image segmentation unit 140 performs image segmentation on the input image IY to obtain a plurality of segmented regions DA1~DAx. When segmenting the input image, the image segmentation unit 140 can exploit discontinuities in the image gray levels, or can separate regions of similar configuration according to the similarity of image colors, textures, or spatial positions. For instance, the image segmentation unit 140 can segment the input image IY by edge detection or by region growing.
Please refer to Fig. 8, which illustrates an example of the segmented regions DA1~DAx obtained after image segmentation is performed on the input image IY of Fig. 3. After the image segmentation unit 140 segments the input image IY, the segmented regions DA1~DAx are obtained.
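As a stand-in for the segmentation step, a minimal region-growing sketch is shown below; the gray-level tolerance is an assumed parameter, and any edge-detection or region-growing segmenter could be substituted for it:

```python
import numpy as np
from collections import deque

def region_growing(image: np.ndarray, tol: float = 8.0) -> np.ndarray:
    """Label 4-connected regions whose gray levels stay within `tol`
    of the region's seed pixel; returns one integer label per pixel."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)
    current = 0
    for sr in range(h):
        for sc in range(w):
            if labels[sr, sc] != -1:
                continue
            seed = float(image[sr, sc])
            labels[sr, sc] = current
            queue = deque([(sr, sc)])
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < h and 0 <= nc < w and labels[nr, nc] == -1
                            and abs(float(image[nr, nc]) - seed) <= tol):
                        labels[nr, nc] = current
                        queue.append((nr, nc))
            current += 1
    return labels
```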
After step S140, the present embodiment has obtained the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk (as shown in Fig. 7) and the segmented regions DA1~DAx of the input image IY (as shown in Fig. 8). The present embodiment can then use the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk to decide a numerical value that suitably represents the depth of each segmented region DA1~DAx, thereby producing the depth information corresponding to the input image IY. In this manner, the present embodiment can produce the depth information corresponding to a single input image without spending a great deal of time training a classifier on input images.
Specifically, in step S150, the output unit 150 produces the depth information, for example a depth map DM, from the variation intensities VM1~VMk of the input image blocks substantially covered by each segmented region DA1~DAx.
In detail, when producing the depth information from the selected variation intensities VM1~VMk, the output unit 150, for example, first obtains a variation intensity representative value corresponding to each segmented region DA1~DAx from the variation intensities VM1~VMk of the input image blocks substantially covered by that segmented region, and then produces the depth information from the variation intensity representative values corresponding to the segmented regions DA1~DAx.
For instance, please refer to Fig. 7 and Fig. 8 together. For the segmented region DA1, the variation intensities corresponding to the input image blocks substantially covered by the segmented region DA1 are chosen first, for example the variation intensities VM(a1)~VM(an) corresponding to the input image blocks YB(a1)~YB(an). The variation intensities VM(a1)~VM(an) corresponding to the input image blocks YB(a1)~YB(an) are substantially covered by the corresponding region DA1', and the corresponding region DA1' has an area and position close to those of the segmented region DA1.
Then, after choosing the variation intensities VM(a1)~VM(an), the output unit 150 of the present embodiment obtains the variation intensity representative value corresponding to the segmented region DA1 from the chosen variation intensities VM(a1)~VM(an). In one embodiment, the output unit 150 may, for example but without limitation, calculate the average of the variation intensities of the input image blocks substantially covered by the segmented region DA1 (for example the chosen variation intensities VM(a1)~VM(an)) as the variation intensity representative value corresponding to the segmented region DA1. In another embodiment, the output unit 150 may also calculate the median of the variation intensities of the input image blocks substantially covered by the segmented region DA1 as the variation intensity representative value. The practices of the average and the median merely serve to illustrate the invention and are not intended to limit it; any representative value that can represent the variation intensity of the segmented region DA1, found from the variation intensities of the input image blocks substantially covered by the segmented region DA1, falls within the protection scope of the present invention.
In this manner, following the above step (namely the step of obtaining the variation intensity representative value corresponding to the segmented region DA1), the variation intensity representative values corresponding to all segmented regions DA1~DAx can be obtained. Because in step S136 the variation intensities corresponding to the input image blocks are obtained from the normalized horizontal and vertical overall variations (each between 0 and 1), each variation intensity representative value also lies between 0 and 1.
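For illustration, the representative values might be computed as below. Writing each block's cVar into that block's pixels and then averaging per region approximates the patent's selection of blocks "substantially covered" by the region, since each block is weighted by how much of it the region covers; that per-pixel approximation, like the block geometry arguments, is an assumption of this sketch:

```python
import numpy as np

def intensity_map_from_blocks(shape, intensities, bh=16, bw=16):
    """Per-pixel variation-intensity map: each block's cVar is written
    into the pixels of that block (raster block order assumed)."""
    vmap = np.zeros(shape, dtype=np.float64)
    blocks_per_row = shape[1] // bw
    for idx, c in enumerate(intensities):
        r0 = (idx // blocks_per_row) * bh
        c0 = (idx % blocks_per_row) * bw
        vmap[r0:r0 + bh, c0:c0 + bw] = c
    return vmap

def region_representatives(labels, vmap, use_median=False):
    """Mean (or median) intensity over each segmented region's pixels."""
    return {int(region): float(np.median(vmap[labels == region])
                               if use_median else vmap[labels == region].mean())
            for region in np.unique(labels)}
```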
Then, the output unit 150 produces the depth information from the variation intensity representative values corresponding to the segmented regions. In practice, because the depth information has, for example, 8-bit gray levels (grayscale), that is, the gray-level value of each pixel of the depth map DM lies between 0 and 255, the output unit 150 of the present embodiment can produce the depth information by linearly mapping the variation intensity representative values.
In another embodiment, the output unit 150 can also produce the depth information by nonlinearly mapping the variation intensity representative values. For example, each value between 0 and 1 can be stretched to between 0 and 255 according to the histogram of the variation intensity representative values. The invention is not limited to these mappings; any mapping that converts the variation intensity representative values between 0 and 1 into the required depth information falls within the protection scope of the present invention.
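A sketch of the linear mapping to an 8-bit depth map; histogram-based stretching would be the nonlinear alternative described above:

```python
import numpy as np

def depth_map_from_representatives(labels, reps):
    """Map each region's representative value in [0, 1] linearly to a
    gray level in 0..255, one constant depth per segmented region."""
    depth = np.zeros(labels.shape, dtype=np.uint8)
    for region, rep in reps.items():
        depth[labels == region] = int(round(255.0 * min(max(rep, 0.0), 1.0)))
    return depth
```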
Please refer to Figs. 9-14. Fig. 9 is an example of the input image IY of Fig. 3. Fig. 10 is an example of the reference image IR of Fig. 4. Fig. 11 is an example of the input image blocks YB1~YBk of Fig. 5. Fig. 12 is an example of the variation intensities VM1~VMk corresponding to the input image blocks YB1~YBk of Fig. 7. Fig. 13 is an example of the segmented regions DA1~DAx of Fig. 8. Fig. 14 is an example of the depth map DM produced by the image processing method of one embodiment of the invention from the variation intensities VM1~VMk of Fig. 12 and the segmented regions DA1~DAx of Fig. 13.
Please refer to Fig. 9 and Fig. 14 together. In the depth map DM shown in Fig. 14, brighter locations (higher gray levels) indicate objects closer to the camera, and darker locations (lower gray levels) indicate objects farther away. The region A1 is brighter, and it corresponds to the nearby object B1 in the input image IY; correspondingly, the region A2 is darker, and it corresponds to the distant object B2 in the input image IY. The depth map DM produced by the present embodiment therefore faithfully reflects how near or far the objects in the input image IY were from the camera.
Moreover, please refer to Fig. 14 and Fig. 15 together. Fig. 15 is a depth map DM2 generated by the two-dimensional-image-to-stereoscopic-image conversion technique provided by Dynamic Digital Depth (DDD), a company located in California, USA. In the depth map DM2 shown in Fig. 15, the depth configuration decreases from the image center toward the periphery; that is, DM2 assigns computed depth values on the assumption that the central region of the image is near and the periphery is far. This approach can only convey the stereoscopic appearance of objects located in the central region of the image and cannot effectively convey that of objects around the periphery. For instance, in the depth map DM2, the region A3' around the periphery corresponds to the nearby object B3 in the input image IY, yet the region A3' is very dark (its gray level is very low).
The present embodiment instead uses the sharpness of the image to bring out the distance relations of the objects in the image, and can therefore accurately present the stereoscopic appearance of the objects. For instance, compared with the traditional depth map DM2, in the depth map DM produced by the present embodiment the region A3 around the periphery is very bright, and it corresponds to the nearby object B3 in the input image IY. The present embodiment therefore does not suffer from a reduced stereoscopic appearance and can effectively present the stereoscopic appearance of objects throughout the image, both at the center and around the periphery.
In addition, in one embodiment, the image processing method can be used to produce a three-dimensional (stereoscopic) image Im3D. Please refer to Fig. 1B; the image processing method applied by the image processing system 100 can, for example, further comprise the following step: providing the input image and the corresponding depth information to a three-dimensional display 160 comprised by the image processing system 100. For example, a processing unit 170 can produce a multi-view image from the input image IY and the corresponding depth information (such as the depth map DM), convert the multi-view image into an interlaced image Imit, and provide it to the three-dimensional display 160 in interlaced form. The three-dimensional display 160 can then produce the stereoscopic image Im3D with a stereo-vision effect.
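The patent leaves the multi-view synthesis method open; purely as an illustration of the idea, one extra view could be warped from the depth map by a horizontal pixel shift, where the shift model, the max_shift value, and the unfilled disocclusion holes are all simplifying assumptions of this sketch:

```python
import numpy as np

def render_shifted_view(image: np.ndarray, depth: np.ndarray, max_shift: float = 8.0):
    """Shift each pixel horizontally in proportion to its depth value,
    a minimal depth-image-based-rendering sketch for one novel view."""
    h, w = image.shape
    view = np.zeros_like(image)
    for r in range(h):
        for c in range(w):
            d = int(round(max_shift * float(depth[r, c]) / 255.0))
            if 0 <= c + d < w:
                view[r, c + d] = image[r, c]
    return view
```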
The image processing method and image processing system disclosed in the above embodiments of the present invention produce, from an input image and a corresponding reference image, the variation intensities corresponding to a plurality of input image blocks of the input image, and from these variation intensities decide a numerical value that suitably represents the depth value of each segmented region of the input image. In this manner, the depth information corresponding to a single input image can be produced without spending a great deal of time training a classifier on input images. The depth information faithfully reflects how near or far the objects in the input image were from the camera and accurately presents the stereoscopic appearance of the objects in the image.
In summary, although the present invention has been disclosed above by way of a preferred embodiment, the embodiment is not intended to limit the invention. Those having ordinary knowledge in the technical field of the present invention may make various changes and refinements without departing from the spirit and scope of the invention. The protection scope of the present invention shall therefore be determined by the scope of the appended claims.

Claims (38)

1. An image processing method for producing corresponding depth information from an input image, the method comprising:
performing blur processing on the input image to produce a reference image;
dividing the input image and the reference image into a plurality of mutually corresponding input image blocks and reference image blocks;
obtaining a variation intensity corresponding to each input image block from a plurality of input pixel data and a plurality of reference pixel data respectively contained in each pair of corresponding input and reference image blocks;
performing image segmentation on the input image to obtain a plurality of segmented regions; and
producing the depth information from the variation intensities of the input image blocks substantially covered by each segmented region.
2. The method as claimed in claim 1, wherein the blur processing is performed on the input image with a low-pass filter or an average mask.
3. The method as claimed in claim 1, wherein obtaining the variation intensity corresponding to each input image block comprises:
calculating the horizontal variation and the vertical variation of each input pixel datum and each reference pixel datum respectively contained in an input image block and its corresponding reference image block, and producing from the calculation results a horizontal overall variation and a vertical overall variation corresponding to the input image block; and
obtaining the variation intensity corresponding to the input image block from the horizontal overall variation and the vertical overall variation.
4. The method as claimed in claim 3, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and in the step of producing the horizontal overall variation corresponding to the input image block, the horizontal overall variation is produced with the following formulas:
D_Ihor(i,j) = Abs(I(i,j) - I(i-1,j)), i=1…m-1, j=0…n-1;
D_Rhor(i,j) = Abs(R(i,j) - R(i-1,j)), i=1…m-1, j=0…n-1;
D_Vhor(i,j) = Max(0, D_Ihor(i,j) - D_Rhor(i,j)), i=1…m-1, j=1…n-1; and
s_Vhor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vhor(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
R(i,j) denotes the (i,j)-th reference pixel datum of the reference image block;
Abs(.) denotes taking the absolute value;
Max(.) denotes taking the maximum;
D_Ihor(i,j) denotes the horizontal variation of the (i,j)-th input pixel datum of the input image block;
D_Rhor(i,j) denotes the horizontal variation of the (i,j)-th reference pixel datum of the reference image block;
D_Vhor(i,j) denotes the horizontal variation absolute difference of the (i,j)-th input pixel datum of the input image block; and
s_Vhor denotes the horizontal overall variation of the input pixel data of the input image block.
5. The method as claimed in claim 3, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and in the step of producing the vertical overall variation corresponding to the input image block, the vertical overall variation is produced with the following formulas:
D_Iver(i,j) = Abs(I(i,j) - I(i,j-1)), j=1…n-1, i=0…m-1;
D_Rver(i,j) = Abs(R(i,j) - R(i,j-1)), j=1…n-1, i=0…m-1;
D_Vver(i,j) = Max(0, D_Iver(i,j) - D_Rver(i,j)), i=1…m-1, j=1…n-1; and
s_Vver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vver(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
R(i,j) denotes the (i,j)-th reference pixel datum of the reference image block;
Abs(.) denotes taking the absolute value;
Max(.) denotes taking the maximum;
D_Iver(i,j) denotes the vertical variation of the (i,j)-th input pixel datum of the input image block;
D_Rver(i,j) denotes the vertical variation of the (i,j)-th reference pixel datum of the reference image block;
D_Vver(i,j) denotes the vertical variation absolute difference of the (i,j)-th input pixel datum of the input image block; and
s_Vver denotes the vertical overall variation of the input pixel data of the input image block.
6. The method as claimed in claim 3, wherein before the step of obtaining the variation intensity corresponding to the input image block, the method comprises:
normalizing the horizontal overall variation and the vertical overall variation;
wherein in the step of obtaining the variation intensity corresponding to the input image block, the variation intensity is obtained from the normalized horizontal overall variation and the normalized vertical overall variation.
7. The method as claimed in claim 6, wherein in the step of normalizing the horizontal overall variation and the vertical overall variation, the horizontal overall variation is normalized with a horizontal normalization reference value and the vertical overall variation is normalized with a vertical normalization reference value.
8. The method as claimed in claim 7, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and the horizontal normalization reference value and the vertical normalization reference value are obtained with the following formulas:
D_Ihor(i,j) = Abs(I(i,j) - I(i-1,j)), i=1…m-1, j=0…n-1;
D_Iver(i,j) = Abs(I(i,j) - I(i,j-1)), j=1…n-1, i=0…m-1;
s_Ihor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Ihor(i,j); and
s_Iver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Iver(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
Abs(.) denotes taking the absolute value;
D_Ihor(i,j) denotes the horizontal variation of the (i,j)-th input pixel datum of the input image block;
D_Iver(i,j) denotes the vertical variation of the (i,j)-th input pixel datum of the input image block;
s_Ihor denotes the horizontal normalization reference value; and
s_Iver denotes the vertical normalization reference value.
9. The method as claimed in claim 6, wherein when obtaining the variation intensity corresponding to the input image block, the greater of the normalized horizontal overall variation and the normalized vertical overall variation is taken as the variation intensity corresponding to the input image block.
10. The method as claimed in claim 6, wherein when obtaining the variation intensity corresponding to the input image block, the geometric mean of the normalized horizontal overall variation and the normalized vertical overall variation is calculated as the variation intensity corresponding to the input image block.
11. The method as claimed in claim 1, wherein in the step of performing image segmentation on the input image, the segmentation is performed by edge detection.
12. The method as claimed in claim 1, wherein in the step of performing image segmentation on the input image, the segmentation is performed by region growing.
13. the method for claim 1, the step that wherein produces this depth information comprises:
According to each cut zone contain those amount of variability intensity of those corresponding input image blocks in fact, obtain the pairing amount of variability intensity typical value of each cut zone; And
Produce this depth information according to pairing those amount of variability intensity typical values of those cut zone.
14., wherein comprise in the step that obtains pairing this amount of variability intensity typical value of each cut zone like claim 13 a described method:
Calculate cut zone institute and contain corresponding those in fact and import the average of those amount of variability intensity of image blocks, be used as pairing this amount of variability intensity typical value of this cut zone.
15., wherein comprise in the step that obtains pairing this amount of variability intensity typical value of each cut zone like claim 13 a described method:
Calculate cut zone institute and contain corresponding those in fact and import the median of those amount of variability intensity of image blocks, be used as pairing this amount of variability intensity typical value of this cut zone.
16., wherein produce in the step of this depth information, be to produce so that those amount of variability intensity typical values are carried out linear corresponding mode like claim 13 a described method.
17., wherein produce in the step of this depth information, be to produce so that those amount of variability intensity typical values are carried out non-linear corresponding mode like claim 13 a described method.
18. the method for claim 1, in order to producing a stereopsis, and this method more comprises:
This input image and this corresponding depth information to a three-dimensional display are provided, so that this three-dimensional display produces this stereopsis with stereoscopic visual effect.
19. the method for claim 1 more comprises:
Capture a raw video, and import image as this with the brightness composition of this raw video.
20. an image processing system, in order to produce a corresponding depth information according to an input image, this image processing system comprises:
One input unit is in order to obtain this input image;
One with reference to the image generation unit, in order to this input image is carried out Fuzzy processing to produce one with reference to image;
One amount of variability intensity generation unit; Be divided into corresponding a plurality of input image blocks and a plurality of with reference to image block with this with reference to image in order to will import image respectively; And in order to according to corresponding respectively this input image block and respectively this obtains the respectively pairing amount of variability intensity of this input image block with reference to complex input pixel data and complex reference pixel data that the image block is comprised respectively;
One image cutting unit is in order to carry out the image dividing processing to this input image, to obtain a plurality of cut zone; And
One output unit, in order to according to each cut zone contain those corresponding input image blocks in fact those amount of variability intensity produce this depth information.
21. The image processing system as claimed in claim 20, wherein the reference image generation unit uses a low-pass filter or an average mask to perform the blur processing on the input image.
22. The image processing system as claimed in claim 20, wherein when obtaining the variation intensity corresponding to each input image block, the variation intensity generation unit calculates the horizontal variation and the vertical variation of each input pixel datum and each reference pixel datum respectively contained in an input image block and its corresponding reference image block, produces from the calculation results a horizontal overall variation and a vertical overall variation corresponding to the input image block, and further obtains the variation intensity corresponding to the input image block from the horizontal overall variation and the vertical overall variation.
23. The image processing system as claimed in claim 22, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and the variation intensity generation unit produces the horizontal overall variation corresponding to the input image block with the following formulas:
D_Ihor(i,j) = Abs(I(i,j) - I(i-1,j)), i=1…m-1, j=0…n-1;
D_Rhor(i,j) = Abs(R(i,j) - R(i-1,j)), i=1…m-1, j=0…n-1;
D_Vhor(i,j) = Max(0, D_Ihor(i,j) - D_Rhor(i,j)), i=1…m-1, j=1…n-1; and
s_Vhor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vhor(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
R(i,j) denotes the (i,j)-th reference pixel datum of the reference image block;
Abs(.) denotes taking the absolute value;
Max(.) denotes taking the maximum;
D_Ihor(i,j) denotes the horizontal variation of the (i,j)-th input pixel datum of the input image block;
D_Rhor(i,j) denotes the horizontal variation of the (i,j)-th reference pixel datum of the reference image block;
D_Vhor(i,j) denotes the horizontal variation absolute difference of the (i,j)-th input pixel datum of the input image block; and
s_Vhor denotes the horizontal overall variation of the input pixel data of the input image block.
24. The image processing system as claimed in claim 22, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and the variation intensity generation unit produces the vertical overall variation corresponding to the input image block with the following formulas:
D_Iver(i,j) = Abs(I(i,j) - I(i,j-1)), j=1…n-1, i=0…m-1;
D_Rver(i,j) = Abs(R(i,j) - R(i,j-1)), j=1…n-1, i=0…m-1;
D_Vver(i,j) = Max(0, D_Iver(i,j) - D_Rver(i,j)), i=1…m-1, j=1…n-1; and
s_Vver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Vver(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
R(i,j) denotes the (i,j)-th reference pixel datum of the reference image block;
Abs(.) denotes taking the absolute value;
Max(.) denotes taking the maximum;
D_Iver(i,j) denotes the vertical variation of the (i,j)-th input pixel datum of the input image block;
D_Rver(i,j) denotes the vertical variation of the (i,j)-th reference pixel datum of the reference image block;
D_Vver(i,j) denotes the vertical variation absolute difference of the (i,j)-th input pixel datum of the input image block; and
s_Vver denotes the vertical overall variation of the input pixel data of the input image block.
25. The image processing system as claimed in claim 22, wherein the variation intensity generation unit normalizes the horizontal overall variation and the vertical overall variation, and obtains the variation intensity corresponding to the input image block from the normalized horizontal overall variation and the normalized vertical overall variation.
26. The image processing system as claimed in claim 25, wherein the variation intensity generation unit normalizes the horizontal overall variation with a horizontal normalization reference value and normalizes the vertical overall variation with a vertical normalization reference value.
27. The image processing system as claimed in claim 26, wherein the input image block and the corresponding reference image block each comprise m × n pixel data, and the variation intensity generation unit obtains the horizontal normalization reference value and the vertical normalization reference value with the following formulas:
D_Ihor(i,j) = Abs(I(i,j) - I(i-1,j)), i=1…m-1, j=0…n-1;
D_Iver(i,j) = Abs(I(i,j) - I(i,j-1)), j=1…n-1, i=0…m-1;
s_Ihor = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Ihor(i,j); and
s_Iver = Σ_{i=1}^{m-1} Σ_{j=1}^{n-1} D_Iver(i,j);
wherein,
I(i,j) denotes the (i,j)-th input pixel datum of the input image block;
Abs(.) denotes taking the absolute value;
D_Ihor(i,j) denotes the horizontal variation of the (i,j)-th input pixel datum of the input image block;
D_Iver(i,j) denotes the vertical variation of the (i,j)-th input pixel datum of the input image block;
s_Ihor denotes the horizontal normalization reference value; and
s_Iver denotes the vertical normalization reference value.
28. The image processing system as claimed in claim 25, wherein the variation intensity generation unit takes the greater of the normalized horizontal overall variation and the normalized vertical overall variation as the variation intensity corresponding to the input image block.
29. The image processing system as claimed in claim 25, wherein the variation intensity generation unit calculates the geometric mean of the normalized horizontal overall variation and the normalized vertical overall variation as the variation intensity corresponding to the input image block.
30. The image processing system as claimed in claim 20, wherein the image segmentation unit performs the image segmentation on the input image by edge detection.
31. The image processing system as claimed in claim 20, wherein the image segmentation unit performs the image segmentation on the input image by region growing.
32. The image processing system as claimed in claim 20, wherein when producing the depth information, the output unit obtains a variation intensity representative value corresponding to each segmented region from the variation intensities of the input image blocks substantially covered by that segmented region, and then produces the depth information from the variation intensity representative values corresponding to the segmented regions.
33. The image processing system as claimed in claim 32, wherein the output unit calculates the average of the variation intensities of the input image blocks substantially covered by a segmented region as the variation intensity representative value corresponding to that segmented region.
34. The image processing system as claimed in claim 32, wherein the output unit calculates the median of the variation intensities of the input image blocks substantially covered by a segmented region as the variation intensity representative value corresponding to that segmented region.
35. The image processing system as claimed in claim 32, wherein the output unit produces the depth information by linearly mapping the variation intensity representative values.
36. The image processing system as claimed in claim 32, wherein the output unit produces the depth information by nonlinearly mapping the variation intensity representative values.
37. The image processing system as claimed in claim 20, further comprising:
a three-dimensional display for receiving the input image and the corresponding depth information and producing a stereoscopic image with a stereo-vision effect.
38. The image processing system as claimed in claim 20, wherein the input unit further captures an original image and takes the luminance component of the original image as the input image.
CN200910136819XA 2009-04-21 2009-04-21 Image processing method for providing depth information and image processing system thereof Active CN101873506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910136819XA CN101873506B (en) 2009-04-21 2009-04-21 Image processing method for providing depth information and image processing system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910136819XA CN101873506B (en) 2009-04-21 2009-04-21 Image processing method for providing depth information and image processing system thereof

Publications (2)

Publication Number Publication Date
CN101873506A CN101873506A (en) 2010-10-27
CN101873506B true CN101873506B (en) 2012-01-25

Family

ID=42998123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910136819XA Active CN101873506B (en) 2009-04-21 2009-04-21 Image processing method for providing depth information and image processing system thereof

Country Status (1)

Country Link
CN (1) CN101873506B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120142A (en) * 2010-11-08 2012-06-21 Sony Corp Stereoscopic image data transmission device, stereoscopic image data transmission method, and stereoscopic image data reception device
TWI469088B (en) * 2010-12-31 2015-01-11 Ind Tech Res Inst Depth map generation module for foreground object and the method thereof
CN102572457A (en) * 2010-12-31 2012-07-11 财团法人工业技术研究院 Foreground depth map generation module and method thereof
TWI633520B (en) * 2011-04-06 2018-08-21 承景科技股份有限公司 A method, an apparatus and a computer programming for generating depth of 2-d images
TWI682295B (en) * 2018-11-05 2020-01-11 財團法人資訊工業策進會 Device and method for producing test data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1466737A (en) * 2000-08-09 2004-01-07 动态数字视距研究有限公司 Image conversion and encoding techniques
CN1669053A (en) * 2002-06-07 2005-09-14 动态数字视距研究有限公司 Improved conversion and encoding techniques
CN101272511A (en) * 2007-03-19 2008-09-24 华为技术有限公司 Method and device for acquiring image depth information and image pixel information

Also Published As

Publication number Publication date
CN101873506A (en) 2010-10-27

Similar Documents

Publication Publication Date Title
Huang et al. An advanced single-image visibility restoration algorithm for real-world hazy scenes
US9123115B2 (en) Depth estimation based on global motion and optical flow
TWI524734B (en) Method and device for generating a depth map
CN101287143B (en) Method for converting flat video to tridimensional video based on real-time dialog between human and machine
EP2755187A2 (en) 3d-animation effect generation method and system
CN101699512B (en) Depth generating method based on background difference sectional drawing and sparse optical flow method
EP2030172B1 (en) Method and apparatus for volume rendering using depth weighted colorization
CN102098528B (en) Method and device for converting planar image into stereoscopic image
TWI457853B (en) Image processing method for providing depth information and image processing system using the same
CN104966285B (en) A kind of detection method of salient region
KR20130102626A (en) Depth estimation based on global motion
EP3144898A1 (en) Method and system for determination of intrinsic images from two dimensional images in a video sequence
CN101873506B (en) Image processing method for providing depth information and image processing system thereof
Kuo et al. Depth estimation from a monocular view of the outdoors
CN106296744A (en) A kind of combining adaptive model and the moving target detecting method of many shading attributes
CN102930334A (en) Video recognition counter for body silhouette
CN103034983A (en) Defogging method based on anisotropic filtering
US8908994B2 (en) 2D to 3d image conversion
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
CN108550124B (en) Illumination compensation and image enhancement method based on bionic spiral
Hmue et al. Image enhancement and quality assessment methods in turbid water: A review article
Guo et al. Marine snow removal
Seitner et al. Trifocal system for high-quality inter-camera mapping and virtual view synthesis
Chengtao et al. Improved dark channel prior dehazing approach using adaptive factor
Wang et al. Task-driven image preprocessing algorithm evaluation strategy

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant