CN102572457A - Foreground depth map generation module and method thereof - Google Patents

Foreground depth map generation module and method thereof

Info

Publication number
CN102572457A
CN102572457A (application CN2011100430186A)
Authority
CN
China
Prior art keywords
image frame
segment
key image frame
shape
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100430186A
Other languages
Chinese (zh)
Inventor
刘楷哲
黄维嘉
吴俊德
王群元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI
Publication of CN102572457A
Legal status: Pending


Abstract

The invention discloses a foreground depth map generation method comprising the following steps: receiving image sequence data comprising a plurality of image frames; selecting at least one key image frame from the image sequence data; providing at least one first depth indication data and a shape of a first segment of the at least one key image frame; and executing a signal processing step with a microprocessor. The signal processing step includes: generating a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment; generating, from these data, a shape of a second segment in a first non-key image frame; and transferring the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, thereby generating at least one second depth indication data of the first non-key image frame.

Description

Foreground depth map generation module and method thereof
Technical field
The present invention relates to a depth map generation module and a method thereof for two-dimensional image sequence data, used to provide three-dimensional image information.
Background art
In recent years, as quality of life has improved, display technology has advanced continuously. To satisfy the demand for more realistic images, display technology has developed from two dimensions toward three. Beyond ordinary images and color, three-dimensional image content can also provide the visual experience of stereoscopic space.
One current method of generating three-dimensional images is realized by adding depth information. That is, by adding a depth map corresponding to an original two-dimensional image, at least two different viewing angles for the left and right eyes can be obtained by simulation to form a stereoscopic image, or the content can be played on a multi-view three-dimensional display to obtain a 3D effect.
Three-dimensional displays that can simulate stereoscopic visual effects are now gradually being commercialized. However, for want of three-dimensional image information that such displays can use, their application and popularization are restricted, since most presentation content in the current information environment is two-dimensional. It is therefore necessary to propose a depth map generation module and method for two-dimensional image sequence data, so as to provide three-dimensional image information.
Summary of the invention
The object of the present invention is to provide a foreground depth map generation module and a method thereof, to solve the prior-art problem that two-dimensional image sequence data cannot be efficiently used to provide three-dimensional image information for a three-dimensional display.
One embodiment of the foreground depth map generation module of the present invention comprises a microprocessor and a storage unit. The storage unit is coupled to the microprocessor and stores the data the microprocessor operates on. The microprocessor comprises a sequencing unit, a data providing unit, a shape generation unit, a segmentation motion vector generation unit, and a depth transfer unit. The sequencing unit receives the image sequence data and selectively adjusts its order according to the operating mode of the foreground depth map generation module to produce converted image sequence data, which comprise at least one key image frame and a first non-key image frame. The data providing unit provides at least one first depth indication data and a shape of a first segment of the at least one key image frame. The segmentation motion vector generation unit produces, according to the color information of the key image frame, the color information of the first non-key image frame, and the shape of the first segment of the key image frame, a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment. The shape generation unit produces a shape of a second segment in the first non-key image frame according to the segmentation motion vector, the shape of the first segment, the color information of the deformed key image frame, the deformed shape of the first segment, the color information of the key image frame, and the color information of the first non-key image frame, wherein the first and second segments correspond to a same object of the key image frame. The depth transfer unit transfers the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, so as to produce at least one second depth indication data of the first non-key image frame.
One embodiment of the method of the present invention for producing a foreground depth map in image sequence data comprises the following steps: receiving the image sequence data, which comprise a plurality of image frames, each including at least one object; selecting at least one key image frame from the image sequence data; providing at least one first depth indication data and a shape of a first segment of the at least one key image frame; and performing the following steps with a microprocessor: producing, according to the color information of the key image frame, the color information of a first non-key image frame, and the shape of the first segment of the key image frame, a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment; producing a shape of a second segment in the first non-key image frame according to the segmentation motion vector, the shape of the first segment, the color information of the deformed key image frame, the deformed shape of the first segment, the color information of the key image frame, and the color information of the first non-key image frame, wherein the first and second segments correspond to a same object of the key image frame; and transferring the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, so as to produce at least one second depth indication data of the first non-key image frame.
Compared with the prior art, the beneficial technical effect of the present invention is as follows. When segmenting an object, the ratio of shape to color used in determining the shape of the second segment is decided not only from the color information of the foreground and background, but also from the segmentation motion vector information. Furthermore, when bidirectional object segmentation and depth interpolation are used, not only can an object itself carry information of multiple depths, but the depth of each object is also obtained by linear interpolation between the first and the last depth maps, achieving a better and more reasonable depth image.
Description of drawings
Fig. 1 is a block diagram of the foreground depth map generation module of an embodiment of the invention;
Fig. 2 is a flow chart of the foreground depth map generation method of an embodiment of the invention;
Fig. 3 shows a key image frame and a non-key image frame;
Fig. 4 is a block diagram of the segmentation motion vector generation unit of an embodiment of the invention;
Fig. 5A–Fig. 5C illustrate the steps of processing an image with the segmentation motion vector generation unit;
Fig. 6 is a block diagram of the shape generation unit of an embodiment of the invention;
Fig. 7 shows the operation of the shape generation unit of an embodiment of the invention;
Fig. 8 is a block diagram of the microprocessor of the foreground depth map generation module operating in the bidirectional mode;
Fig. 9 is a block diagram of the depth map generation module of an embodiment of the invention; and
Fig. 10 is a block diagram of the depth map generation module of another embodiment of the invention.
Reference numerals:
10, 10' foreground depth map generation module
11 storage unit
100, 100' microprocessor
102 image depth map generation module
110 first segment
112 foreground object
114 shape
120 second segment
12, 12' sequencing unit
14, 14' data providing unit
16, 16' shape generation unit
168 windowing and first sampling unit
170 window shifting unit
172 second sampling unit
174 profile generation unit
18, 18' segmentation motion vector generation unit
182 search unit
184 affine transform unit
186 first vector computing unit
188 second vector computing unit
20, 20' depth transfer unit
22, 22' depth repair unit
24 depth interpolation unit
92–94 windows
96 background depth map generation module
98 depth integration unit
90, 200 depth map generation modules
S10–S70 steps
Embodiments
Embodiments of the invention are described in detail below in conjunction with the drawings, wherein identical or similar components are indicated with similar reference numerals.
To explain the depth map generation method of the present invention more fluently, the foreground depth map generation module that carries out the method is described first. Fig. 1 is a block diagram of the foreground depth map generation module 10 of an embodiment of the invention. The foreground depth map generation module 10 receives two-dimensional image sequence data IMG_SEQ composed of a plurality of image frames. Referring to Fig. 1, the foreground depth map generation module 10 comprises a microprocessor 100 and a storage unit 11. The storage unit 11 is coupled to the microprocessor 100 and stores the data the microprocessor 100 operates on. The microprocessor 100 comprises a sequencing unit 12, a data providing unit 14, a shape generation unit 16, a segmentation motion vector generation unit 18, a depth transfer unit 20, and a depth repair unit 22. The foreground depth map generation module 10 selectively operates in one of three operating modes: a forward mode, a reverse mode, and a bidirectional mode. According to the operating mode, the foreground depth map generation module 10 automatically produces a depth map of the image sequence data IMG_SEQ, so that the two-dimensional image sequence data IMG_SEQ can be viewed as a three-dimensional image on a 3D display device.
Fig. 2 is a flow chart of a method of an embodiment of the invention for producing a foreground depth map in image sequence data corresponding to a same scene. The method comprises: receiving the image sequence data (step S10), which comprise a plurality of image frames, each including at least one object; selecting at least one key image frame from the image sequence data (step S20); providing at least one first depth indication data and a shape of a first segment of the at least one key image frame (step S30); producing, according to the color information of the key image frame, the color information of a first non-key image frame, and the shape of the first segment of the key image frame, a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment (step S40); producing a shape of a second segment in the first non-key image frame according to the segmentation motion vector, the shape of the first segment, the color information of the deformed key image frame, the deformed shape of the first segment, the color information of the key image frame, and the color information of the first non-key image frame, wherein the first and second segments correspond to a same object of the key image frame (step S50); transferring the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, so as to produce at least one second depth indication data of the first non-key image frame (step S60); and compensating the second depth indication data in the first non-key image frame according to the shape of the second segment, the color information of the key image frame, and the color information of the first non-key image frame, so as to produce at least one third depth indication data of the second segment in the first non-key image frame (step S70). Steps S40 to S70 are performed by a microprocessor. The details of the depth map generation method are described below with reference to Fig. 1.
Referring to Fig. 2, the sequencing unit 12 receives image sequence data IMG_SEQ in step S10. The image sequence data IMG_SEQ is a segment of an image sequence of a same scene. For brevity, in the present embodiment the image sequence data IMG_SEQ consists of five serially arranged image frames IMG_1, IMG_2, IMG_3, IMG_4, and IMG_5. The sequencing unit 12 selectively adjusts the output order of the image sequence data IMG_SEQ according to the operating mode of the foreground depth map generation module 10 to produce converted image sequence data, which comprise a key image frame and a non-key image frame. According to one embodiment of the invention, when the foreground depth map generation module 10 operates in the forward mode, the key image frame is the first image frame IMG_1 of the image sequence data IMG_SEQ, and the non-key image frame is the second image frame IMG_2. According to another embodiment, when the module operates in the reverse mode, the key image frame is the last image frame IMG_5, and the non-key image frame is the second-to-last image frame IMG_4, as sketched below.
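For illustration, the sequencing unit's reordering can be sketched as follows in Python; the list representation, the mode strings, and the bidirectional ordering (both key frames first) are assumptions of this sketch, not details fixed by the patent.

```python
# Minimal sketch of the sequencing unit (12), assuming a scene of five frames.
def sequence(frames, mode):
    if mode == "forward":        # key frame IMG_1 first, then IMG_2, ...
        return frames
    if mode == "reverse":        # key frame is the last frame IMG_5
        return frames[::-1]
    if mode == "bidirectional":  # assumed order: both key frames first
        return [frames[0], frames[-1]] + frames[1:-1]
    raise ValueError("unknown mode: " + mode)

print(sequence(["IMG_1", "IMG_2", "IMG_3", "IMG_4", "IMG_5"], "reverse"))
# ['IMG_5', 'IMG_4', 'IMG_3', 'IMG_2', 'IMG_1']
```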
In one embodiment, when the foreground depth map generation module 10 operates in the forward mode, the sequencing unit 12 outputs converted image sequence data comprising a key image frame IMG_1 to the data providing unit 14. The data providing unit 14 provides at least one first depth indication data and shape information of a first segment in the key image frame IMG_1. Referring to Fig. 3, the key image frame IMG_1 comprises a foreground object 110 and another foreground object 112; the first segment is the foreground object 110 in the key image frame IMG_1. The first segment 110 has at least one depth indication data DEP_1, for example a depth indication data of the left arm of the foreground object 110 or a depth indication data of its right arm. To automatically produce the shape of a second segment 120 of the non-key image frame IMG_2, where the second segment 120 corresponds to the foreground object 110 in the key image frame IMG_1, the user must first manually produce a shape 114 of the first segment 110 at the data providing unit 14, as shown in Fig. 3.
After the shape 114 of the first segment 110 is produced, the data providing unit 14 outputs the shape CON_1 of the first segment to the segmentation motion vector generation unit 18. The segmentation motion vector generation unit 18 produces, according to the color information of the key image frame IMG_1, the color information of the non-key image frame IMG_2, and the shape CON_1 of the first segment in the key image frame IMG_1, a segmentation motion vector VEC_2, a deformed key image frame IMG_1', and a deformed shape CON_1' of the first segment.
Fig. 4 is a block diagram of the segmentation motion vector generation unit 18 of an embodiment of the invention. The segmentation motion vector generation unit 18 comprises a search unit 182, an affine transform unit 184, a first vector computing unit 186, and a second vector computing unit 188. The search unit 182 searches for the coordinate positions of feature points common to the key image frame IMG_1 and the first non-key image frame IMG_2, particularly feature points inside the shape CON_1 of the first segment of the key image frame IMG_1. According to the differences of these feature point coordinates, the affine transform unit 184 performs an affine transform, for example rotation, translation, or scaling, on the key image frame IMG_1 and the shape 114 of the first segment 110, so as to produce the deformed key image frame IMG_1', the deformed shape CON_1' of the first segment 110, and a motion vector VEC_1. The first vector computing unit 186 then computes optical flow between the deformed key image frame IMG_1' and the non-key image frame IMG_2 to obtain a relative motion vector VEC_1' of the deformed key image frame IMG_1' with respect to the non-key image frame IMG_2. Then, after receiving the deformed shape CON_1' of the first segment 110 and the relative motion vector VEC_1' of the whole image, the second vector computing unit 188 adds, for each pixel inside the deformed shape CON_1' of the first segment 110, the motion vector VEC_1 and the relative motion vector VEC_1', so as to obtain the segmentation motion vector VEC_2 of each pixel inside the deformed shape CON_1' of the first segment 110.
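The four units of Fig. 4 can be approximated with OpenCV as in the sketch below. The frames are assumed to be 8-bit BGR images and the shape CON_1 an 8-bit mask; ORB matching, estimateAffinePartial2D, and Farneback optical flow are stand-ins, since the patent does not fix the feature search or flow algorithm.

```python
import cv2
import numpy as np

def split_motion_vector(key_img, nonkey_img, shape_mask):
    """Return (VEC_2, deformed key frame IMG_1', deformed shape CON_1')."""
    h, w = shape_mask.shape
    # Search unit 182: common feature points, favouring those inside CON_1.
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(key_img, shape_mask)
    k2, d2 = orb.detectAndCompute(nonkey_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Affine transform unit 184: rotation/translation/scaling -> VEC_1.
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    warped_img = cv2.warpAffine(key_img, M, (w, h))       # IMG_1'
    warped_mask = cv2.warpAffine(shape_mask, M, (w, h))   # CON_1'
    # First vector computing unit 186: flow IMG_1' -> IMG_2 gives VEC_1'.
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(warped_img, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(nonkey_img, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Second vector computing unit 188: VEC_2 = VEC_1 + VEC_1' per pixel.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys], axis=-1).astype(np.float32)
    affine_motion = (pts @ M[:, :2].T + M[:, 2]) - pts    # per-pixel VEC_1
    vec2 = affine_motion + flow
    vec2[warped_mask == 0] = 0        # VEC_2 defined only inside CON_1'
    return vec2, warped_img, warped_mask
```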
Fig. 5A–Fig. 5C illustrate the steps of processing an image with the segmentation motion vector generation unit 18. Fig. 5A shows the depth indication data DEP_1 of the first segment 110. After transformation by the segmentation motion vector VEC_2, the depth indication data DEP_2 of the second segment 120 still has incomplete parts, as shown in Fig. 5B.
After the shape CON_2 of the second segment 120 and the depth indication data DEP_2 of the second segment 120 are obtained, the depth repair unit 22 repairs the depth indication data DEP_2 of the second segment 120 in the non-key image frame IMG_2 according to the shape CON_2 of the second segment 120, the color information of the key image frame IMG_1, and the color information of the first non-key image frame IMG_2, so as to produce the depth indication data DEP_3 of the second segment 120 in the non-key image frame IMG_2, as shown in Fig. 5C.
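One simple way to realize this repair step is to inpaint the holes of DEP_2 inside CON_2. The sketch below uses Telea inpainting as a stand-in for the patent's color-guided repair, assuming an 8-bit depth map in which 0 marks missing values.

```python
import cv2
import numpy as np

def repair_depth(dep2, shape_mask):
    """Fill holes of DEP_2 inside CON_2, yielding DEP_3 (Fig. 5C)."""
    holes = ((dep2 == 0) & (shape_mask > 0)).astype(np.uint8)
    dep3 = cv2.inpaint(dep2, holes, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    dep3[shape_mask == 0] = 0     # keep the depth confined to the segment
    return dep3
```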
Referring again to Fig. 1, after the segmentation motion vector VEC_2 is produced, the shape generation unit 16 automatically produces shape information CON_2 of the second segment 120 according to the color information of the key image frame IMG_1, the color information of the non-key image frame IMG_2, the segmentation motion vector VEC_2, the shape CON_1 of the first segment, the color information of the deformed key image frame IMG_1', and the deformed shape CON_1' of the first segment. As shown in Fig. 6, the shape generation unit 16 comprises a windowing and first sampling unit 168, a window shifting unit 170, a second sampling unit 172, and a profile generation unit 174.
Fig. 7 shows the operation of the shape generation unit 16 of an embodiment of the invention. Referring to Fig. 7, the windowing and first sampling unit 168 sets up a plurality of windows 92 along the shape 114 of the first segment 110 in the key image frame IMG_1. The windows 92 are rectangular, may be of the same or different sizes, and may overlap one another. The windowing and first sampling unit 168 then samples, within the windows 92, a set of shape and color information according to the shape 114 of the first segment 110. The sampled color information is classified as foreground or background color information according to whether it falls inside or outside the shape 114, and the sampled shape information likewise defines the foreground shape information within each window according to whether it falls inside or outside the shape 114.
Then, using the segmentation motion vectors produced by the segmentation motion vector generation unit 18, the window shifting unit 170 moves the windows 92 onto the non-key image frame IMG_2 according to the foreground segmentation motion vector information within each window, setting up a plurality of windows 94. The second sampling unit 172 then samples the foreground and background color information within the windows 94 according to the deformed shape CON_1' of the first segment and, with reference to the foreground and background color information sampled by the windowing and first sampling unit 168, forms a set of foreground and background color information. The color information sampled by the second sampling unit 172 within the windows 94 is divided into foreground and background color information, and the sampled shape information defines the foreground shape information within each window according to whether it falls inside or outside the deformed shape CON_1' of the first segment. The profile generation unit 174 can then perform a weighting operation on the shape and color information.
In the weighting operation, the ratio of the foreground shape to the color in the shape of the second segment is decided according to whether the foreground and background color information can be clearly distinguished. If the foreground and background colors can be clearly distinguished, the ratio taken up by the foreground shape information is reduced; if they cannot, the ratio taken up by the foreground shape is increased. In addition, the motion vector information may also be consulted when deciding the color and shape ratios within a window: in an embodiment of the invention, when the motion vector is large, the ratio taken up by the foreground shape information is reduced.
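These weighting rules can be sketched as follows, assuming each window 94 carries the warped foreground-shape prior (values 0..1), normalized foreground/background color histograms, and the magnitude of the window's segmentation motion vector. The histogram-intersection separability measure and the 0.5 scale factors are assumptions of this sketch; the patent states only the direction of the adjustments.

```python
import numpy as np

def window_foreground_prob(shape_prior, fg_hist, bg_hist, pixel_bins, motion_mag):
    """Blend the shape prior with a color model inside one window 94."""
    p_fg = fg_hist[pixel_bins]              # color likelihoods per pixel
    p_bg = bg_hist[pixel_bins]
    color_prob = p_fg / (p_fg + p_bg + 1e-6)
    # Separability is near 1 when the fg/bg color histograms barely overlap.
    separability = 1.0 - np.minimum(fg_hist, bg_hist).sum()
    # Clearly distinguishable colors, or a large motion vector, both shrink
    # the weight of the shape information (the patent's two rules).
    w_shape = (1.0 - 0.5 * separability) * (1.0 - 0.5 * min(motion_mag / 10.0, 1.0))
    return w_shape * shape_prior + (1.0 - w_shape) * color_prob
```

Thresholding the blended probability at 0.5 in every window and merging the overlapping windows would then yield the shape CON_2 of the second segment.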
After the above steps are repeated, the shape of the second segment 120 of the first non-key image frame is obtained. The way of producing the shape of the second segment 120, however, is not limited to the foregoing embodiment; other image processing methods may also be used to produce the shape of the second segment 120.
On the other hand, after the segmentation motion vector VEC_2 is produced, the depth transfer unit 20 transfers the at least one depth indication data DEP_1 of the first segment 110 in the key image frame IMG_1 to the non-key image frame IMG_2 according to the segmentation motion vector VEC_2, so as to produce at least one depth indication data DEP_2 of the non-key image frame IMG_2. The depth indication data DEP_1 is provided by the data providing unit 14. In the above steps, because the generation of the segmentation motion vector VEC_2 is easily affected by the color information of the image and can be quite inaccurate, when the depth indication data DEP_1 of the first segment 110 is transferred to the non-key image frame IMG_2, the depth indication data DEP_2 of the second segment 120 may fall outside the shape CON_2 of the second segment.
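The depth transfer unit can be sketched as a forward warp of DEP_1 along VEC_2. The integer rounding and last-write-wins collisions below are simplifications, and they illustrate why the transferred DEP_2 can land outside CON_2 as described above.

```python
import numpy as np

def transfer_depth(dep1, vec2, shape_mask):
    """Forward-warp the depth of the first segment along VEC_2 into IMG_2."""
    h, w = dep1.shape
    dep2 = np.zeros_like(dep1)
    ys, xs = np.nonzero(shape_mask)
    tx = np.clip(np.round(xs + vec2[ys, xs, 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + vec2[ys, xs, 1]).astype(int), 0, h - 1)
    dep2[ty, tx] = dep1[ys, xs]
    return dep2
```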
In addition, the depth indication data of a segment in any other non-key image frame can be produced from the accurate shape of the segment of the previous image frame, the previous image frame, and the current non-key image frame. For example, the depth indication data of the segment of the non-key image frame IMG_3 can be produced from the accurate shape of the second segment 120, the color information of the previous image frame IMG_2, and the color information of the current non-key image frame IMG_3.
According to another embodiment of the invention, besides the forward and reverse modes, the foreground depth map generation module 10 can also operate in a bidirectional mode. Fig. 8 is a block diagram of the microprocessor 100' of the foreground depth map generation module 10' operating in the bidirectional mode. In this mode, the sequencing unit 12' outputs converted image sequence data comprising a key image frame IMG_1 and another key image frame IMG_5 to the data providing unit 14'. The data providing unit 14' thus produces the first depth indication data DEP_1 of the first segment of the key image frame IMG_1, the shape CON_1 of the first segment of the key image frame, at least one second depth indication data DEP_5 of a segment of the other key image frame IMG_5 (which corresponds to the foreground object 110 in the key image frame IMG_1), and the shape CON_5 of that segment of the other key image frame IMG_5. After the depth repair unit 22' receives the color information of the key image frame IMG_1, the color information of the first non-key image frame IMG_2, the transferred depth indication data DEP_2, and the shape CON_2, it produces the depth indication data DEP_3 of the second segment 120 in the non-key image frame IMG_2. On the other hand, after the depth repair unit 22' receives the color information of the other key image frame IMG_5, the color information of the first non-key image frame IMG_2, the second depth indication data DEP_6 of the segment, and the shape information CON_6 of the segment of the key image frame IMG_5, it produces the depth indication data DEP_7 of the second segment 120 in the non-key image frame IMG_2.
Referring to Fig. 8, the foreground depth map generation module 10' further comprises a depth interpolation unit 24. The depth interpolation unit 24 produces at least one third depth indication data DEP_8 through formula (1):
DEP_8=α×DEP_3+(1-α)×DEP_7 (1)
where α = (M − N)/M. In the present embodiment, M = 5 and N = 1, so α = 0.8.
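Formula (1) transcribes directly into code; with M = 5 and N = 1, α = 0.8 for IMG_2, so the forward result DEP_3 dominates near the first key frame and the backward result DEP_7 dominates near the last.

```python
import numpy as np

def interpolate_depth(dep3, dep7, M=5, N=1):
    """Formula (1): DEP_8 = alpha * DEP_3 + (1 - alpha) * DEP_7."""
    alpha = (M - N) / M            # 0.8 for IMG_2 in this embodiment
    return alpha * dep3.astype(np.float32) + (1 - alpha) * dep7.astype(np.float32)
```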
Therefore, in the bidirectional mode, the non-key image frame IMG_2 can produce a new depth indication data DEP_8 from the information of the first depth indication data DEP_1 of the key image frame IMG_1 and the depth indication data DEP_5 of the other key image frame IMG_5. Following similar steps, the depth indication data of the other non-key image frames IMG_3 and IMG_4 can be obtained in turn.
As stated above, the foreground depth map generation module 10 can produce the depth indication data of the foreground objects in the non-key image frames. Once the depth indication data of the foreground objects is obtained, integrating it with the depth indication data of the background objects yields the depth map of the non-key image frames. Fig. 9 is a block diagram of the depth map generation module 90 of an embodiment of the invention. Referring to Fig. 9, the depth map generation module 90 comprises a foreground depth map generation module 10, a background depth map generation module 96, and a depth integration unit 98. After receiving the image sequence data IMG_SEQ, the foreground depth map generation module 10 produces the depth indication data DEP_FG of the foreground objects, and the background depth map generation module 96 produces the depth indication data DEP_BG of the background objects. After the depth integration unit 98 integrates the foreground depth indication data DEP_FG and the background depth indication data DEP_BG, the depth map DEP_FL of the image sequence data IMG_SEQ is produced.
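The patent does not fix the merge rule of the depth integration unit 98; compositing the foreground depth over the background depth wherever a foreground segment is defined is one plausible sketch.

```python
import numpy as np

def integrate_depth(dep_fg, dep_bg, fg_mask):
    """Composite foreground depth DEP_FG over background depth DEP_BG."""
    dep_fl = dep_bg.copy()
    dep_fl[fg_mask > 0] = dep_fg[fg_mask > 0]
    return dep_fl
```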
In addition, to emphasize the depth indication data of the foreground objects, the foreground depth map generation module 10 can also cooperate with an image depth generation unit to produce the depth map of the image sequence data. Fig. 10 is a block diagram of the depth map generation module 200 of an embodiment of the invention. Referring to Fig. 10, the depth map generation module 200 comprises a foreground depth map generation module 10, an image depth map generation module 102, and a depth integration unit 98. After receiving the image sequence data IMG_SEQ, the foreground depth map generation module 10 produces the depth indication data DEP_FG of the foreground objects, and the image depth map generation module 102 produces the depth indication data DEP_IMG of the image sequence data IMG_SEQ. After the depth integration unit 98 integrates the foreground depth indication data DEP_FG and the depth indication data DEP_IMG, the depth map DEP_FL of the image sequence data IMG_SEQ is produced.
One embodiment of the image depth map generation module 102 can produce image depth information by the method proposed in the earlier application "Method for generating depth maps from monocular images and systems using the same" (PCT/CN2009/075007, filed November 18, 2009), briefly described below. The image depth map generation module 102 first selects an initial depth background. There are multiple ways of producing this initial depth background, and the choice can be adjusted according to the source content.
A bilateral filter is then used to carve out depth details. Note that to carve the object details of the image source into the initial depth background, the filter range must be very large, usually covering 1/36 to 1/4 of the image size; otherwise the result is merely an edge-preserving blur (a sketch of this filtering follows the three steps below). Depth cues from motion parallax can then be added to obtain a more accurate depth map. This adding step comprises the following three steps:
Step (1): use optical flow to find motion vectors. Optical flow is a method of computing the motion vector of each pixel between two successive frames. The motion vectors found with optical flow alone carry much noise, so steps (2) and (3) are needed to remove the noise effectively and obtain a stable result.
Step (2): use an image segmentation technique to produce motion vectors. The image segmentation technique can refer to the relationship between successive frames, and therefore knows which segments of the two frames belong to the same object.
Step (3): correct the depth background with the motion vectors.
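The large-kernel bilateral filtering mentioned before these steps can be sketched as follows. Plain cv2.bilateralFilter on the depth map is only an approximation of the cited method's image-guided filtering, and the kernel of 1/6 of the image width (1/36 of the area) is one choice within the stated range; at this size the filter is very slow and serves purely as an illustration.

```python
import cv2

def refine_depth_background(init_depth, d_frac=6, sigma_color=25.0):
    """Carve depth detail into the initial depth background (8-bit map)."""
    d = max(init_depth.shape[1] // d_frac, 5)   # very large spatial support
    # cv2.ximgproc.jointBilateralFilter, guided by the color image, would be
    # closer to the cited method if opencv-contrib is available.
    return cv2.bilateralFilter(init_depth, d, sigma_color, float(d))
```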
Note that the way the image depth map generation module 102 produces depth information for the whole image is not limited to the foregoing embodiment; other image processing methods can also produce the depth information of the background or of the whole image.
According to the output information of the image depth map generation module 102 or the output information of the background depth map generation module 96, the depth integration unit 98 integrates the output information of the foreground depth map generation module 10, or that of the foreground depth map generation module 10', of the present invention, and thereby produces the depth map DEP_FL of the image sequence data IMG_SEQ.
By matching an original two-dimensional image with its corresponding depth map, and by combining multi-view image synthesis technology with the display formats of different three-dimensional displays, images of different viewing angles can be produced. This method is commonly called depth-image-based rendering (DIBR). The finally interlaced images can produce a 3D effect on the display.
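A minimal DIBR sketch: shift each pixel horizontally by a disparity derived from its depth to synthesize one new view. The linear depth-to-disparity mapping, the last-write-wins occlusion handling, and the absence of hole filling are simplifications of this sketch, not part of the patent.

```python
import numpy as np

def render_view(image, depth, max_disp=16):
    """Synthesize one horizontally shifted view from an image + depth map."""
    h, w = depth.shape
    disp = (depth.astype(np.float32) / 255.0 * max_disp).astype(int)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    tx = np.clip(xs - disp, 0, w - 1)
    view = np.zeros_like(image)
    view[ys, tx] = image          # last-write-wins where pixels collide
    return view
```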
The technical contents and features of the present invention are disclosed above; however, those skilled in the art may still make substitutions and modifications based on the teaching and disclosure of the invention without departing from its spirit. Therefore, the protection scope of the present invention should not be limited to the disclosed embodiments, but should include the various substitutions and modifications that do not depart from the invention, as covered by the following claims.

Claims (24)

1. A foreground depth map generation module for receiving image sequence data corresponding to a same scene so as to produce depth indication data of a plurality of image frames in the image sequence data, characterized in that the foreground depth map generation module comprises:
a microprocessor, comprising:
a sequencing unit for receiving the image sequence data and selectively adjusting the order of the image sequence data according to the operating mode of the foreground depth map generation module to produce converted image sequence data, wherein the converted image sequence data comprise at least one key image frame and a first non-key image frame;
a data providing unit for providing at least one first depth indication data and a shape of a first segment of the at least one key image frame;
a segmentation motion vector generation unit for producing, according to the color information of the key image frame, the color information of the first non-key image frame, and the shape of the first segment of the key image frame, a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment;
a shape generation unit for producing a shape of a second segment in the first non-key image frame according to the segmentation motion vector, the shape of the first segment, the color information of the deformed key image frame, the deformed shape of the first segment, the color information of the key image frame, and the color information of the first non-key image frame, wherein the first and second segments correspond to a same object of the key image frame; and
a depth transfer unit for transferring the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, so as to produce at least one second depth indication data of the first non-key image frame; and
a storage unit, coupled to the microprocessor, for storing the data operated on by the microprocessor.
2. The foreground depth map generation module according to claim 1, characterized in that the microprocessor further comprises a depth repair unit for compensating the second depth indication data in the first non-key image frame according to the shape of the second segment, the color information of the key image frame, and the color information of the first non-key image frame, so as to produce at least one third depth indication data of the second segment in the first non-key image frame.
3. The foreground depth map generation module according to claim 1, characterized in that when the foreground depth map generation module operates in a forward mode, the key image frame is the first image frame of the image sequence data, and the first non-key image frame is the second image frame of the image sequence data.
4. The foreground depth map generation module according to claim 1, characterized in that when the foreground depth map generation module operates in a reverse mode, the key image frame is the last image frame of the image sequence data, and the first non-key image frame is the second-to-last image frame of the image sequence data.
5. The foreground depth map generation module according to claim 1, characterized in that the shape generation unit comprises:
a windowing and first sampling unit for setting up a plurality of first windows along the shape of the first segment in the key image frame and sampling shape and color information within the first windows;
a window shifting unit for moving the first windows onto the first non-key image frame according to the segmentation motion vector information of the first segment of the key image frame within the first windows, so as to set up a plurality of second windows;
a second sampling unit for sampling shape information within the second windows according to the deformed shape of the first segment so as to form a set of foreground shape information, and for sampling the foreground and background color information within the second windows according to the deformed shape of the first segment, with reference to the foreground and background color information sampled within the first windows, so as to form a set of foreground and background color information; and
a profile generation unit for performing a weighting operation on the shape and color information sampled within the second windows to produce the shape of the second segment.
6. The foreground depth map generation module according to claim 5, characterized in that, in the weighting operation, the profile generation unit decides the ratio of the foreground shape in the shape of the second segment according to the foreground color information and the background color information.
7. The foreground depth map generation module according to claim 5, characterized in that, in the weighting operation, the profile generation unit decides the ratio of the foreground shape or color in the shape of the second segment according to the segmentation motion vector information.
8. The foreground depth map generation module according to claim 1, characterized in that the segmentation motion vector generation unit comprises:
a search unit for searching the coordinate positions of common feature points inside the shape of the first segment of the key image frame and in the first non-key image frame;
an affine transform unit for performing an affine transform according to the differences of the feature point coordinates, so as to produce the deformed key image frame, the deformed shape of the first segment, and a motion vector;
a first vector computing unit for computing on the deformed key image frame and the non-key image frame to obtain a relative motion vector of the deformed key image frame with respect to the non-key image frame; and
a second vector computing unit for receiving the deformed shape of the first segment and the relative motion vector, and for adding the motion vector and the relative motion vector of each pixel inside the deformed shape of the first segment, so as to produce the segmentation motion vector of each pixel inside the deformed shape of the first segment;
wherein the segmentation motion vector defines the movement from the first segment to the second segment.
9. The foreground depth map generation module according to claim 1, characterized in that at least one depth indication data of a third segment in a second non-key image frame adjacent to the first non-key image frame is produced according to the shape of the second segment, the first non-key image frame, and the second non-key image frame, the third and second segments corresponding to a same object of the key image frame.
10. The foreground depth map generation module according to claim 1, characterized in that when the foreground depth map generation module operates in a bidirectional mode, the converted image sequence data comprise a first key image frame and a second key image frame, the first key image frame being the first image frame of the image sequence data, the second key image frame being the last image frame of the image sequence data, and the first non-key image frame being the second image frame of the image sequence data.
11. The foreground depth map generation module according to claim 10, characterized by further comprising a depth interpolation unit, wherein when the foreground depth map generation module operates in the bidirectional mode, the data providing unit provides at least one depth indication data of the first segment of the first key image frame and at least one depth indication data of a fourth segment of the second key image frame, the fourth and first segments corresponding to a same object of the key image frame, and the depth interpolation unit produces at least one fourth depth indication data by linear interpolation of the information computed from the depth indication data of the first segment and the information computed from the depth indication data of the fourth segment.
12. A method of producing a foreground depth map in image sequence data corresponding to a same scene, characterized by comprising:
receiving the image sequence data, the image sequence data comprising a plurality of image frames, each image frame including at least one object;
selecting at least one key image frame and a first non-key image frame from the image sequence data;
providing at least one first depth indication data and a shape of a first segment of the at least one key image frame; and
performing the following steps by a microprocessor:
producing, according to the color information of the key image frame, the color information of the first non-key image frame, and the shape of the first segment of the key image frame, a segmentation motion vector, a deformed key image frame, and a deformed shape of the first segment;
producing a shape of a second segment in the first non-key image frame according to the segmentation motion vector, the shape of the first segment, the color information of the deformed key image frame, the deformed shape of the first segment, the color information of the key image frame, and the color information of the first non-key image frame, wherein the first and second segments correspond to a same object of the key image frame; and
transferring the at least one first depth indication data of the at least one key image frame to the first non-key image frame according to the segmentation motion vector, so as to produce at least one second depth indication data of the first non-key image frame.
13. The method according to claim 12, characterized in that the steps performed by the microprocessor further comprise:
compensating the second depth indication data in the first non-key image frame according to the shape of the second segment, the color information of the key image frame, and the color information of the first non-key image frame, so as to produce at least one third depth indication data of the second segment in the first non-key image frame.
14. The method according to claim 12, characterized in that when operating in a forward mode, the key image frame is the first image frame of the image sequence data, and the first non-key image frame is the second image frame of the image sequence data.
15. The method according to claim 12, characterized in that when operating in a reverse mode, the key image frame is the last image frame of the image sequence data, and the first non-key image frame is the second-to-last image frame of the image sequence data.
16. The method according to claim 12, characterized in that the step of producing the segmentation motion vector comprises:
searching the coordinate positions of common feature points inside the shape of the first segment of the key image frame and in the first non-key image frame;
performing an affine transform according to the differences of the feature point coordinates, so as to produce the deformed key image frame, the deformed shape of the first segment, and a motion vector;
computing on the deformed key image frame and the non-key image frame to obtain a relative motion vector of the deformed key image frame with respect to the non-key image frame; and
receiving the deformed shape of the first segment and the relative motion vector, and adding the motion vector and the relative motion vector of each pixel inside the deformed shape of the first segment, so as to produce the segmentation motion vector of each pixel inside the deformed shape of the first segment;
wherein the segmentation motion vector defines the movement from the first segment to the second segment.
17. The method according to claim 12, characterized in that the step of producing the shape of the second segment in the first non-key image frame comprises:
setting up a plurality of first windows along the shape of the first segment in the key image frame;
sampling shape and color information within the first windows;
moving the first windows onto the first non-key image frame according to the segmentation motion vector information of the first segment of the key image frame within the first windows, so as to set up a plurality of second windows;
sampling shape and color information within the second windows according to the deformed shape of the first segment, and, with reference to the foreground and background color information sampled within the first windows, forming a set of foreground shape and color information and background color information; and
performing a weighting operation on the shape and color information sampled within the second windows to produce the shape of the second segment.
18. The method according to claim 17, characterized in that the weighting operation further comprises:
deciding the ratio of the foreground shape or color in the shape of the second segment according to the foreground color information and the background color information.
19. The method according to claim 17, characterized in that the weighting operation further comprises:
deciding the ratio of the foreground shape or color in the shape of the second segment according to the segmentation motion vector information.
20. The method according to claim 12, characterized in that at least one depth indication data of a third segment in a second non-key image frame adjacent to the first non-key image frame is produced according to the shape of the second segment, the first non-key image frame, and the second non-key image frame, the third and second segments corresponding to a same object of the key image frame.
21. The method according to claim 12, characterized in that when operating in a bidirectional mode, the converted image sequence data further comprise another key image frame, the key image frame being the first image frame of the image sequence data, the other key image frame being the last image frame of the image sequence data, and the first non-key image frame being the second image frame of the image sequence data.
22. The method according to claim 21, characterized in that the steps performed by the microprocessor further comprise:
providing at least one depth indication data of the first segment in the key image frame;
providing at least one depth indication data of a fourth segment in the other key image frame, the fourth segment corresponding to the same object of the key image frame as the first segment; and
producing at least one fourth depth indication data by linear interpolation of the information computed from the depth indication data of the first segment and the information computed from the depth indication data of the fourth segment.
23. The method according to claim 12, characterized in that the steps performed by the microprocessor further comprise:
receiving the image sequence data to produce depth indication data of background objects; and
integrating the at least one second depth indication data and the depth indication data of the background objects to produce a depth map of the image sequence data.
24. The method according to claim 12, characterized in that the steps performed by the microprocessor further comprise:
receiving the image sequence data to produce depth indication data of the image sequence data; and
integrating the at least one second depth indication data and the depth indication data of the image sequence data to produce a depth map of the image sequence data.
CN2011100430186A 2010-12-31 2011-02-21 Foreground depth map generation module and method thereof Pending CN102572457A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW99147152 2010-12-31
TW099147152 2010-12-31

Publications (1)

Publication Number Publication Date
CN102572457A 2012-07-11

Family

ID=46416759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100430186A Pending CN102572457A (en) 2010-12-31 2011-02-21 Foreground depth map generation module and method thereof

Country Status (1)

Country Link
CN (1) CN102572457A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331670A (en) * 2015-06-30 2017-01-11 明士股份有限公司 Endoscope stereoscopic visualization system and method by employing chromaticity forming method
CN108779979A (en) * 2015-11-12 2018-11-09 比特安尼梅特有限公司 Relief map

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1395231A (en) * 2001-07-04 2003-02-05 松下电器产业株式会社 Image signal coding method, equipment and storage medium
JP2003284097A (en) * 2001-11-17 2003-10-03 Hoko Koka Daigakko Multiview image compositing apparatus using two images of stereo camera and difference between both eyes
US6873654B1 (en) * 2000-05-16 2005-03-29 Redrock Semiconductor, Inc Method and system for predictive control for live streaming video/audio media
US20060067585A1 (en) * 2004-09-21 2006-03-30 Euclid Discoveries Llc Apparatus and method for processing video data
US20060093040A1 (en) * 2003-01-15 2006-05-04 Microsoft Corporation Method and system for extracting key frames from video using a triangle model of motion based on perceived motion energy
JP2008141666A (en) * 2006-12-05 2008-06-19 Fujifilm Corp Stereoscopic image creating device, stereoscopic image output device, and stereoscopic image creating method
US20080263605A1 (en) * 2007-04-09 2008-10-23 Hiroshi Mine Video delivery device, video receiver and key frame delivery method
JP2009044722A (en) * 2007-07-19 2009-02-26 Victor Co Of Japan Ltd Pseudo-3d-image generating device, image-encoding device, image-encoding method, image transmission method, image-decoding device and image image-decoding method
CN101542529A (en) * 2006-11-21 2009-09-23 皇家飞利浦电子股份有限公司 Generation of depth map for an image
CN101593349A (en) * 2009-06-26 2009-12-02 福州华映视讯有限公司 Bidimensional image is converted to the method for 3-dimensional image
WO2010064774A1 (en) * 2008-12-02 2010-06-10 (주)엘지전자 3d image signal transmission method, 3d image display apparatus and signal processing method therein
CN101873506A (en) * 2009-04-21 2010-10-27 财团法人工业技术研究院 Image processing method for providing depth information and image processing system thereof



Similar Documents

Publication Publication Date Title
TWI469088B (en) Depth map generation module for foreground object and the method thereof
CN102254348B Virtual viewpoint mapping method based on adaptive disparity estimation
CN110663245B (en) Apparatus and method for storing overlapping regions of imaging data to produce an optimized stitched image
CN108257219B (en) Method for realizing panoramic multipoint roaming
JP6370708B2 (en) Generation of a depth map for an input image using an exemplary approximate depth map associated with an exemplary similar image
CN111462311B (en) Panorama generation method and device and storage medium
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation
CN104618704B Method and apparatus for image processing
CN105023275B (en) Super-resolution optical field acquisition device and its three-dimensional rebuilding method
US20080002878A1 (en) Method For Fast Stereo Matching Of Images
US20060045329A1 (en) Image processing
CN105100775A (en) Image processing method and apparatus, and terminal
CN101287142A (en) Method for converting flat video to tridimensional video based on bidirectional tracing and characteristic points correction
CN103606188A (en) Geographical information on-demand acquisition method based on image point cloud
CN111988598B (en) Visual image generation method based on far and near view layered rendering
CN107170000B (en) Stereopsis dense Stereo Matching method based on the optimization of global block
JPWO2013054462A1 (en) User interface control device, user interface control method, computer program, and integrated circuit
CN103634519A (en) Image display method and device based on dual-camera head
WO2012117706A1 (en) Video processing device, video processing method, program
CN106447718B (en) A kind of 2D turns 3D depth estimation method
US20220408070A1 (en) Techniques for generating light field data by combining multiple synthesized viewpoints
CN106303491A (en) Image processing method and device
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN103037226A (en) Method and device for depth fusion
CN102572457A (en) Foreground depth map generation module and method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120711