CN101593349A - Method for converting a two-dimensional image into a three-dimensional image - Google Patents

Method for converting a two-dimensional image into a three-dimensional image

Info

Publication number
CN101593349A
Authority
CN
China
Prior art keywords
image
pixel
grayscale
pixel data
value
Legal status
Granted
Application number
CNA2009103037473A
Other languages
Chinese (zh)
Other versions
CN101593349B (en)
Inventor
陈建宏
林享昙
高盟超
Original Assignee
CPTF Visual Display Fuzhou Ltd
Chunghwa Picture Tubes Ltd
Application filed by CPTF Visual Display Fuzhou Ltd and Chunghwa Picture Tubes Ltd
Priority to CN2009103037473A
Publication of CN101593349A
Application granted
Publication of CN101593349B
Status: Expired - Fee Related
Anticipated expiration


Abstract

The present invention relates to a method for converting a two-dimensional image into a three-dimensional image. First, a two-dimensional image is received and converted into a grayscale image. The objects in the grayscale image are then extracted and labeled. Next, the depth value corresponding to each object is determined according to the distance between that object and a first border of the grayscale image, and a depth map is generated from the depth values corresponding to the objects. The depth map can therefore be combined with the two-dimensional image to produce a three-dimensional image.

Description

Method for converting a two-dimensional image into a three-dimensional image
Technical field
The present invention relates to a method for converting a two-dimensional image into a three-dimensional image, and more particularly to a method that generates a corresponding depth map from a two-dimensional image and combines the two-dimensional image with the depth map to convert it into a three-dimensional image.
Background art
Generally speaking, among the types of three-dimensional (3D) displays, the barrier-type 3D display uses the binocular parallax method to make the human eyes perceive a three-dimensional image.
Fig. 1 is a schematic diagram of a barrier-type 3D display architecture. Referring to Fig. 1, a barrier 10 used for splitting light is added over a backlit liquid crystal display 30, so that the pixels 20 of the LCD are divided into two classes: the pixels labeled 1 display the viewer's left-eye image and the pixels labeled 2 display the viewer's right-eye image; through image synthesis, the display produces a three-dimensional picture.
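As an illustration of how such a barrier display is fed, the sketch below interleaves the two views column by column, matching the pixel classes labeled 1 and 2 in Fig. 1. The function name and the column-wise scheme are assumptions made for this sketch, not details given in the patent.

```python
import numpy as np

def interleave_for_barrier(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
    """Interleave two views column by column for a barrier-type 3D display:
    even columns carry the left-eye image (class 1), odd columns the
    right-eye image (class 2)."""
    if left_view.shape != right_view.shape:
        raise ValueError("both views must have the same shape")
    combined = left_view.copy()
    combined[:, 1::2] = right_view[:, 1::2]  # odd columns from the right-eye view
    return combined
```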
United States Patent No. 0232666 discloses a technique that performs edge detection based on the motion vectors, luminance, or color values of the previous and the current picture to produce a depth map. It is worth mentioning that techniques of this type are prone to gross errors when detecting objects, and the resulting erroneous depth maps severely degrade the quality of the three-dimensional image.
Summary of the invention
The invention provides a method for converting a two-dimensional image into a three-dimensional image, in which objects and a depth map can be extracted directly from the two-dimensional image, and the extracted depth map can be combined with the two-dimensional image to form a three-dimensional image. Thus, the invention requires no other information and no additional image-capture equipment to convert a two-dimensional image into a three-dimensional image.
The invention provides a method for converting a two-dimensional image into a three-dimensional image, comprising the following steps. First, a two-dimensional image is received. Next, the two-dimensional image is converted into a grayscale image. The objects in the grayscale image are then extracted, and the objects in the grayscale image are labeled. Next, the depth value corresponding to each object is determined according to the distance between that object and a first border of the grayscale image. A depth map is then generated from the depth values corresponding to the objects. Finally, the depth map is combined with the two-dimensional image to produce a three-dimensional image.
In an embodiment of the invention, the step of extracting the objects in the grayscale image comprises performing edge detection (edge estimation) on the grayscale image to produce operation data, performing a dilation operation on the operation data to connect the object edges and obtain the object contours so as to produce an edge image, and filling the area within each object contour to obtain a filled image.
In an embodiment of the invention, the step of performing edge detection on the grayscale image to produce the operation data comprises extracting the object edges of the grayscale image with Prewitt masks, where the edge-detection equation is as follows:

$$\nabla f = |G_x \times P| + |G_y \times P|$$

where $\nabla f$ denotes the computed result, $P$ is a 3×3 matrix of pixel data, and $G_x$ and $G_y$ are the 3×3 Prewitt masks, expressed respectively as:

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \quad G_y = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$$
In an embodiment of the invention, the step of performing the dilation operation on the operation data to connect the object edges and obtain the object contours comprises the following steps. First, a pixel matrix $P = \begin{bmatrix} P_1 & P_2 & P_3 \\ P_4 & P_5 & P_6 \\ P_7 & P_8 & P_9 \end{bmatrix}$ is selected, where $P_1$–$P_9$ denote pixel data. Next, it is determined whether the pixel data $P_5$ equals a first value. When $P_5$ equals the first value and at least one of $P_1$–$P_4$, $P_6$–$P_9$ does not equal the first value, $P_5$ is adjusted to a second value. The above steps are repeated to apply the dilation operation over the operation data.
In an embodiment of the invention, the first value is 0 and the second value is 1.
In an embodiment of the invention, the step of filling the area within each object contour comprises the following steps. First, the edge image is scanned from left to right and from top to bottom to obtain a first scan image: when a scanned first pixel equals a first value and both the left and upper neighbors of the first pixel equal a second value, the first pixel is adjusted to the second value. Next, the edge image is scanned from right to left and from bottom to top to obtain a second scan image: when a scanned second pixel equals the first value and both the right and lower neighbors of the second pixel equal the second value, the second pixel is adjusted to the second value. Finally, the intersection of the first scan image and the second scan image is taken to obtain the filled image.
In an embodiment of the invention, the first value is 0 and the second value is 1.
In an embodiment of the invention, the step of labeling the objects in the grayscale image comprises the following steps. First, the area corresponding to each object in the grayscale image is labeled sequentially. Next, the label corresponding to each object is adjusted so that each object corresponds to a distinct label and the labels are arranged in a sequence.
Based on the above, the invention provides a method for converting a two-dimensional image into a three-dimensional image that comprises steps such as image edge detection, dilation, filling, and object labeling to obtain the depth map corresponding to the two-dimensional image, and then uses this depth map to form the three-dimensional image. The invention needs only the two-dimensional image itself, and no other information, to perform the conversion.
To make the above features and advantages of the invention more apparent, embodiments are described in detail below in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram of a barrier-type 3D display architecture.
Fig. 2 is a flowchart of a method for converting a two-dimensional image into a three-dimensional image according to an embodiment of the invention.
Fig. 3 is a flowchart of an implementation of step S203 according to an embodiment of the invention.
Fig. 4 is a schematic diagram of the dilation operation according to an embodiment of the invention.
Fig. 5 is a schematic diagram of the scanning direction of the dilation operation according to an embodiment of the invention.
Fig. 6 is a schematic diagram of step S303 of Fig. 3 according to an embodiment of the invention.
Fig. 7 is a schematic diagram of step S204 of Fig. 2 according to an embodiment of the invention.
Fig. 8 is a schematic diagram of the images produced for a depth map with two depth levels according to an embodiment of the invention.
Fig. 9 is a schematic diagram of the images produced for a depth map with multiple depth levels according to an embodiment of the invention.
Description of reference numerals in the accompanying drawings:
10: barrier
20: pixel
30: liquid crystal display
501, 802, 902: operation data
601, 803, 903: edge image
602, 804, 904: filled image
603: first scan image
604: second scan image
605: first scanning direction
606: second scanning direction
701: grayscale image
702–704: label images
801, 901: two-dimensional image
805: depth map with two depth levels
905: depth map with multiple depth levels
X1–X9: label points
P: pixel matrix
P′: edge matrix after the dilation operation
S201–S207: flowchart steps
S301–S303: flowchart steps
Detailed description of embodiments
Fig. 2 is a flowchart of a method for converting a two-dimensional image into a three-dimensional image according to an embodiment of the invention. Referring to Fig. 2, first, in step S201, a two-dimensional image is received. In step S202, the two-dimensional image is converted into a grayscale image. In step S203, the objects in the grayscale image are extracted. In step S204, the objects in the grayscale image are labeled. In step S205, the depth value corresponding to each object is determined according to the distance between that object and a first border of the grayscale image. In step S206, a depth map is generated from the depth values corresponding to the objects. Finally, in step S207, the depth map is combined with the two-dimensional image to produce a three-dimensional image.
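For orientation, steps S201–S207 might be strung together as in the following sketch. The helper functions are illustrative names that are fleshed out in the sketches accompanying the later figures, and the BT.601 luma weights used for the grayscale conversion in step S202 are an assumption, since the patent does not specify a particular conversion.

```python
import numpy as np

def two_d_to_three_d(image_rgb: np.ndarray) -> np.ndarray:
    # S201: receive the two-dimensional image (H x W x 3 array).
    # S202: convert to grayscale (BT.601 luma weights; an assumed choice).
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    # S203: extract objects (edge detection -> dilation -> contour filling).
    edges = prewitt_edges(gray)     # sketched after the Prewitt equations
    edges = dilate(edges)           # sketched after Fig. 5
    filled = fill_contours(edges)   # sketched after Fig. 6
    # S204: label the objects in the filled image.
    labels = label_objects(filled)  # sketched after Fig. 7
    # S205-S206: depth per object from its distance to a border, then the depth map.
    depth_map = depth_from_labels(labels)  # sketched after Fig. 9
    # S207: combine the depth map and the 2D image into a 3D (stereo) image.
    return render_stereo(image_rgb, depth_map)
```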
Fig. 3 is a flowchart of an implementation of step S203 according to an embodiment of the invention. Referring to Figs. 2 and 3 together, in the present embodiment step S203 comprises steps S301–S303. First, edge detection is performed on the grayscale image to produce operation data. Next, a dilation operation is performed on the operation data to connect the object edges, obtain the object contours, and produce an edge image. Then, the area within each object contour is filled to obtain a filled image.
Those of ordinary skill in the art may implement step S301 in any suitable manner according to their needs, for example with a gradient-based, wavelet-based, or operator-based detection method. Here, using the Prewitt masks as a two-dimensional difference operator for detecting image edges is taken as an example; each Prewitt mask is a 3×3 matrix, and the decision formula for detecting image edges is as follows:
$$\nabla f = |G_{x1}P_1 + G_{x2}P_2 + G_{x3}P_3 + G_{x4}P_4 + G_{x5}P_5 + G_{x6}P_6 + G_{x7}P_7 + G_{x8}P_8 + G_{x9}P_9|$$
$$\quad + |G_{y1}P_1 + G_{y2}P_2 + G_{y3}P_3 + G_{y4}P_4 + G_{y5}P_5 + G_{y6}P_6 + G_{y7}P_7 + G_{y8}P_8 + G_{y9}P_9|$$
$$= |G_x \times P| + |G_y \times P|$$

where $G_x$ and $G_y$ are the Prewitt masks:

$$G_x = \begin{bmatrix} G_{x1} & G_{x2} & G_{x3} \\ G_{x4} & G_{x5} & G_{x6} \\ G_{x7} & G_{x8} & G_{x9} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \quad
G_y = \begin{bmatrix} G_{y1} & G_{y2} & G_{y3} \\ G_{y4} & G_{y5} & G_{y6} \\ G_{y7} & G_{y8} & G_{y9} \end{bmatrix} = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$$

and $P$ is the 3×3 matrix of pixel data at the position covered by the masks:

$$P = \begin{bmatrix} P_1 & P_2 & P_3 \\ P_4 & P_5 & P_6 \\ P_7 & P_8 & P_9 \end{bmatrix}$$
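A minimal numpy sketch of this Prewitt computation follows. The binarization threshold is an assumed parameter: the patent does not specify how the gradient magnitude is converted into the 0/1 operation data used by the later steps.

```python
import numpy as np

def prewitt_edges(gray: np.ndarray, threshold: float = 64.0) -> np.ndarray:
    """Evaluate |Gx x P| + |Gy x P| at every pixel, then binarize to 0/1."""
    Gx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    Gy = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)
    h, w = gray.shape
    grad = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            P = gray[y - 1:y + 2, x - 1:x + 2]   # the 3x3 pixel matrix P
            grad[y, x] = abs(np.sum(Gx * P)) + abs(np.sum(Gy * P))
    return (grad > threshold).astype(np.uint8)   # 1 = edge, 0 = no edge
```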
Fig. 4 is a schematic diagram of the dilation operation according to an embodiment of the invention.
Fig. 5 is a schematic diagram of the scanning direction of the dilation operation according to an embodiment of the invention. Referring to Figs. 3 to 5 together, as shown in Fig. 5, the dilation operation is applied to the operation data 501 from left to right and from top to bottom, connecting the object edges to obtain the object contours and produce the edge image (step S302). The computation proceeds as follows. First, a pixel matrix $P = \begin{bmatrix} P_1 & P_2 & P_3 \\ P_4 & P_5 & P_6 \\ P_7 & P_8 & P_9 \end{bmatrix}$ is selected from the operation data 501, where the pixel data $P_1$–$P_9$ in this example are 1, 0, 1, 0, 0, 1, 0, 0 and 0. It is first determined whether the pixel data $P_5$ is 0; when $P_5$ is 0 and $P_1$–$P_4$, $P_6$–$P_9$ are not all 0, $P_5$ is revised to 1. The edge matrix after the dilation operation is $P' = \begin{bmatrix} P'_1 & P'_2 & P'_3 \\ P'_4 & P'_5 & P'_6 \\ P'_7 & P'_8 & P'_9 \end{bmatrix}$, where the pixel data $P'_1$–$P'_9$ are 1, 0, 1, 0, 1, 1, 0, 0 and 0. In other words, when $P_5$ equals the first value and at least one of $P_1$–$P_4$, $P_6$–$P_9$ does not equal the first value, $P'_5$ is set to a second value; these steps are repeated to apply the dilation operation over the operation data. In this example, because $P_5$ equals 0 while $P_1$–$P_4$, $P_6$–$P_9$ are not all 0, the resulting $P'_1$–$P'_9$ are 1, 0, 1, 0, 1, 1, 0, 0 and 0. After the whole picture (the operation data) has undergone this dilation operation, the object edges are closed and an edge image is produced, from which the object contours are obtained. Note that in the present embodiment the first value is 0 and the second value is 1, where 1 indicates the presence of a grayscale value and 0 indicates no grayscale value (rendered black in the visualized picture), but the invention is not limited thereto.
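The rule above (a 0-valued center pixel becomes 1 when at least one of its eight neighbors is nonzero) might be sketched as follows; the output is written to a separate edge image, matching the $P'$ matrix in the text.

```python
import numpy as np

def dilate(data: np.ndarray) -> np.ndarray:
    """One pass of the dilation described above, over 0/1 operation data."""
    h, w = data.shape
    out = data.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if data[y, x] == 0:                     # P5 equals the first value (0)
                neighbors = data[y - 1:y + 2, x - 1:x + 2]
                if neighbors.sum() > 0:             # some Pi is not the first value
                    out[y, x] = 1                   # P'5 set to the second value (1)
    return out
```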
Next, a filling operation is performed on the edge image to obtain the filled image; this step clearly marks the area occupied by each object and its position. Fig. 6 is a schematic diagram of step S303 of Fig. 3 according to an embodiment of the invention. Referring to Fig. 6, first, the edge image 601 is scanned from left to right and from top to bottom, as in the first scanning direction 605: when $P_5$ equals 0 and $P_2$ and $P_4$ are both 1, $P_5$ is adjusted to 1. This adjustment is repeated, with each 3×3 matrix as a scan unit, scanning from left to right and from top to bottom; after the whole edge image 601 has been scanned and adjusted, the first scan image 603 is obtained. The scanning direction is then reversed: with the same adjustment rule, the edge image 601 is scanned from right to left and from bottom to top to obtain the second scan image 604, as in the second scanning direction 606, where when $P_5$ equals 0 and $P_6$ and $P_8$ are both 1, $P_5$ is adjusted to 1. Finally, the intersection of the first scan image 603 and the second scan image 604 is taken to obtain the filled image 602, in which the areas occupied by the objects can be clearly seen.
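A sketch of the two scans and their intersection, assuming 0/1 input as in the example:

```python
import numpy as np

def fill_contours(edge_img: np.ndarray) -> np.ndarray:
    """Two-pass fill: the forward scan fills pixels whose upper (P2) and left (P4)
    neighbors are 1, the backward scan fills pixels whose lower (P8) and right (P6)
    neighbors are 1, and the intersection of the two scan images is returned."""
    h, w = edge_img.shape
    first = edge_img.copy()
    for y in range(1, h):                    # left to right, top to bottom
        for x in range(1, w):
            if first[y, x] == 0 and first[y - 1, x] == 1 and first[y, x - 1] == 1:
                first[y, x] = 1
    second = edge_img.copy()
    for y in range(h - 2, -1, -1):           # right to left, bottom to top
        for x in range(w - 2, -1, -1):
            if second[y, x] == 0 and second[y + 1, x] == 1 and second[y, x + 1] == 1:
                second[y, x] = 1
    return first & second                    # intersection of the two scan images
```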
Next, the manner of labeling the objects is described further. Fig. 7 is a schematic diagram of step S204 of Fig. 2 according to an embodiment of the invention. Referring to Fig. 7, in this embodiment, for convenience of description, the pixel data of the grayscale image take only the values 0 and 255, where 255 indicates an object area and 0 indicates the background, but the invention is not limited thereto. First, the area corresponding to each object in the grayscale image is labeled: if a pixel of the grayscale image 701 is 255 and the data above it and to its left are both 0, a new label number is assigned. Because the grayscale image 701 has 7 object areas (areas composed of the value 255), labeling produces the label image 702, which contains seven new label numbers, 1 through 7, at the label points X1–X7 shown in the label image 702.
Next, the label numbers corresponding to the objects in the label image 702 are adjusted. When the data above a pixel is not 0 and the data to its left is 0 or absent (at the matrix edge), the label above is copied (as at label point X8). When a pixel is 255 and the data above it and to its left are both not 0 (as at label point X9), the pixel is relabeled according to the label above (the value at label point X9 is revised to 5), and the pixel data to the left are revised at the same time (the value at label point X6 is revised to 5). All labels are then sorted into consecutive numbers. After this revision the label image 704 is obtained, in which the label number 7 at label point X7 has been changed to 6. Referring to the label image 704, it can be seen that there are 6 objects, labeled 1 through 6. Note that the above labeling manner is only one embodiment of the invention, and the invention is not limited thereto.
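This two-stage labeling amounts to two-pass connected-component labeling with label equivalences. A sketch under that reading, using the top and left neighbors as in Fig. 7 (the union-find bookkeeping is an implementation choice, not prescribed by the patent):

```python
import numpy as np

def label_objects(filled: np.ndarray) -> np.ndarray:
    """First pass: provisional labels from the top/left neighbors, recording
    equivalences when both neighbors are labeled (as at X9/X6).
    Second pass: resolve equivalences and renumber labels consecutively."""
    h, w = filled.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                                      # union-find over label numbers

    def find(a: int) -> int:
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if filled[y, x] == 0:
                continue
            top = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if top == 0 and left == 0:               # new object area: new number
                parent[next_label] = next_label
                labels[y, x] = next_label
                next_label += 1
            elif top and left:                       # both labeled: adopt top, merge left
                labels[y, x] = find(top)
                parent[find(left)] = find(top)
            else:                                    # copy the single labeled neighbor
                labels[y, x] = find(top or left)
    roots = sorted({find(lbl) for lbl in parent})
    remap = {r: i + 1 for i, r in enumerate(roots)}  # consecutive numbering 1, 2, ...
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = remap[find(labels[y, x])]
    return labels
```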
Fig. 8 is a schematic diagram of the images produced for a depth map with two depth levels according to an embodiment of the invention. Referring to Figs. 2, 3, and 8 together: in step S201, the two-dimensional image 801 is received. Step S202 then converts the two-dimensional image 801 into a grayscale image, after which the objects in the grayscale image are extracted (step S203). Within step S203, edge detection is performed on the grayscale image to produce the operation data 802 (step S301). Next, the dilation operation is performed on the operation data 802 to connect the object edges, obtain the object contours, and produce the edge image 803 (step S302). The area within each object contour of the edge image 803 is then filled to obtain the filled image 804 (step S303). Then, the area corresponding to each object in the filled image 804 is labeled sequentially, and objects that are too small are removed, yielding the depth map 805 with two depth levels. Note that in this schematic the number of objects is 1, so its depth values are divided into two levels: background and object.
Fig. 9 is a schematic diagram of the images produced for a depth map with multiple depth levels according to an embodiment of the invention. Referring to Figs. 2, 3, and 9 together, the two-dimensional image 901 contains a plurality of objects. After the individual objects are extracted, all objects can be labeled, and the depth value corresponding to each object is then determined according to its coordinate position. In Fig. 9, the operation data 902, the edge image 903, and the filled image 904 are produced in the same manner as in Fig. 8 and are not described again here. Then, the area corresponding to each object in the filled image 904 is labeled sequentially and objects that are too small are removed; the filled image 904 contains 5 objects, which can be labeled 1 through 5 (steps S201–S204). Next, the depth value of each object is determined by the distance between that object and the bottom border of the grayscale image (picture) (step S205): the farther the distance, the smaller the depth value (representing a greater distance); the nearer the distance, the larger the depth value (representing a smaller distance). If two objects are equidistant from the bottom border, their depth values are the same.
Then, a depth map is generated from the depth values corresponding to the objects (step S206), as shown in the depth map 905, which contains 5 objects. The objects B1 and B2 at the bottom are equidistant from the bottom border, so their depth values are the same; the objects B3, B4, and B5 are each at different distances from the bottom border, so they have different depth values. Counting the depth value of the background, the depth map 905 thus has 5 depth values. The assignment of the individual depth values can be preset by the user; for example, the height of the picture may correspond to the maximum depth value, and the depth value of each object is then computed proportionally from the distance between the object and the bottom border. Note that the depth values of the present embodiment are computed from the distance between each object and the bottom border, but the computation is not limited to this manner; a different border, for example the top border or the left or right border, may also serve as the reference border for computing the depth values, and the present embodiment is not limited in this respect. Finally, after the depth map is obtained, it can be combined with the two-dimensional image to produce the three-dimensional image (step S207).
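A sketch of steps S205–S207 under the embodiment's choices: the bottom border as the reference border, depth scaled proportionally to the picture height, and, for the final combination, a simple depth-dependent parallax shift that produces a second view. The use of an object's lowest pixel for the distance, the linear scaling, and the shift-based rendering are all assumptions for illustration; the patent leaves these open.

```python
import numpy as np

def depth_from_labels(labels: np.ndarray, max_depth: int = 255) -> np.ndarray:
    """S205-S206: the nearer an object is to the bottom border, the larger
    its depth value; the background keeps depth 0."""
    h, _ = labels.shape
    depth_map = np.zeros(labels.shape, dtype=np.uint8)
    for obj in range(1, labels.max() + 1):
        ys, _ = np.nonzero(labels == obj)
        dist_to_bottom = (h - 1) - ys.max()           # object's lowest pixel (assumed)
        value = max_depth * (1 - dist_to_bottom / h)  # proportional scaling (assumed)
        depth_map[labels == obj] = int(value)
    return depth_map

def render_stereo(image: np.ndarray, depth_map: np.ndarray,
                  max_shift: int = 8) -> np.ndarray:
    """S207: derive a second view by shifting each pixel left by a parallax
    proportional to its depth, and return the two views side by side."""
    h, w = image.shape[:2]
    shifted = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            s = int(depth_map[y, x] / 255 * max_shift)
            if 0 <= x - s < w:
                shifted[y, x - s] = image[y, x]   # holes are not filled in this sketch
    return np.concatenate([image, shifted], axis=1)
```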
In summary, the invention provides a method in which objects and a depth map are extracted directly from a two-dimensional image, and the extracted depth map is combined with the two-dimensional image to form a three-dimensional image. Thus, the invention requires no other information and no additional image-capture equipment to convert a two-dimensional image into a three-dimensional image.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Any person of ordinary skill in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the protection scope of the invention shall therefore be defined by the appended claims.

Claims (8)

1. A method for converting a two-dimensional image into a three-dimensional image, characterized in that the method comprises:
receiving a two-dimensional image;
converting the two-dimensional image into a grayscale image;
extracting objects in the grayscale image;
labeling the objects in the grayscale image;
determining the depth value corresponding to each object according to the distance between that object in the grayscale image and a first border of the grayscale image;
generating a depth map from the depth values corresponding to the objects; and
combining the depth map and the two-dimensional image to produce a three-dimensional image.
2. The method according to claim 1, characterized in that the step of extracting the objects in the grayscale image comprises:
performing edge detection on the grayscale image to produce operation data;
performing a dilation operation on the operation data to connect object edges and obtain object contours so as to produce an edge image; and
filling the area within each object contour to obtain a filled image.
3. The method according to claim 2, characterized in that the step of performing edge detection on the grayscale image to produce the operation data comprises:
extracting the object edges of the grayscale image with Prewitt masks,
wherein the edge-detection equation is as follows:
$$\nabla f = |G_x \times P| + |G_y \times P|$$
where $\nabla f$ denotes the computed result, $P$ is a 3×3 matrix of pixel data, and $G_x$ and $G_y$ are 3×3 Prewitt masks, expressed respectively as:
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \quad G_y = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}.$$
4. The method according to claim 2, characterized in that the step of performing the dilation operation on the operation data to connect the object edges and obtain the object contours comprises:
selecting a pixel matrix $P = \begin{bmatrix} P_1 & P_2 & P_3 \\ P_4 & P_5 & P_6 \\ P_7 & P_8 & P_9 \end{bmatrix}$, where $P_1$–$P_9$ denote pixel data;
determining whether the pixel data $P_5$ equals a first value;
when $P_5$ equals the first value and at least one of $P_1$–$P_4$, $P_6$–$P_9$ does not equal the first value, adjusting $P_5$ to a second value; and
repeating the above steps to apply the dilation operation over the operation data.
5. The method according to claim 4, characterized in that the first value is 0 and the second value is 1.
6. The method according to claim 2, characterized in that the step of filling the area within each object contour comprises:
scanning the edge image from left to right and from top to bottom to obtain a first scan image, wherein when a scanned first pixel equals a first value and both the left and upper neighbors of the first pixel equal a second value, the first pixel is adjusted to the second value;
scanning the edge image from right to left and from bottom to top to obtain a second scan image, wherein when a scanned second pixel equals the first value and both the right and lower neighbors of the second pixel equal the second value, the second pixel is adjusted to the second value; and
taking the intersection of the first scan image and the second scan image to obtain the filled image.
7. The method according to claim 6, characterized in that the first value is 0 and the second value is 1.
8. The method according to claim 1, characterized in that the step of labeling the objects in the grayscale image comprises:
sequentially labeling the area corresponding to each object in the grayscale image; and
adjusting the label corresponding to each object so that each object corresponds to a distinct label and the labels are arranged in a sequence.
CN2009103037473A 2009-06-26 2009-06-26 Method for converting two-dimensional image into three-dimensional image Expired - Fee Related CN101593349B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009103037473A CN101593349B (en) 2009-06-26 2009-06-26 Method for converting two-dimensional image into three-dimensional image


Publications (2)

Publication Number Publication Date
CN101593349A true CN101593349A (en) 2009-12-02
CN101593349B CN101593349B (en) 2012-06-13

Family

ID=41407991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009103037473A Expired - Fee Related CN101593349B (en) 2009-06-26 2009-06-26 Method for converting two-dimensional image into three-dimensional image


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1113320C (en) * 1994-02-01 2003-07-02 三洋电机株式会社 Method of converting two-dimensional images into three-dimensional images
EP1323135A1 (en) * 2000-09-14 2003-07-02 Orasee Corp. Method for automated two-dimensional and three-dimensional conversion

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566784B (en) * 2009-06-02 2011-07-27 华映光电股份有限公司 Method for establishing depth of field data for three-dimensional image and system thereof
CN102238313A (en) * 2010-04-22 2011-11-09 扬智科技股份有限公司 Method for generating image transformation matrix as well as image transformation method and device
CN101908233A (en) * 2010-08-16 2010-12-08 福建华映显示科技有限公司 Method and system for producing plural viewpoint picture for three-dimensional image reconstruction
CN102469322A (en) * 2010-11-18 2012-05-23 Tcl集团股份有限公司 Image processing method for plane stereoscopic bodies
CN102026012A (en) * 2010-11-26 2011-04-20 清华大学 Generation method and device of depth map through three-dimensional conversion to planar video
CN102026012B (en) * 2010-11-26 2012-11-14 清华大学 Generation method and device of depth map through three-dimensional conversion to planar video
TWI469088B (en) * 2010-12-31 2015-01-11 Ind Tech Res Inst Depth map generation module for foreground object and the method thereof
CN102572457A (en) * 2010-12-31 2012-07-11 财团法人工业技术研究院 Foreground depth map generation module and method thereof
CN102186093A (en) * 2011-04-28 2011-09-14 深圳超多维光电子有限公司 Stereo image generation method and system
CN102186093B (en) * 2011-04-28 2013-07-10 深圳超多维光电子有限公司 Stereo image generation method and system
CN102244804A (en) * 2011-07-19 2011-11-16 彩虹集团公司 Method for converting 2D (two-dimensional) video signal to 3D (three-dimensional) video signal
CN103002297A (en) * 2011-09-16 2013-03-27 联咏科技股份有限公司 Method and device for generating dynamic depth values
CN102360489A (en) * 2011-09-26 2012-02-22 盛乐信息技术(上海)有限公司 Method and device for realizing conversion from two-dimensional image to three-dimensional image
CN102426693A (en) * 2011-10-28 2012-04-25 彩虹集团公司 Method for converting 2D into 3D based on gradient edge detection algorithm
CN102426693B (en) * 2011-10-28 2013-09-11 彩虹集团公司 Method for converting 2D into 3D based on gradient edge detection algorithm
CN102404594B (en) * 2011-10-31 2014-02-12 庞志勇 2D-to-3D conversion method based on image edge information
CN102404594A (en) * 2011-10-31 2012-04-04 庞志勇 2D-to-3D conversion method based on image edge information
US9147021B2 (en) 2011-11-18 2015-09-29 Industrial Technology Research Institute Data processing method and device using the same
CN103686125A (en) * 2012-08-29 2014-03-26 Jvc建伍株式会社 Depth estimation device, depth estimation method, depth estimation program, image processing device, image processing method, and image processing program
CN109891880A (en) * 2016-10-14 2019-06-14 万维数码有限公司 The method of the automatic conversion quality of 2D to 3D is improved by machine learning techniques
CN109891880B (en) * 2016-10-14 2020-11-13 万维数码有限公司 Method for improving the quality of 2D to 3D automatic conversion by machine learning techniques
CN111615832A (en) * 2018-01-22 2020-09-01 苹果公司 Method and apparatus for generating a composite reality reconstruction of planar video content
CN111615832B (en) * 2018-01-22 2022-10-25 苹果公司 Method and apparatus for generating a composite reality reconstruction of planar video content
CN110567360B (en) * 2018-06-06 2021-07-23 宏碁股份有限公司 Three-dimensional scanning system
CN110309738A (en) * 2019-06-17 2019-10-08 深圳大学 The method that a kind of pair of OCT fingerprint image is labeled
CN110309738B (en) * 2019-06-17 2022-09-30 深圳大学 Method for labeling OCT fingerprint image

Also Published As

Publication number Publication date
CN101593349B (en) 2012-06-13

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right
Owner name: CHUNGHWA PICTURE TUBES, LTD.
Free format text: FORMER OWNER: FUZHOU HUAYING VIDEO & TELECOM CO., LTD.
Effective date: 20130604
Free format text: FORMER OWNER: CHUNGHWA PICTURE TUBES, LTD.
Effective date: 20130604
C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data
Free format text: CORRECT: ADDRESS; FROM: 350015 FUZHOU, FUJIAN PROVINCE TO: TAIWAN, CHINA
TR01 Transfer of patent right
Effective date of registration: 20130604
Address after: No. 1127, Heping Road, Bade City, Taoyuan, Taiwan, China
Patentee after: Chunghwa Picture Tubes Ltd.
Address before: No. 1 Xingye Road, Mawei Science Park, Fuzhou, Fujian 350015
Patentee before: Fuzhou Huaying Video & Telecom Co., Ltd.
Patentee before: Chunghwa Picture Tubes Ltd.
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20120613
Termination date: 20190626