CN110149508A - Image array generation and hole-filling method based on a one-dimensional integral imaging system - Google Patents

Image array generation and hole-filling method based on a one-dimensional integral imaging system

Info

Publication number
CN110149508A
Authority
CN
China
Prior art keywords
original
image
depth
value
parallax
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910448420.9A
Other languages
Chinese (zh)
Other versions
CN110149508B (en)
Inventor
王世刚
韩一雪
韦健
赵岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201910448420.9A priority Critical patent/CN110149508B/en
Publication of CN110149508A publication Critical patent/CN110149508A/en
Application granted granted Critical
Publication of CN110149508B publication Critical patent/CN110149508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television) > H04N13/00 (Stereoscopic video systems; multi-view video systems; details thereof)
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/128: Adjusting depth or disparity
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/327: Calibration of image reproducers
    • H04N2013/0081: Depth or disparity estimation from stereoscopic image signals
    • H04N2013/0085: Motion estimation from stereoscopic image signals

Abstract

An image array generation and hole-filling method based on a one-dimensional integral imaging system, belonging to the field of naked-eye 3D display technology. Following geometric optics and the idea of DIBR, the invention simulates the viewpoint image of an arbitrary position using a depth map. The interior of large holes is first filled with an optical-flow method, and the remaining holes are then filled with the Criminisi image inpainting algorithm. By combining the advantages of the optical-flow method and the Criminisi algorithm, the invention both draws on the image information of different frames of the video and raises the running speed of the Criminisi algorithm. Compared with filling holes with the Criminisi algorithm alone, under identical parameter settings the invention saves 45.448% of the time on average and produces better filling results.

Description

Image array generation and hole-filling method based on a one-dimensional integral imaging system
Technical field
The invention belongs to the field of naked-eye 3D display technology and relates in particular to an image array generation and hole-filling method based on a one-dimensional integral imaging system.
Background art
In 1908, Nobel laureate Lippmann first proposed integral imaging, a stereoscopic display technique that uses a microlens array to record and reproduce a three-dimensional scene. Integral imaging offers continuous viewpoints, full parallax, freedom from visual fatigue and no need for auxiliary equipment, and has therefore attracted wide attention in many countries. However, integral imaging also has shortcomings: it is costly, unsuitable for large-screen display, demands high screen resolution, and places heavy requirements on storage space and data-processing capacity. One-dimensional integral imaging alleviates these problems to some extent: it gives up vertical parallax and keeps the stereoscopic effect only in the horizontal direction, which greatly reduces the data volume and increases the utilization of screen resolution. With its faster processing speed, one-dimensional integral imaging therefore stands out among implementations of integral imaging.
At present, however, stereoscopic content suited to one-dimensional integral imaging is not common, and generating such content requires hole filling. The Criminisi algorithm is a commonly used image-inpainting algorithm. Its complexity depends on the image size and the unit pixel-block size: the larger the image, the longer the running time; the smaller the unit pixel block, the longer the running time, but the finer and better the repair. In practice the algorithm repairs holes located at image borders well and quickly, but when repairing large hole regions inside the picture, the repair time is long and the result is unsatisfactory.
Because of these defects, the algorithm is difficult to apply with ideal results in practice and leaves room for improvement.
Summary of the invention
To overcome the above defects, the present invention provides a hole-filling algorithm that combines the Criminisi algorithm with an optical-flow method, exploiting the advantages of both to the greatest extent so as to improve the accuracy and efficiency of hole filling.
The image array generation and hole-filling method based on a one-dimensional integral imaging system of the present invention includes the following steps:
1.1 Let the original image be I_{original-k}, the depth map be H_k, and the generated viewpoint image with holes be W_{k-r},
where k denotes the k-th frame of the video; r denotes the r-th viewpoint, with r ranging from 1 to 2*N; and N is half the number of viewpoints;
1.1.1 According to the required depth effect, preset the out-of-screen depth and into-screen depth that the human eye is to perceive. From geometric optics:
M/B = P/(P + D)
where B is the interocular distance; M is the parallax, i.e. the horizontal distance between the left-eye and right-eye images; P is the scene depth perceived by the eye; and D is the distance between the observer and the screen;
The scene depth P perceived by the eye is then converted from millimetres to pixels. After unit conversion, the maximum positive parallax, which produces the into-screen effect, is maxPM = maxM*(a/A); the maximum negative parallax after unit conversion, which produces the out-of-screen effect, is
maxNM = -abs(maxM)*(a/A)
where abs(maxM) = maxP*B/(D - maxP); a is the horizontal resolution of the screen; and A is the horizontal width of the screen;
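As an illustration only (not part of the patent text), a minimal Python sketch of the parallax-limit computation of step 1.1.1 might look as follows; the function and argument names, and the assumption that B, D and the preset depths are given in millimetres, are choices of this sketch:

```python
def parallax_limits_px(B, D, maxP_in, maxP_out, a, A):
    """Convert preset into-screen / out-of-screen depths into the maximum
    positive / negative parallax, expressed in pixels.

    B        : interocular distance (mm)
    D        : distance between observer and screen (mm)
    maxP_in  : maximum into-screen depth to be perceived (mm)
    maxP_out : maximum out-of-screen depth to be perceived (mm)
    a        : horizontal resolution of the screen (pixels)
    A        : horizontal width of the screen (mm)
    """
    # M/B = P/(P + D)  ->  M = B*P/(P + D): into-screen, positive parallax
    maxM_in = B * maxP_in / (maxP_in + D)
    maxPM = maxM_in * (a / A)              # maximum positive parallax (pixels)

    # abs(maxM) = B*P/(D - P): out-of-screen, negative parallax
    maxM_out = B * maxP_out / (D - maxP_out)
    maxNM = -abs(maxM_out) * (a / A)       # maximum negative parallax (pixels)
    return maxPM, maxNM
```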
1.1.2 Uniformly quantize the depth values over the scene depth range perceivable by the eye and convert them into parallax values with the formula:
d(i, j) = (maxNM - maxPM)/255 * H(i, j) + maxPM
where d(i, j) is the parallax value of the point at coordinate (i, j) and H(i, j) is the depth value of point (i, j);
It follows that a depth value of 0 corresponds to the positive parallax maxPM, and a depth value of 255 corresponds to the negative parallax maxNM;
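A minimal NumPy sketch of the depth-to-parallax conversion of step 1.1.2 is given below; the array and function names are assumptions, and H is taken to be an 8-bit depth map:

```python
import numpy as np

def depth_to_parallax(H, maxPM, maxNM):
    """Map 8-bit depth values (0..255) linearly onto the parallax range:
    depth 0 -> positive parallax maxPM (into the screen),
    depth 255 -> negative parallax maxNM (out of the screen)."""
    return (maxNM - maxPM) / 255.0 * H.astype(np.float64) + maxPM
```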
1.1.3 Translate each pixel of the original video image according to its parallax value to obtain an arbitrary viewpoint image:
I_{N-x+1}(i, j - d(i, j)*x/(2N)) = I_{original-k}(i, j),  I_{N+x}(i, j + d(i, j)*x/(2N)) = I_{original-k}(i, j)
The viewpoints are numbered from left to right relative to the original image; the total number of viewpoints is 2*N, and the index denotes the x-th viewpoint image;
1.2 For each point W_{k-r}(i, j), count the number n of points of I_{original-k} that are mapped to it and record their coordinates I_{original-k}(x1, y1), I_{original-k}(x2, y2), ..., I_{original-k}(xn, yn);
When n = 0, let W_{k-r}(i, j) = 0; this is a hole point;
When n = 1, let W_{k-r}(i, j) = I_{original-k}(x, y);
When n = 2, let
When n > 2, take the candidate with the largest depth value H_{original-k}(i, j) among I_{original-k}(x1, y1), ..., I_{original-k}(xn, yn) and let W_{k-r}(i, j) = I_{original-k}(xp, yp) (1 ≤ p ≤ n);
1.3 Set the points with n = 0, i.e. the hole points, to pixel value 0 and all other points to 255 to obtain the mask image mask_{k-r}; mask_{k-r} gives a direct view of the hole distribution in W_{k-r};
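The pixel translation of step 1.1.3, the mapping-count rules of step 1.2 and the mask of step 1.3 could be sketched as below. This is an illustration under assumptions: only the x-th right-hand viewpoint is generated, and since the formula for the n = 2 case is not reproduced in this text, the sketch simply applies the max-depth rule whenever n ≥ 2:

```python
import numpy as np

def generate_viewpoint(I_orig, H, d, x, N):
    """Forward-warp frame I_orig into the x-th right-hand viewpoint using the
    per-pixel parallax d (step 1.1.3), resolve multiple mappings by keeping
    the candidate with the largest depth value (step 1.2), and build the
    hole mask of step 1.3 (0 marks hole points, 255 marks filled points)."""
    rows, cols = H.shape
    W = np.zeros_like(I_orig)
    count = np.zeros((rows, cols), dtype=np.int32)
    best_depth = np.full((rows, cols), -np.inf)

    for i in range(rows):
        for j in range(cols):
            jj = int(round(j + d[i, j] * x / (2 * N)))   # horizontal shift only
            if 0 <= jj < cols:
                count[i, jj] += 1
                if H[i, j] > best_depth[i, jj]:          # keep the deepest candidate
                    W[i, jj] = I_orig[i, j]
                    best_depth[i, jj] = H[i, j]

    mask = np.where(count == 0, 0, 255).astype(np.uint8)
    return W, mask
```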
1.4 Filling the holes of the preliminarily generated viewpoint image with the optical-flow method includes the following steps:
1.4.1 Extract the edges of the large interior holes in mask_{k-r}. Let the edge points of the hole in row i be (i, a) and (i, b), with a < b. Let the reference frame be the third frame of the original video, i.e. image I_{original-3}. Apply the LK optical-flow method (a sparse optical-flow method) to W_{k-r} and I_{original-3} to obtain, for each pixel, its horizontal motion velocity ofh and vertical motion velocity ofv;
1.4.2 Compare the depth value to the left of (i, a) with the depth value to the right of (i, b). If the depth value on the left of (i, a) is smaller, that side is farther from the observer and belongs to the background of the image;
1.4.3 Estimate the optical-flow value of the hole: take the horizontal and vertical motion velocities of the 7 pixels to the left of (i, a) and average them, using the averages as the horizontal and vertical motion velocities of the hole in this row;
1.4.4 Let the displacement of the corresponding pixel be (Δx, Δy). For a < j < b, the horizontal displacement at the hole is Δx = ofh(i, j) × 10 and the vertical displacement is Δy = ofv(i, j) × 10; the mapping block of the hole in I_{original-3} is then W_{k-r}(i, j) = I_{original-3}(i + Δx, j + Δy);
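An illustrative sketch of the optical-flow pre-filling of steps 1.4.1 to 1.4.4 follows. It is not the patented implementation: OpenCV's dense Farnebäck flow is substituted for the LK flow named in the text so that a flow vector is available at every pixel, and the function and variable names are assumptions of this sketch:

```python
import numpy as np
import cv2

def prefill_row_hole(W, mask, H, I_ref, i, a, b, scale=10):
    """Pre-fill the hole pixels (i, a+1..b-1) of viewpoint image W from the
    reference frame I_ref, following the idea of steps 1.4.1-1.4.4.

    W, I_ref : colour images; H : depth map; mask : 0 = hole, 255 = filled.
    scale    : displacement factor (the text multiplies the flow by 10).
    Assumes the hole does not touch the left or right image border.
    """
    g_w = cv2.cvtColor(W, cv2.COLOR_BGR2GRAY)
    g_r = cv2.cvtColor(I_ref, cv2.COLOR_BGR2GRAY)
    rows, cols = g_w.shape

    # Dense optical flow between the viewpoint image and the reference frame
    # (a stand-in for the LK flow named in the patent text).
    flow = cv2.calcOpticalFlowFarneback(g_w, g_r, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ofh, ofv = flow[..., 0], flow[..., 1]        # horizontal, vertical velocity

    # Step 1.4.2: the side with the smaller depth value is farther from the
    # observer (background); its flow is used for the hole.  The text uses
    # the 7 pixels to the left of (i, a); this sketch picks whichever side
    # is the background side.
    left = slice(max(a - 7, 0), a)
    right = slice(b + 1, min(b + 8, cols))
    src = left if np.mean(H[i, left]) <= np.mean(H[i, right]) else right

    # Step 1.4.3: average the flow of the chosen edge pixels.
    mean_h = float(np.mean(ofh[i, src]))
    mean_v = float(np.mean(ofv[i, src]))

    # Step 1.4.4: map each hole pixel to the reference frame.
    di = int(round(mean_v * scale))              # vertical displacement
    dj = int(round(mean_h * scale))              # horizontal displacement
    for j in range(a + 1, b):
        si, sj = i + di, j + dj
        if 0 <= si < rows and 0 <= sj < cols:
            W[i, j] = I_ref[si, sj]
            mask[i, j] = 255
    return W, mask
```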
1.5 Processing the image with the Criminisi image inpainting algorithm includes the following step:
1.5.1 Process the image W_{k-r} with the Criminisi inpainting algorithm. Since the original image size is 436*1024 pixels, the unit pixel block of the Criminisi algorithm is set, according to the original image size, to a block with a radius of 12 to 20 pixels.
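For completeness, the final pass of step 1.5 could be invoked roughly as below. OpenCV does not ship the exemplar-based Criminisi algorithm, so this sketch falls back to cv2.inpaint purely as a placeholder; a faithful reproduction of the patented step would substitute a Criminisi implementation with a unit-block radius of 12 to 20 pixels:

```python
import cv2

def final_inpaint(W, mask, radius=15):
    """Fill the remaining holes of W (mask == 0 marks hole pixels).

    Placeholder only: cv2.inpaint (Telea) stands in here for the Criminisi
    exemplar-based algorithm named in the patent; the radius mirrors the
    12-20 pixel unit-block radius chosen in step 1.5.1."""
    hole = cv2.bitwise_not(mask)     # cv2.inpaint expects non-zero at hole pixels
    return cv2.inpaint(W, hole, radius, cv2.INPAINT_TELEA)
```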
The present invention fills holes by combining an optical-flow method with the Criminisi image inpainting algorithm. First, following geometric optics and the idea of DIBR, the viewpoint image of an arbitrary position is simulated using the depth map; in this process the points mapped onto the viewpoint image are classified, which makes the generated viewpoint image more accurate. Second, when filling holes, the interior of large holes is first filled with the optical-flow method: the hole edges are extracted, the depth values on the two sides of the edge are compared, the optical flow of the farther (background) side is used to estimate the optical flow of the hole, and from it the mapping block of the hole in the reference frame is computed. Finally, the remaining holes are filled with the Criminisi inpainting algorithm.
By combining the advantages of the optical-flow method and the Criminisi inpainting algorithm, the present invention not only draws on the image information of different frames of the video but also raises the running speed of the Criminisi algorithm. Compared with filling holes with the Criminisi algorithm alone, under identical parameter settings the invention saves 45.448% of the time on average and produces better filling results.
Table 1. Running-time comparison between the original Criminisi inpainting algorithm and the method of the present invention
Description of the drawings
Fig. 1 is the flow chart of the image array generation and hole-filling method based on a one-dimensional integral imaging system
Fig. 2 is the preliminarily generated viewpoint image containing holes
Fig. 3 is the viewpoint image after processing with the optical-flow method
Fig. 4 is the viewpoint image after processing with the Criminisi algorithm
Fig. 5 is the viewpoint image after processing with the Criminisi algorithm alone
Specific embodiment
The core of the invention is: when filling holes, the optical-flow information in the video is used to pre-fill the large interior holes of the image, and the Criminisi inpainting algorithm is then used to fill the remaining holes. Compared with the Criminisi inpainting algorithm alone, this method saves 45.448% of the time on average and gives better repair results, as shown in Fig. 4 and Fig. 5.
To make the purpose, technical solution and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and an example:
An image array generation and hole-filling method based on a one-dimensional integral imaging system, as shown in Fig. 1, includes the following steps:
1.1 Let the original image be I_{original-k}, the depth map be H_k, and the generated viewpoint image with holes be W_{k-r},
where k denotes the k-th frame of the video; r denotes the r-th viewpoint, with r ranging from 1 to 2*N; and N is half the number of viewpoints;
1.1.1 According to the required depth effect, preset the out-of-screen depth and into-screen depth that the human eye is to perceive. From geometric optics:
M/B = P/(P + D)
where B is the interocular distance; M is the parallax, i.e. the horizontal distance between the left-eye and right-eye images; P is the scene depth perceived by the eye; and D is the distance between the observer and the screen;
The scene depth P perceived by the eye is then converted from millimetres to pixels. After unit conversion, the maximum positive parallax, which produces the into-screen effect, is maxPM = maxM*(a/A); the maximum negative parallax after unit conversion, which produces the out-of-screen effect, is
maxNM = -abs(maxM)*(a/A)
where abs(maxM) = maxP*B/(D - maxP); a is the horizontal resolution of the screen; and A is the horizontal width of the screen;
1.1.2 Uniformly quantize the depth values over the scene depth range perceivable by the eye and convert them into parallax values with the formula:
d(i, j) = (maxNM - maxPM)/255 * H(i, j) + maxPM
where d(i, j) is the parallax value of the point at coordinate (i, j) and H(i, j) is the depth value of point (i, j);
It follows that a depth value of 0 corresponds to the positive parallax maxPM, and a depth value of 255 corresponds to the negative parallax maxNM;
1.1.3 Translate each pixel of the original video image according to its parallax value to obtain an arbitrary viewpoint image:
I_{N-x+1}(i, j - d(i, j)*x/(2N)) = I_{original-k}(i, j),  I_{N+x}(i, j + d(i, j)*x/(2N)) = I_{original-k}(i, j)
The viewpoints are numbered from left to right relative to the original image; the total number of viewpoints is 2*N, and the index denotes the x-th viewpoint image;
1.2 For each point W_{k-r}(i, j), count the number n of points of I_{original-k} that are mapped to it and record their coordinates I_{original-k}(x1, y1), I_{original-k}(x2, y2), ..., I_{original-k}(xn, yn);
When n = 0, let W_{k-r}(i, j) = 0; this is a hole point;
When n = 1, let W_{k-r}(i, j) = I_{original-k}(x, y);
When n = 2, let
When n > 2, take the candidate with the largest depth value H_{original-k}(i, j) among I_{original-k}(x1, y1), ..., I_{original-k}(xn, yn) and let W_{k-r}(i, j) = I_{original-k}(xp, yp) (1 ≤ p ≤ n);
1.3 Set the points with n = 0, i.e. the hole points, to pixel value 0 and all other points to 255 to obtain the mask image mask_{k-r}; mask_{k-r} gives a direct view of the hole distribution in W_{k-r};
1.4 Filling the holes of the preliminarily generated viewpoint image with the optical-flow method includes the following steps:
1.4.1 Extract the edges of the large interior holes in mask_{k-r}. Let the edge points of the hole in row i be (i, a) and (i, b); without loss of generality a < b. Let the reference frame be the third frame of the original video, i.e. image I_{original-3}. Apply the LK optical-flow method (a sparse optical-flow method) to W_{k-r} and I_{original-3} to obtain, for each pixel, its horizontal motion velocity ofh and vertical motion velocity ofv;
1.4.2 Compare the depth value to the left of (i, a) with the depth value to the right of (i, b). If the depth value on the left of (i, a) is smaller, that side is farther from the observer and belongs to the background of the image;
1.4.3 Estimate the optical-flow value of the hole: take the horizontal and vertical motion velocities of the 7 pixels to the left of (i, a) and average them, using the averages as the horizontal and vertical motion velocities of the hole in this row;
1.4.4 Let the displacement of the corresponding pixel be (Δx, Δy). For a < j < b, the horizontal displacement at the hole is Δx = ofh(i, j) × 10 and the vertical displacement is Δy = ofv(i, j) × 10; the mapping block of the hole in I_{original-3} is then W_{k-r}(i, j) = I_{original-3}(i + Δx, j + Δy);
1.5 Processing the image with the Criminisi image inpainting algorithm includes the following step:
1.5.1 Process the image W_{k-r} with the Criminisi inpainting algorithm. Since in the present invention the original image size is 436*1024 pixels, the unit pixel block of the Criminisi algorithm is set, according to the original image size, to a block with a radius of 12 to 20 pixels.
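Putting the sketches above together, a hypothetical end-to-end run for one frame and one viewpoint might look like this; all helper names come from the sketches above, and the file names and viewing-geometry values are placeholders, not values from the patent:

```python
import cv2

# Assumed inputs: frame k of the video, its depth map, and frame 3 as reference.
I_orig = cv2.imread("frame_k.png")
H      = cv2.imread("depth_k.png", cv2.IMREAD_GRAYSCALE)
I_ref  = cv2.imread("frame_3.png")

# Steps 1.1.1-1.1.2: parallax limits and per-pixel parallax (geometry in mm).
maxPM, maxNM = parallax_limits_px(B=65, D=3000, maxP_in=300, maxP_out=300,
                                  a=1024, A=600)
d = depth_to_parallax(H, maxPM, maxNM)

# Steps 1.1.3-1.3: forward-warp into the x-th viewpoint and build the hole mask.
W, mask = generate_viewpoint(I_orig, H, d, x=1, N=4)

# Step 1.4: pre-fill one large interior hole row (i, a)..(i, b) from the reference frame.
W, mask = prefill_row_hole(W, mask, H, I_ref, i=200, a=400, b=450)

# Step 1.5: fill the remaining holes.
W_filled = final_inpaint(W, mask)
```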

Claims (1)

1. An image array generation and hole-filling method based on a one-dimensional integral imaging system, characterized in that it comprises the following steps:
1.1 Let the original image be I_{original-k}, the depth map be H_k, and the generated viewpoint image with holes be W_{k-r},
where k denotes the k-th frame of the video; r denotes the r-th viewpoint, with r ranging from 1 to 2*N; and N is half the number of viewpoints;
1.1.1 According to the required depth effect, preset the out-of-screen depth and into-screen depth that the human eye is to perceive; from geometric optics:
M/B = P/(P + D)
where B is the interocular distance; M is the parallax, i.e. the horizontal distance between the left-eye and right-eye images; P is the scene depth perceived by the eye; and D is the distance between the observer and the screen;
the scene depth P perceived by the eye is then converted from millimetres to pixels: after unit conversion, the maximum positive parallax, which produces the into-screen effect, is maxPM = maxM*(a/A); the maximum negative parallax after unit conversion, which produces the out-of-screen effect, is
maxNM = -abs(maxM)*(a/A)
where abs(maxM) = maxP*B/(D - maxP); a is the horizontal resolution of the screen; and A is the horizontal width of the screen;
1.1.2 Uniformly quantize the depth values over the scene depth range perceivable by the eye and convert them into parallax values with the formula:
d(i, j) = (maxNM - maxPM)/255 * H(i, j) + maxPM
where d(i, j) is the parallax value of the point at coordinate (i, j) and H(i, j) is the depth value of point (i, j);
it follows that a depth value of 0 corresponds to the positive parallax maxPM, and a depth value of 255 corresponds to the negative parallax maxNM;
1.1.3 Translate each pixel of the original video image according to its parallax value to obtain an arbitrary viewpoint image:
I_{N-x+1}(i, j - d(i, j)*x/(2N)) = I_{original-k}(i, j),  I_{N+x}(i, j + d(i, j)*x/(2N)) = I_{original-k}(i, j)
the viewpoints are numbered from left to right relative to the original image; the total number of viewpoints is 2*N, and the index denotes the x-th viewpoint image;
1.2 For each point W_{k-r}(i, j), count the number n of points of I_{original-k} that are mapped to it and record their coordinates I_{original-k}(x1, y1), I_{original-k}(x2, y2), ..., I_{original-k}(xn, yn);
when n = 0, let W_{k-r}(i, j) = 0; this is a hole point;
when n = 1, let W_{k-r}(i, j) = I_{original-k}(x, y);
when n = 2, let
when n > 2, take the candidate with the largest depth value H_{original-k}(i, j) among I_{original-k}(x1, y1), ..., I_{original-k}(xn, yn) and let W_{k-r}(i, j) = I_{original-k}(xp, yp) (1 ≤ p ≤ n);
1.3 Set the points with n = 0, i.e. the hole points, to pixel value 0 and all other points to 255 to obtain the mask image mask_{k-r}; mask_{k-r} gives a direct view of the hole distribution in W_{k-r};
1.4 Filling the holes of the preliminarily generated viewpoint image with the optical-flow method includes the following steps:
1.4.1 Extract the edges of the large interior holes in mask_{k-r}; let the edge points of the hole in row i be (i, a) and (i, b), with a < b; let the reference frame be the third frame of the original video, i.e. image I_{original-3}; apply the LK optical-flow method (a sparse optical-flow method) to W_{k-r} and I_{original-3} to obtain, for each pixel, its horizontal motion velocity ofh and vertical motion velocity ofv;
1.4.2 Compare the depth value to the left of (i, a) with the depth value to the right of (i, b); if the depth value on the left of (i, a) is smaller, that side is farther from the observer and belongs to the background of the image;
1.4.3 Estimate the optical-flow value of the hole: take the horizontal and vertical motion velocities of the 7 pixels to the left of (i, a) and average them, using the averages as the horizontal and vertical motion velocities of the hole in this row;
1.4.4 Let the displacement of the corresponding pixel be (Δx, Δy); for a < j < b, the horizontal displacement at the hole is Δx = ofh(i, j) × 10 and the vertical displacement is Δy = ofv(i, j) × 10; the mapping block of the hole in I_{original-3} is then W_{k-r}(i, j) = I_{original-3}(i + Δx, j + Δy);
1.5 Processing the image with the Criminisi image inpainting algorithm includes the following step:
1.5.1 Process the image W_{k-r} with the Criminisi inpainting algorithm; since the original image size is 436*1024 pixels, the unit pixel block of the Criminisi algorithm is set, according to the original image size, to a block with a radius of 12 to 20 pixels.
CN201910448420.9A 2019-05-28 2019-05-28 Array diagram generating and filling method based on one-dimensional integrated imaging system Active CN110149508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910448420.9A CN110149508B (en) 2019-05-28 2019-05-28 Array diagram generating and filling method based on one-dimensional integrated imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910448420.9A CN110149508B (en) 2019-05-28 2019-05-28 Array diagram generating and filling method based on one-dimensional integrated imaging system

Publications (2)

Publication Number Publication Date
CN110149508A 2019-08-20
CN110149508B 2021-01-12

Family

ID=67593392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910448420.9A Active CN110149508B (en) 2019-05-28 2019-05-28 Array diagram generating and filling method based on one-dimensional integrated imaging system

Country Status (1)

Country Link
CN (1) CN110149508B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193921A (en) * 2020-01-10 2020-05-22 吉林大学 LED screen one-dimensional integrated imaging display method based on combined discrete grating
CN111198448A (en) * 2020-01-10 2020-05-26 吉林大学 One-dimensional integrated imaging display method based on special-shaped cylindrical lens grating


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101312542A (en) * 2008-07-07 2008-11-26 浙江大学 Natural three-dimensional television system
CN101610423A (en) * 2009-07-13 2009-12-23 清华大学 A kind of method and apparatus of rendering image
CN102413332A (en) * 2011-12-01 2012-04-11 武汉大学 Multi-viewpoint video coding method based on time-domain-enhanced viewpoint synthesis prediction
US20150130902A1 (en) * 2013-11-08 2015-05-14 Samsung Electronics Co., Ltd. Distance sensor and image processing system including the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吕源治等 (Lü Yuanzhi et al.): "组合成像中的立体元阵列合成与稀疏视点采集" [Synthesis of elemental image arrays and sparse viewpoint acquisition in combined imaging], 《吉林大学学报(工学版)》 [Journal of Jilin University (Engineering and Technology Edition)] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193921A (en) * 2020-01-10 2020-05-22 吉林大学 LED screen one-dimensional integrated imaging display method based on combined discrete grating
CN111198448A (en) * 2020-01-10 2020-05-26 吉林大学 One-dimensional integrated imaging display method based on special-shaped cylindrical lens grating
CN111198448B (en) * 2020-01-10 2021-07-30 吉林大学 One-dimensional integrated imaging display method based on special-shaped cylindrical lens grating

Also Published As

Publication number Publication date
CN110149508B (en) 2021-01-12

Similar Documents

Publication Publication Date Title
Cao et al. Semi-automatic 2D-to-3D conversion using disparity propagation
CN108513123B (en) Image array generation method for integrated imaging light field display
CN111325693B (en) Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image
JP2008090617A (en) Device, method and program for creating three-dimensional image
CN102905145B (en) Stereoscopic image system, image generation method, image adjustment device and method thereof
CN111047709B (en) Binocular vision naked eye 3D image generation method
CN108573521B (en) Real-time interactive naked eye 3D display method based on CUDA parallel computing framework
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN102447925A (en) Method and device for synthesizing virtual viewpoint image
US8577202B2 (en) Method for processing a video data set
CN104635337B (en) The honeycomb fashion lens arra method for designing of stereo-picture display resolution can be improved
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
WO2012140397A2 (en) Three-dimensional display system
CN109358430A (en) A kind of real-time three-dimensional display methods based on two-dimentional LED fan screen
CN115482323A (en) Stereoscopic video parallax control and editing method based on nerve radiation field
CN102026012B (en) Generation method and device of depth map through three-dimensional conversion to planar video
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
CN103945206B (en) A kind of stereo-picture synthesis system compared based on similar frame
CN105791798B (en) A kind of 4K based on GPU surpasses the real-time method for transformation of multiple views 3D videos and device
CN102780900B (en) Image display method of multi-person multi-view stereoscopic display
CN115841539A (en) Three-dimensional light field generation method and device based on visual shell
Seitner et al. Trifocal system for high-quality inter-camera mapping and virtual view synthesis
De Sorbier et al. Augmented reality for 3D TV using depth camera input
CN108769644B (en) Binocular animation stylized rendering method based on deep learning
JP6768431B2 (en) Image generator and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant