CN105611271A - Real-time stereo image generating system - Google Patents
- Publication number
- CN105611271A CN105611271A CN201510967607.1A CN201510967607A CN105611271A CN 105611271 A CN105611271 A CN 105611271A CN 201510967607 A CN201510967607 A CN 201510967607A CN 105611271 A CN105611271 A CN 105611271A
- Authority
- CN
- China
- Prior art keywords
- pixel
- unit
- image
- module
- visual point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
Abstract
The invention discloses a real-time stereoscopic image generation system in the technical field of image processing. The system comprises a multi-viewpoint generation module, a cache module, and a stereoscopic image synthesis module. The multi-viewpoint generation module consists of N parallel depth-image-based rendering processing units that generate N virtual viewpoint images from an original image and a disparity map. The generated virtual viewpoint image data are stored in the N dual-port double-buffer units of the cache module. The stereoscopic image synthesis module reads the virtual viewpoint image data from the cache module and synthesizes the N virtual viewpoint images into a single stereoscopic image for display on a lenticular autostereoscopic display. By exploiting the parallel and pipelined processing capabilities of an FPGA, the invention increases the speed of stereoscopic image generation and improves the real-time performance of the system.
Description
Technical Field
The invention belongs to the technical field of image processing and, more specifically, relates to a real-time stereoscopic image generation system.
Background Art
3D video applications have now entered everyday life and can be found in the entertainment, military, and medical fields, among others. To watch conventional stereoscopic video, viewers must wear auxiliary equipment such as red-blue or polarized glasses, which limits the popularization of stereoscopic display technology. Autostereoscopic display technology, commonly called naked-eye 3D, lets a viewer perceive a stereoscopic effect by looking directly at the display screen, without any auxiliary equipment. Because it delivers a stereoscopic impression while remaining convenient to watch, it has broad development prospects and is a current research hotspot.
The principle of lenticular autostereoscopic display technology is to place a lenticular grating in front of the display and show multiple images with parallax on the display in an interleaved arrangement. When a viewer stands at a given viewing position, the light emitted by the image pixels on the display is refracted by the lenticular grating into the viewer's eyes, so that each eye observes a different image. Lenticular gratings are widely used in autostereoscopic display because of their good stereoscopic effect, good optical performance, and low cost.
An autostereoscopic display system mainly comprises the acquisition, encoding, transmission, and display of video information. Existing autostereoscopic displays require information from multiple viewpoints at the display end, so that within the viewing range the viewer's left and right eyes receive two images with parallax and perceive depth after the brain fuses them. Where this multi-viewpoint information comes from is a problem to be solved: if multiple cameras are used to capture images, the amount of data grows sharply, a sophisticated coding scheme must be designed, and the demands on transmission bandwidth and hardware storage become large, so this approach is generally impractical. In addition, a stereoscopic display system must meet strict real-time requirements so that the viewer sees continuous, smooth video.
Summary of the Invention
To address the need identified in the background above, namely generating a 3D image from a 2D image in real time, the present invention provides a real-time stereoscopic image generation system comprising a multi-viewpoint generation module, a cache module, and a stereoscopic image synthesis module, wherein:
The multi-viewpoint generation module comprises N parallel depth-image-based rendering (DIBR) processing units, where N = x² and x is a positive integer greater than 1. Each DIBR processing unit generates a virtual viewpoint image from the original image and the disparity map, sends it to the cache module, and at the same time sends a signal indicating that rendering of the virtual viewpoint image is complete to the stereoscopic image synthesis module.
The cache module comprises N parallel dual-port double-buffer units that respectively receive and store the N virtual viewpoint images generated by the multi-viewpoint generation module. Each dual-port double-buffer unit comprises two dual-port RAMs whose depth equals the width of the virtual viewpoint image and whose bit width matches the color depth of the original image; the two dual-port RAMs form a double-buffer structure that implements a ping-pong operation, enabling pipelined processing of the virtual viewpoint images.
The stereoscopic image synthesis module comprises a pixel counter and a stereoscopic image generation unit. After the pixel counter receives the signals indicating that rendering of the virtual viewpoint images is complete, it feeds its count to the cache module as an address; the stereoscopic image generation unit reads from the cache module the virtual viewpoint image data stored at that address and synthesizes the N virtual viewpoint images into one stereoscopic image for output.
In general, compared with the prior art, the technical solution conceived by the present invention has the following beneficial effects:
DIBR is used to render multiple viewpoints from the original image and the disparity map, providing multi-viewpoint information for autostereoscopic display and reducing the transmission bandwidth, storage space, and cost of the system;
The parallel and pipelined processing capabilities of the FPGA are fully exploited, and pixel processing is accelerated in hardware, greatly increasing processing speed and improving the real-time performance of the system;
The system has good applicability: the way it generates virtual viewpoints does not depend on camera parameters, so it is widely applicable.
Experimental results show that, on a Xilinx Spartan-6 family XC6SLX150 FPGA, the invention runs eight times faster than an equivalent software implementation on a PC with an Intel Core i5 processor.
Brief Description of the Drawings
Fig. 1 is a hardware block diagram of the real-time stereoscopic image generation system of the present invention;
Fig. 2 is a structural block diagram of the DIBR processing unit of the present invention;
Fig. 3 is a functional block diagram of the dual-port double-buffer unit of the present invention;
Fig. 4 is a schematic diagram of the connection between the cache module and the stereoscopic image synthesis module of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below can be combined with one another as long as they do not conflict.
The embodiment of the present invention is implemented in VHDL on a Xilinx Spartan-6 family XC6SLX150 FPGA. It fully exploits FPGA parallelism and, combining pipelining with streaming-media principles, applies it to disparity-based multi-viewpoint generation and stereoscopic image synthesis.
Fig. 1 shows the hardware block diagram of the real-time stereoscopic image generation system of the present invention. As shown in Fig. 1, the system comprises a multi-viewpoint generation module, a cache module, and a stereoscopic image synthesis module.
In this embodiment, the multi-viewpoint generation module consists of nine parallel depth-image-based rendering (DIBR) processing units, DIBR_V1 to DIBR_V9, which generate nine virtual viewpoint images with a resolution of 640×360 from the original image and the disparity map, and send the signals hf_over1 to hf_over9, indicating that the hole-filled virtual viewpoint images are complete (DIBR_V1 sends hf_over1, DIBR_V2 sends hf_over2, and so on), to the pixel counter of the stereoscopic image synthesis module. The original image (640×360, 24-bit color depth, R8G8B8) is either of the left and right views captured by a binocular camera; the disparity map (640×360, 64 gray levels) is computed from the original images by stereo matching and has the same size as the original image.
The cache module comprises nine parallel dual-port double-buffer units 1 to 9. Each row of virtual viewpoint image data generated by DIBR processing units DIBR_V1 to DIBR_V9 is stored in the corresponding dual-port double-buffer unit 1 to 9. The cache module forwards the received virtual viewpoint image data to the stereoscopic image synthesis module.
The stereoscopic image synthesis module comprises a pixel counter and stereoscopic image generation units. The pixel counter receives the signals hf_over1 to hf_over9; once all nine have arrived, it increments by 1 on every clock cycle of the virtual viewpoint image timing signal and wraps to zero at its maximum, ranging from 0 to 640×360. The stereoscopic image generation unit reads virtual viewpoint image data from the cache module and synthesizes the nine 24-bit, 640×360 virtual viewpoint images into one 1920×1080 stereoscopic image (24-bit color depth, R8G8B8). In this embodiment, for virtual viewpoint images of size width×height, the composite image has size (3×width)×(3×height).
Fig. 2 shows the structural block diagram of the DIBR processing unit of the present invention. As shown in Fig. 2, each DIBR processing unit comprises a viewpoint mapping module, a hole-mark cache module, a virtual viewpoint cache module, and a hole-filling module.
The viewpoint mapping module comprises a coordinate calculation unit, a zbuf cache unit, and a disparity lookup unit. Based on the gray value of a pixel (u0, v0) in the disparity map, the coordinate calculation unit computes the pixel's corresponding coordinates (u0 + dis×n, v0) in the virtual viewpoint image according to the equivalent formula (1), completing the mapping from the reference viewpoint to the virtual viewpoint row by row. When the mapping of one row is complete, it sends a row_over signal to the hole-filling module to notify it that a new row of data can be hole-filled.
The zbuf cache unit consists of two single-port RAMs with a capacity of 640 entries and a bit width of 6 bits (the disparity map has 64 gray levels, so 6 bits represent its gray depth), used to store z-buffer results. After the coordinate calculation unit computes the target coordinate for a pixel of the disparity map, it first reads the zbuf cache entry addressed by that coordinate and checks whether a disparity has already been recorded there. If so, it compares the stored disparity with the disparity used in the current calculation, takes the pixel with the smaller disparity as the source of the corresponding pixel in the virtual viewpoint image, and records the disparity used in the zbuf entry addressed by that coordinate. This z-buffer procedure resolves occlusions during the pixel mapping calculation.
The disparity lookup unit consists of a ROM with a capacity of 64 entries (matching the 64 gray levels of the disparity map) that holds a disparity lookup table mapping disparity values to offset values. The disparity value is the gray value of each pixel in the disparity map, ranging from 0 to 63; the offset value is an integer intermediate result of the virtual viewpoint generation calculation, ranging from -12 to 12. The range 0 to 63 maps linearly onto -12 to 12; for example, a disparity value of 31 corresponds to an offset of 0. Since the offset is proportional to depth, the embodiment stores offsets rather than depth values in the lookup table, which speeds up the calculation.
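As a software illustration, this linear quantization can be modeled as follows. This is a minimal sketch; the exact rounding used in the patent's ROM is an assumption beyond the stated endpoints and the example that disparity 31 maps to offset 0.

```python
# Build a 64-entry offset lookup table (the "DEEP" ROM): disparity
# levels 0..63 map linearly onto integer offsets -12..12.
def build_offset_lut(levels=64, max_offset=12):
    span = 2 * max_offset  # total offset range (24)
    return [round(d * span / (levels - 1)) - max_offset for d in range(levels)]

lut = build_offset_lut()
# Endpoints and the midpoint example from the text:
# lut[0] == -12, lut[31] == 0, lut[63] == 12
```

Using a small ROM of precomputed offsets instead of computing depth on the fly matches the hardware's goal of one lookup per pixel per clock.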
The coordinate calculation units of the nine DIBR processing units compute the original viewpoint image, four virtual viewpoint images to its left, and four to its right. From the DIBR algorithm, the simplified equivalent formula (1) is obtained:
VIR_n(u0 + DEEP(DIS(u0, v0)) × n, v0) = ORI(u0, v0)    (1)
where VIR_n denotes the pixel matrix of the n-th virtual viewpoint image (n = -4, -3, …, 4) and ORI(u0, v0) the pixel matrix of the original image. The original image has n = 0; the m virtual viewpoint images adjacent to its left have n = -1 to -m and the m adjacent to its right have n = 1 to m, where m is an integer, m = (N-1)/2 when N is odd and m = N/2 when N is even. In this embodiment, the first virtual viewpoint image to the left of the original has n = -1 and the first to the right has n = 1; the second to the left has n = -2 and the second to the right has n = 2; and so on, so that among the nine virtual viewpoint images the leftmost has n = -4 and the rightmost n = 4. u0 and v0 are the horizontal and vertical coordinates of a pixel in the original image; DEEP denotes the disparity lookup table and DIS the disparity map, so DIS(u0, v0) is the disparity value at (u0, v0) and DEEP(DIS(u0, v0)) the depth value at (u0, v0). For example, for the leftmost virtual viewpoint image:
VIR_-4(u0 + DEEP(DIS(u0, v0)) × (-4), v0) = ORI(u0, v0)
Some entries of the virtual viewpoint image's pixel matrix receive no value at all (holes), while others receive multiple values (occlusions). After the calculation, the nine coordinate calculation units have produced nine virtual viewpoint images with holes at different viewpoint positions, and the hole positions have been marked in the hole-mark cache module.
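The row-wise mapping of equation (1), together with the z-buffer occlusion test and the hole marking described above, can be sketched in software as follows. Function and variable names are hypothetical; the hardware operates on RAM-backed line buffers rather than Python lists.

```python
def warp_row(orig_row, dis_row, n, offset_lut, width=640):
    """Forward-map one row of the reference view to virtual view n,
    per equation (1): VIR_n(u0 + DEEP(DIS(u0, v0)) * n, v0) = ORI(u0, v0)."""
    virt = [None] * width   # virtual-view pixel line (None marks a hole)
    zbuf = [None] * width   # disparity last written to each target column
    for u0, (pix, d) in enumerate(zip(orig_row, dis_row)):
        u = u0 + offset_lut[d] * n  # target column in the virtual view
        if not (0 <= u < width):
            continue
        # Occlusion test: as described in the text, when two source
        # pixels map to the same column, the one with the smaller
        # disparity value wins.
        if zbuf[u] is None or d < zbuf[u]:
            virt[u] = pix
            zbuf[u] = d
    holes = [1 if p is None else 0 for p in virt]  # 1 marks a hole
    return virt, holes
```

The returned `holes` list plays the role of the 1-bit hole marks stored in RAM11/RAM12.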
The hole-mark cache module consists of two dual-port RAMs with a depth of 640 (the number of pixels in one row of the virtual viewpoint image) and a bit width of 1 bit, namely RAM11 and RAM12, used to store the hole marks. In this embodiment, hole-point coordinates are marked 1 and non-hole points are marked 0.
The virtual viewpoint cache module consists of two dual-port RAMs with a depth of 640 and a bit width of 24 bits (matching the color depth of the original image), namely RAM21 and RAM22, used to store the mapped virtual viewpoint image data. The two RAMs form a double-buffer structure that implements a ping-pong operation, enabling pipelined data access.
The hole-filling module comprises a hole search unit and a hole repair unit. By reading RAM11 and RAM12 of the hole-mark cache module, the hole search unit outputs to the hole repair unit the coordinate values of the non-hole pixels on either side of each hole, where signal Laddr is the coordinate of the non-hole pixel on the left and signal Raddr the coordinate of the non-hole pixel on the right. The hole repair unit fills the holes according to these left and right non-hole pixel coordinates and sends the signals hf_over1 to hf_over9, indicating that hole filling of the virtual viewpoint image is complete (the hole-filling module of DIBR_V1 sends hf_over1, that of DIBR_V2 sends hf_over2, and so on), to the pixel counter of the stereoscopic image synthesis module.
In this embodiment, the viewpoint mapping module generates virtual viewpoint image data row by row. For each generated row, it stores the pixel values of correctly mapped pixels in the virtual viewpoint cache module and the hole marks in the hole-mark cache module. The hole search unit reads the marks in the hole-mark cache module and outputs the coordinates of the non-hole pixels to the left and right of each hole region to the hole repair unit, which fills the holes according to those coordinates. The hole-filled data are then output to the cache module for storage and used to form the virtual viewpoint image.
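A behavioral sketch of this per-row hole filling might look like the following. The patent does not spell out the exact fill rule, so averaging the left and right non-hole neighbors (and copying the single neighbor at row edges) is an assumption.

```python
def fill_holes_row(virt, holes):
    """Fill hole pixels in one warped row using the nearest non-hole
    neighbors on the left (Laddr) and right (Raddr)."""
    width = len(virt)
    out = list(virt)
    for u in range(width):
        if not holes[u]:
            continue
        # Nearest non-hole neighbor on each side (None if none exists).
        left = next((virt[i] for i in range(u - 1, -1, -1) if not holes[i]), None)
        right = next((virt[i] for i in range(u + 1, width) if not holes[i]), None)
        if left is not None and right is not None:
            out[u] = (left + right) // 2   # assumed blend rule
        else:
            out[u] = left if left is not None else right
    return out
```

In hardware the Laddr/Raddr coordinates arrive from the hole search unit, so no per-pixel scan is needed at fill time.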
Fig. 3 shows the functional block diagram of the dual-port double-buffer unit of the present invention. As shown in Fig. 3, the storage part consists of two 24-bit dual-port RAMs, RAM0 and RAM1, whose four data ports are IN0, OUT1, IN1, and OUT0; each RAM has 640 storage locations. The ce signal is an enable signal: data output by the DIBR processing unit passes through the selector muxi, controlled by ce, into port IN0 of RAM0 or port IN1 of RAM1, while the selector muxo, also controlled by ce, outputs the data of whichever RAM is not currently receiving data. That is, while data are written into one RAM, the data already stored in the other RAM are read out, and the roles alternate to form a ping-pong operation. In_addr is the address line on the data-input side, connected to the input-side address lines of RAM0 and RAM1; Out_addr is the address line on the data-output side, connected to their output-side address lines.
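The ping-pong behavior of the double-buffer unit can be modeled as follows. This is a behavioral sketch; the signal names follow Fig. 3, but the class itself is hypothetical.

```python
class PingPongBuffer:
    """Behavioral model of the dual-port double-buffer unit: while one
    line RAM is written (via muxi), the other is read out (via muxo).
    Toggling ce swaps the roles, giving pipelined line access."""
    def __init__(self, depth=640):
        self.ram = [[0] * depth, [0] * depth]
        self.ce = 0  # 0: write RAM0 / read RAM1; 1: the reverse

    def write(self, in_addr, data):   # DIBR-unit side (In_addr)
        self.ram[self.ce][in_addr] = data

    def read(self, out_addr):         # synthesis-module side (Out_addr)
        return self.ram[1 - self.ce][out_addr]

    def toggle(self):                 # end of line: swap buffers
        self.ce ^= 1

buf = PingPongBuffer(depth=4)
buf.write(0, 0xAABBCC)  # line being written into RAM0...
buf.toggle()            # ...becomes readable once ce flips
```

The swap lets one row be consumed while the next is produced, which is the pipelining the text attributes to the double-buffer structure.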
Fig. 4 shows the connection between the cache module and the stereoscopic image synthesis module of the present invention. The stereoscopic image synthesis module comprises one pixel counter and several stereoscopic image generation units. In this embodiment there are three stereoscopic image generation units: more than three would yield a better stereoscopic effect but make the computation too complex and hurt real-time performance, while fewer than three would compute faster but weaken the stereoscopic effect. Each stereoscopic image generation unit comprises a synthesis controller, several data switches, a multiplexer, and an output buffer.
In this embodiment, the pixel counter is clocked by the system clock or by the clock in the video signal. After the pixel counter has received all nine signals hf_over1 to hf_over9, its count piex_num increases by 1 every clock cycle and is sent to the synthesis controller; piex_num denotes the position index of the sub-pixel currently being composited. The synthesis controller uses piex_num as an address to look up the synthesis table it contains.
The synthesis table consists of a ROM with a depth of 640×360 (matching the size of the original image and the disparity map) and a width of 9 bits (matching the number of virtual viewpoint images). Its contents are determined by a sub-pixel mapping formula in which x denotes the sub-pixel abscissa, y the sub-pixel ordinate, a the slant angle of the lenticular grating, A the number of sub-pixels spanned by one lenticule, and N the number of virtual viewpoint images. Counting an image from left to right and top to bottom, the coordinates of each sub-pixel form a pixel index. When a pixel index is input as an address, the synthesis table is looked up and outputs a view index, which indicates from which virtual viewpoint image the sub-pixel at that position in the stereoscopic image is taken.
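The mapping formula itself appears in the patent only as an image and is not reproduced in the text. As an illustration, a widely used slanted-lenticular sub-pixel-to-view assignment in the spirit of van Berkel's formula, using the stated parameters, is sketched below; it is an assumption, not the patent's exact expression.

```python
import math

def view_index(x, y, a_deg, A, N):
    """Hypothetical slanted-lenticular sub-pixel-to-view assignment.
    x, y: sub-pixel coordinates; a_deg: lenticular slant angle in
    degrees; A: sub-pixels spanned per lenticule; N: number of views."""
    # Phase of this sub-pixel within its lenticule, shifted by the slant.
    phase = (x + y * math.tan(math.radians(a_deg))) % A
    # Quantize the phase into one of N views, indexed 1..N.
    return int(phase * N / A) + 1
```

A table built by evaluating such a function for every sub-pixel position would have exactly the 640×360-entry, 9-value shape described above.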
The view index (ranging from 1 to 9 in this embodiment) is fed to the multiplexer over a four-bit connection; according to the view index, the multiplexer outputs nine enable signals sel1 to sel9, each of which controls one data switch.
At most one of data switches 1 to 9 is enabled at any moment: the enabled switch passes its input data to the output line, while the outputs of the disabled switches are in a high-impedance state. The synthesis controller places the pixel counter's count piex_num on the address line Out_addr on the output side of each dual-port double-buffer unit in the cache module, so the data stored at address piex_num in each unit are presented to the data switches. The data switch selected by the multiplexer's enable signal is opened and outputs its virtual viewpoint image data to the output buffer.
The three stereoscopic image generating units interleave and synthesize three rows of data concurrently: while unit 0 interleaves the pixel data of row y, unit 1 processes row y+1 and unit 2 processes row y+2. The synthesis controller's pixel clock runs at three times the pixel counter's clock frequency, so within the duration of each count piex_num, three pixels are interleaved and synthesized. When the last group of three rows has been interleaved, the full 1920×1080 pixel data is obtained.
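The row-to-unit assignment described above amounts to a simple round-robin over three units, which can be sketched as (the function name is an assumption, not from the patent):

```python
def assign_rows(height: int, n_units: int = 3) -> dict[int, int]:
    """Map each output row y to the generating unit that interleaves it:
    unit 0 takes row y, unit 1 row y+1, unit 2 row y+2, repeating."""
    return {y: y % n_units for y in range(height)}

rows = assign_rows(1080)
# Rows 0, 1, 2 go to units 0, 1, 2; row 3 wraps back to unit 0.
assert (rows[0], rows[1], rows[2], rows[3]) == (0, 1, 2, 0)
# With the pixel clock at 3x the counter clock, each piex_num count
# covers one pixel from each of the three concurrent units.
```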
The output buffer is a memory with a depth of 1920×1080 (the size of the stereoscopic image) and a width of 24 bits (the color depth of the stereoscopic image, matching the original image); it receives virtual viewpoint image data from data switches 1–9. Once the buffer is full, its contents form the data stream of the synthesized stereoscopic image, and arranging and packaging this stream in time order yields a real-time stereoscopic display.
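As a minimal illustration of the output buffer's geometry, the sketch below packs 8-bit RGB channels into the 24-bit words described above and addresses the buffer in raster order; the helper names are hypothetical:

```python
W_OUT, H_OUT = 1920, 1080
# Depth 1920*1080 entries, each entry one 24-bit pixel.
output_buffer = [0] * (W_OUT * H_OUT)

def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack one 24-bit pixel (8 bits per channel)."""
    return (r << 16) | (g << 8) | b

def write_pixel(x: int, y: int, rgb: int) -> None:
    """Store a pixel at its raster-order address, as the data switches
    would write into the output buffer."""
    output_buffer[y * W_OUT + x] = rgb

write_pixel(0, 0, pack_rgb(255, 0, 0))
assert output_buffer[0] == 0xFF0000
```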
Those skilled in the art will readily understand that the above are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510967607.1A CN105611271A (en) | 2015-12-18 | 2015-12-18 | Real-time stereo image generating system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510967607.1A CN105611271A (en) | 2015-12-18 | 2015-12-18 | Real-time stereo image generating system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105611271A true CN105611271A (en) | 2016-05-25 |
Family
ID=55990773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510967607.1A Pending CN105611271A (en) | 2015-12-18 | 2015-12-18 | Real-time stereo image generating system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105611271A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105872530A (en) * | 2016-05-31 | 2016-08-17 | 上海易维视科技股份有限公司 | Multi-view three-dimensional image synthesis method |
CN105959676A (en) * | 2016-05-31 | 2016-09-21 | 上海易维视科技股份有限公司 | Naked-eye 3D display system capable of carrying out lateral and vertical display |
CN106060522A (en) * | 2016-06-29 | 2016-10-26 | 努比亚技术有限公司 | Video image processing device and method |
CN106228530A (en) * | 2016-06-12 | 2016-12-14 | 深圳超多维光电子有限公司 | A kind of stereography method, device and stereophotography equipment |
CN106454320A (en) * | 2016-11-01 | 2017-02-22 | 珠海明医医疗科技有限公司 | Method and equipment for generating three-dimensional (3D) image at low delay |
CN108986062A (en) * | 2018-07-23 | 2018-12-11 | Oppo(重庆)智能科技有限公司 | Image processing method and device, electronic device, storage medium and computer equipment |
CN109739651A (en) * | 2019-01-08 | 2019-05-10 | 中国科学技术大学 | A Stereo Matching Hardware Architecture with Low Resource Consumption |
CN110033426A (en) * | 2018-01-12 | 2019-07-19 | 杭州海康威视数字技术股份有限公司 | A kind of device for being handled disparity estimation image |
CN111325693A (en) * | 2020-02-24 | 2020-06-23 | 西安交通大学 | Large-scale panoramic viewpoint synthesis method based on single-viewpoint RGB-D image |
CN118573935A (en) * | 2024-06-28 | 2024-08-30 | 深圳市欧冶半导体有限公司 | Video processing method, device and video processing chip |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101729919A (en) * | 2009-10-30 | 2010-06-09 | 无锡景象数字技术有限公司 | System for full-automatically converting planar video into stereoscopic video based on FPGA |
CN102316354A (en) * | 2011-09-22 | 2012-01-11 | 冠捷显示科技(厦门)有限公司 | Parallelly processable multi-view image synthesis method in imaging technology |
CN102572482A (en) * | 2012-01-06 | 2012-07-11 | 浙江大学 | 3D (three-dimensional) reconstruction method for stereo/multi-view videos based on FPGA (field programmable gata array) |
CN102625127A (en) * | 2012-03-24 | 2012-08-01 | 山东大学 | An Optimization Method Suitable for 3D TV Virtual Viewpoint Generation |
CN102930593A (en) * | 2012-09-28 | 2013-02-13 | 上海大学 | Real-time rendering method based on GPU (Graphics Processing Unit) in binocular system |
CN104079913A (en) * | 2014-06-24 | 2014-10-01 | 重庆卓美华视光电有限公司 | Sub-pixel arrangement method and device for compatibility of raster stereoscopic displayer with 2D and 3D display modes |
CN104506872A (en) * | 2014-11-26 | 2015-04-08 | 深圳凯澳斯科技有限公司 | Method and device for converting planar video into stereoscopic video |
CN104796685A (en) * | 2015-03-31 | 2015-07-22 | 王子强 | Universal free three-dimensional display system |
CN104822059A (en) * | 2015-04-23 | 2015-08-05 | 东南大学 | Virtual viewpoint synthesis method based on GPU acceleration |
CN104980733A (en) * | 2015-06-18 | 2015-10-14 | 中央民族大学 | Glasses-free 3D display crosstalk test method and test image thereof |
CN105049826A (en) * | 2015-07-23 | 2015-11-11 | 南京大学 | FPGA-based real-time stereoscopic video fusion conversion method |
- 2015-12-18: Application CN201510967607.1A filed; publication CN105611271A (en); status: active, Pending
Non-Patent Citations (1)
Title |
---|
郭浩: "基于FPGA的实时2D转3D系统研究", 《中国优秀硕士学位论文全文数据库》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105611271A (en) | Real-time stereo image generating system | |
KR101629479B1 (en) | High density multi-view display system and method based on the active sub-pixel rendering | |
US8503764B2 (en) | Method for generating images of multi-views | |
CN102647610B (en) | Integrated imaging directivity display method based on pixel extraction | |
WO2012176431A1 (en) | Multi-viewpoint image generation device and multi-viewpoint image generation method | |
WO2020199888A1 (en) | Multi-view naked-eye stereoscopic display, display system, and display method | |
WO2015161541A1 (en) | Parallel synchronous scaling engine and method for multi-view point naked eye 3d display | |
CN101547376A (en) | Multi parallax image generation apparatus and method | |
CN103813153A (en) | Weighted sum based naked eye three-dimensional (3D) multi-view image synthesis method | |
KR100913173B1 (en) | 3D graphic processing device and 3D image display device using the same | |
CN102572482A (en) | 3D (three-dimensional) reconstruction method for stereo/multi-view videos based on FPGA (field programmable gata array) | |
KR20120063984A (en) | Multi-viewpoint image generating method and apparatus thereof | |
CN108769664B (en) | Naked eye 3D display method, device, equipment and medium based on human eye tracking | |
Zinger et al. | View interpolation for medical images on autostereoscopic displays | |
CN105611270B (en) | A kind of binocular vision auto-stereo display system | |
CN102404592A (en) | Image processing device and method, and stereoscopic image display device | |
CN103945205A (en) | Video processing device and method compatible with two-dimensional and multi-view naked-eye three-dimensional displaying | |
CN102621702A (en) | Method and system for naked eye three dimensional (3D) image generation during unconventional arrangement of liquid crystal display pixels | |
WO2012140397A2 (en) | Three-dimensional display system | |
WO2012172766A1 (en) | Image processing device and method thereof, and program | |
CN112929644A (en) | Multi-view naked eye 3D display screen and multi-view naked eye 3D display equipment | |
CN107483912A (en) | A multi-viewpoint image fusion method based on floating-point lenticular lens grating | |
Fan et al. | Three-dimensional auto-stereoscopic image recording, mapping and synthesis system for multiview 3D display | |
KR100764382B1 (en) | Image Mapping Apparatus and Method in Computer-generated Integrated Image System | |
CN101626517B (en) | Real-time Stereoscopic Image Synthesis Method of Parallax Image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160525 |