WO2018176927A1 - Binocular rendering method and system for virtual active parallax computation compensation - Google Patents

Binocular rendering method and system for virtual active parallax computation compensation

Info

Publication number
WO2018176927A1
Authority
WO
WIPO (PCT)
Prior art keywords
compensation
parallax
image
rendering
active
Prior art date
Application number
PCT/CN2017/117130
Other languages
French (fr)
Chinese (zh)
Inventor
赵凤萍
邓金富
Original Assignee
上海讯陌通讯技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海讯陌通讯技术有限公司
Publication of WO2018176927A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/128: Adjusting depth or disparity


Abstract

The present invention provides a binocular rendering method and system for virtual active parallax computation compensation. The method comprises: active parallax computation: performing binocular rendering adaptation on an initial user to obtain a dedicated adaptive parallax synchronization matrix corresponding to the user; image parallax compensation and cropping: using the dedicated adaptive parallax synchronization matrix of the user as a cropping factor for virtual active parallax compensation, cropping a complete image into left and right eye groups; and binocular rendering: pre-rendering dichoptic images of the same monocular camera on a waiting overlay canvas of a next scene, and rendering and projecting, according to a fixed time sequence period, a composite image of the prepared overlay canvas on a target display hardware layer in one pass. In the present invention, a single camera is used as basic image acquisition hardware, and ghosting causing dizziness is reduced since, after software-based parallax computation compensation, the effect approaches one in which a wearer directly observes natural objects with their own eyes. Optimal adjustment of parallax is actively achieved, and a cropped single view source is reversely compensated, achieving a more immersive visual effect.

Description

Binocular rendering method and system for virtual active parallax computation compensation
Technical Field
The present invention relates to the field of virtual reality image processing, and in particular to a binocular rendering method and system with virtual active parallax computation compensation.
Background Art
A person's two eyes are separated by a certain distance, so when a three-dimensional object is observed, the two eyes, being horizontally offset at two different positions, see slightly different images of the object, and a disparity exists between the two images. Because of this disparity, the human brain perceives a stereoscopic effect. How to compensate the rendering with a virtual active parallax method, so as to enhance the perceived realism of virtual reality, is the focus of the present invention.
The imaging that virtual reality relies on is based on simulating the two different, disparate images that the two eyes see in a natural environment, yet monocular capture devices remain the mainstream (smartphones, tablets, notebooks, head-mounted devices, cameras, and so on); binocular cameras are not suited to wide adoption because of their specialized use scenes and physical hardware limitations. How to obtain a simulated binocular framing effect with a monocular camera and compensate the rendering through virtual active parallax is therefore both the difficulty and the point of differentiation.
At present, the prior art still has the following drawbacks when dealing with the above problems:
1) When a single image is simply split for the two eyes by a screen splitter, the differing images that the two eyes would actually see cannot be reproduced stably and faithfully according to the viewer's actual physiological parallax (which depends on the worn device, the interpupillary distance, and so on);
2) An image captured by a single camera ultimately comes from one observer origin, whereas the two images obtained from the two observer origins of a dual camera introduce parallax ghosting of the target object, and removing that ghosting depends on external equipment;
3) After binocular capture, the synchronization between the two image streams directly determines whether the user experiences vertigo, and differences in color, saturation, and exposure between the two streams also degrade the virtual reality experience;
4) The view content of a traditional binocular system has fixed, immutable parallax.
Based on the above limitations and defects, the present invention proposes a binocular rendering method with virtual active parallax computation compensation for monocular imaging devices. The method strives to bring the parallax fit of the subject close to the parallax introduced when the human eyes observe nature directly, images the virtual active difference portions for the left and right eyes separately, and, after the rendering pre-processing is completed, renders both eyes in a single pass to guarantee synchronization and obtain an enhanced virtual reality experience.
A search found application No. 201610666409.6, entitled "A virtual reality mobile-terminal dynamic time-frame compensation rendering system and method". In that method, application frame rendering generates an application frame buffer sequence; the latest or most recent application frame is extracted from the sequence and given a secondary rendering to obtain a time frame; the time frame is sent to a shared buffer, and the screen reads the time-frame rendering result and refreshes under the timing control of a vertical synchronization management module. Through the shared-buffer design, the GPU rendering result goes directly to the screen refresh buffer, reducing the latency of multi-level cache exchange. Through vertical synchronization time management, the GPU rendering time is controlled and conflicts between GPU rendering writes and screen refresh reads are avoided, so that the picture can be displayed normally at low latency without tearing.
The method in that document solves the problem of split-screen images becoming unsynchronized because of split-screen rendering when the number of cached frames is large, which is different from the parallax compensation and the binocular rendering close to a real on-the-spot experience addressed by the present invention; in addition, the present invention performs pre-processing before the final rendering to the screen, achieving the same effect as that comparative invention.
Application No. 201511001007.6, entitled "A method and apparatus for adjusting parallax in virtual reality", mainly comprises: acquiring a parallax range preset by a virtual reality model, the parallax range being bounded by an upper parallax limit value and a lower parallax limit value; obtaining a stereoscopic source and the disparity value corresponding to the stereoscopic source, the stereoscopic source being a source displayed in a virtual reality scene; and adjusting the disparity value of the stereoscopic source according to the parallax range so that it does not exceed the parallax range. Compared with the prior art, that invention improves the display effect when a new stereoscopic scene is presented within a stereoscopic scene.
The method in that document mainly focuses on improving the display effect of presenting a new stereoscopic scene within a stereoscopic scene, which is different from the parallax compensation and the binocular rendering close to a real on-the-spot experience addressed by the present invention.
Application No. 201410302221.4, entitled "Binocular three-dimensional graphics rendering method and related system", includes a projection transformation step comprising: adding a middle plane between the near plane and the far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the middle plane. Through the added middle plane, primitives between the near plane and the middle plane acquire an out-of-screen stereoscopic effect, and primitives between the middle plane and the far plane acquire an into-the-screen stereoscopic effect; thus "out-of-screen" and "into-screen" effects can be rendered with the existing rendering pipeline of a 3D display device without special hardware.
The method in that document focuses on using the hardware characteristics of a 3D display device to render three-dimensional graphics, which is not the same problem as that solved by the present invention, whose parallax-compensated binocular rendering method for virtual reality devices is based entirely on software and does not depend on a 3D display device.
Summary of the Invention
In view of the deficiencies of the prior art, an object of the present invention is to provide a binocular rendering method and system with virtual active parallax computation compensation.
The binocular rendering method with virtual active parallax computation compensation provided by the present invention comprises the following steps:
Active parallax computation step: performing binocular rendering adaptation for an initial user to obtain an adaptive parallax synchronization matrix dedicated to the corresponding user;
Image parallax compensation and cropping step: using the user's dedicated adaptive parallax synchronization matrix as the cropping factor of the virtual active parallax compensation, cropping one complete image into a left and right eye combination, each displayed as a single channel in the left-eye and right-eye framing windows respectively;
Binocular rendering step: pre-rendering the split views derived from the same monocular camera onto the waiting overlay canvas of the next field, and, at each period of a fixed timing, rendering and projecting the prepared composite overlay-canvas image onto the display hardware layer of the target virtual reality headset in a single pass.
Preferably, the active parallax computation step comprises:
Step A1: selecting several original images of different resolutions from an image library, performing binocular rendering for a user wearing the virtual reality headset for the first time with these original images, and adapting, one by one, the positions of M points at fixed positions in the original image, wherein M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix;
Step A2: obtaining, through the user's adaptation feedback, the optimal parallax-correction parameters and the visual range of each region of the original image, and generating the user's adaptive parallax synchronization matrix, wherein the visual range of the original image is always larger than the separate visual ranges of the left and right eyes;
Step A3: automatically generating, from the wearer's reported sensory comfort, an active parallax compensation standard suited to the user's own eyes, and using this standard as the best-fit calibration reference for each subsequent set of views.
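The patent does not fix a concrete data layout for the adaptive parallax synchronization matrix obtained in steps A1 to A3. The Python sketch below is one possible reading, assuming M = 9 calibration points in a 3 x 3 grid and a per-point offset reported by the wearer as comfortable; the point coordinates, the function names, and the example numbers are illustrative rather than taken from the patent.

```python
import numpy as np

# Nine fixed calibration points in normalized image coordinates, laid out as a
# 3 x 3 grid; the patent's Fig. 2 shows nine such positions but does not fix
# their exact coordinates, so this layout is an assumption.
CALIBRATION_POINTS = [(x, y) for y in (0.1, 0.5, 0.9) for x in (0.1, 0.5, 0.9)]

def build_sync_matrix(feedback):
    """Build a per-region parallax synchronization matrix from wearer feedback.

    feedback: list of nine (dx, dy) tuples, one per calibration point, giving
    the horizontal/vertical shift (normalized units) at which the wearer
    reports a comfortable, fused, vertigo-free image for that region.
    Returns a 3 x 3 x 2 array: sync[row, col] = (dx, dy) for that region.
    """
    if len(feedback) != len(CALIBRATION_POINTS):
        raise ValueError("expected one (dx, dy) sample per calibration point")
    return np.asarray(feedback, dtype=np.float32).reshape(3, 3, 2)

# Purely illustrative feedback: slightly larger horizontal offsets are accepted
# toward the image borders than at the centre.
feedback = [(0.040, 0.0), (0.030, 0.0), (0.040, 0.0),
            (0.035, 0.0), (0.030, 0.0), (0.035, 0.0),
            (0.040, 0.0), (0.030, 0.0), (0.040, 0.0)]
sync_matrix = build_sync_matrix(feedback)
```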
Preferably, the image parallax compensation and cropping step comprises:
Step B1: using the per-region parallax cropping factors of the left and right eyes as the cropping basis for the current wearer viewing the virtual reality, and cropping one complete image into the optimal left and right eye combination, wherein the optimal left and right eye combination means: during initial calibration, the wearer determines the best-fit result according to his or her own wearing habits; this best-fit result is the optimal calibration target; the compensation matrix generated from the optimal calibration target is saved and used as the reference for dynamic calibration; this compensation matrix is the calibration matrix, also called the optimal left and right eye combination;
Step B2: treating the two split-view images cropped in step B1 as two subsets of the full set of original-image pixels, the two subsets having an intersection; performing compensated cropping with the cropping factor of the virtual active parallax compensation so that what the left and right eyes see conforms to the maximization principle of optimal parallax compensation, which means: using the data from one or more active corrections as a set of candidate compensations, recording during a period of wearing and selecting the preferred compensation matrices for different lighting, scenes, and wearing angles, with the maximum combined binocular field of view as an auxiliary objective;
Step B3: pre-processing the images subsequently captured by the monocular camera with the split-view compensation parameters to form two split-view images with distinct coordinate systems, wherein the split-view compensation parameters comprise the data used for image transformation and image cropping in the observation environment according to different observation angles, observation distances, brightness, and color.
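As a rough illustration of steps B1 and B2, the sketch below crops one full frame into a left and right eye combination: two overlapping subsets of the original pixel set whose horizontal offset is driven by the synchronization matrix. Collapsing the per-region factors into one mean shift per eye is a simplification made here for brevity; the patent itself applies region-specific cropping factors.

```python
import numpy as np

def crop_left_right(frame, sync_matrix):
    """Crop one full frame into an overlapping left/right eye combination.

    frame:       H x W x 3 array, the full set of pixels from the monocular camera.
    sync_matrix: 3 x 3 x 2 per-region (dx, dy) offsets in normalized units.
    Both results are subsets of the original pixel set and share a large
    intersection, as required by step B2.
    """
    w = frame.shape[1]
    # Simplification: collapse the per-region horizontal offsets into one mean
    # shift per eye, expressed in pixels.
    shift = int(round(float(np.mean(sync_matrix[..., 0])) * w))
    shift = max(1, min(shift, w // 4))   # keep both crops non-empty and overlapping
    left = frame[:, : w - shift]         # left eye keeps the left portion
    right = frame[:, shift:]             # right eye keeps the right portion
    return left, right

# Illustrative matrix of small horizontal offsets and a placeholder 1080p frame.
sync_matrix = np.zeros((3, 3, 2), dtype=np.float32)
sync_matrix[..., 0] = 0.03
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left_view, right_view = crop_left_right(frame, sync_matrix)
# Both views are 1080 x 1862 crops that overlap over 1804 columns of the original.
```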
Preferably, the period of the fixed timing in the binocular rendering step means rendering each view at a specified frequency.
The binocular rendering system with virtual active parallax computation compensation provided by the present invention comprises the following modules:
Active parallax computation module: used to perform binocular rendering adaptation for an initial user and obtain an adaptive parallax synchronization matrix dedicated to the corresponding user;
Image parallax compensation and cropping module: used to take the user's dedicated adaptive parallax synchronization matrix as the cropping factor of the virtual active parallax compensation and crop one complete image into a left and right eye combination, each displayed as a single channel in the left-eye and right-eye framing windows respectively;
Binocular rendering module: used to pre-render the split views derived from the same monocular camera onto the waiting overlay canvas of the next field and, at each period of the fixed timing, render and project the prepared composite overlay-canvas image onto the display hardware layer of the target virtual reality headset in a single pass.
Preferably, the active parallax computation module comprises:
Parallax synchronization matrix selection sub-module: selects several original images of different resolutions from an image library, performs binocular rendering for a user wearing the virtual reality headset for the first time with these original images, and adapts, one by one, the positions of M points at fixed positions in the original image, wherein M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix;
Adaptive parallax synchronization matrix generation sub-module: obtains, through the user's adaptation feedback, the optimal parallax-correction parameters and the visual range of each region of the original image, and generates the user's adaptive parallax synchronization matrix, wherein the visual range of the original image is always larger than the separate visual ranges of the left and right eyes;
Best-fit calibration standard generation sub-module: automatically generates, from the wearer's reported sensory comfort, an active parallax compensation standard suited to the user's own eyes, and uses this standard as the best-fit calibration reference for each subsequent set of views.
Preferably, the image parallax compensation and cropping module comprises:
Optimal left and right eye combination generation sub-module: uses the per-region parallax cropping factors of the left and right eyes as the cropping basis for the current wearer viewing the virtual reality and crops one complete image into the optimal left and right eye combination, wherein the optimal left and right eye combination means: during initial calibration, the wearer determines the best-fit result according to his or her own wearing habits; this best-fit result is the optimal calibration target; the compensation matrix generated from the optimal calibration target is saved and used as the reference for dynamic calibration; this compensation matrix is the calibration matrix, also called the optimal left and right eye combination;
Cropping sub-module: treats the two cropped split-view images as two subsets of the full set of original-image pixels, the two subsets having an intersection; performs compensated cropping with the cropping factor of the virtual active parallax compensation so that what the left and right eyes see conforms to the maximization principle of optimal parallax compensation, which means: using the data from one or more active corrections as a set of candidate compensations, recording during a period of wearing and selecting the preferred compensation matrices for different lighting, scenes, and wearing angles, with the maximum combined binocular field of view as an auxiliary objective;
Split-view image generation sub-module: pre-processes the images subsequently captured by the monocular camera with the split-view compensation parameters to form two split-view images with distinct coordinate systems, wherein the split-view compensation parameters refer to the data used for image transformation and image cropping in the observation environment under different observation angles, observation distances, brightness, and color.
Compared with the prior art, the present invention has the following beneficial effects:
1. The binocular rendering method with virtual active parallax computation compensation provided by the present invention uses a single camera as the hardware basis of the image acquisition device; through software parallax computation compensation it approaches the experience of the wearer observing nature directly with his or her own eyes and reduces the ghosting that induces vertigo. According to the wearer's individual differences, the optimal parallax calibration can be obtained actively, and the cropped single view source can be compensated in reverse to achieve a more realistic immersive impression.
2. The binocular rendering method with virtual active parallax computation compensation provided by the present invention guarantees timing synchronization and consistent color and luminosity during binocular rendering while keeping the two imaging channels suitably differentiated, helping the wearer dynamically crop and modify the rendered content so that the binocular virtual effect approaches reality.
3. The binocular rendering method with virtual active parallax computation compensation provided by the present invention can provide targeted, precise calibration of the parallax of the view content, satisfying both the synchronization requirement and the differentiation of parallax, and reducing vertigo.
Brief Description of the Drawings
Other features, objects, and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a flowchart of the binocular rendering method with virtual active parallax computation compensation provided by the present invention;
Fig. 2 is a schematic diagram of the left-eye and right-eye pre-calibration for virtual active parallax computation compensation provided by the present invention;
Fig. 3 is a schematic diagram of the best-fit binocular standard generated for a user;
Fig. 4 is a schematic diagram of the cropping result of split-view compensation;
Fig. 5 shows the new view composited for rendering.
Detailed Description
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those of ordinary skill in the art can make a number of changes and improvements without departing from the concept of the present invention, and these all fall within the scope of protection of the present invention.
The binocular rendering method with virtual active parallax computation compensation provided by the present invention comprises the following steps:
Active parallax computation step: performing binocular rendering for an initial user to obtain an adaptive parallax synchronization matrix dedicated to the corresponding user. Specifically, because a single-camera input source is used, the active parallax computation referred to in the present invention is in fact a virtual active parallax computation: when the user first uses the device, several binocular images are rendered for the wearer and adapted one by one at the nine point positions shown in Fig. 2; through the user's adaptation feedback, the optimal parallax-correction parameters and the visual range of each region are obtained (at a minimum, the five point positions 1/3/5/7/9), and a wearer-specific adaptive parallax synchronization matrix is then generated. The visual range of the original image is always larger than the separate visual ranges of the left and right eyes. The result is an active parallax compensation standard suited to the viewer's own eyes, and compensation based on this standard serves as the best-fit calibration for each subsequent set of views, as shown in Fig. 3.
Image parallax compensation and cropping step: using the user's dedicated adaptive parallax synchronization matrix as the cropping factor of the virtual active parallax compensation, cropping one complete image into a left and right eye combination, each displayed as a single channel in the left-eye and right-eye framing windows. Specifically, the adaptive parallax synchronization matrix that best fits the wearer is taken as the cropping factor of the virtual active parallax compensation; the original image is the full set of pixels, and the left and right eyes each use the parallax cropping factors of their respective regions as the cropping basis for the current wearer viewing the virtual reality, so that one complete image is cropped into the optimal left and right eye combination and each field of view is displayed as a single channel in the left-eye and right-eye framing windows. The two split-view images cropped in this way are each subsets of the full set, and the two subsets have both overlapping and non-overlapping portions. The split-view compensated cropping is shown in Fig. 4. This compensated cropping ensures that what the left and right eyes see maximizes the optimal parallax compensation; every subsequent frame captured by the monocular camera is then pre-processed, according to the above split-view compensation parameters, into two split-view images with distinct coordinate systems.
Binocular rendering step: pre-rendering the split views derived from the same monocular camera onto the waiting overlay canvas of the next field, and, at each period of the fixed timing, rendering and projecting the prepared composite overlay-canvas image onto the target display hardware layer in a single pass. Specifically, the purpose of this image rendering design is to composite the split views of the same monocular camera by pre-rendering them onto the waiting overlay canvas of the next field; at each period of the fixed timing, the prepared composite overlay-canvas image is rendered and projected onto the target display hardware layer in one pass. Because the pre-processing binocular rendering has already ensured the optimal combination of color, chromatic aberration, and parallax, the images composited within the fixed timing period ensure the optimal binocular timing combination, as shown in Fig. 5, thereby producing a binocular experience similar to the calibrated wearer observing nature directly.
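The fixed-timing, single-pass projection described above can be pictured as a double-buffered loop: while the current canvas is shown, the split views of the next field are composited off-screen, and at each tick of the fixed period the finished canvas is handed to the display layer in one operation. The sketch below only models this scheduling in Python; a real headset would rely on the platform's compositor or vsync mechanism, which the patent does not name, and the 60 Hz period is an assumed value.

```python
import time
import numpy as np

PERIOD = 1.0 / 60.0  # assumed fixed timing period (60 Hz); the patent leaves the frequency open

def compose_canvas(left, right):
    """Pre-render both eye views side by side onto one overlay canvas."""
    h = max(left.shape[0], right.shape[0])
    canvas = np.zeros((h, left.shape[1] + right.shape[1], 3), dtype=np.uint8)
    canvas[: left.shape[0], : left.shape[1]] = left
    canvas[: right.shape[0], left.shape[1]:] = right
    return canvas

def render_loop(fields, present):
    """fields: iterable of (left, right) pairs; present: one-shot blit to the display layer."""
    next_tick = time.monotonic() + PERIOD
    for left, right in fields:
        canvas = compose_canvas(left, right)   # prepared while the previous field is shown
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)                  # wait for the fixed timing tick
        present(canvas)                        # single-pass projection of the whole canvas
        next_tick += PERIOD

# Minimal wiring with dummy views and a stand-in for the display hardware layer.
eye = np.zeros((1080, 960, 3), dtype=np.uint8)
render_loop([(eye, eye)] * 3, present=lambda canvas: None)
```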
The technical solution of the present invention is described in more detail below with reference to specific embodiments.
As shown in Fig. 1, the binocular rendering method with virtual active parallax computation compensation provided by the present invention comprises the following steps:
Step S1: randomly selecting several original images of different resolutions from a template library;
Step S2: pre-marking 5 to 9 parallax calibration crosses on the selected original images;
Step S3: projecting the views separately into the composite areas of the binocular display;
Step S4: the wearer adjusts the calibration crosses according to his or her own perception; if a vertigo-free binocular fit that meets the maximum viewing angle is achieved, step S5 is executed, otherwise the method returns to step S3;
Step S5: storing the final compensation calibration parameters obtained from the wearer adjusting the calibration crosses according to his or her own perception, and attempting to calibrate several sets of adapted compensation parameters for different resolutions;
Step S6: starting frame processing of the actual single-view video data stream, performing reverse cutting based on the pre-prepared parallax compensation calibration values, and cutting the left-eye frame and the right-eye frame out of the original image respectively;
Step S7: pre-compositing the differently cut binocular fragments and reconstructing a new composite image whose overall size is larger than the original view, which then waits to be rendered in the binocular viewfinder;
Step S8: rendering the new image in a single pass.
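Steps S6 to S8 can be read as the per-frame pipeline sketched below: reverse-cut the left-eye and right-eye frames out of each incoming single-view frame using the stored compensation value, rebuild a composite whose overall size exceeds the original view, and hand that composite to the renderer in one pass. The shift and padding values, and the side-by-side layout, are assumptions for illustration; the patent only requires that the new composite be larger than the original view.

```python
import numpy as np

def process_frame(frame, shift_px=48, pad_px=16):
    """Steps S6-S7: reverse-cut an eye pair from one frame and rebuild a larger composite.

    frame:    H x W x 3 single-view frame (input of step S6).
    shift_px: horizontal compensation value stored during calibration (step S5); assumed.
    pad_px:   border added so that the composite exceeds the original size (step S7); assumed.
    """
    h, w = frame.shape[:2]
    left = frame[:, : w - shift_px]    # left-eye cut
    right = frame[:, shift_px:]        # right-eye cut
    width = left.shape[1] + right.shape[1] + 3 * pad_px
    composite = np.zeros((h + 2 * pad_px, width, 3), dtype=frame.dtype)
    composite[pad_px:pad_px + h, pad_px:pad_px + left.shape[1]] = left
    x = 2 * pad_px + left.shape[1]
    composite[pad_px:pad_px + h, x:x + right.shape[1]] = right
    return composite                   # handed to the renderer in one pass (step S8)

new_view = process_frame(np.zeros((1080, 1920, 3), dtype=np.uint8))
# new_view is 1112 x 3792, larger than the original 1080 x 1920 view.
```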
Embodiment 1 (virtual reality immersive entertainment)
In virtual reality immersive entertainment such as roller coasters, deep-sea tours, and games, when the acquisition device is a single camera or the captures are stitched into a single image data source, the present invention allows virtual active parallax compensation rendering to be performed on the virtual reality device side, giving the user an optimal immersive entertainment experience with low vertigo-inducing ghosting and high image quality.
Embodiment 2 (360-degree panoramic live broadcast)
When a sports event or a concert is captured and broadcast live as a 360-degree panorama, the multi-channel image array is usually fused into one image in real time, so the distinct viewpoints of the individual cameras are lost. With the present invention, virtual active parallax compensation rendering can be performed on the virtual reality device side, giving the user an optimal panoramic live-broadcast experience with low vertigo-inducing ghosting and high image quality.
Those skilled in the art will know that, in addition to implementing the system provided by the present invention and its various devices purely as computer-readable program code, the method steps can be logically programmed so that the system and its devices realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system and its various devices provided by the present invention can be regarded as hardware components, and the devices included in them for implementing various functions can also be regarded as structures within the hardware components; the devices for implementing various functions can even be regarded either as software modules implementing the method or as structures within the hardware components.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments, and those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substance of the present invention. Where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.

Claims (7)

  1. A binocular rendering method with virtual active parallax computation compensation, characterized by comprising the following steps:
    an active parallax calculation step: performing binocular rendering adaptation for an initial user to obtain a parallax synchronization matrix exclusively adapted to that user;
    an image parallax compensation and clipping step: using the user's exclusive adapted parallax synchronization matrix as the clipping factor of the virtual active parallax compensation, clipping one complete image into a left/right-eye combination, and displaying each path separately in the left-eye and right-eye viewing windows;
    a binocular rendering step: pre-rendering the split-view images of the same monocular camera onto the waiting overlay canvas of the next field and, according to a fixed timing period, rendering and projecting the prepared overlay-canvas composite image onto the display hardware layer of the target virtual-reality head-mounted device in a single pass.
  2. The binocular rendering method with virtual active parallax computation compensation according to claim 1, characterized in that the active parallax calculation step comprises:
    step A1: selecting a plurality of original images of different resolutions from an image library, performing binocular rendering with said original images for a user wearing the virtual-reality head-mounted device for the first time, and adapting one by one on the basis of the positions of M points at fixed locations in the original image, where M is a positive integer and the positions of the M points are selected in the form of the adapted parallax synchronization matrix;
    step A2: obtaining, through the user's adaptation feedback, the optimal parallax-correction parameters and the visual range of each region of the original image, and generating the user's adapted parallax synchronization matrix, wherein the visual range of the original image is always larger than the separate visual ranges of the left and right eyes respectively;
    step A3: automatically generating, from the wearer's sensory comfort responses, an active parallax compensation standard suited to the user's own eyes, and using this active parallax compensation standard as the best-fit calibration standard for each subsequent group of views.
  3. The binocular rendering method with virtual active parallax computation compensation according to claim 1, characterized in that the image parallax compensation and clipping step comprises:
    step B1: taking the per-region parallax clipping factors of the left and right eyes as the clipping basis for the current wearer's viewing of virtual reality, and clipping one complete image into an optimal left/right-eye combination, where the optimal left/right-eye combination means:
    during initial calibration, the wearer determines the best adaptation result according to his or her own wearing habits; this best adaptation result is the best calibration target; the compensation matrix generated from the best calibration target is saved and used as the reference for dynamic calibration; this compensation matrix is the calibration matrix, also called the optimal left/right-eye combination;
    step B2: treating the two split-view images obtained in step B1 as two subsets of the full set of original image pixels, the two subsets having a non-empty intersection; performing compensation clipping with the clipping factors of the virtual active parallax compensation so that the images seen by the left and right eyes satisfy the maximization principle of optimal parallax compensation, which means: taking the data of one or more active corrections as a group of candidate compensations, recording and screening out the preferred compensation matrices for different lighting, scenes and wearing angles over a period of wearing, while taking the largest combined binocular field of view as an auxiliary target;
    step B3: pre-processing the images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems, the split-view compensation parameters comprising the data used for image transformation and image cropping in the viewing environment according to different viewing angles, viewing distances, brightness and colour.
  4. The binocular rendering method with virtual active parallax computation compensation according to claim 1, characterized in that the fixed timing period in the binocular rendering step means rendering each image path at a specified frequency and timing.
  5. A binocular rendering system with virtual active parallax computation compensation, characterized by comprising the following modules:
    an active parallax calculation module: configured to perform binocular rendering adaptation for an initial user to obtain a parallax synchronization matrix exclusively adapted to that user;
    an image parallax compensation and clipping module: configured to use the user's exclusive adapted parallax synchronization matrix as the clipping factor of the virtual active parallax compensation, to clip one complete image into a left/right-eye combination, and to display each path separately in the left-eye and right-eye viewing windows;
    a binocular rendering module: configured to pre-render the split-view images of the same monocular camera onto the waiting overlay canvas of the next field and, according to a fixed timing period, to render and project the prepared overlay-canvas composite image onto the display hardware layer of the target virtual-reality head-mounted device in a single pass.
  6. The binocular rendering system with virtual active parallax computation compensation according to claim 5, characterized in that the active parallax calculation module comprises:
    a parallax synchronization matrix selection sub-module: configured to select a plurality of original images of different resolutions from an image library, to perform binocular rendering with said original images for a user wearing the virtual-reality head-mounted device for the first time, and to adapt one by one on the basis of the positions of M points at fixed locations in the original image, where M is a positive integer and the positions of the M points are selected in the form of the adapted parallax synchronization matrix;
    an adapted parallax synchronization matrix generation sub-module: configured to obtain, through the user's adaptation feedback, the optimal parallax-correction parameters and the visual range of each region of the original image, and to generate the user's adapted parallax synchronization matrix, wherein the visual range of the original image is always larger than the separate visual ranges of the left and right eyes respectively;
    a best-fit calibration standard generation sub-module: configured to automatically generate, from the wearer's sensory comfort responses, an active parallax compensation standard suited to the user's own eyes, and to use this active parallax compensation standard as the best-fit calibration standard for each subsequent group of views.
  7. The binocular rendering system with virtual active parallax computation compensation according to claim 5, characterized in that the image parallax compensation and clipping module comprises:
    an optimal left/right-eye combination generation sub-module: configured to take the per-region parallax clipping factors of the left and right eyes as the clipping basis for the current wearer's viewing of virtual reality, and to clip one complete image into an optimal left/right-eye combination, where the optimal left/right-eye combination means:
    during initial calibration, the wearer determines the best adaptation result according to his or her own wearing habits; this best adaptation result is the best calibration target; the compensation matrix generated from the best calibration target is saved and used as the reference for dynamic calibration; this compensation matrix is the calibration matrix, also called the optimal left/right-eye combination;
    a clipping sub-module: configured to treat the two clipped split-view images as two subsets of the full set of original image pixels, the two subsets having a non-empty intersection, and to perform compensation clipping with the clipping factors of the virtual active parallax compensation so that the images seen by the left and right eyes satisfy the maximization principle of optimal parallax compensation, which means: taking the data of one or more active corrections as a group of candidate compensations, recording and screening out the preferred compensation matrices for conditions including different lighting, scenes and wearing angles over a period of wearing, while taking the largest combined binocular field of view as an auxiliary target;
    a split-view image generation sub-module: configured to pre-process the images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems, the split-view compensation parameters referring to the data used for image transformation and image cropping in the viewing environment under conditions including different viewing angles, viewing distances, brightness and colour.
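To make the claimed steps easier to follow, the three sketches below restate them in Python; they are schematic readings of the claims, not the patented implementation. For the active parallax calculation of claims 2 and 6, this first sketch assumes a 3×3 grid of adaptation points (the claims only require M points selected in the form of the sync matrix) and a hypothetical `get_comfort_feedback` helper standing in for however the headset collects the wearer's comfort responses; neither detail is specified in the patent.

```python
import numpy as np

GRID = 3  # assumed: M = 9 fixed points arranged as a 3x3 grid, matching the matrix form of step A1

def calibrate_user(test_images, get_comfort_feedback):
    """Steps A1-A3 sketch: build a per-user adapted parallax synchronization matrix
    from the wearer's comfort feedback on several original images of different resolutions."""
    per_image = []
    for img in test_images:                       # numpy arrays of different resolutions
        h, w = img.shape[:2]
        offsets = np.zeros((GRID, GRID))          # one correction value per image region
        for i in range(GRID):
            for j in range(GRID):
                y = int((i + 0.5) * h / GRID)     # fixed point at the centre of region (i, j)
                x = int((j + 0.5) * w / GRID)
                # the wearer adjusts the local horizontal offset until it feels comfortable;
                # get_comfort_feedback (assumed helper) returns that offset in pixels
                offsets[i, j] = get_comfort_feedback(img, (x, y))
        per_image.append(offsets / w)             # normalise so different resolutions are comparable
    # steps A2/A3: fuse the per-image results into one active parallax compensation standard
    sync_matrix = np.mean(per_image, axis=0)
    return sync_matrix
```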
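For the image parallax compensation and clipping of claims 3 and 7, the following sketch reduces the per-region clipping factors to a single horizontal shift and uses an assumed dictionary layout (`light`, `scene`, `angle`, `combined_fov`, `matrix`) for the recorded candidate compensations; both simplifications are illustrative only.

```python
import numpy as np

def clip_left_right(frame, sync_matrix, overlap=0.8):
    """Steps B1/B2 sketch: cut one complete frame into an overlapping left/right-eye pair,
    i.e. two subsets of the original pixel set whose intersection is non-empty."""
    h, w = frame.shape[:2]
    view_w = int(w * overlap)                     # each eye sees only a subset of the columns
    # collapse the per-region clipping factors into one horizontal shift for brevity;
    # the claims apply region-wise factors taken from the adapted sync matrix
    shift = int(np.clip(float(np.mean(sync_matrix)), 0.0, 1.0) * (w - view_w))
    left = frame[:, :view_w]                      # left-eye subset anchored at the left edge
    right = frame[:, shift:shift + view_w]        # right-eye subset offset by the compensation
    assert shift < view_w, "the two subsets must keep a non-empty intersection"
    return left, right

def select_compensation(candidates, light, scene, angle):
    """Step B2 sketch: among the compensation matrices recorded over a period of wearing,
    pick the one preferred for the current lighting/scene/wearing angle; the largest
    combined binocular field of view acts as the auxiliary criterion (keys are assumed)."""
    pool = [c for c in candidates
            if (c["light"], c["scene"], c["angle"]) == (light, scene, angle)]
    pool = pool or candidates                     # fall back to everything recorded so far
    return max(pool, key=lambda c: c["combined_fov"])["matrix"]
```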
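Finally, the single-pass binocular rendering of claims 1, 4 and 5 can be pictured as the loop below. The side-by-side canvas layout, the 60 Hz default period and the `display.present` call are assumptions standing in for the compositing surface and display-hardware interface of the actual head-mounted device; the point of the sketch is that both eye views are prepared on one waiting canvas and projected in a single call, which is what keeps the left and right images synchronized.

```python
import time
import numpy as np

def render_loop(next_frame, clip_left_right, sync_matrix, display, hz=60):
    """Claim 1 sketch: pre-render both eye views of the same monocular frame onto one
    waiting overlay canvas, then project the composite once per fixed timing period."""
    period = 1.0 / hz                             # fixed timing period of claim 4 (60 Hz is assumed)
    while True:
        t0 = time.monotonic()
        frame = next_frame()                      # one monocular (or stitched panoramic) image
        left, right = clip_left_right(frame, sync_matrix)
        h, w = left.shape[:2]
        canvas = np.empty((h, 2 * w) + left.shape[2:], dtype=left.dtype)
        canvas[:, :w] = left                      # both views are prepared on the same
        canvas[:, w:] = right                     # waiting overlay canvas ...
        display.present(canvas)                   # ... then projected in a single pass (assumed API)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```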
PCT/CN2017/117130 2017-04-01 2017-12-19 Binocular rendering method and system for virtual active parallax computation compensation WO2018176927A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710214653.3 2017-04-01
CN201710214653.3A CN107071384B (en) 2017-04-01 2017-04-01 Binocular rendering method and system for virtual active parallax computation compensation

Publications (1)

Publication Number Publication Date
WO2018176927A1 true WO2018176927A1 (en) 2018-10-04

Family

ID=59602891

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117130 WO2018176927A1 (en) 2017-04-01 2017-12-19 Binocular rendering method and system for virtual active parallax computation compensation

Country Status (2)

Country Link
CN (1) CN107071384B (en)
WO (1) WO2018176927A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071384B (en) * 2017-04-01 2018-07-06 上海讯陌通讯技术有限公司 Binocular rendering method and system for virtual active parallax computation compensation
CN110072049B (en) * 2019-03-26 2021-11-09 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN110177216B (en) * 2019-06-28 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and storage medium
CN111202663B (en) * 2019-12-31 2022-12-27 浙江工业大学 Vision training learning system based on VR technique
CN113010020A (en) * 2021-05-25 2021-06-22 北京芯海视界三维科技有限公司 Time schedule controller and display device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005060271A1 (en) * 2003-12-18 2005-06-30 University Of Durham Method and apparatus for generating a stereoscopic image
CN102811359A (en) * 2011-06-01 2012-12-05 三星电子株式会社 3D-image conversion apparatus and method for adjusting depth information of the same
CN103039080A (en) * 2010-06-28 2013-04-10 汤姆森特许公司 Method and apparatus for customizing 3-dimensional effects of stereo content
CN106507093A (en) * 2016-09-26 2017-03-15 北京小鸟看看科技有限公司 Display mode switching method and device for a virtual reality device
CN107071384A (en) * 2017-04-01 2017-08-18 上海讯陌通讯技术有限公司 Binocular rendering method and system for virtual active parallax computation compensation

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102300103B (en) * 2010-06-25 2013-08-14 深圳Tcl新技术有限公司 Method for converting 2D (Two-Dimensional) content into 3D (Three-Dimensional) contents
JP2012174237A (en) * 2011-02-24 2012-09-10 Nintendo Co Ltd Display control program, display control device, display control system and display control method
CN103905806B (en) * 2012-12-26 2018-05-01 三星电子(中国)研发中心 System and method for realizing 3D shooting with a single camera
CN105611278B (en) * 2016-02-01 2018-10-02 欧洲电子有限公司 Image processing method, system and display device for preventing dizziness when viewing naked-eye 3D
CN106251403B (en) * 2016-06-12 2018-02-16 深圳超多维光电子有限公司 Method, device and system for realizing a virtual three-dimensional scene
CN106101689B (en) * 2016-06-13 2018-03-06 西安电子科技大学 Method for performing augmented reality with virtual reality glasses using a mobile phone monocular camera
CN106231292B (en) * 2016-09-07 2017-08-25 深圳超多维科技有限公司 Stereoscopic virtual reality live broadcasting method, device and equipment
CN106385576B (en) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment
CN106375749B (en) * 2016-09-12 2018-06-29 北京邮电大学 Parallax adjustment method and device
CN106454313A (en) * 2016-10-18 2017-02-22 深圳市云宙多媒体技术有限公司 3D video image rendering and parallax adjustment method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005060271A1 (en) * 2003-12-18 2005-06-30 University Of Durham Method and apparatus for generating a stereoscopic image
CN103039080A (en) * 2010-06-28 2013-04-10 汤姆森特许公司 Method and apparatus for customizing 3-dimensional effects of stereo content
CN102811359A (en) * 2011-06-01 2012-12-05 三星电子株式会社 3D-image conversion apparatus and method for adjusting depth information of the same
CN106507093A (en) * 2016-09-26 2017-03-15 北京小鸟看看科技有限公司 Display mode switching method and device for a virtual reality device
CN107071384A (en) * 2017-04-01 2017-08-18 上海讯陌通讯技术有限公司 Binocular rendering method and system for virtual active parallax computation compensation

Also Published As

Publication number Publication date
CN107071384B (en) 2018-07-06
CN107071384A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
WO2018176927A1 (en) Binocular rendering method and system for virtual active parallax computation compensation
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
JP2010033367A (en) Information processor and information processing method
CN103329165B (en) The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls
US11528464B2 (en) Wide-angle stereoscopic vision with cameras having different parameters
JP6384940B2 (en) 3D image display method and head mounted device
CN101562754B (en) Method for improving visual effect of plane image transformed into 3D image
US10553014B2 (en) Image generating method, device and computer executable non-volatile storage medium
US11659158B1 (en) Frustum change in projection stereo rendering
JP2018500690A (en) Method and system for generating magnified 3D images
TWI589150B (en) Three-dimensional auto-focusing method and the system thereof
US20130208097A1 (en) Three-dimensional imaging system and image reproducing method thereof
JP2011529285A (en) Synthetic structures, mechanisms and processes for the inclusion of binocular stereo information in reproducible media
WO2013133057A1 (en) Image processing apparatus, method, and program
WO2023056803A1 (en) Holographic presentation method and apparatus
Mikšícek Causes of visual fatigue and its improvements in stereoscopy
WO2017085803A1 (en) Video display device and video display method
Brooker et al. Operator performance evaluation of controlled depth of field in a stereographically displayed virtual environment
Wu et al. P‐94: Free‐form micro‐optical design for enhancing image quality (MTF) at large FOV in light field near eye display
JP5539486B2 (en) Information processing apparatus and information processing method
KR20170096567A (en) Immersive display system for virtual reality and virtual reality display method using the same
CN202713529U (en) 3D video color corrector
JP2003101690A (en) Image processing method, digital camera, and recording medium
Sasaki et al. 8‐2: Invited Paper: Hyper‐Realistic Head‐up Display System for Medical Application
CN206946095U (en) A kind of 3D telescopes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17904098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17904098

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.11.2019)
