WO2018176927A1 - Binocular rendering method and system for virtual active parallax computation compensation - Google Patents
- Publication number
- WO2018176927A1 (PCT/CN2017/117130)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- compensation
- parallax
- image
- rendering
- active
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
Definitions
- the present invention relates to the field of virtual reality image processing technologies, and in particular, to a binocular rendering method and system for virtual active disparity calculation compensation.
- a person's two eyes are separated by a fixed interpupillary distance, so when observing a three-dimensional object the horizontally separated eyes see slightly different images of it; there is a disparity between the two views.
- this disparity, processed by the human brain, produces the perception of depth. How to compensate rendering with the virtual active parallax method to enhance the look and feel of virtual reality is the focus of the present invention.
- the imaging foundation of virtual reality is the simulation of the two different, disparate images that binocular vision produces in the natural environment. Yet monocular camera acquisition devices remain the mainstream (smartphones, tablets, notebooks, headsets, cameras, etc.), while binocular cameras are unsuitable for popularization owing to their specialized use scenarios and physical hardware limitations. How to use a monocular camera to capture a simulated binocular framing effect and compensate the rendering through virtual active parallax is therefore both difficult and distinctive.
- the synchronization between the two images directly determines whether the user experiences vertigo, and differences in color, saturation, and exposure between the two channels also degrade the virtual reality perception;
- traditional binocular view content uses a fixed, immutable parallax.
- the present invention proposes a binocular rendering method for virtual active disparity calculation compensation aimed at monocular imaging devices.
- the method strives to make the parallax of the rendered subject approach the parallax the human eyes perceive when observing nature directly, imaging the virtual active disparity portions for the left and right eyes separately.
- the binocular pair is rendered in a single pass to guarantee synchronization and obtain an enhanced virtual reality experience.
- a related application, application number 201610666409.6, titled "A virtual reality mobile dynamic time frame compensation rendering system and method", works as follows: application-frame rendering generates an application frame buffer sequence, and the latest application frame in that sequence is extracted for secondary rendering to obtain a time frame;
- the time frame is sent to a shared buffer and, under the timing control of a vertical synchronization management module, the screen reads the time-frame rendering result on refresh;
- the GPU renders the result directly to the screen refresh buffer, reducing the delay of multi-level cache exchange;
- vertical synchronization time management controls the GPU's rendering time and avoids conflicts between GPU rendering and screen refresh reads, so that the picture is displayed normally at low latency without tearing.
- the method in the above document solves the desynchronization of split-screen images caused by split-screen rendering when the number of cached frames is large, which differs from the parallax compensation and near-real-presence binocular rendering problems to be solved by the present invention;
- the present invention instead performs pre-processing before the final render to the screen to achieve a comparable effect.
- the method of that document mainly comprises: acquiring a parallax range preset by a virtual reality model, the parallax range spanning a parallax upper limit and a parallax lower limit; obtaining a stereoscopic slice source and the disparity value corresponding to it, the stereoscopic slice source being a slice source displayed in the virtual reality scene; and adjusting the disparity value of the stereoscopic slice source according to the parallax range so that it does not exceed that range.
- that invention can improve the display effect of presenting a new stereoscopic scene within a stereoscopic scene.
- the method in the above document mainly focuses on improving the display effect of presenting a new stereoscopic scene in a stereoscopic scene, which is different from the parallax compensation problem and the near-real-presence binocular rendering problem to be solved by the present invention.
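The disparity-range adjustment described in that prior-art document reduces, in essence, to clamping each slice source's disparity into the preset range. A minimal sketch follows; the function and parameter names are illustrative, not taken from the document:

```python
def clamp_disparity(disparity, lower, upper):
    """Keep a slice source's disparity within the scene's preset
    [lower, upper] range (a sketch of the prior-art idea)."""
    return max(lower, min(upper, disparity))

# A disparity outside the range is pulled back to the nearest bound.
clamped = clamp_disparity(12.0, lower=-2.0, upper=8.0)
```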
- in another document, the projection transformation step includes: adding a middle plane as the projection plane between the near plane and the far plane, and projecting the primitives between the near plane and the far plane onto the midplane.
- that invention projects the primitives between the near plane and the far plane onto the added midplane; primitives between the near plane and the midplane have an out-of-screen stereoscopic effect, while primitives between the midplane and the far plane have an into-the-screen stereoscopic effect. Thus the "out of screen" and "into screen" effects can be rendered with the existing rendering pipeline of a 3D display device without special hardware.
- the method in the above document focuses on rendering three-dimensional graphics by exploiting the hardware characteristics of the 3D display device, which differs from the problem solved by the present invention: a parallax-compensated binocular rendering method in a virtual reality device based entirely on software, without relying on a 3D display device.
- an object of the present invention is to provide a binocular rendering method and system for virtual active disparity calculation compensation.
- the binocular rendering method for virtual active disparity calculation compensation provided by the present invention includes the following steps:
- Active parallax calculation step: performing binocular rendering adaptation for the initial user to obtain the user's dedicated adaptive parallax synchronization matrix;
- Image disparity compensation and clipping step: using the user's dedicated adaptive parallax synchronization matrix as the clipping factor for virtual active disparity compensation, cutting one complete image into a left/right-eye combination displayed single-channel in the left and right eye framing windows;
- Binocular rendering step: pre-rendering the split-view images from the same monocular camera onto the waiting overlay canvas of the next field, and, according to a fixed timing period, projecting the prepared overlay-canvas composite image in a single pass onto the display hardware layer of the target virtual reality headset.
- the active disparity calculation step comprises:
- Step A1: select a plurality of original images of different resolutions from the image library, perform binocular rendering through these originals for a user wearing the virtual reality headset for the first time, and adapt one by one based on the M points at fixed positions in the original image, where M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix;
- Step A2: obtain, through the user's adaptive feedback, the optimal parallax correction parameters and the visual range of each region in the original image, and generate the user's adaptive parallax synchronization matrix, the visual range of the original image always being greater than the separate visual range of each eye;
- Step A3: automatically generate, from the wearer's sensory comfort response, an active parallax compensation standard suited to the user's own eyes, and use it as the best-fit adjustment standard for each subsequent set of views.
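Steps A1 through A3 can be sketched as building a small per-user matrix from per-point feedback. The data layout below is an assumption for illustration only; the patent does not fix M, the grid shape, or the feedback format:

```python
import numpy as np

def calibrate_parallax_matrix(user_feedback, grid_shape=(3, 3)):
    """Build a per-user adaptive parallax synchronization matrix from
    M calibration points (here M = 9 on a 3x3 grid, a hypothetical
    layout matching the nine-point adaptation described in the text).

    user_feedback: list of (row, col, horizontal_offset_px) tuples,
    one per calibration point the wearer adjusted to comfort.
    """
    matrix = np.zeros(grid_shape)
    for row, col, offset in user_feedback:
        matrix[row, col] = offset  # per-region disparity correction
    return matrix

# Example: the wearer nudged the centre point 4 px and four corners 6 px.
feedback = [(0, 0, 6.0), (0, 2, 6.0), (1, 1, 4.0), (2, 0, 6.0), (2, 2, 6.0)]
m = calibrate_parallax_matrix(feedback)
```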
- the image disparity compensation and clipping step comprises:
- Step B1: use the per-region parallax clipping factors of the left and right eyes as the clipping basis for the current wearer viewing virtual reality, and cut one complete image into the optimal left/right-eye combination, which means: during initial calibration, the wearer determines the best-fit result according to his or her own wearing habits; this best-fit result is the optimal adjustment target; the compensation matrix generated from the optimal adjustment target is saved and used as a reference for dynamic adjustment; this compensation matrix is the calibration matrix, also called the optimal left/right-eye combination;
- Step B2: take the two split-view images obtained in step B1 as two subsets of the complete set of original image pixels, the two subsets having an intersection; perform compensation clipping with the clipping factor of virtual active disparity compensation so that the images seen by the left and right eyes conform to the maximization principle of optimal parallax compensation, namely: use the data of one or more active corrections as a set of candidate compensations and, over a period of wearing, record and screen the preferred compensation matrices for different lighting, scenes, and wearing angles, with maximizing the combined binocular field of view as an auxiliary objective;
- Step B3: pre-process the images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems, the split-view compensation parameters comprising the data for image transformation and image cropping in the observation environment according to different observation angles, observation distances, brightness, and color.
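Step B2's overlapping left/right crops of one full frame can be illustrated as follows; the scalar `shift_px` stands in for the full per-region clipping matrix, and all names are illustrative:

```python
import numpy as np

def crop_stereo_pair(image, shift_px, view_width):
    """Cut one full frame into overlapping left/right-eye windows.
    Both crops are subsets of the original pixel set and share an
    intersection, as step B2 requires."""
    assert shift_px + view_width <= image.shape[1], "crop exceeds frame"
    left = image[:, :view_width]                       # columns 0 .. view_width-1
    right = image[:, shift_px:shift_px + view_width]   # shifted window
    return left, right

frame = np.arange(100 * 160).reshape(100, 160)
left, right = crop_stereo_pair(frame, shift_px=20, view_width=120)
# Columns 20..119 of the frame appear in both eye views: the intersection.
```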
- the fixed timing period in the binocular rendering step means that each image channel is rendered at a specified frequency.
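The fixed-timing rendering can be sketched as a loop that composes both eye views onto one canvas and presents it once per period; `compose_frame` and `present` are placeholders for the real compositor and display driver, and the loop shape is an assumption, not the patent's implementation:

```python
import time

def render_loop(compose_frame, present, hz=60.0, frames=3):
    """Present one pre-composed overlay canvas per fixed period, so the
    two eye views are projected together and cannot drift out of sync."""
    period = 1.0 / hz
    presented = []
    for _ in range(frames):
        start = time.monotonic()
        canvas = compose_frame()   # pre-render the next field's canvas
        present(canvas)            # one-shot projection to the display layer
        presented.append(canvas)
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)   # hold to the specified frequency
    return presented

shown = render_loop(lambda: "canvas", lambda c: None, hz=240.0, frames=3)
```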
- a binocular rendering system for virtual active disparity calculation compensation includes the following modules:
- Active parallax calculation module: performs binocular rendering adaptation for the initial user to obtain the user's dedicated adaptive parallax synchronization matrix;
- Image disparity compensation and clipping module: uses the user's dedicated adaptive parallax synchronization matrix as the clipping factor for virtual active disparity compensation, cutting one complete image into a left/right-eye combination displayed single-channel in the left and right eye framing windows;
- Binocular rendering module: pre-renders the split-view images from the same monocular camera onto the waiting overlay canvas of the next field and, according to the fixed timing period, projects the prepared overlay-canvas composite image in a single pass onto the display hardware layer of the target virtual reality headset.
- the active disparity calculation module includes:
- Parallax synchronization matrix selection sub-module: selects a plurality of original images of different resolutions from the image library, performs binocular rendering through these originals for a user wearing the virtual reality headset for the first time, and adapts one by one based on the M points at fixed positions in the original image, where M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix;
- Adaptive parallax synchronization matrix generation sub-module: obtains, through the user's adaptive feedback, the optimal parallax correction parameters and the visual range of each region in the original image, and generates the user's adaptive parallax synchronization matrix, the visual range of the original image always being greater than the separate visual range of each eye;
- Best-fit calibration standard generation sub-module: automatically generates, from the wearer's sensory comfort response, an active parallax compensation standard suited to the user's own eyes, and uses it as the best-fit adjustment standard for each subsequent set of views.
- the image disparity compensation and cropping module comprises:
- Best left/right-eye combination generation sub-module: uses the per-region parallax clipping factors of the left and right eyes as the clipping basis for the current wearer viewing virtual reality, and cuts one complete image into the optimal left/right-eye combination; during initial calibration, the wearer determines the best-fit result according to his or her own wearing habits, that best-fit result is the optimal adjustment target, the compensation matrix generated from it is saved as a reference for dynamic adjustment, and this compensation matrix is the calibration matrix, also called the optimal left/right-eye combination;
- Cropping sub-module: takes the two cropped split-view images as two subsets of the complete set of original image pixels, the two subsets having an intersection; performs compensation clipping with the clipping factor of virtual active disparity compensation so that the images seen by the left and right eyes conform to the maximization principle of optimal parallax compensation, namely: the data of one or more active corrections is used as a set of candidate compensations, and over a period of wearing the preferred compensation matrices for different lighting, scenes, and wearing angles are recorded and screened, with maximizing the combined binocular field of view as an auxiliary objective;
- Split-view image generation sub-module: pre-processes images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems; the split-view compensation parameters refer to the data for image transformation and image cropping in the observation environment according to different observation angles, observation distances, brightness, and color.
- the present invention has the following beneficial effects:
- the binocular rendering method for virtual active parallax calculation compensation uses a single camera as the hardware basis of the image acquisition device and, through software parallax calculation, compensates toward what the wearer's own eyes would observe directly in nature, reducing the ghosting that causes vertigo. According to the wearer's individual differences, the optimal parallax adjustment can be obtained actively, and the cropped single view source can be compensated in reverse to achieve a more realistic immersive appearance.
- the method ensures timing synchronization in the binocular rendering process and consistency of color and luminosity between the two channels, helping the rendered content be dynamically adapted to the wearer so that the binocular virtual effect approaches reality.
- the method can provide targeted, precise adjustment of the parallax of the view content, satisfying not only the synchronization requirement but also individual differences in parallax, thereby reducing dizziness.
- FIG. 1 is a flowchart of the binocular rendering method for virtual active disparity calculation compensation provided by the present invention;
- FIG. 2 is a schematic diagram of left/right-eye pre-calibration for virtual active disparity calculation compensation provided by the present invention;
- FIG. 3 is a diagram of the best-fit binocular standard generated for a user;
- FIG. 4 is a schematic diagram of the cropping compensation result;
- FIG. 5 is the newly composed rendered view.
- the binocular rendering method for virtual active disparity calculation compensation provided by the present invention includes the following steps:
- Active parallax calculation step: perform binocular rendering adaptation for the initial user to obtain the user's dedicated adaptive parallax synchronization matrix.
- the active parallax calculation referred to in the present invention is virtual active parallax calculation: when the user first uses the device, several binocular images are rendered to the wearer, and the nine point positions shown in FIG. 2 are adapted one by one. Through the user's adaptive feedback (a minimum of 3 of points 1/3/5/7/9), the optimal parallax correction parameters and the visual range of each region are obtained, which in turn generate a wearer-specific adaptive parallax synchronization matrix.
- the visual range of the original image is always greater than the separate visual range of each eye.
- an active parallax compensation standard suited to the viewer's own eyes is thus obtained, and compensation based on it serves as the best-fit adjustment for each subsequent view, as shown in FIG.
- Image disparity compensation and clipping step: the user's dedicated adaptive parallax synchronization matrix is used as the clipping factor of the virtual active parallax compensation, and one complete image is cut into a left/right-eye combination, displayed respectively in the left and right eye viewing windows.
- the adaptive parallax synchronization matrix that best fits the wearer serves as the clipping factor of the virtual active disparity compensation; the original image is the complete set of pixels, and the left and right eyes each use the per-region parallax clipping factors as the current wearer's basis for cropping the virtual view.
- one complete image is cut into the best left/right-eye combination, and the field of view is displayed single-channel in the left and right eye viewfinder windows.
- each of the two split-view images thus cropped is a subset of the complete set, but the two subsets have both overlapping and non-overlapping portions.
- the split compensation cropping is shown in FIG. 4.
- Binocular rendering step: pre-render the image from the same monocular camera onto the waiting overlay canvas of the next field, and project the prepared overlay composite image onto the target display hardware layer in a single pass according to the fixed timing period.
- the image rendering composites the split views of the same monocular camera, pre-rendered onto the waiting overlay canvas of the next field.
- the prepared overlay canvas is rendered and projected in a single pass onto the target display hardware layer. Because pre-processing during binocular rendering has already ensured the optimal combination of color, chromatic aberration, and parallax, and the image is composited within the fixed timing cycle, optimal binocular timing is guaranteed, as shown in FIG. 5.
- the binocular rendering method for virtual active disparity calculation compensation includes the following steps:
- Step S1: randomly select several original images of different resolutions from the template library;
- Step S2: pre-mark 5 to 9 parallax calibration crosses on the selected original images;
- Step S3: attempt to project the composited regions separately in the binocular display;
- Step S4: the wearer adjusts the calibration crosses according to his or her own perception; if a vertigo-free binocular fit meeting the maximum viewing angle is achieved, go to step S5, otherwise return to step S3;
- Step S5: store the final compensation adjustment parameters of the calibration crosses as perceived by the wearer, and attempt multiple sets of compensation parameters for different resolutions;
- Step S6: begin frame-by-frame processing of the actual single-view video data stream, performing reverse cropping based on the prepared parallax compensation adjustment values to cut the left-eye frame and right-eye frame out of the original image;
- Step S7: pre-composite the differently cropped binocular frames, reconstructing a new composite image with a larger overall size than the original view, waiting to be rendered in the binocular viewfinder;
- Step S8: render the new picture in a single pass.
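Steps S7 and S8 amount to placing the two differently cut eye frames on one canvas larger than either view, so a single render call shows both. A minimal sketch, with a side-by-side layout assumed for illustration:

```python
import numpy as np

def compose_overlay_canvas(left, right, gap=0):
    """Place the left- and right-eye frames side by side on one canvas
    larger than either view, ready to be rendered in a single pass."""
    h = left.shape[0]
    canvas = np.zeros((h, left.shape[1] + gap + right.shape[1]), dtype=left.dtype)
    canvas[:, :left.shape[1]] = left
    canvas[:, left.shape[1] + gap:] = right
    return canvas

# Two 4x6 eye frames become one 4x12 canvas for the one-shot render.
canvas = compose_overlay_canvas(np.ones((4, 6)), np.full((4, 6), 2.0))
```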
- in this way, virtual active parallax compensation rendering can be performed for the user on the virtual reality device side, providing an immersive entertainment experience with low vertigo-inducing ghosting and high image quality.
- Embodiment 2 (360 degree panoramic broadcast)
- the system provided by the present invention and its various devices may be implemented entirely in computer-readable program code, or the same functionality may be achieved with logic gates, switches, ASICs, programmable logic controllers, and embedded microcontrollers. Therefore, the system and its various devices can be regarded as hardware components, and the devices included in them for implementing various functions can also be regarded as structures within hardware components; a device implementing a given function may be viewed either as a software module implementing a method or as a structure within a hardware component.
Abstract
Description
Claims (7)
- 1. A binocular rendering method for virtual active disparity calculation compensation, characterized by comprising the following steps: Active parallax calculation step: performing binocular rendering adaptation for an initial user to obtain the user's dedicated adaptive parallax synchronization matrix; Image disparity compensation and clipping step: using the user's dedicated adaptive parallax synchronization matrix as the clipping factor for virtual active disparity compensation, cutting one complete image into a left/right-eye combination displayed single-channel in the left and right eye framing windows; Binocular rendering step: pre-rendering the split-view images from the same monocular camera onto the waiting overlay canvas of the next field, and, according to a fixed timing period, projecting the prepared overlay-canvas composite image in a single pass onto the display hardware layer of the target virtual reality headset.
- 2. The binocular rendering method for virtual active disparity calculation compensation according to claim 1, wherein the active parallax calculation step comprises: Step A1: selecting a plurality of original images of different resolutions from an image library, performing binocular rendering through the originals for a user wearing the virtual reality headset for the first time, and adapting one by one based on M points at fixed positions in the original image, where M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix; Step A2: obtaining, through the user's adaptive feedback, the optimal parallax correction parameters and the visual range of each region in the original image, and generating the user's adaptive parallax synchronization matrix, the visual range of the original image always being greater than the separate visual range of each eye; Step A3: automatically generating, from the wearer's sensory comfort response, an active parallax compensation standard suited to the user's own eyes, and using it as the best-fit adjustment standard for each subsequent set of views.
- 3. The binocular rendering method for virtual active disparity calculation compensation according to claim 1, wherein the image disparity compensation and clipping step comprises: Step B1: using the per-region parallax clipping factors of the left and right eyes as the clipping basis for the current wearer viewing virtual reality, and cutting one complete image into the optimal left/right-eye combination, where the optimal left/right-eye combination means: during initial calibration, the wearer determines the best-fit result according to his or her own wearing habits; this best-fit result is the optimal adjustment target; the compensation matrix generated from the optimal adjustment target is saved and used as a reference for dynamic adjustment; this compensation matrix is the calibration matrix, also called the optimal left/right-eye combination; Step B2: taking the two split-view images obtained in step B1 as two subsets of the complete set of original image pixels, the two subsets having an intersection; performing compensation clipping with the clipping factor of virtual active disparity compensation so that the images seen by the left and right eyes conform to the maximization principle of optimal parallax compensation, namely: using the data of one or more active corrections as a set of candidate compensations and, over a period of wearing, recording and screening the preferred compensation matrices for different lighting, scenes, and wearing angles, with maximizing the combined binocular field of view as an auxiliary objective; Step B3: pre-processing the images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems, the split-view compensation parameters comprising the data for image transformation and image cropping in the observation environment according to different observation angles, observation distances, brightness, and color.
- 4. The binocular rendering method for virtual active disparity calculation compensation according to claim 1, wherein the fixed timing period in the binocular rendering step means rendering each image channel at a specified frequency.
- 5. A binocular rendering system for virtual active disparity calculation compensation, characterized by comprising the following modules: Active parallax calculation module: performs binocular rendering adaptation for the initial user to obtain the user's dedicated adaptive parallax synchronization matrix; Image disparity compensation and clipping module: uses the user's dedicated adaptive parallax synchronization matrix as the clipping factor for virtual active disparity compensation, cutting one complete image into a left/right-eye combination displayed single-channel in the left and right eye framing windows; Binocular rendering module: pre-renders the split-view images from the same monocular camera onto the waiting overlay canvas of the next field and, according to the fixed timing period, projects the prepared overlay-canvas composite image in a single pass onto the display hardware layer of the target virtual reality headset.
- 6. The binocular rendering system for virtual active disparity calculation compensation according to claim 5, wherein the active parallax calculation module comprises: Parallax synchronization matrix selection sub-module: selects a plurality of original images of different resolutions from the image library, performs binocular rendering through the originals for a user wearing the virtual reality headset for the first time, and adapts one by one based on the M points at fixed positions in the original image, where M is a positive integer and the positions of the M points are chosen according to the form of the adaptive parallax synchronization matrix; Adaptive parallax synchronization matrix generation sub-module: obtains, through the user's adaptive feedback, the optimal parallax correction parameters and the visual range of each region in the original image, and generates the user's adaptive parallax synchronization matrix, the visual range of the original image always being greater than the separate visual range of each eye; Best-fit calibration standard generation sub-module: automatically generates, from the wearer's sensory comfort response, an active parallax compensation standard suited to the user's own eyes, and uses it as the best-fit adjustment standard for each subsequent set of views.
- The binocular rendering system for virtual active parallax computation compensation according to claim 5, wherein the image parallax compensation and cropping module comprises:
Best left-right eye combination generation sub-module: uses the per-region parallax cropping factors of the left and right eyes as the cropping basis for the current wearer viewing virtual reality, cropping one complete image into the best left-right eye combination. The best left-right eye combination means: during initial calibration, the wearer determines the best adaptation result according to his or her own wearing habits; this best adaptation result is the best calibration target; the compensation matrix generated from the best calibration target is saved and used as a reference for dynamic calibration; this compensation matrix is the calibration matrix, also called the best left-right eye combination;
Cropping sub-module: treats the two cropped split-view images as two subsets of the complete set of original image pixels, the two subsets having an intersection, and performs compensation cropping with the cropping factor of the virtual active parallax compensation so that the images seen by the left and right eyes conform to the maximization principle of optimal parallax compensation. The maximization principle of optimal parallax compensation means: data from one or more active corrections serve as a group of candidate compensations; over a period of wearing, the preferred compensation matrices for different lighting, scene, and wearing-angle conditions are recorded and selected, with maximizing the combined binocular field of view as an auxiliary objective;
Split-view image generation sub-module: pre-processes images subsequently captured by the monocular camera with split-view compensation parameters to form two split-view images with distinct coordinate systems; the split-view compensation parameters are the data used for image transformation and image cropping in the viewing environment under different viewing angles, viewing distances, brightness, and color conditions.
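The per-region adaptation described in the claims above (M fixed points, each carrying the wearer's preferred correction) can be pictured as a small matrix of per-region disparity offsets. The sketch below is purely illustrative and not the patented implementation; the function name, the feedback format, and the offset units are all assumptions made for the example:

```python
import numpy as np

def build_parallax_sync_matrix(feedback, rows, cols):
    """Assemble a per-region parallax-correction matrix from user feedback.

    `feedback` maps (row, col) region indices to the horizontal disparity
    offset (here, in pixels) the wearer judged most comfortable for that
    region. Regions without feedback default to 0 (no correction).
    """
    matrix = np.zeros((rows, cols))
    for (r, c), offset in feedback.items():
        matrix[r, c] = offset
    return matrix

# M = rows * cols fixed sample points, adapted one by one from feedback
fb = {(0, 0): 2.0, (0, 1): 1.5, (1, 0): -0.5, (1, 1): 0.0}
m = build_parallax_sync_matrix(fb, 2, 2)
```

In this reading, "generating the user's adaptive parallax synchronization matrix" amounts to filling in one such matrix per user from their calibration responses.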
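The cropping sub-module's idea of two split-view images that are overlapping subsets of the original pixel set can be sketched as horizontally offset crops of a single monocular frame. This is a minimal sketch under assumed conventions (a single integer pixel disparity, row-major arrays), not the claimed compensation procedure:

```python
import numpy as np

def split_views(frame, disparity):
    """Crop one monocular frame into left/right views that overlap.

    Shifting the crop window horizontally by `disparity` pixels yields two
    subsets of the original pixel set whose intersection is the shared
    central columns, approximating a binocular pair from one camera image.
    """
    h, w = frame.shape[:2]
    left = frame[:, : w - disparity]   # subset anchored at the left edge
    right = frame[:, disparity:]       # subset anchored at the right edge
    return left, right

frame = np.arange(24).reshape(4, 6)    # toy 4x6 "image"
left, right = split_views(frame, 2)
```

With a 4x6 frame and a disparity of 2, both views are 4x4 and share the frame's two central columns, which is the "two subsets with an intersection" property the claim describes.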
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710214653.3 | 2017-04-01 | ||
CN201710214653.3A CN107071384B (en) | 2017-04-01 | 2017-04-01 | The binocular rendering intent and system of virtual active disparity computation compensation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018176927A1 true WO2018176927A1 (en) | 2018-10-04 |
Family
ID=59602891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/117130 WO2018176927A1 (en) | 2017-04-01 | 2017-12-19 | Binocular rendering method and system for virtual active parallax computation compensation |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107071384B (en) |
WO (1) | WO2018176927A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107071384B (en) * | 2017-04-01 | 2018-07-06 | 上海讯陌通讯技术有限公司 | The binocular rendering intent and system of virtual active disparity computation compensation |
CN110072049B (en) * | 2019-03-26 | 2021-11-09 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN110177216B (en) * | 2019-06-28 | 2021-06-15 | Oppo广东移动通信有限公司 | Image processing method, image processing device, mobile terminal and storage medium |
CN111202663B (en) * | 2019-12-31 | 2022-12-27 | 浙江工业大学 | Vision training learning system based on VR technique |
CN113010020A (en) * | 2021-05-25 | 2021-06-22 | 北京芯海视界三维科技有限公司 | Time schedule controller and display device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005060271A1 (en) * | 2003-12-18 | 2005-06-30 | University Of Durham | Method and apparatus for generating a stereoscopic image |
CN102811359A (en) * | 2011-06-01 | 2012-12-05 | 三星电子株式会社 | 3D-image conversion apparatus and method for adjusting depth information of the same |
CN103039080A (en) * | 2010-06-28 | 2013-04-10 | 汤姆森特许公司 | Method and apparatus for customizing 3-dimensional effects of stereo content |
CN106507093A (en) * | 2016-09-26 | 2017-03-15 | 北京小鸟看看科技有限公司 | A kind of display mode switching method of virtual reality device and device |
CN107071384A (en) * | 2017-04-01 | 2017-08-18 | 上海讯陌通讯技术有限公司 | The binocular rendering intent and system of virtual active disparity computation compensation |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102300103B (en) * | 2010-06-25 | 2013-08-14 | 深圳Tcl新技术有限公司 | Method for converting 2D (Two-Dimensional) content into 3D (Three-Dimensional) contents |
JP2012174237A (en) * | 2011-02-24 | 2012-09-10 | Nintendo Co Ltd | Display control program, display control device, display control system and display control method |
CN103905806B (en) * | 2012-12-26 | 2018-05-01 | 三星电子(中国)研发中心 | The system and method that 3D shootings are realized using single camera |
CN105611278B (en) * | 2016-02-01 | 2018-10-02 | 欧洲电子有限公司 | The image processing method and system and display equipment of anti-bore hole 3D viewings spinning sensation |
CN106251403B (en) * | 2016-06-12 | 2018-02-16 | 深圳超多维光电子有限公司 | A kind of methods, devices and systems of virtual three-dimensional Scene realization |
CN106101689B (en) * | 2016-06-13 | 2018-03-06 | 西安电子科技大学 | The method that using mobile phone monocular cam virtual reality glasses are carried out with augmented reality |
CN106231292B (en) * | 2016-09-07 | 2017-08-25 | 深圳超多维科技有限公司 | A kind of stereoscopic Virtual Reality live broadcasting method, device and equipment |
CN106385576B (en) * | 2016-09-07 | 2017-12-08 | 深圳超多维科技有限公司 | Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment |
CN106375749B (en) * | 2016-09-12 | 2018-06-29 | 北京邮电大学 | A kind of disparity adjustment method and device |
CN106454313A (en) * | 2016-10-18 | 2017-02-22 | 深圳市云宙多媒体技术有限公司 | 3D video image rendering and parallax adjustment method and system |
- 2017-04-01 CN CN201710214653.3A patent/CN107071384B/en active Active
- 2017-12-19 WO PCT/CN2017/117130 patent/WO2018176927A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN107071384B (en) | 2018-07-06 |
CN107071384A (en) | 2017-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018176927A1 (en) | Binocular rendering method and system for virtual active parallax computation compensation | |
US11076142B2 (en) | Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene | |
JP2010033367A (en) | Information processor and information processing method | |
CN103329165B (en) | The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls | |
US11528464B2 (en) | Wide-angle stereoscopic vision with cameras having different parameters | |
JP6384940B2 (en) | 3D image display method and head mounted device | |
CN101562754B (en) | Method for improving visual effect of plane image transformed into 3D image | |
US10553014B2 (en) | Image generating method, device and computer executable non-volatile storage medium | |
US11659158B1 (en) | Frustum change in projection stereo rendering | |
JP2018500690A (en) | Method and system for generating magnified 3D images | |
TWI589150B (en) | Three-dimensional auto-focusing method and the system thereof | |
US20130208097A1 (en) | Three-dimensional imaging system and image reproducing method thereof | |
JP2011529285A (en) | Synthetic structures, mechanisms and processes for the inclusion of binocular stereo information in reproducible media | |
WO2013133057A1 (en) | Image processing apparatus, method, and program | |
WO2023056803A1 (en) | Holographic presentation method and apparatus | |
Mikšícek | Causes of visual fatigue and its improvements in stereoscopy | |
WO2017085803A1 (en) | Video display device and video display method | |
Brooker et al. | Operator performance evaluation of controlled depth of field in a stereographically displayed virtual environment | |
Wu et al. | P‐94: Free‐form micro‐optical design for enhancing image quality (MTF) at large FOV in light field near eye display | |
JP5539486B2 (en) | Information processing apparatus and information processing method | |
KR20170096567A (en) | Immersive display system for virtual reality and virtual reality display method using the same | |
CN202713529U (en) | 3D video color corrector | |
JP2003101690A (en) | Image processing method, digital camera, and recording medium | |
Sasaki et al. | 8‐2: Invited Paper: Hyper‐Realistic Head‐up Display System for Medical Application | |
CN206946095U (en) | A kind of 3D telescopes |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17904098; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17904098; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.11.2019) |