CN113014902B - 3D-2D synchronous display method and system - Google Patents

3D-2D synchronous display method and system

Info

Publication number
CN113014902B
CN113014902B CN202110172879.8A CN202110172879A
Authority
CN
China
Prior art keywords
view
edge
parallax
visible
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110172879.8A
Other languages
Chinese (zh)
Other versions
CN113014902A (en)
Inventor
刘峰
刘家志
王文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN202110172879.8A priority Critical patent/CN113014902B/en
Publication of CN113014902A publication Critical patent/CN113014902A/en
Application granted granted Critical
Publication of CN113014902B publication Critical patent/CN113014902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/361 Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/398 Synchronisation thereof; Control thereof

Abstract

The invention discloses a 3D-2D synchronous display method and system, belonging to the technical field of 3D display. A left eye view, a right eye view, a middle view, and left and right disparity maps are received and modulated pixel by pixel to obtain a left eye modulation view and a right eye modulation view; the left eye modulation view and the right eye modulation view are played on a display screen, so that a 3D image is presented to a viewer wearing 3D glasses and, at the same time, a clear 2D image is presented to a viewer watching with the naked eye.

Description

3D-2D synchronous display method and system
Technical Field
The invention belongs to the technical field of 3D display, in particular 3D cinema, and more particularly relates to a method by which an observer wearing 3D glasses watches 3D content while, at the same time, an observer not wearing 3D glasses watches clear 2D content.
Background
Traditional 3D display requiring 3D glasses is still one of the mainstream ways for people to experience stereo perception; 3D cinemas, for example, remain popular worldwide. 3D display mainly exploits the binocular disparity of human eyes: the left and right eye views differ slightly, and this additional binocular disparity information further enhances the viewer's stereoscopic perception. Through 3D glasses, an observer views the left and right eye views of 3D content played on traditional 3D hardware separately with each eye. However, due to the convergence-accommodation conflict, viewing 3D content for a long time may cause asthenopia or vertigo, especially for people with defective stereoscopic vision (about 11% of the population) or children, whose visual nervous system is not yet fully developed. The most direct remedy when the eyes become tired or dizzy is to take off the 3D glasses and watch the screen directly, but the original 3D content played on conventional 3D display hardware then superimposes into a picture or video with double images caused by the parallax between the left and right views; these are called ghost images in the present invention, and they seriously degrade the quality of 2D viewing. The technical problem faced is therefore how to let an observer wearing 3D glasses obtain a 3D experience while, at the same time, an observer not wearing 3D glasses sees clear 2D content, so that each observer can freely choose to watch the 3D or the 2D content without affecting the viewing experience of others.
Most current solutions design novel display hardware. For example, the 2x3D display system [1] decomposes the left eye view into a combination of the right view and a residual view ([1] Wataru Fujimura et al., 2x3D: Real Time Shader for Simultaneous 2D/3D Hybrid Theater, SIGGRAPH Asia Emerging Technologies, 2012). The right and residual views are played on the display screen of conventional 3D display hardware; when viewed with the naked eye, the viewer's visual nervous system re-fuses them into the left eye view, so whether a viewer sees the 3D or the 2D effect depends on whether they wear the glasses, which separate the right eye image of the wearer from the mixed light signal. The 3D+2DTV display [2] employs three video channels: a left channel, a right channel and an inverse channel ([2] Scher, S., et al., 3D+2DTV: 3D Displays with No Ghosting for Viewers without Glasses, ACM Transactions on Graphics, 2013, 32(3)). The right eye channel and the inverse channel are complementary: when an observer does not wear the special glasses, the mixed light signals of all three channels enter the visual system, the right eye view and the inverse view cancel out, and only the left eye view, i.e. a clear 2D view, is perceived; when an observer wears the special glasses, the left and right eye views are separated from the mixed light signal and projected to the left and right eyes respectively, and the observer perceives a 3D effect. Both of the above solutions require special display hardware and are incompatible with existing display hardware, and overlaying one view with an auxiliary view to turn it into another view comes at the cost of reduced contrast of the left and right eye views, which seriously degrades the quality of both 3D and 2D viewing.
Yet another solution is HiddenStereo [3] ([3] Scher, S., et al., 3D+2DTV: 3D Displays with No Ghosting for Viewers without Glasses, ACM Transactions on Graphics, 2013, 32(3)). HiddenStereo generates left and right eye views with a 3D-2D synchronous display effect by adding and subtracting a disparity inducer to and from a clear 2D view. When the modulated left and right eye views are played on conventional 3D display hardware, a viewer wearing 3D glasses still perceives the 3D effect, while a viewer not wearing 3D glasses perceives a clear 2D view because the disparity inducer components in the left and right eye views cancel each other out. The disparity inducer is obtained from a disparity map or directly from the original left and right eye views. The HiddenStereo researchers verified through psycho-visual experiments that when the maximum absolute disparity of the original left and right eye views is smaller than 8 arcmin, the 3D effect of the newly generated left and right eye views is essentially unaffected; but when the absolute disparity of the original left and right eye views is too large, HiddenStereo seriously distorts the left and right eye views, especially in regions with large disparity and high spatial frequency, which increases the discomfort of the observer. This problem arises because the steerable pyramid used by HiddenStereo cannot adaptively adjust the spatial frequency of the left and right eye views according to disparity.
Disclosure of Invention
The invention aims to provide a 3D-2D synchronous display method and system applicable to traditional 3D display hardware, in which the spatial frequency at each position is adaptively adjusted according to the disparity of the pixels in the left and right eye views: the degree of smoothing is higher in regions with large disparity and high spatial frequency, so as to reduce discomfort, and lower in regions with small disparity, so as to preserve the 3D parallax. With this method, an observer wearing 3D glasses and an observer not wearing 3D glasses can watch the same screen at the same time and enjoy the 3D effect and the 2D effect respectively; an observer can also freely switch between 3D viewing and 2D viewing by putting on or taking off the 3D glasses, without affecting the viewing experience of others; and a cinema operator can switch a 3D cinema into a 3D-2D synchronous playing mode at zero hardware cost.
In order to achieve the purpose, the invention adopts the following technical scheme:
A3D-2D synchronous display method comprises the following steps:
receiving a left eye view, a right eye view, a middle view, a left disparity map and a right disparity map, and performing pixel-by-pixel modulation to obtain a left eye modulation view and a right eye modulation view;
playing the left eye modulation view and the right eye modulation view on a display screen, so that a 3D image is synchronously presented to a viewer wearing 3D glasses and a 2D image is synchronously presented to a viewer watching with the naked eye;
the pixel-by-pixel modulation is performed according to the following formula:
[Pixel-by-pixel modulation formula — rendered as an image in the original publication.]
wherein P_l and P_r represent the left eye modulation view and the right eye modulation view, C represents the middle view, D_l and D_r represent the left disparity map and the right disparity map, x represents a pixel, L_σ(·) and R_σ(·) represent the smooth-filtered left and right eye views, and σ(D_l(x)) and σ(D_r(x)) represent the degree of smoothing, which is positively correlated with the absolute disparity at pixel x.
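The exact modulation formula is reproduced only as an image in the original publication; the following is a minimal sketch of one plausible reading, assuming the two modulated views are the middle view plus and minus a common parallax-inducing component built from disparity-adaptively smoothed left and right views (so that they average back to the middle view for naked-eye viewing). All names, the σ mapping and the formula itself are assumptions, not the patented formula.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sigma_from_disparity(d, k=0.5, s_min=0.5):
        # Assumed mapping: smoothing width grows linearly with absolute disparity.
        return s_min + k * np.abs(d)

    def modulate(L, R, C, Dl, Dr, n_levels=8):
        # Hypothetical pixel-by-pixel modulation: P_l = C + A, P_r = C - A, where A is
        # half the difference of disparity-adaptively smoothed views, so the two
        # modulated views average back to the middle view C for naked-eye viewing.
        H, W = C.shape
        sig_l = sigma_from_disparity(Dl)
        sig_r = sigma_from_disparity(Dr)
        levels = np.linspace(min(sig_l.min(), sig_r.min()),
                             max(sig_l.max(), sig_r.max()), n_levels)
        L_bank = np.stack([gaussian_filter(L.astype(float), s) for s in levels])
        R_bank = np.stack([gaussian_filter(R.astype(float), s) for s in levels])
        idx_l = np.abs(levels[:, None, None] - sig_l[None]).argmin(axis=0)
        idx_r = np.abs(levels[:, None, None] - sig_r[None]).argmin(axis=0)
        rows, cols = np.indices((H, W))
        L_s = L_bank[idx_l, rows, cols]
        R_s = R_bank[idx_r, rows, cols]
        A = 0.5 * (L_s - R_s)              # parallax-inducing component
        return C + A, C - A                # (P_l, P_r)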
Further, according to the left eye view and the right eye view, double-visible edge disparity is first calculated, then single-visible edge disparity is presumed, then more double-visible edge disparities are presumed, and finally the double-visible disparity is expanded, generating a left disparity map, a right disparity map and visible information; wherein:
the method for calculating the double-visible edge disparity is: applying an edge detection algorithm to the left eye view and the right eye view to obtain sets of edge pixel points; using a cost function to measure the pixel difference between edge pixel points on the left eye view and edge pixel points on the right eye view, and searching for candidate pixel matching pairs of the left and right eye views on the same horizontal line; the candidate pixel matching pairs are subjected to a left-right consistency check, a sub-optimal check and a histogram check to determine the final optimal pixel matching pairs;
the method for presuming the single-visible edge disparity is: for a certain edge on the left eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its left whose left-right order of corresponding points on the right eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the left eye view; for a certain edge on the right eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its right whose left-right order of corresponding points on the left eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the right eye view;
the method for presuming more double-visible edge disparities is: for pixels with unknown disparity on some vertical edges and on all horizontal edges of the left eye view and the right eye view, if double-visible pixel points with the same disparity are found at both ends of the edge containing the pixel with unknown disparity, the disparity of that edge is determined;
the method for expanding the double-visible disparity is: the edges with known disparity and the pixels around them are the pixels to be modulated; breadth expansion is performed from the known disparities so that the disparity of the pixels around an edge is consistent with that edge.
Further, the cost function is formulated as follows:
[Cost function formula — rendered as an image in the original publication.]
wherein C(x_l, x_r) represents the cost function, x_l and x_r represent edge pixel points on the left and right eye views, L and R represent the left and right eye views, ∂_x and ∂_y represent the horizontal and vertical gradients, ‖·‖₁ represents the ℓ1 norm, and λ is a coefficient.
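The cost function formula itself is an image in the original; a minimal sketch of one form consistent with the surrounding description (an ℓ1 intensity difference plus λ times the ℓ1 differences of horizontal and vertical gradients) follows. The exact combination, like the per-pixel evaluation, is an assumption.

    import numpy as np

    def matching_cost(L, R, y, xl, xr, lam=0.9):
        # Assumed form of C(x_l, x_r): l1 intensity difference plus lam times the l1
        # differences of the horizontal and vertical gradients at the two pixels.
        # (Gradients would be precomputed once in practice.)
        Lgy, Lgx = np.gradient(L.astype(float))   # np.gradient returns (vertical, horizontal)
        Rgy, Rgx = np.gradient(R.astype(float))
        return (abs(float(L[y, xl]) - float(R[y, xr]))
                + lam * (abs(Lgx[y, xl] - Rgx[y, xr]) + abs(Lgy[y, xl] - Rgy[y, xr])))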
Further, the left-right consistency check means that the two pixels of a candidate pixel matching pair are each the best matching pixel of the other view; the sub-optimal check means that the cost function satisfies C(x_l, x_r) ≤ t_s · C(x_l, x′_r), where x′_r is the sub-optimal matching point of x_l in the right eye view and t_s is a threshold with 0 < t_s < 1; the histogram check analyzes, edge by edge, the disparities of the pixel points on an edge and eliminates the pixel points corresponding to abnormal disparities.
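A compact sketch of the three checks as they might be applied to candidate matches; the data layout is an assumption, while the threshold t_s = 0.3 and the minimum count of 3 are the values used in the embodiment below.

    from collections import Counter

    def lr_consistent(best_l2r, best_r2l, xl, xr):
        # Left-right consistency: xr is xl's best match and xl is xr's best match.
        return best_l2r.get(xl) == xr and best_r2l.get(xr) == xl

    def suboptimal_ok(c_best, c_second, ts=0.3):
        # The best cost must clearly beat the runner-up: C_best <= ts * C_second.
        return c_best <= ts * c_second

    def histogram_filter(edge_matches, min_count=3):
        # Per edge: drop matches whose disparity occurs fewer than min_count times.
        counts = Counter(xl - xr for xl, xr in edge_matches)
        return [(xl, xr) for xl, xr in edge_matches if counts[xl - xr] >= min_count]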
Further, in the single-visible edge disparity presumption method, the criterion for judging that a pixel is visible only in the left eye view is: if there is a double-visible pixel x′ to the right of the remaining pixel x such that x + d > x′ + D_l(x′), where D_l represents the left disparity map, the pixel x is considered to be blocked by the pixel x′; if more than 3 pixels are blocked on a certain continuous segment of the edge, that continuous segment is considered visible only in the left eye view. The criterion for judging that a pixel is visible only in the right eye view is: if there is a double-visible pixel x′ to the left of the remaining pixel x such that x + d > x′ + D_r(x′), where D_r represents the right disparity map, the pixel x is considered to be blocked by the pixel x′; if more than 3 pixels are blocked on a certain continuous segment of the edge, that continuous segment is considered visible only in the right eye view.
Further, according to the left eye view and the right eye view, the left disparity map and the right disparity map, and the visible information obtained when the left and right disparity maps are generated, a middle view is generated by rendering a skeleton region and then rendering the remaining region; wherein:
the method for rendering the skeleton region is: each element in the edge pixel point set is directly mapped from its corresponding pixel in the left eye view or the right eye view onto the intermediate view, forming the skeleton part of the intermediate view;
the method for rendering the remaining region is: the skeleton region is breadth-expanded using the disparity information and visible information of the middle view skeleton region until the intermediate view is completely rendered.
Further, the left-eye modulation view and the right-eye modulation view are alternately played on the display screen at a frequency of at least 60Hz, and the screen refresh rate is at least 120 Hz.
A 3D-2D synchronized display system comprising:
the storage module is used for storing original 3D video content or the processed left eye modulation view and the right eye modulation view, and the original 3D video content at least comprises a left eye view and a right eye view;
the processing module comprises a core module, the core module is used for receiving the left eye view and the right eye view, the middle view, the left parallax map and the right parallax map, and carrying out pixel-by-pixel modulation according to the following formula to obtain a left eye modulation view and a right eye modulation view;
[Pixel-by-pixel modulation formula — rendered as an image in the original publication.]
wherein P_l and P_r represent the left eye modulation view and the right eye modulation view, C represents the middle view, D_l and D_r represent the left disparity map and the right disparity map, x represents a pixel, L_σ(·) and R_σ(·) represent the smooth-filtered left and right eye views, and σ(D_l(x)) and σ(D_r(x)) represent the degree of smoothing, which is positively correlated with the absolute disparity at pixel x;
and the display module comprises a display screen and is used for playing the left eye modulation view and the right eye modulation view, synchronously presenting a 3D image to a viewer wearing 3D glasses and a 2D image to a viewer watching with the naked eye.
Further, the system further comprises a disparity map module, which is used for first calculating double-visible edge disparity according to the left eye view and the right eye view, then presuming single-visible edge disparity, then presuming more double-visible edge disparities, and finally expanding the double-visible disparity, correspondingly generating a left disparity map and a right disparity map and selectively generating visible information.
Further, the system also comprises an intermediate view module, which is used for generating an intermediate view by rendering the skeleton region and the remaining region according to the left eye view and the right eye view, the left and right disparity maps and the visible information received from the disparity map module, and transmitting the generated intermediate view to the core module.
Compared with the prior art, the invention has the advantages that:
(1) The method is compatible with existing 3D display hardware and needs no additional hardware support. When an observer needs 3D-2D synchronous display, the modulated left and right eye views are input to the existing 3D display hardware; when the observer does not need 3D-2D synchronous display, the 3D display hardware plays the left and right eye views of the original 3D content as usual.
(2) The spatial frequency of different regions of the views is adjusted according to the disparity of the left and right eye views: more high-spatial-frequency components are filtered out in regions with large disparity, while regions with small disparity retain more high-spatial-frequency components, which reduces the discomfort caused by large disparity.
The invention can be applied to movie theaters, adding 3D-2D synchronous display cinemas alongside 3D cinemas and 2D cinemas. A 3D-2D synchronous display cinema is prepared specifically for viewers who are not suited to watching 3D movies for a long time; in the past, such viewers could only avoid 3D movies, watching 2D movies or only the 2D versions of 3D movies. With the 3D-2D synchronous display system of the invention, a cinema manager can freely switch a 3D cinema between the 3D cinema mode and the 3D-2D synchronous display cinema mode at zero hardware cost.
Drawings
Fig. 1 is a schematic structural diagram of a 3D-2D synchronous display system according to an embodiment.
Fig. 2A-2D are schematic diagrams of four input variants of a processing module.
Detailed Description
The invention provides a 3D-2D synchronous display system, which comprises a storage module, a processing module and a display module, and specifically comprises the following contents:
the storage module stores original 3D content or left and right eye views which are pre-modulated by the processing module and have a 3D-2D synchronous display effect. If the former is stored, the storage module transmits the original 3D content to the processing module so that the storage module can modulate in real time; if the latter is stored, the storage module directly sends the content to the display module for playing.
The processing module is a program developed in advance, and runs in the form of hardware or software on a general-purpose computer or a special computer. The input of the processing module is the original left and right eye view, the disparity map (optional) and the middle view (optional), and the output is the modulated left and right eye view with the 3D-2D synchronous display effect. The processing module is further subdivided into a core module, a disparity map module and an intermediate view module. The input of the core module is an original left-right eye view, a disparity map and a middle view, and the output is a modulated left-right eye view with a 3D-2D synchronous display effect; the input of the disparity map module is the original left and right eye views, the output is the disparity information and the visible information of the skeleton area of the original left and right eye views, and the disparity map module only works when no disparity map is input to the processing module; the input of the intermediate view module is the output of the original left and right eye views and the disparity map module, the output is the intermediate view of the original left and right eye views, and the intermediate view module only works when no intermediate view is input to the processing module.
The display module is traditional 3D display hardware, mainly includes display screen and 3D glasses, and the left and right eye view after will passing through processing module modulation arranges in specific space or time and plays on the display screen, and the observer need wear the 3D glasses that can separate the light signal that the screen sent and watch. Through the 3D glasses, right and left eyes of an observer are presented with correct right and left eye signals respectively, and then form 3D perception through a human eye visual system; when the observer watches the screen directly with naked eyes without using 3D glasses, the modulated left and right eye views are merged into a clear 2D view in the human visual system.
The core of the technical scheme of the invention is a processing module, and the processing module comprises the following working steps:
the core module inputs are a left eye view L, a right eye view R, a middle view C and a left parallax view DlAnd right disparity map DrThe output is a left eye modulation view PlAnd a right eye modulation view PrPixel by pixel modulation. For pixel x, the process is as follows:
Figure GDA0003340249470000061
wherein L isσ(x)And Rσ(x)Represents the smooth filtered image of the left and right eye views, and σ (D (x)) is the magnitude of the degree of smoothing, which is the absolute value of the pixel xThe parallax is positively correlated.
The input of the disparity map module is the left eye view L and the right eye view R, and the output is the left disparity map D_l and the right disparity map D_r. The disparity map module is further subdivided into: calculating double-visible edge disparity, presuming single-visible edge disparity, presuming more double-visible edge disparities, and expanding the double-visible disparity.
(1) A double-visible edge disparity is calculated. Suppose B_l^θ and B_r^θ are the sets of edge pixels of the left eye view L and the right eye view R obtained by an edge detection algorithm, where θ represents the maximum included angle between the gradient of an edge pixel and the horizontal direction. A cost function C(x_l, x_r) is used to measure the pixel difference between an edge pixel point x_l of the left eye view and an edge pixel point x_r of the right eye view, and candidate pixel matching pairs of the left and right eye views are searched for on the same horizontal line; the candidate pixel matching pairs are subjected to a left-right consistency check, a sub-optimal check and a histogram check to determine the final optimal pixel matching pairs. The left-right consistency check means that the two pixels of a candidate matching pair are each the best matching pixel of the other view; the sub-optimal check is C(x_l, x_r) ≤ t_s · C(x_l, x′_r), where x′_r is the sub-optimal matching point of x_l in the right eye view and t_s (0 < t_s < 1) is a threshold; the histogram check analyzes, edge by edge, the disparities of the pixel points on an edge and eliminates the pixel points corresponding to abnormal disparities.
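A sketch of the same-row candidate search described above, reusing the hypothetical matching_cost sketched earlier; the data layout is an assumption.

    def best_match_on_row(L, R, y, xl, right_edge_cols, cost_fn):
        # For the left-view edge pixel (y, xl), score every right-view edge pixel on
        # the same row and return (best_xr, best_cost, second_best_cost); the last two
        # values feed the sub-optimal check.
        scored = sorted((cost_fn(L, R, y, xl, xr), xr) for xr in right_edge_cols)
        if not scored:
            return None, float("inf"), float("inf")
        best_cost, best_xr = scored[0]
        second_cost = scored[1][0] if len(scored) > 1 else float("inf")
        return best_xr, best_cost, second_cost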
(2) A single-visible edge disparity is presumed. The pixels whose disparity has been calculated in step (1) are all double-visible, but in regions where foreground and background are adjacent there are still edges that are visible only in the current view and not in the other view. For a certain edge ε on the left eye view, the disparity of some of its pixels has been calculated as d in the first step, and the disparity of the remaining pixels on the edge can tentatively be taken as d; if there are double-visible pixels to the left of a remaining pixel, but the left-right order of their corresponding points on the right eye view is inconsistent, then the disparity of this remaining pixel is indeed d, and it is visible only in the left eye view. For a certain edge ε on the right eye view, the disparity of some of its pixels has been calculated as d in the first step, and the disparity of the remaining pixels on the edge can tentatively be taken as d; if there are double-visible pixels to the right of a remaining pixel, but the left-right order of their corresponding points on the left eye view is inconsistent, then the disparity of this remaining pixel is indeed d, and it is visible only in the right eye view.
(3) More double-visible edge disparities are presumed. The disparities of most edge pixels are calculated in steps (1) and (2), but the disparities of some vertical edge pixels and of all horizontal edge pixels are still unknown, and the disparities of the pixels on these edges are determined further. For an edge with unknown disparity on the left eye view or the right eye view, if double-visible pixel points with consistent disparity can be found at both ends of the edge, the disparity of the edge can be determined.
(4) The double-visible edge disparity is expanded. Similar to a breadth-first search, the edges with known disparity are expanded outward so that the disparity of the pixel points around an edge is consistent with that edge.
The input of the intermediate view module is the left eye view L and the right eye view R, together with the disparity information and visible information generated by the disparity map module, and the output is the intermediate view C. The disparity information and visible information output by the disparity map module mainly concern the edges of the left and right eye views and the areas around them, so the edges and surrounding areas of the intermediate view can be rendered directly from this information, and the remaining areas of the intermediate view can then be estimated from the rendered areas.
The output of the processing module is a modulated left and right eye view with a 3D-2D synchronized display effect, but its input has four variants, as shown in fig. 2A-2D:
(1) As shown in fig. 2A, the inputs are a left eye view and a right eye view. In this case the left eye view and the right eye view are transmitted to the core module, the disparity map module and the intermediate view module; the disparity information (i.e. the disparity maps) generated by the disparity map module is transmitted to the core module and the intermediate view module, and the visible information generated by the disparity map module is transmitted to the intermediate view module; the intermediate view generated by the intermediate view module is transmitted to the core module; the output of the core module is the modulated left and right eye modulation views.
(2) As shown in fig. 2B, the inputs are a left eye view, a right eye view and a disparity map (i.e. left-right eye disparity information). In this case the left eye view is transmitted to the core module, the disparity map module and the intermediate view module, the right eye view is transmitted to the disparity map module and the intermediate view module, and the disparity map is transmitted to the core module; the disparity information and visible information generated by the disparity map module are transmitted to the intermediate view module; the intermediate view generated by the intermediate view module is transmitted to the core module; the output of the core module is the modulated left and right eye modulation views.
(3) As shown in fig. 2C, the inputs are a left eye view, a right eye view and a middle view. In this case the left eye view and the right eye view are transmitted to the core module and the disparity map module; the middle view is transmitted directly to the core module; the disparity map module generates disparity information and transmits it to the core module; the output of the core module is the modulated left and right eye modulation views.
(4) As shown in fig. 2D, the inputs are the left eye view, the right eye view, the disparity map and the middle view. In this case the left eye view, the right eye view, the disparity map and the middle view are all transmitted to the core module; the output of the core module is the modulated left and right eye modulation views.
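A sketch of how the processing module might dispatch across the four input variants, with the disparity map, intermediate view and core computations passed in as callables; all names and signatures are illustrative assumptions.

    def process(L, R, Dl=None, Dr=None, C=None, *,
                disparity_fn, midview_fn, core_fn):
        # Hypothetical processing-module driver: whatever is missing (disparity maps,
        # middle view) is computed before the core modulation is applied.
        if C is None:
            # The intermediate view needs edge disparity and visible information,
            # so the disparity map module runs whenever the middle view is absent.
            edge_Dl, edge_Dr, visible = disparity_fn(L, R)
            Dl = Dl if Dl is not None else edge_Dl
            Dr = Dr if Dr is not None else edge_Dr
            C = midview_fn(L, R, edge_Dl, edge_Dr, visible)
        elif Dl is None or Dr is None:
            Dl, Dr, _ = disparity_fn(L, R)
        return core_fn(L, R, C, Dl, Dr)   # -> (P_l, P_r)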
Fig. 2A-2D illustrate the left eye view as the upper input and the right eye view as the lower input, but the invention is not limited thereto: owing to the symmetry of the left and right eyes, the left and right eye views in fig. 2A-2D can be interchanged, i.e. the right eye view input on top and the left eye view input below, and the desired result can still be obtained.
The present invention provides a specific embodiment of a 3D-2D synchronous display system that can be applied to conventional 3D display hardware. The embodiment considers the most demanding case, i.e. only the original left and right eye views are input to the display system, and no disparity map or middle view is input. As shown in fig. 1, the embodiment includes a storage module 11, a processing module 12 and a display module 13, and the specific workflow of each module is as follows:
(1) the storage module 11 sends the original left-eye view L and right-eye view R to the processing module 12.
(2) A disparity map module 121 in the processing module 12 receives the original left and right eye views L and R sent by the storage module 11, generates disparity information and visible information through the four steps of calculating double-visible edge disparity, presuming single-visible edge disparity, presuming more double-visible edge disparities and expanding the double-visible disparity, sends the disparity information and visible information to an intermediate view module 122, and sends the disparity information to a core module 123. The intermediate view module 122 in the processing module 12 receives the original left and right eye views L and R sent by the storage module 11 and the disparity information and visible information sent by the disparity map module 121, generates an intermediate view through the two steps of rendering the skeleton region and rendering the remaining region, and sends the intermediate view to the core module 123. The core module 123 receives the original left and right eye views L and R sent by the storage module 11, the disparity information sent by the disparity map module 121 and the intermediate view sent by the intermediate view module 122, generates the modulated left and right eye views P_l and P_r, and sends them to the display module 13.
(3) The display module 13 receives the modulated left and right eye views P_l and P_r sent by the processing module 12 and plays them on the display screen 131 in a specific space or time: if the display module 13 is time-division 3D display hardware, the modulated left and right eye views P_l and P_r are played alternately on the screen at a rate of at least 60 Hz (meaning a screen refresh rate of at least 120 Hz); if the display module 13 is space-division display hardware, the modulated left and right eye views P_l and P_r are played on the screen with different light polarization directions or wavelength ranges. The 3D glasses 132 can split the light signal emitted by the screen into the modulated left and right eye views P_l and P_r; an observer wearing the 3D glasses receives the modulated left and right eye views P_l and P_r with the left and right eye respectively, thereby forming 3D vision. When the observer does not wear 3D glasses, both eyes receive the modulated left and right eye views P_l and P_r; the parallax-inducing components in P_l and P_r superimpose and cancel each other, and the observer perceives a clear 2D image.
The specific working steps of the disparity map module 121 in the processing module 12 are as follows:
(1) Calculate double-visible edge disparity. The edge pixel sets B_l and B_r of the left and right eye views L and R are respectively calculated with a Canny edge detection operator, the gradient direction of the edge pixels is calculated with a Sobel operator, and the edge pixel sets B_l and B_r are screened according to the gradient direction to obtain the sets B_l^θ and B_r^θ of edge pixels whose gradient direction is within the angle θ of the horizontal direction, where θ represents the maximum included angle between the gradient direction and the horizontal and is taken as θ = 5π/12. The cost function of a left eye view pixel x_l and a right eye view pixel x_r is defined as
[Cost function formula — rendered as an image in the original publication.]
where ∂_x and ∂_y denote the horizontal and vertical gradients, ‖·‖₁ is the ℓ1 norm, and the coefficient λ is taken as 0.9. Given an edge pixel point x_l ∈ B_l^θ of the left eye view, the point x_r on the same horizontal line of the right eye view that minimizes their cost function C(x_l, x_r) is sought. The matching pair (x_l, x_r) is a candidate best matching pair, on which the left-right consistency check, the sub-optimal check and the histogram check are performed: the left-right consistency check requires that, fixing x_r, the edge point found on the left eye view that minimizes the cost function is also x_l; the sub-optimal check requires that for any edge pixel point x′_r ≠ x_r of the right eye view, C(x_l, x_r) ≤ t_s · C(x_l, x′_r), where t_s is taken as 0.3; the histogram check analyzes, edge by edge, the disparities obtained for the left eye view, and if a certain disparity corresponds to fewer than 3 pixels on an edge, that disparity is considered abnormal and the disparity of the corresponding pixels becomes unknown again. All the pixels whose disparity is known after this step are put into a set of matched pairs whose elements have the form (x_l, x_r, 2), where 2 denotes that the pixel matching pair is double-visible.
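A sketch of the edge extraction and gradient-direction screening of this step, using Canny and Sobel as stated in the text; θ = 5π/12 is taken from the text, while the Canny thresholds are assumptions.

    import numpy as np
    import cv2

    def near_vertical_edge_pixels(img_gray, theta=5 * np.pi / 12,
                                  canny_lo=50, canny_hi=150):
        # img_gray: 8-bit grayscale image. Returns (row, col) edge pixels whose
        # gradient direction is within theta of the horizontal, i.e. the roughly
        # vertical edges used for horizontal disparity matching.
        edges = cv2.Canny(img_gray, canny_lo, canny_hi) > 0
        gx = cv2.Sobel(img_gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img_gray, cv2.CV_64F, 0, 1, ksize=3)
        angle = np.abs(np.arctan2(gy, gx))          # in [0, pi]
        angle = np.minimum(angle, np.pi - angle)    # angle to the horizontal axis
        ys, xs = np.nonzero(edges & (angle <= theta))
        return list(zip(ys.tolist(), xs.tolist()))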
(2) Presume single-visible edge disparity. A single-visible pixel in the left eye view L is left-visible, i.e. visible only in the left eye view and not in the right eye view; a single-visible pixel in the right eye view R is right-visible, i.e. visible only in the right eye view and not in the left eye view. Only the inference of the left-visible edges of the left eye view is described here; the inference of the right-visible edges of the right eye view is analogous. For an edge ε of the left eye view, the disparity of some of its pixels has been determined as d in step (1). For a remaining pixel x ∈ ε with unknown disparity, tentatively take its disparity to be d; if there is a double-visible pixel x′ to the right of x such that x + d > x′ + D_l(x′), the pixel x is considered to be blocked by the pixel x′. If more than 3 pixels are blocked on a continuous segment of the edge ε, that continuous segment is considered left-visible and is added to the set of matched pairs with elements of the form (x, x + d, 0), where 0 indicates that the pixel is left-visible. Similarly, if a pixel matching pair is right-visible, its element has the form (x − d, x, 1).
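A sketch of the left-visible test of this step: a tentative disparity d is checked against known double-visible pixels to the right, and runs of more than 3 blocked pixels are marked visible only in the left view. The dictionary layout of the known disparities is an assumption.

    def left_visible_segments(edge_pixels, d, known, min_blocked=3):
        # edge_pixels: pixels (y, x), in order along one left-view edge, whose disparity
        # is still unknown; d: disparity already found on the same edge;
        # known: dict {(y, x): disparity} of double-visible pixels.
        blocked_flags = []
        for (y, x) in edge_pixels:
            # A double-visible pixel x' to the right occludes x if x + d > x' + D_l(x').
            occluded = any(x + d > xp + known[(yy, xp)]
                           for (yy, xp) in known if yy == y and xp > x)
            blocked_flags.append(((y, x), occluded))
        # Keep only runs of more than min_blocked consecutive blocked pixels.
        left_visible, run = [], []
        for pix, occ in blocked_flags:
            if occ:
                run.append(pix)
            else:
                if len(run) > min_blocked:
                    left_visible.extend(run)
                run = []
        if len(run) > min_blocked:
            left_visible.extend(run)
        return left_visible   # these pixels get elements of the form (x, x + d, 0)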
(3) Presume more double-visible edge disparities. Steps (1) and (2) only consider part of the vertical edges, and the disparities of all horizontal edge pixels and of some vertical edge pixels are still unknown. Given such an edge ε, the disparities of its pixels are temporarily unknown; if there are known double-visible pixel points with consistent disparity, say d, at both ends of the edge ε, the disparity of the edge ε is determined as d, and the edge is added to the set of pixels with known disparity.
(4) Expand the double-visible edge disparity. The edges and their surrounding pixels tend to change color drastically and contain a large amount of high-spatial-frequency components, and the core module 123 mainly modulates the edges and their surrounding pixels. Starting from all the pixels with known disparity, the disparities are propagated outward by breadth-first expansion, with an expansion depth of 60.
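A sketch of the breadth-first expansion with depth 60; the 4-neighbour propagation and data layout are assumptions.

    from collections import deque

    def expand_disparity(seed, shape, depth=60):
        # seed: dict {(y, x): disparity} for edge pixels with known disparity.
        # Breadth-first expansion: each unlabeled 4-neighbour inherits the disparity
        # of the pixel it is reached from, up to `depth` layers away from the edges.
        H, W = shape
        disp = dict(seed)
        frontier = deque((p, 0) for p in seed)
        while frontier:
            (y, x), level = frontier.popleft()
            if level >= depth:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in disp:
                    disp[(ny, nx)] = disp[(y, x)]
                    frontier.append(((ny, nx), level + 1))
        return disp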
The specific working steps of the intermediate view module 122 in the processing module 12 are as follows:
(1) Render the skeleton region. Each element of the set of matched pairs is directly mapped, from its corresponding pixel in the left eye view L or the right eye view R, onto the intermediate view, forming the skeleton part of the intermediate view.
(2) Render the remaining region. The disparity information and visible information of the intermediate view skeleton region are known; the skeleton region is expanded using this information until the intermediate view C is completely rendered.
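A sketch of the skeleton rendering: each matched element is written into the intermediate view, taking its colour from the view(s) where it is visible; placing it at the midpoint of x_l and x_r, and the data layout, are assumptions.

    import numpy as np

    def render_skeleton(L, R, matches):
        # matches: list of (y, xl, xr, vis) with vis 2 = double visible,
        # 0 = left-only, 1 = right-only. The colour comes from the view(s) where the
        # pixel is visible; its column on the intermediate view is assumed to be the
        # midpoint of xl and xr.
        C = np.zeros(L.shape, dtype=float)
        filled = np.zeros(L.shape, dtype=bool)
        for y, xl, xr, vis in matches:
            xc = int(round((xl + xr) / 2.0))
            if not (0 <= xc < L.shape[1]):
                continue
            if vis == 1:                  # right-only: colour from R
                C[y, xc] = R[y, xr]
            elif vis == 0:                # left-only: colour from L
                C[y, xc] = L[y, xl]
            else:                         # double visible: average of both views
                C[y, xc] = 0.5 * (float(L[y, xl]) + float(R[y, xr]))
            filled[y, xc] = True
        return C, filled                  # unfilled pixels are completed by expansion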
The specific working steps of the core module 123 in the processing module 12 are as follows. The set of matched pairs contains only the disparities of the edges and their surrounding pixels in the left and right eye views, and only these pixels are modulated in this step. A pixel x_l of the left eye view belonging to this set is modulated as follows:
[Left-eye modulation formula — rendered as an image in the original publication.]
wherein L_σ denotes the original left eye image L smooth-filtered by a Gaussian filter with kernel width σ(x_l, x_r). Likewise, a pixel x_r of the right eye view is modulated as follows:
[Right-eye modulation formula — rendered as an image in the original publication.]
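A direct (if slow) sketch of smoothing a single pixel with a Gaussian whose width σ depends on that pixel, as used in the two modulation formulas above (the formulas themselves are images in the original); the window radius of 3σ is an assumption.

    import numpy as np

    def smooth_at(I, y, x, sigma):
        # Value of image I at (y, x) after Gaussian smoothing of width sigma,
        # computed locally over a window of radius 3*sigma.
        r = max(1, int(np.ceil(3 * sigma)))
        y0, y1 = max(0, y - r), min(I.shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(I.shape[1], x + r + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
        return float((w * I[y0:y1, x0:x1].astype(float)).sum() / w.sum())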
The above embodiments are provided only to assist those skilled in the art with a further understanding of the present invention, and are not to be construed as limiting the present invention in any way, the scope of the present invention being defined by the appended claims, and various equivalent substitutions and modifications made without departing from the spirit and principles of the present invention are intended to be covered by the scope of the present invention.

Claims (8)

1. A3D-2D synchronous display method is characterized by comprising the following steps:
receiving a left eye view, a right eye view, a middle view, a left disparity map and a right disparity map, and performing pixel-by-pixel modulation to obtain a left eye modulation view and a right eye modulation view; according to the left eye view and the right eye view, first calculating double-visible edge disparity, then presuming single-visible edge disparity, then presuming more double-visible edge disparities, and finally expanding the double-visible disparity to generate the left disparity map, the right disparity map and visible information; the method for calculating the double-visible edge disparity is: applying an edge detection algorithm to the left eye view and the right eye view to obtain sets of edge pixel points; using a cost function to measure the pixel difference between edge pixel points on the left eye view and edge pixel points on the right eye view, and searching for candidate pixel matching pairs of the left and right eye views on the same horizontal line; subjecting the candidate pixel matching pairs to a left-right consistency check, a sub-optimal check and a histogram check to determine the final optimal pixel matching pairs; the method for presuming the single-visible edge disparity is: for a certain edge on the left eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its left whose left-right order of corresponding points on the right eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the left eye view; for a certain edge on the right eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its right whose left-right order of corresponding points on the left eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the right eye view; the method for presuming more double-visible edge disparities is: for pixels with unknown disparity on some vertical edges and on all horizontal edges of the left eye view and the right eye view, if double-visible pixel points with the same disparity are found at both ends of the edge containing the pixel with unknown disparity, determining the disparity of that edge; the method for expanding the double-visible disparity is: taking the edges with known disparity and the pixels around them as the pixels to be modulated, and performing breadth expansion from the known disparities so that the disparity of the pixels around an edge is consistent with that edge;
playing the left eye modulation view and the right eye modulation view on a display screen, so that a 3D image is synchronously presented to a viewer wearing 3D glasses and a 2D image is synchronously presented to a viewer watching with the naked eye;
the pixel-by-pixel modulation is performed according to the following formula:
[Pixel-by-pixel modulation formula — rendered as an image in the original publication.]
wherein P_l and P_r represent the left eye modulation view and the right eye modulation view, C represents the middle view, D_l and D_r represent the left disparity map and the right disparity map, x represents a pixel, L_σ(·) and R_σ(·) represent the smooth-filtered left and right eye views, and σ(D_l(x)) and σ(D_r(x)) represent the degree of smoothing.
2. The method of claim 1, wherein the cost function is formulated as follows:
[Cost function formula — rendered as an image in the original publication.]
wherein C(x_l, x_r) represents the cost function, x_l and x_r represent edge pixel points on the left and right eye views, L and R represent the left and right eye views, ∂_x and ∂_y represent the horizontal and vertical gradients, ‖·‖₁ represents the ℓ1 norm, and λ is a coefficient.
3. The method of claim 1 or 2, wherein the left-right consistency check means that the two pixels of a candidate pixel matching pair are each the best matching pixel of the other view; the sub-optimal check means that the cost function satisfies C(x_l, x_r) ≤ t_s · C(x_l, x′_r), where x′_r is the sub-optimal matching point of x_l in the right eye view and t_s is a threshold with 0 < t_s < 1; the histogram check analyzes, edge by edge, the disparities of the pixel points on an edge and eliminates the pixel points corresponding to abnormal disparities.
4. The method of claim 1, wherein, in the method for presuming single-visible edge disparity, the criterion for judging that a pixel is visible only in the left eye view is: if there is a double-visible pixel x′ to the right of the remaining pixel x such that x + d > x′ + D_l(x′), where D_l represents the left disparity map, the pixel x is considered to be blocked by the pixel x′; if the number of pixels blocked on a certain continuous segment of the edge exceeds 3, that continuous segment is considered visible only in the left eye view; the criterion for judging that a pixel is visible only in the right eye view is: if there is a double-visible pixel x′ to the left of the remaining pixel x such that x + d > x′ + D_r(x′), where D_r represents the right disparity map, the pixel x is considered to be blocked by the pixel x′; if more than 3 pixels are blocked on a continuous segment of the edge, that continuous segment is considered visible only in the right eye view.
5. The method of claim 1, wherein an intermediate view is generated by rendering the skeleton regions and rendering the remaining regions based on the left and right eye views, the left and right disparity maps, and visual information obtained when the left and right disparity maps are computationally generated; wherein:
the method for rendering the skeleton region is: each element in the edge pixel point set is directly mapped from its corresponding pixel in the left eye view or the right eye view onto the intermediate view, forming the skeleton part of the intermediate view;
the method for rendering the remaining region is: the skeleton region is breadth-expanded using the disparity information and visible information of the middle view skeleton region until it is completely rendered.
6. The method of claim 1, wherein the left-eye modulation view and the right-eye modulation view are alternately played on the display screen at a frequency of at least 60Hz, and the screen refresh rate is at least 120 Hz.
7. A3D-2D synchronous display system, comprising:
the storage module is used for storing original 3D video content or the processed left eye modulation view and the right eye modulation view, and the original 3D video content at least comprises a left eye view and a right eye view;
the processing module comprises a disparity map module and a core module, wherein the disparity map module is used for first calculating double-visible edge disparity according to the left eye view and the right eye view, then presuming single-visible edge disparity, then presuming more double-visible edge disparities, and finally expanding the double-visible disparity, correspondingly generating a left disparity map and a right disparity map and selectively generating visible information; the method for calculating the double-visible edge disparity is: applying an edge detection algorithm to the left eye view and the right eye view to obtain sets of edge pixel points; using a cost function to measure the pixel difference between edge pixel points on the left eye view and edge pixel points on the right eye view, and searching for candidate pixel matching pairs of the left and right eye views on the same horizontal line; subjecting the candidate pixel matching pairs to a left-right consistency check, a sub-optimal check and a histogram check to determine the final optimal pixel matching pairs; the method for presuming the single-visible edge disparity is: for a certain edge on the left eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its left whose left-right order of corresponding points on the right eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the left eye view; for a certain edge on the right eye view, the disparity d of some pixels on the edge has been obtained by the pixel-difference measurement of the double-visible edge disparity method; for a remaining pixel on the edge, if there are double-visible pixels to its right whose left-right order of corresponding points on the left eye view is inconsistent, the disparity of the remaining pixel is judged to be d and the pixel is visible only in the right eye view; the method for presuming more double-visible edge disparities is: for pixels with unknown disparity on some vertical edges and on all horizontal edges of the left eye view and the right eye view, if double-visible pixel points with the same disparity are found at both ends of the edge containing the pixel with unknown disparity, determining the disparity of that edge; the method for expanding the double-visible disparity is: taking the edges with known disparity and the pixels around them as the pixels to be modulated, and performing breadth expansion from the known disparities so that the disparity of the pixels around an edge is consistent with that edge; the core module is used for receiving the left eye view, the right eye view, the middle view, the left disparity map and the right disparity map, and performing pixel-by-pixel modulation according to the following formula to obtain a left eye modulation view and a right eye modulation view;
[Pixel-by-pixel modulation formula — rendered as an image in the original publication.]
wherein P_l and P_r represent the left eye modulation view and the right eye modulation view, C represents the middle view, D_l and D_r represent the left disparity map and the right disparity map, x represents a pixel, L_σ(·) and R_σ(·) represent the smooth-filtered left and right eye views, and σ(D_l(x)) and σ(D_r(x)) represent the degree of smoothing, which is positively correlated with the absolute disparity at pixel x;
and the display module comprises a display screen and is used for playing the left eye modulation view and the right eye modulation view, synchronously presenting a 3D image to a viewer wearing 3D glasses and a 2D image to a viewer watching with the naked eye.
8. The system of claim 7, further comprising an intermediate view module for generating an intermediate view by rendering the skeleton region and the remaining region according to the left eye view and the right eye view, the left and right disparity maps and the visible information received from the disparity map module, and transmitting the generated intermediate view to the core module.
CN202110172879.8A 2021-02-08 2021-02-08 3D-2D synchronous display method and system Active CN113014902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110172879.8A CN113014902B (en) 2021-02-08 2021-02-08 3D-2D synchronous display method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110172879.8A CN113014902B (en) 2021-02-08 2021-02-08 3D-2D synchronous display method and system

Publications (2)

Publication Number Publication Date
CN113014902A CN113014902A (en) 2021-06-22
CN113014902B true CN113014902B (en) 2022-04-01

Family

ID=76384135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110172879.8A Active CN113014902B (en) 2021-02-08 2021-02-08 3D-2D synchronous display method and system

Country Status (1)

Country Link
CN (1) CN113014902B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117596381A (en) * 2024-01-18 2024-02-23 江西科骏实业有限公司 Display content control method and device, terminal equipment and computer storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101496405A (en) * 2006-08-22 2009-07-29 孙犁 2-d and 3-d display
CN102075773A (en) * 2010-11-25 2011-05-25 深圳市创凯电子有限公司 Synchronized method for imaging stereo and planar image mixed signal on super large screen
CN102164291A (en) * 2010-02-22 2011-08-24 深圳华映显示科技有限公司 Method and display system for simultaneously displaying two-dimensional (2D) image and three-dimensional (3D) image
WO2011125726A1 (en) * 2010-03-31 2011-10-13 シャープ株式会社 Stereoscopic image display device and stereoscopic image display system
JP2012003123A (en) * 2010-06-18 2012-01-05 Funai Electric Co Ltd Stereoscopic video display device
CN103563363A (en) * 2011-05-19 2014-02-05 汤姆逊许可公司 Automatic conversion of a stereoscopic image in order to allow a simultaneous stereoscopic and monoscopic display of said image
CN103718550A (en) * 2011-08-12 2014-04-09 阿尔卡特朗讯 3d display apparatus, method and structures
CN104023223A (en) * 2014-05-29 2014-09-03 京东方科技集团股份有限公司 Display control method, apparatus and system
CN105513064A (en) * 2015-12-03 2016-04-20 浙江万里学院 Image segmentation and adaptive weighting-based stereo matching method
CN110460836A (en) * 2019-09-17 2019-11-15 华拓域科技有限公司 A kind of 3D and 2D image display system using single synchronizing information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150062315A1 (en) * 2012-04-18 2015-03-05 The Regents Of The University Of California Simultaneous 2d and 3d images on a display

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101496405A (en) * 2006-08-22 2009-07-29 孙犁 2-d and 3-d display
CN102164291A (en) * 2010-02-22 2011-08-24 深圳华映显示科技有限公司 Method and display system for simultaneously displaying two-dimensional (2D) image and three-dimensional (3D) image
WO2011125726A1 (en) * 2010-03-31 2011-10-13 シャープ株式会社 Stereoscopic image display device and stereoscopic image display system
JP2012003123A (en) * 2010-06-18 2012-01-05 Funai Electric Co Ltd Stereoscopic video display device
CN102075773A (en) * 2010-11-25 2011-05-25 深圳市创凯电子有限公司 Synchronized method for imaging stereo and planar image mixed signal on super large screen
CN103563363A (en) * 2011-05-19 2014-02-05 汤姆逊许可公司 Automatic conversion of a stereoscopic image in order to allow a simultaneous stereoscopic and monoscopic display of said image
CN103718550A (en) * 2011-08-12 2014-04-09 阿尔卡特朗讯 3d display apparatus, method and structures
CN104023223A (en) * 2014-05-29 2014-09-03 京东方科技集团股份有限公司 Display control method, apparatus and system
CN105513064A (en) * 2015-12-03 2016-04-20 浙江万里学院 Image segmentation and adaptive weighting-based stereo matching method
CN110460836A (en) * 2019-09-17 2019-11-15 华拓域科技有限公司 A kind of 3D and 2D image display system using single synchronizing information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D+2DTV: 3D Displays with No Ghosting for Viewers Without Glasses; Steven Scher, et al.; ACM Transactions on Graphics; 2013-07-04; Vol. 32, No. 3; full text *

Also Published As

Publication number Publication date
CN113014902A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
US8488869B2 (en) Image processing method and apparatus
DE102011057187B4 (en) Stereoscopic image display device and method for setting a stereoscopic image of the same
EP2332340B1 (en) A method of processing parallax information comprised in a signal
JP4098235B2 (en) Stereoscopic image processing apparatus and method
US7440004B2 (en) 3-D imaging arrangements
CN100565589C (en) The apparatus and method that are used for depth perception
Vázquez et al. Stereoscopic imaging: filling disoccluded areas in depth image-based rendering
EP3350989B1 (en) 3d display apparatus and control method thereof
EP2544459A1 (en) Stereoscopic video display device and operation method of stereoscopic video display device
CN102932662B (en) Single-view-to-multi-view stereoscopic video generation method and method for solving depth information graph and generating disparity map
Winkler et al. Stereo/multiview picture quality: Overview and recent advances
JP2014515569A (en) Automatic conversion of binocular images to enable simultaneous display of binocular and monocular images
US9251564B2 (en) Method for processing a stereoscopic image comprising a black band and corresponding device
CN113014902B (en) 3D-2D synchronous display method and system
Tam et al. Depth image based rendering for multiview stereoscopic displays: Role of information at object boundaries
Fezza et al. Perceptually driven nonuniform asymmetric coding of stereoscopic 3d video
CN102196276A (en) Complete three-dimensional television image display scheme
EP2752815A1 (en) Display method and display apparatus
JP2015149547A (en) Image processing method, image processing apparatus, and electronic apparatus
KR102142480B1 (en) Three Dimensional Image Display Device
KR20130026370A (en) Depth estimation data generating device, computer readable recording medium having depth estimation data generating program recorded thereon, and pseudo-stereo image display device
Li et al. Visual perception of computer-generated stereoscopic pictures: Toward the impact of image resolution
Winkler et al. Stereoscopic image quality compendium
Li et al. On adjustment of stereo parameters in multiview synthesis for planar 3D displays
Smit et al. Three Extensions to Subtractive Crosstalk Reduction.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant