US20150245012A1 - Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery - Google Patents
- Publication number
- US20150245012A1 (U.S. application Ser. No. 14/508,878)
- Authority
- US
- United States
- Prior art keywords
- alarm
- image
- stereoscopic
- rig
- disparity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- H04N13/128—Adjusting depth or disparity
- H04N13/144—Processing image signals for flicker reduction
- H04N13/156—Mixing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/167—Synchronising or controlling image signals
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/178—Metadata, e.g. disparity information
- H04N13/194—Transmission of image signals
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/246—Calibration of cameras
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
- H04N13/30—Image reproducers
- H04N13/363—Image reproducers using image projection screens
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
- H04N2013/0096—Synchronisation or controlling aspects
- H04N2213/00—Details of stereoscopic systems
- H04N2213/002—Eyestrain reduction by processing stereoscopic signals or controlling stereoscopic devices
- Legacy codes: H04N13/0239; H04N13/0018; H04N13/0022; H04N13/004; H04N13/0048; H04N13/0066; H04N13/0246; H04N13/0271; H04N13/0459
Abstract
This process and invention may be used for “live” (on-line, recorded, real-time, or transmitted) stereoscopic image generation, post-production, and viewing to ensure the imagery, from the cameras through the systems to the final screens, maintains an acceptable level of viewer comfort and actively causes an action if an image becomes “uncomfortable” to view. The invention will use image-processing hardware and software to monitor stereoscopic 3D image data, and will generate an output response based on qualifying the imagery. The output responses include generating an alarm to alert technical crew members to a “discomfort” condition so they may take corrective action. The alarm may be an audio or visual alert, and warn of the type of discomfort detected. The alarm condition may trigger the automatic control of a video switching device, which would immediately route an appropriate “comfortable” input source to the output. The alarm condition may duplex a single camera or playback signal into an identical stereo pair, which will be 2D, but will be comfortable to view.
Description
- Any and all priority claims identified in the Application Data Sheet, or any correction thereto, are hereby incorporated by reference under 37 CFR 1.57.
- The present invention relates generally to stereoscopic 3D camera and viewing systems.
- For many years stereoscopic 3D movies have been made. Some have been good, but many others have been bad. The badly made 3D movies have mainly been a symptom of inadequate equipment or a lack of knowledge of good stereography. This effectively creates discomfort for the viewer, as well as having a negative impact on the 3D industry.
- In this digital age, it is now feasible to create technological solutions to this dilemma.
- This invention describes a process to ensure a comfortable Stereoscopic 3D viewing experience, whereby the viewer will observe natural 3D imagery without the psychovisual effects of eye strain, fatigue, or discomfort.
- The process of this invention is performed by extracting useful information from visual content, by means of image-processing the 3D imagery from a stereoscopic 3D camera system, and provides an output response based on logic decisions and pre-defined thresholds.
- The output response includes generating alarms to alert technical crew members of a “discomfort” condition so they may take corrective action. These alarms may be an audio or visual alert, and warn of the type of discomfort detected. The alarm condition may also trigger the automatic control of a video switching device, which would immediately route an appropriate “comfortable” input source to the output. The matrix switcher requires a common banking capability, because stereo pairs are routed simultaneously to the output.
- The alarm condition may duplex a single camera or playback signal into an identical stereo pair, which will be 2D, but comfortable to view.
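The duplexing behavior described above can be sketched as a simple routing function. This is a hypothetical illustration; `route_output` and its parameters are our names, not from the patent:

```python
# Illustrative sketch: on an alarm, duplex one known-good signal into an
# identical left/right pair, yielding 2D but comfortable output.
def route_output(left_frame, right_frame, alarm, fallback_frame):
    """Return the (left, right) pair to send to the display."""
    if alarm:
        # Duplex a single camera or playback signal into both eyes (2D).
        return fallback_frame, fallback_frame
    return left_frame, right_frame
```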
- (The “comfortable” input source may be a pre-defined or known “comfortable” 3D camera imagery, or other video content from a playback device.)
- FIG. 1 shows a typical signal path of a “live” stereoscopic 3D infrastructure, including the processing units involved.
- FIG. 2 shows a typical capture-side detail, in the form of a block diagram.
- One embodiment of this invention (FIG. 1) includes a complete “live” infrastructure, consisting of the following signal path: a stereoscopic 3D camera rig/s, an image processing unit to ensure comfortable 3D capture, encoding equipment, transmission equipment at the capture side, reception equipment at the viewing side/s, decoding equipment, an image-processing unit to ensure the reception conforms to comfortable 3D viewing, and a 3D display apparatus such as a 3D projector/s.
- During capture, the imagery may be uncomfortable to view due to many variables and reasons, and these are quantified by image processing electronics, which generates alarms based on pre-defined criteria and alarm thresholds.
- When using a remote viewing location, where the video content is transmitted “live” from the 3D camera location, additional image processing electronics is required at the viewing location to ensure the projectors are fed “comfortable” 3D imagery, in case there is failure of the captured quality, during the transmission. The imagery can be corrupted, missing, noisy, or out of sync. This electronics system on the receiving side would use a subset of the image-processing described above, and use a matrix switched output, to guarantee the projectors are fed “comfortable” 3D content.
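A minimal sketch of such a receiving-side subset check, assuming each received frame carries a per-frame counter for sync comparison. All names here are illustrative, not from the patent:

```python
# Hypothetical receiving-side qualification: reject missing or
# out-of-sync pairs, then drive the matrix switch accordingly.
def qualify_received_pair(left, right, max_counter_skew=0):
    """left/right: dicts with 'frame' (data or None) and 'counter' fields."""
    if left["frame"] is None or right["frame"] is None:
        return "missing"
    if abs(left["counter"] - right["counter"]) > max_counter_skew:
        return "out_of_sync"
    return "ok"

def select_route(status):
    # Any reception failure routes a known-good source to the projectors.
    return "program" if status == "ok" else "fallback"
```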
- This process may also be used in post-production (off-line), to ensure recorded media maintains comfortable 3D imagery, and generates similar output responses appropriate for an edit session, and is especially useful for finding transition points that are “comfortable” for close-matching 3D.
- This invention uses a mathematical model to quantify acceptable ranges of comfort.
- The capture-side (FIG. 2) inputs into the system may include:
- 1) Imagery from the left camera of the stereoscopic 3D camera rig, to be image processed.
- 2) Imagery from the right camera of the stereoscopic 3D camera rig, to be image processed.
- 3) Focus metadata from the 3D rig.
- 4) Iris metadata from the 3D rig.
- 5) Zoom metadata from the 3D rig.
- 6) Inter-Ocular metadata from the 3D rig.
- 7) Convergence metadata from the 3D rig.
- 8) Screen dimensions (for TV, theater, IMAX, etc.).
- 9) Distance range between this screen and the viewers.
- 10) Acceptable image horizontal disparity, which may be expressed as a percentage of total image size.
- 11) Fusional range.
- 12) Alarm thresholds:
- a) Gross difference (non-“fusable”)
- b) Focus disparity
- c) Luminance disparity
- d) Chrominance disparity
- e) Magnification disparity
- f) Telecentricity disparity
- g) “Broken frame” acceptance level
- h) Vertical content weighting factor
- i) Vertical disparity (expressed as number of lines, angle, or Percentage of screen height)
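As a worked example of the three equivalent forms listed for vertical disparity (number of lines, angle, or percentage of screen height), the conversion can be sketched as follows; the function name and parameters are ours:

```python
import math

# Illustrative conversion between the vertical-disparity forms above:
# scan lines -> percentage of screen height and visual angle at a
# given viewing distance.
def vertical_disparity_forms(lines, total_lines, screen_height_m, viewing_distance_m):
    fraction = lines / total_lines
    percent = 100.0 * fraction
    on_screen_m = fraction * screen_height_m          # physical offset on screen
    angle_deg = math.degrees(math.atan2(on_screen_m, viewing_distance_m))
    return percent, angle_deg
```

For example, 10.8 lines of a 1080-line image on a 1 m tall screen viewed from 2 m is 1% of screen height, or roughly a 0.29 degree angle.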
- To generate an alarm, the system's image-processing function will first look for obvious image errors such as missing video from a camera or cameras, or out-of-sync video, either sub-frame or multiple frame.
- Then obvious lens mismatch is processed. Focus disparity is calculated, where the image-processing algorithm includes edge detection and/or a high-pass filtering to narrow in on the highest frequency detail of the chart. Luminance disparity (created by iris, gamma, black-level, knee, etc.) is calculated, where the image-processing algorithm includes image subtraction and/or image correlation. Chrominance disparity (hue, saturation) is calculated, where the image-processing algorithm includes color matrix conversion, and image correlation. Alarms are generated if the mismatches exceed pre-defined thresholds.
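A minimal numpy sketch of two of the lens-mismatch measures above. The concrete metrics (mean-level difference for luminance, gradient-energy ratio for focus) are our own illustrative choices, not the patent's specified algorithms:

```python
import numpy as np

# Illustrative luminance disparity: difference of mean levels.
def luminance_disparity(left, right):
    return abs(float(np.mean(left)) - float(np.mean(right)))

# Illustrative focus disparity: compare high-frequency (gradient) energy;
# a softer (defocused) image has less of it.
def focus_disparity(left, right):
    e_l = float(np.mean(np.abs(np.diff(left, axis=1))))
    e_r = float(np.mean(np.abs(np.diff(right, axis=1))))
    return abs(e_l - e_r) / max(e_l, e_r, 1e-9)
```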
- Then by using disparity mapping, by block and pixel methods, a depth map is created. This is done using a standard neural-net process, where strong links are magnified (for adjacency) by parallel vector analysis to find stereoscopically “fusable” subject matter. Broken links in the neural-net will be determined to be caused by either “breaking frame” on the boundary of the images, or from stereoscopic occlusion within the images. The “breaking-frame” condition has an adjustable acceptance level, or alarm threshold.
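The block-based disparity mapping step can be illustrated with plain sum-of-absolute-differences block matching. The patent's neural-net link magnification is not reproduced here; this is a simplified sketch with our own parameter names:

```python
import numpy as np

# Illustrative block-matching disparity map: for each left-image block,
# find the horizontal shift into the right image with minimum SAD cost.
def block_disparity(left, right, block=8, max_d=16):
    h, w = left.shape
    dmap = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best, best_d = None, 0
            for d in range(0, max_d + 1):
                if x + d + block > w:
                    break
                cand = right[y:y + block, x + d:x + d + block]
                cost = float(np.sum(np.abs(ref - cand)))
                if best is None or cost < best:
                    best, best_d = cost, d
            dmap[by, bx] = best_d
    return dmap
```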
- “Blob analysis” algorithms are used to combine any linked “fusable” subject matter into bigger “blobs”.
- The amount of “fusable” subject matter, as an area ratio of the full screen size, is used to determine if there is a gross difference from both camera views, which may be caused by something obstructing the view of one camera. If this gross difference is sufficient to exceed the alarm threshold, an alarm condition will be generated.
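The gross-difference test above reduces to an area-ratio comparison; a sketch with an example threshold (the 0.5 value is illustrative, not from the patent):

```python
# Illustrative gross-difference check: if the "fusable" area ratio of the
# full frame drops below a threshold, assume one camera is obstructed.
def gross_difference_alarm(fusable_pixels, frame_pixels, min_ratio=0.5):
    return (fusable_pixels / frame_pixels) < min_ratio
```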
- The “blobs” are analyzed for magnification disparity (zoom mismatch) and telecentricity mismatch, upon which an alarm will be generated if these mismatches exceed the alarm thresholds.
- The range of all angles to the boundaries of “fusable” subject matter, or “fusional range”, is calculated, and if any angle exceeds the alarm threshold, an alarm will be generated. These angles to the “fusable” subject matter are measured in the horizontal plane, as this is the natural stereoscopic horizontal disparity. Excessive horizontal disparity, either towards viewer divergence or excessive convergence, will generate an alarm.
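The horizontal-disparity qualification can be sketched by converting an on-screen disparity percentage into a physical distance and a vergence angle. The 0.065 m interocular separation is a common textbook value, not a threshold quoted by the patent, and all names here are ours:

```python
import math

# Illustrative fusional-range check: a positive on-screen disparity wider
# than the interocular distance forces the eyes to diverge.
def horizontal_disparity_check(disparity_pct, screen_width_m,
                               viewing_distance_m, interocular_m=0.065):
    on_screen_m = disparity_pct / 100.0 * screen_width_m
    angle_deg = math.degrees(math.atan2(abs(on_screen_m), viewing_distance_m))
    diverging = on_screen_m > interocular_m
    return on_screen_m, angle_deg, diverging
```

Note how screen size matters: 1% disparity is harmless on a television but exceeds the interocular distance on a 10 m theater screen.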
- The search range of the neural net will include several lines above and below the present search range, to extract possible vertical or rotational disparity, upon which an alarm will be generated if the vertical disparity is found to exceed the alarm threshold, which takes the screen size into account.
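One way to implement the widened vertical search is an exhaustive scan over a few line offsets, taking the offset with minimum absolute difference as the measured vertical disparity. This is an illustrative implementation choice, not the patent's neural-net method:

```python
import numpy as np

# Illustrative vertical-disparity measurement: slide one image up/down a
# few lines and keep the offset with the lowest mean absolute difference.
def vertical_disparity(left, right, max_lines=4):
    h = left.shape[0]
    best, best_off = None, 0
    for off in range(-max_lines, max_lines + 1):
        a = left[max(0, off):h + min(0, off), :]
        b = right[max(0, -off):h - max(0, off), :]
        cost = float(np.mean(np.abs(a - b)))
        if best is None or cost < best:
            best, best_off = cost, off
    return best_off
```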
- The background will be searched for a concentration of vertical content (such as lamp posts, or a fence line). A Fourier transform is performed in the horizontal direction to extract this in the frequency domain. This area of the image will be considered less stereoscopically “fusable”, and weighted accordingly, taking into account other “fusable” subject matter. An alarm will be generated if it exceeds a pre-defined threshold.
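The horizontal Fourier-transform check can be sketched with a row-wise FFT: repetitive vertical content (a fence line, lamp posts) concentrates energy in non-DC horizontal-frequency bins. The cutoff bin and scoring are example values of ours:

```python
import numpy as np

# Illustrative vertical-content detector: fraction of spectral energy in
# the non-DC horizontal-frequency bins of a row-wise FFT.
def vertical_content_score(image, low_bin=2):
    spectrum = np.abs(np.fft.rfft(image, axis=1))
    ac = spectrum[:, low_bin:]             # ignore DC and near-DC bins
    total = float(np.sum(spectrum)) + 1e-9
    return float(np.sum(ac)) / total
```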
- Finally, the remaining uncategorized areas will be deemed occlusion areas, and will be ignored, because they are naturally stereoscopic.
- The alarm condition may also trigger the automatic control of a video switching device, which would immediately route an appropriate “comfortable” input source to the output.
Claims (11)
1. (canceled)
2. A system, comprising:
a stereoscopic camera rig comprising a stereoscopic camera;
an image capture processor operably connected to the stereoscopic camera, wherein the image capture processor executes instructions to implement real time image processing of stereoscopic image inputs and further executes decision logic instructions to determine an alarm condition based on the real time image processing;
one or more than one alarm operably connected to the image capture processor and a display image processor;
one or more than one video switch operably connected to the display image processor; and
a display connected to the one or more than one video switch, wherein the stereoscopic image inputs comprise one or more of:
one or more images from a left camera of the stereoscopic camera;
one or more images from a right camera of the stereoscopic camera;
focus metadata associated with the rig;
iris metadata associated with the rig;
zoom metadata associated with the rig;
inter-ocular metadata associated with the rig;
convergence metadata associated with the rig;
one or more screen dimensions associated with the display;
one or both of a distance range between a screen of the display and one or more viewers and a distance between the screen of the display and one or more viewers;
image horizontal disparity, that can be expressed as a percentage of total image size;
fusional range; and
one or more alarm thresholds.
3. The system of claim 2 , wherein the system further comprises one or more of:
an encoder operably connected to the image capture processor;
a transmitter operably connected to the encoder;
a receiver operably connected to the transmitter; and
a decoder operably connected to the receiver;
wherein the display image processor is operably connected to the decoder.
4. The system of claim 2, wherein the stereoscopic camera comprises a lens motor control connected to the rig and a user interface connected to the rig.
5. The system of claim 2 , wherein the screen dimensions comprise an average of an expected screen size.
6. The system of claim 2 , wherein the screen dimensions comprise an expected largest screen size.
7. The system of claim 2 , wherein the alarm thresholds are selected from the group consisting of gross (non-“fusable”) difference, focus disparity, luminance disparity, chrominance disparity, magnification disparity, telecentricity disparity, “broken frame” acceptance levels, vertical content weighting factors and vertical disparity values expressed as number of lines, angle, or percentage of screen height.
8. The system of claim 2 , wherein the one or more than one alarm outputs a visual alarm.
9. The system of claim 2 , wherein the one or more than one alarm outputs an audible alarm.
10. The system of claim 2 , wherein the one or more than one video switch causes a switch between displaying normal or alternative video.
11. The system of claim 2 , wherein the one or more than one video switch comprises a matrix switcher having common banking capability so that stereo pairs are routed simultaneously to the output.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/508,878 US20150245012A1 (en) | 2005-07-14 | 2014-10-07 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
US14/975,104 US20160344994A1 (en) | 2005-07-14 | 2015-12-18 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US69896305P | 2005-07-14 | 2005-07-14 | |
US11/486,368 US8885017B2 (en) | 2005-07-14 | 2006-07-14 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3D imagery |
US14/508,878 US20150245012A1 (en) | 2005-07-14 | 2014-10-07 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/486,368 Continuation US8885017B2 (en) | 2005-07-14 | 2006-07-14 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3D imagery |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/975,104 Continuation US20160344994A1 (en) | 2005-07-14 | 2015-12-18 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150245012A1 true US20150245012A1 (en) | 2015-08-27 |
Family
ID=38173015
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/486,368 Active 2032-07-26 US8885017B2 (en) | 2005-07-14 | 2006-07-14 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3D imagery |
US14/508,878 Abandoned US20150245012A1 (en) | 2005-07-14 | 2014-10-07 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
US14/975,104 Abandoned US20160344994A1 (en) | 2005-07-14 | 2015-12-18 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/486,368 Active 2032-07-26 US8885017B2 (en) | 2005-07-14 | 2006-07-14 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3D imagery |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/975,104 Abandoned US20160344994A1 (en) | 2005-07-14 | 2015-12-18 | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3d imagery |
Country Status (1)
Country | Link |
---|---|
US (3) | US8885017B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107613385A (en) * | 2017-09-21 | 2018-01-19 | 程盼盼 | A kind of method for preventing child myopia |
CN107896344A (en) * | 2017-09-21 | 2018-04-10 | 程盼盼 | Prevent the device of child myopia |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8610757B2 (en) * | 2009-08-24 | 2013-12-17 | Next3D, Inc. | Stereoscopic video encoding and decoding methods and apparatus |
US8614737B2 (en) * | 2009-09-11 | 2013-12-24 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US8284235B2 (en) * | 2009-09-28 | 2012-10-09 | Sharp Laboratories Of America, Inc. | Reduction of viewer discomfort for stereoscopic images |
US10531062B2 (en) * | 2009-10-13 | 2020-01-07 | Vincent Pace | Stereographic cinematography metadata recording |
KR101377325B1 (en) * | 2009-12-21 | 2014-03-25 | 한국전자통신연구원 | Stereoscopic image, multi-view image and depth image acquisition appratus and its control method |
KR101674956B1 (en) * | 2010-07-12 | 2016-11-10 | 엘지전자 주식회사 | MOBILE TERMINAL AND METHOD FOR CONTROLLING A THREE DIMENSION IMAGE in thereof |
US20120062548A1 (en) * | 2010-09-14 | 2012-03-15 | Sharp Laboratories Of America, Inc. | Reducing viewing discomfort |
US8654181B2 (en) | 2011-03-28 | 2014-02-18 | Avid Technology, Inc. | Methods for detecting, visualizing, and correcting the perceived depth of a multicamera image sequence |
US8643699B2 (en) * | 2011-04-26 | 2014-02-04 | Mediatek Inc. | Method for processing video input by detecting if picture of one view is correctly paired with another picture of another view for specific presentation time and related processing apparatus thereof |
GB2490872B (en) * | 2011-05-09 | 2015-07-29 | Toshiba Res Europ Ltd | Methods and systems for capturing 3d surface geometry |
GB2490886B (en) * | 2011-05-13 | 2017-07-05 | Snell Advanced Media Ltd | Video processing method and apparatus for use with a sequence of stereoscopic images |
US9602801B2 (en) | 2011-07-18 | 2017-03-21 | Truality, Llc | Method for smoothing transitions between scenes of a stereo film and controlling or regulating a plurality of 3D cameras |
DE102011109301B4 (en) * | 2011-08-03 | 2013-05-08 | 3Ality Digital Systems, Llc | Method for correcting the zoom setting and / or the vertical offset of partial images of a stereo film as well as controlling or controlling a camera digit with two cameras |
US9402065B2 (en) | 2011-09-29 | 2016-07-26 | Qualcomm Incorporated | Methods and apparatus for conditional display of a stereoscopic image pair |
WO2013050184A1 (en) * | 2011-10-04 | 2013-04-11 | Telefonaktiebolaget L M Ericsson (Publ) | Objective 3d video quality assessment model |
KR101929814B1 (en) * | 2012-02-27 | 2019-03-14 | 엘지전자 주식회사 | Image displaying device and control method for the same |
TWI558165B (en) * | 2012-09-25 | 2016-11-11 | 宏碁股份有限公司 | Computer system and stereoscopic image generating method |
CN111526354B (en) * | 2019-11-25 | 2021-08-20 | 杭州电子科技大学 | Stereo video comfort prediction method based on multi-scale spatial parallax information |
CN111145495A (en) * | 2019-12-31 | 2020-05-12 | 钧捷智能(深圳)有限公司 | Fatigue driving early warning camera test system |
CN112738501B (en) * | 2020-12-29 | 2022-05-17 | 杭州电子科技大学 | Three-dimensional image comfort level testing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5416510A (en) * | 1991-08-28 | 1995-05-16 | Stereographics Corporation | Camera controller for stereoscopic video system |
US5699108A (en) * | 1993-09-01 | 1997-12-16 | Canon Kabushiki Kaisha | Multi-eye image pickup apparatus with multi-function finder screen and display |
US20020009137A1 (en) * | 2000-02-01 | 2002-01-24 | Nelson John E. | Three-dimensional video broadcasting system |
US20040057612A1 (en) * | 1998-06-04 | 2004-03-25 | Olympus Optical Co., Ltd. | Visual image system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6384872B1 (en) * | 1999-09-13 | 2002-05-07 | Intel Corporation | Method and apparatus for interlaced image enhancement |
US6765568B2 (en) * | 2000-06-12 | 2004-07-20 | Vrex, Inc. | Electronic stereoscopic media delivery system |
US7190825B2 (en) * | 2001-08-17 | 2007-03-13 | Geo-Rae Co., Ltd. | Portable communication device for stereoscopic image display and transmission |
- 2006-07-14 US US11/486,368 patent/US8885017B2/en active Active
- 2014-10-07 US US14/508,878 patent/US20150245012A1/en not_active Abandoned
- 2015-12-18 US US14/975,104 patent/US20160344994A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
Inoue, Tetsuri, et al., "Accommodative Responses to Stereoscopic Three-Dimensional Display," Optical Society of America, 1 July 1997, vol. 36, no. 19, pp. 4509-4515. *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107613385A (en) * | 2017-09-21 | 2018-01-19 | 程盼盼 | Method for preventing myopia in children |
CN107896344A (en) * | 2017-09-21 | 2018-04-10 | 程盼盼 | Device for preventing myopia in children |
Also Published As
Publication number | Publication date |
---|---|
US20160344994A1 (en) | 2016-11-24 |
US20070139612A1 (en) | 2007-06-21 |
US8885017B2 (en) | 2014-11-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8885017B2 (en) | Real-time process and technology using image processing to maintain and ensure viewer comfort during capture, live transmission, and post-production of stereoscopic 3D imagery | |
US8736667B2 (en) | Method and apparatus for processing video images | |
US8766973B2 (en) | Method and system for processing video images | |
JP6027034B2 (en) | 3D image error improving method and apparatus | |
US20110228051A1 (en) | Stereoscopic Viewing Comfort Through Gaze Estimation | |
US20100007773A1 (en) | Video Processing and Telepresence System and Method | |
US9305331B2 (en) | Image processor and image combination method thereof | |
US8558875B2 (en) | Video signal processing device | |
US20130141550A1 (en) | Method, apparatus and computer program for selecting a stereoscopic imaging viewpoint pair | |
JP2013527646A5 (en) | ||
US20110157312A1 (en) | Image processing apparatus and method | |
JP2011176541A (en) | Three-dimensional video processing apparatus and method, and program thereof | |
TW201143355A (en) | Apparatus and method for video processing | |
US20120087571A1 (en) | Method and apparatus for synchronizing 3-dimensional image | |
US20150195443A1 (en) | Systems and methods for real-time view-synthesis in a multi-camera setup | |
CN102300104A (en) | 3D television and menu processing method thereof | |
Kim et al. | Depth perception and motion cue based 3D video quality assessment | |
US20170142392A1 (en) | 3d system including additional 2d to 3d conversion | |
US20200311978A1 (en) | Image encoding method and technical equipment for the same | |
Wang et al. | Very low frame-rate video streaming for face-to-face teleconference | |
Balcerek et al. | Brightness correction and stereovision impression based methods of perceived quality improvement of cCTV video sequences | |
US10242448B2 (en) | 3D system including queue management | |
WO2017220851A1 (en) | Image compression method and technical equipment for the same | |
CN114760457B (en) | Automatic 2D/3D image switching method and system of 3D display system | |
CN116546238A (en) | Video data transmission method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |