US20170310942A1 - Method and apparatus for processing three-dimensional (3d) pseudoscopic images - Google Patents
Method and apparatus for processing three-dimensional (3d) pseudoscopic images Download PDFInfo
- Publication number
- US20170310942A1 (U.S. application Ser. No. 15/648,598)
- Authority
- US
- United States
- Prior art keywords
- view
- pseudoscopic
- feature points
- image
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H04N13/0018—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
Definitions
- the present disclosure generally relates to the field of three-dimensional (3D) display technologies and, more particularly, relates to a method for detecting 3D pseudoscopic images and a display device thereof.
- a smartphone or a digital camera capable of capturing three-dimensional (3D) images usually takes two images of the same scene, in which the two images have a correct relative order and a parallax between them, and then arranges the two images into a 3D image.
- a user may upload two-dimensional (2D) images and utilize a 3D image creating software to generate corresponding 3D images of the uploaded 2D images.
- the user may further generate a 3D video based on multiple 3D video frames (i.e. 3D images), which can be played back on a 3D video player.
- the user may not notice the relative order between the two images forming the 3D image or 3D video frame, for example, a left eye view and a right eye view, and thus may arrange the two images in an incorrect order, resulting in an incorrect 3D image.
- the user's left eye may see the right eye view and the user's right eye may see the left eye view, causing 3D image depth to be reversed to the user. That is, a pseudoscopic image or a pseudoscopic view may be generated and the viewing experience may be affected.
- detection and correction of the pseudoscopic image is of significant importance. If the pseudoscopic image cannot be detected and corrected, the user experience will be significantly degraded. Detecting whether there is pseudoscopy between a left view and a right view often requires corners respectively detected in the left view and the right view.
- the disclosed methods and systems are directed to solve one or more problems set forth above and other problems.
- One aspect of the present disclosure includes a method for detecting three-dimensional (3D) pseudoscopic images.
- the method includes extracting corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image; calculating an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view; based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, determining whether the current 3D image is pseudoscopic or not; and processing the current 3D image when it is determined that the current 3D image is pseudoscopic.
- the display device includes a feature extraction module configured to extract corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image, an average value calculation module configured to calculate an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view, and a decision module configured to determine whether the current 3D image is pseudoscopic or not based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view.
- FIG. 1 illustrates a flow chart of an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments
- FIG. 2 a and FIG. 2 b illustrate exemplary feature points in an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments
- FIG. 3 illustrates a flow chart of another exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments
- FIG. 4 illustrates a structural schematic diagram of an exemplary display device consistent with disclosed embodiments
- FIG. 5 illustrates a block diagram of an exemplary display device consistent with disclosed embodiments
- FIG. 6 illustrates a structural schematic diagram of another exemplary display device consistent with disclosed embodiments.
- FIG. 7 illustrates an exemplary display device consistent with disclosed embodiments.
- a 3D display device is usually based on the parallax principle, in which a left view for a left eye and a right view for a right eye are separated by a lens or a grating and then received by the user's left eye and right eye, respectively.
- the user's brain fuses the left view and the right view to generate a correct visual perception of 3D display, i.e., a 3D image with a correct depth perception.
- the left eye receives the right view while the right eye receives the left view, for example, two images with an incorrect relative order are arranged into a 3D image, or two images with an incorrect relative position are synthesized into a 3D image, etc.
- the user may experience a pseudoscopic (reversed stereo/3D) image showing an incorrect depth perception.
- a box on a floor may appear as a box-shaped hole in the floor.
- the viewing experience may be greatly degraded.
- FIG. 7 illustrates an exemplary display device consistent with disclosed embodiments.
- the display device 700 may be an electronic device which is capable of capturing 3D images, such as a smartphone, a tablet, and a digital camera, etc., or an electronic device which is capable of playing and/or generating 3D images such as a notebook, a TV, and a smartwatch, etc.
- the display device 700 is shown as a smartphone, any device with computing power may be used.
- FIG. 5 illustrates a block diagram of an exemplary display device consistent with disclosed embodiments.
- the display device 500 may include a processor 502 , a display 504 , a camera 506 , a system memory 508 , a system bus 510 , an input/output module 512 , and a mass storage device 514 .
- Other components may be added and certain devices may be removed without departing from the principles of the disclosed embodiments.
- the processor 502 may include any appropriate type of central processing unit (CPU), graphics processing unit (GPU), general purpose microprocessor, digital signal processor (DSP) or microcontroller, and application specific integrated circuit (ASIC).
- the display 504 may be any appropriate type of display, such as plasma display panel (PDP) display, field emission display (FED), cathode ray tube (CRT) display, liquid crystal display (LCD), organic light emitting diode (OLED) display, light emitting diode (LED) display, or other types of displays.
- the camera 506 may be an internal camera in the display device 500 or may be an external camera connected to the display device 500 over a network.
- the camera 506 may provide images and videos to the display device.
- the system memory 508 may include read-only memory (ROM), random access memory (RAM), etc.
- the ROM may store necessary software for a system, such as system software.
- the RAM may store real-time data, such as images for displaying.
- the system bus 510 may provide communication connections, such that the display device may be accessed remotely and/or communicate with other systems via various communication protocols, such as transmission control protocol/internet protocol (TCP/IP), hypertext transfer protocol (HTTP), etc.
- the input/output module 512 may be provided for users to input information into the display device or for the users to receive information from the display device.
- the input/output module 512 may include any appropriate input device, such as a remote control, a keyboard, a mouse, an electronic tablet, voice communication devices, or any other optical or wireless input devices.
- the mass storage device 514 may include any appropriate type of mass storage medium, such as a CD-ROM, a hard disk, an optical storage, a DVD drive, or other type of storage devices.
- the processor 502 may perform certain processes to display images or videos to one or more users.
- FIG. 1 illustrates a flow chart of an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments.
- corresponding feature points in a first view and corresponding feature points in a second view are extracted respectively, in which the first view and the second view may form a current 3D image (S 101 ).
- the feature points in the first view and the feature points in the second view may also be called interest points, significant points, key points, etc., which may exhibit a certain pattern in a local image, such as intersections of different objects, intersections of different regions with different colors, and points of discontinuity on a boundary of an object, etc.
- a display device may extract the feature points in the first view and the feature points in the second view based on the Harris corner detection algorithm or the SUSAN corner detection algorithm, etc.
- the display device may extract feature parameters of the feature points in the first view and feature parameters of the feature points in the second view; the feature parameters may include Speeded Up Robust Features (SURF), a Scale-Invariant Feature Transform (SIFT) feature, a Binary Robust Invariant Scalable Keypoints (BRISK) feature, and Binary Robust Independent Elementary Features (BRIEF), etc.
- the display device may match the feature points in the first view and the feature points in the second view, and then determine the corresponding feature points in the first view and feature points in the second view.
- the feature parameters of the first view and the feature parameters of the second view may be matched based on a K-nearest neighbors algorithm.
- the corresponding feature points in the first view and feature points in the second view may be defined.
- the feature points may be disposed in an X-Y coordinate system, and given an X-axis coordinate value and a Y-axis coordinate value.
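The matching step described above can be sketched as a nearest-neighbor search over feature descriptors. The following is a minimal illustration only; the function name is ours, and the toy integer descriptors stand in for real SURF/SIFT/BRISK/BRIEF vectors:

```python
def match_features(desc_first, desc_second):
    # Match each first-view descriptor to its nearest second-view
    # descriptor (a 1-nearest-neighbor search over feature vectors).
    # desc_*: dict mapping point (x, y) -> descriptor tuple.
    def distance(a, b):
        # Squared Euclidean distance between two descriptors.
        return sum((u - v) ** 2 for u, v in zip(a, b))

    pairs = []
    for p1, d1 in desc_first.items():
        # Second-view point whose descriptor is closest to d1.
        p2 = min(desc_second, key=lambda p: distance(d1, desc_second[p]))
        pairs.append((p1, p2))
    return pairs

# Toy descriptors standing in for real feature vectors.
first = {(1, 2): (0, 0, 1), (3, 4): (5, 5, 5)}
second = {(2, 3): (0, 1, 1), (4, 5): (5, 4, 5)}
print(match_features(first, second))
```

A production matcher would also reject ambiguous matches (e.g., with a ratio test), which this sketch omits.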
- an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view are calculated, respectively (S 102 ).
- the average coordinate value may be an average of X-axis coordinate values of all the feature points in each view (i.e. the first view or the second view), or an average of Y-axis coordinate values of all the feature points in each view (i.e. the first view or the second view).
- a left view and a right view may be adopted to form a current 3D image
- the left view may be a first view
- the right view may be a second view.
- An average coordinate value in the first view may be the sum of X-axis coordinate values of all feature points in the first view divided by the number of all the feature points in the first view.
- An average coordinate value in the second view may be the sum of X-axis coordinate values of all feature points in the second view divided by the number of all the feature points in the second view.
- an upper view and a lower view may be adopted to form a current 3D image
- the upper view may be a first view
- the lower view may be a second view.
- An average coordinate value in the first view may be the sum of Y-axis coordinate values of all feature points in the first view divided by the number of all the feature points in the first view.
- An average coordinate value in the second view may be the sum of Y-axis coordinate values of all feature points in the second view divided by the number of all the feature points in the second view.
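The averaging described above reduces each view to a single scalar: the X axis for a left/right pair, the Y axis for an upper/lower pair. A minimal sketch (function name and sample points are ours):

```python
def average_coordinate(points, axis=0):
    # Average of the X (axis=0) or Y (axis=1) coordinate values
    # over all feature points in one view.
    return sum(p[axis] for p in points) / len(points)

# Left/right pair: compare the average X values (axis 0).
left = [(1, 2), (1, 4), (2, 5)]
right = [(2, 3), (2, 5), (3, 6)]
print(average_coordinate(left), average_coordinate(right))

# Upper/lower pair: compare the average Y values (axis 1) instead.
print(average_coordinate(left, axis=1))
```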
- a left view and a right view may be used to form a current 3D image, and the left view may be a first view and the right view may be a second view. If an average coordinate value of feature points in the first view is larger than or equal to an average coordinate value of the feature points in the second view, the current 3D image may be determined to be pseudoscopic. On the contrary, if the average coordinate value of the feature points in the first view is smaller than the average coordinate value of the feature points in the second view, the current 3D image may not be pseudoscopic.
- a left view and a right view may be used to form a current 3D image, and the right view may be a first view and the left view may be a second view. If an average coordinate value of feature points in the second view is larger than or equal to an average coordinate value of feature points in the first view, the current 3D image may be determined to be pseudoscopic. On the contrary, if the average coordinate value of the feature points in the second view is smaller than the average coordinate value of the feature points in the first view, the current 3D image may not be pseudoscopic.
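The two decision rules above differ only in which view is expected to be the left one. A sketch, with names of our own choosing:

```python
def is_pseudoscopic(first_pts, second_pts, first_is_left=True):
    # With a correct left/right order the left view's average X
    # coordinate should be the smaller of the two averages.
    avg_first = sum(x for x, _ in first_pts) / len(first_pts)
    avg_second = sum(x for x, _ in second_pts) / len(second_pts)
    if first_is_left:
        # First view is the left view: pseudoscopic if its average X
        # is larger than or equal to the second view's.
        return avg_first >= avg_second
    # First view is the right view: the comparison reverses.
    return avg_second >= avg_first
```

For an upper/lower pair the same structure applies with Y coordinates in place of X.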
- the display device may be able to correct the pseudoscopic image through processing the first view and the second view.
- a left view and a right view may be used to form a current 3D image, and the left view may be a first view and the right view may be a second view.
- the display device may correct the pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the right view while the second view may become the left view.
- a left view and a right view may be used to form the current 3D image, and the right view may be a first view and the left view may be a second view.
- the display device may correct the pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the left view while the second view may become the right view.
- an upper view and a lower view may be adopted to form the current 3D image
- the upper view may be a first view
- the lower view may be a second view.
- the current 3D image is determined to be pseudoscopic.
- the display device may correct the current pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the lower view while the second view may become the upper view.
- an upper view and a lower view may be adopted to form the current 3D image
- the lower view may be a first view
- the upper view may be a second view.
- the current 3D image is determined to be pseudoscopic.
- the display device may correct the current pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the upper view while the second view may become the lower view.
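In all four cases above the correction is the same operation: exchange the relative order of the two views. A one-line sketch (the function name and the string placeholders are ours):

```python
def correct_pseudoscopic(first_view, second_view):
    # Exchanging the relative order of the two views restores the
    # correct depth perception, whether the pair is left/right
    # or upper/lower.
    return second_view, first_view

# Hypothetical view buffers; any image representation would do.
corrected = correct_pseudoscopic("view_a", "view_b")
print(corrected)
```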
- FIG. 2 a and FIG. 2 b illustrate exemplary feature points in an exemplary method for detecting 3D pseudoscopic views consistent with disclosed embodiments.
- in FIG. 2 a, corners are detected in a first view.
- Six corners may be detected and coordinate values of the six corners in an X-Y coordinate system are L1 (1,2), L2 (1,4), L3 (2,5), L4 (3,4), L5 (3,2), L6 (2,1), respectively.
- six corners may be detected in a second view and coordinate values of the six corners in the X-Y coordinate system are R1 (2,3), R2 (2,5), R3 (3,6), R4 (4,5), R5 (4,3), R6 (3,2), respectively.
- the six corners (i.e. feature points) in the first view may correspond to the six corners (i.e. feature points) in the second view.
- the first view and the second view may be adopted to form a current 3D image.
- since the average X-axis coordinate value of the feature points in the first view (2) is smaller than that of the second view (3), the current 3D image may not be pseudoscopic, and a correction may not be required.
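Plugging the FIG. 2 coordinates into the decision rule confirms the result:

```python
# Feature point coordinates taken directly from FIG. 2a / FIG. 2b.
L = [(1, 2), (1, 4), (2, 5), (3, 4), (3, 2), (2, 1)]  # first view
R = [(2, 3), (2, 5), (3, 6), (4, 5), (4, 3), (3, 2)]  # second view

avg_x_L = sum(x for x, _ in L) / len(L)  # 12 / 6 = 2.0
avg_x_R = sum(x for x, _ in R) / len(R)  # 18 / 6 = 3.0

# The first (left) view's average X is smaller, so the current
# 3D image is not pseudoscopic and needs no correction.
print(avg_x_L, avg_x_R, avg_x_L >= avg_x_R)
```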
- the pseudoscopic image may need to be corrected.
- the pseudoscopic image may be corrected through adjusting the relative order or relative position of the first view and the second view when generating the current 3D image or displaying the current 3D image. For example, if FIG. 2 b shows a first view and FIG. 2 a shows a second view, the relative order of the first view and the second view may be exchanged to correct the pseudoscopic image.
- a user may utilize a 3D video player implemented in a display device to play a 3D video.
- detecting 3D pseudoscopic images i.e. the pseudoscopic image detection
- the 3D video may include any pseudoscopic video frames (i.e., pseudoscopic images) which may affect the viewing experience.
- the display device may correct the pseudoscopic video frames as described above, and then play the corrected 3D video.
- 3D video frames may have to be selected before extracting the corresponding feature points in the first view and the corresponding feature points in the second view.
- FIG. 3 illustrates a flow chart of another exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments.
- a plurality of 3D video frames (i.e., 3D images) may be selected from a 3D video based on a predetermined rule.
- Each 3D image may include a first view and a second view.
- two views are used to form the 3D image. However, any appropriate number of views may be used.
- the predetermined rule (i.e., the selection of the 3D video frames) may be random without repetition or may be based on a certain interval, which is only for illustrative purposes and is not intended to limit the scope of the present invention.
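Either selection rule can be sketched in a few lines; the function name, the interval formula, and the sample numbers are our own illustrative choices:

```python
import random

def select_frames(total_frames, count, rule="interval"):
    # Pick frame indices from a 3D video for pseudoscopy detection.
    if rule == "interval":
        # Every (total_frames // count)-th frame.
        step = max(1, total_frames // count)
        return list(range(0, total_frames, step))[:count]
    # Random indices without repetition.
    return sorted(random.sample(range(total_frames), count))

print(select_frames(100, 5))            # interval rule
print(select_frames(100, 5, "random"))  # random, non-repeating
```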
- a maximum number of detecting 3D pseudoscopic images indicates the maximum times of detecting 3D pseudoscopic images, and a minimum number of detecting 3D pseudoscopic images indicates the minimum times of detecting 3D pseudoscopic images.
- the maximum number of detecting 3D pseudoscopic images may be Q and the minimum number of detecting 3D pseudoscopic images may be P, where Q and P are positive integers.
- each 3D video frame (i.e. 3D image) may be detected for pseudoscopic images.
- corresponding feature points in the first view and corresponding feature points in the second view forming the current 3D image are extracted, respectively (S 301 ).
- an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view are calculated respectively (S 302 ).
- based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, whether the current 3D image is pseudoscopic or not may be determined (S 303).
- the Steps of S 301 , S 302 and S 303 may be similar to those of S 101 , S 102 and S 103 in FIG. 1 , details of which are not repeated here while certain differences are explained.
- the display device may further record the number S of pseudoscopic image detections in the 3D video and the corresponding detection results of the multiple 3D video frames (i.e., 3D images). Then, based on the number S and the corresponding detection results, the display device may determine whether the 3D video is pseudoscopic or not.
- S number of pseudoscopic image detections may be performed in the 3D video, among which N number of 3D video frames (i.e., 3D images) may be determined to be pseudoscopic, M number of 3D video frames (i.e., 3D images) may be determined to be not pseudoscopic, and T number of pseudoscopic image detections may have failed.
- S = M + N + T, where S, M, N and T are positive integers.
- P denotes the minimum number of detecting 3D pseudoscopic images in the 3D video, P is a positive integer.
- if the value of N/S is smaller than or equal to a predetermined threshold value, the 3D video may not be determined to be pseudoscopic. If the value of N/S is larger than the predetermined threshold value, the 3D video may be determined to be pseudoscopic.
- the predetermined threshold value may be determined according to requirements of 3D viewing experience. Various requirements may have various threshold values.
- the pseudoscopic image detection in the 3D video may be determined to have failed and the pseudoscopic image detection in the 3D video may end; otherwise, the pseudoscopic image detection in the 3D video may have to continue.
- Q denotes the maximum number of the detecting 3D pseudoscopic images in the 3D video
- P denotes the minimum number of detecting 3D pseudoscopic images in the 3D video.
- Q > P; for example, a preferred value of P may be 5 and a preferred value of Q may be 10.
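The bookkeeping above (S = M + N + T detections, minimum P, maximum Q, and a threshold on N/S) can be summarized as follows. This is a sketch under our own assumptions: the 0.5 default threshold and the exact handling of the P/Q bounds are illustrative, not specified by the disclosure:

```python
def video_is_pseudoscopic(n_pseudo, n_ok, n_failed,
                          threshold=0.5, p_min=5, q_max=10):
    # n_pseudo (N): frames judged pseudoscopic
    # n_ok     (M): frames judged not pseudoscopic
    # n_failed (T): detections that failed
    s = n_pseudo + n_ok + n_failed  # S = M + N + T
    if s < p_min:
        return None                 # too few detections to decide yet
    if s > q_max:
        return None                 # maximum reached: detection ends
    return n_pseudo / s > threshold # pseudoscopic if N/S > threshold
```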
- the pseudoscopic video frames (i.e., pseudoscopic image) in the 3D video may need to be corrected through adjusting the relative order or relative position of the first view and the second view in the pseudoscopic video frames (i.e., pseudoscopic image), respectively.
- the display device may be able to determine whether the current 3D image is pseudoscopic or not. Further, based on the results of the pseudoscopic image detection, the display device may determine whether the relative order or the relative position of the first view and the second view forming the current 3D image needs to be changed. Thus, the display device may enable the user to watch 3D images/3D videos with correct depth perceptions, and enhance the viewing experience.
- FIG. 4 illustrates a structural schematic diagram of an exemplary display device consistent with disclosed embodiments.
- the display device 400 may be an electronic device which is capable of capturing 3D images, such as a smartphone, a tablet, and a digital camera, etc., or an electronic device which is capable of playing and/or generating 3D images such as a notebook, a TV, and a smartwatch, etc. (e.g., FIG. 7 )
- the display device 400 may include a feature extraction module 401 , an average value calculation module 402 and a decision module 403 . All of the modules may be implemented in hardware, software, or a combination of hardware and software. Software programs may be stored in the system memory 508 , which may be called and executed by the processor 502 to complete corresponding functions/steps.
- the feature extraction module 401 may be configured to extract corresponding feature points in a first view and in a second view, in which the first view and the second view may form a current 3D image.
- the average value calculation module 402 may be configured to calculate an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view. Based on the average coordinate value of the feature points in the first view and the average coordinates values of the feature points in the second view, the decision module 403 may be configured to determine whether the current 3D image is pseudoscopic or not.
- a left view and a right view may be adopted to form a current 3D image
- a first view may be the left view and a second view may be the right view. If an average coordinate value of feature points in the first view is larger than or equal to an average coordinate value of feature points in the second view, the decision module 403 may determine the current 3D image to be pseudoscopic.
- the average coordinate value of the feature points in the first view may be the sum of all X-axis coordinate values of all the feature points in the first view divided by the number of all the feature points in the first view.
- the average coordinate value of the feature points in the second view may be the sum of all X-axis coordinate values of all the feature points in the second view divided by the number of all the feature points in the second view.
- a left view and a right view may be adopted to form a current 3D image
- a first view may be the right view and a second view may be the left view. If an average coordinate value of feature points in the second view is larger than or equal to an average coordinate value of feature points in the first view, the decision module 403 may determine the current 3D image to be pseudoscopic.
- the average coordinate value of the feature points in the first view may be the sum of all X-axis coordinate values of all the feature points in the first view divided by the number of all the feature points in the first view.
- the average coordinate value of the feature points in the second view may be the sum of all X-axis coordinate values of all the feature points in the second view divided by the number of all the feature points in the second view.
- FIG. 6 illustrates a structural schematic diagram of another exemplary display device consistent with disclosed embodiments.
- the display device 600 may include a feature extraction module 601 , an average value calculation module 602 and a decision module 603 , which may perform similar functions as the modules in FIG. 4 .
- the similarities between FIG. 4 and FIG. 6 are not repeated here, while certain differences are explained.
- the display device 600 may further include a pseudoscopic image processing module 604 . If the current 3D image is pseudoscopic, the pseudoscopic image processing module may be configured to correct the pseudoscopic image through processing the first view and the second view forming the current 3D image. In particular, the pseudoscopic image processing module may adjust the relative order or relative position of the first view and the second view.
- the display device 600 may also include a selecting module 605 configured to select 3D video frames (i.e., 3D images) in a 3D video based on a predetermined rule.
- Each 3D video frame i.e., 3D image
- the display device 600 may also include a recording module 606 configured to record the number of detecting 3D pseudoscopic images in the 3D video (i.e. times of detecting 3D pseudoscopic images in the 3D video) and corresponding detection results of detecting 3D pseudoscopic images in the 3D video. Based on the number of detecting 3D pseudoscopic images in the 3D video and the corresponding detection results of detecting 3D pseudoscopic images in the 3D video, the decision module 603 may determine whether the 3D video is pseudoscopic or not. If the 3D video is determined to be pseudoscopic, the pseudoscopic image processing module may adjust the relative order or relative position of the first view and the second view in the pseudoscopic video frames (i.e. pseudoscopic image) in the 3D video, respectively.
- the display device consistent with disclosed embodiments may be an electronic device implemented with various software modules for detecting 3D pseudoscopic images consistent with disclosed embodiments; the details of detecting 3D pseudoscopic images may refer to the previous description. It should be noted that names of the software modules are only for illustrative purposes and are not intended to limit the scope of the present invention.
- the method for detecting 3D pseudoscopic images may be used in applications such as capturing 3D images and playing 3D videos.
- a smartphone or a digital camera capable of capturing 3D images usually takes two images of the same scene, in which the two images have a correct relative order and a parallax between them, and then arranges the two images into a 3D image.
- a user may upload 2D images and utilize a 3D image creating software to generate corresponding 3D images of the uploaded 2D images.
- the user may further generate a 3D video based on multiple 3D video frames (i.e., 3D images), which can be played back on a video player.
- the user may not notice the relative order between the two images forming the 3D image or 3D video frame, and thus may arrange the two images in an incorrect order, resulting in an incorrect 3D image. That is, a pseudoscopic image or a pseudoscopic view may be generated and the viewing experience may be affected.
- the display device may be able to determine whether the current 3D image is pseudoscopic or not. Further, based on the results of the pseudoscopic image detection, the display device may determine whether the relative order or the relative position of the first view and the second view forming the current 3D image needs to be changed. Thus, the display device may enable the user to watch 3D images/3D videos with correct depth perceptions, and enhance the viewing experience.
- a software module may reside in RAM, flash memory, ROM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Description
- This application is a continuation application of U.S. patent application Ser. No. 15/004,514, filed on Jan. 22, 2016, which claims priority of Chinese Application No. 201510033241.0, filed on Jan. 22, 2015, the entire contents of all of which are hereby incorporated by reference.
- The present disclosure generally relates to the field of three-dimensional (3D) display technologies and, more particularly, relates to a method for detecting 3D pseudoscopic images and a display device thereof.
- A smartphone or a digital camera capable of capturing three-dimensional (3D) images usually takes two images of the same scene, in which the two images have a correct relative order and a parallax between them, and then arranges the two images into a 3D image. In addition, a user may upload two-dimensional (2D) images and use 3D image-creation software to generate corresponding 3D images of the uploaded 2D images. The user may further generate a 3D video based on multiple 3D video frames (i.e., 3D images), which can be played back on a 3D video player.
- However, the user may not notice the relative order between the two images forming the 3D image or 3D video frame, for example, a left eye view and a right eye view, and thus may arrange the two images in an incorrect order, resulting in an incorrect 3D image. For example, the user's left eye may see the right eye view and the user's right eye may see the left eye view, causing the 3D image depth to be reversed for the user. That is, a pseudoscopic image or a pseudoscopic view may be generated and the viewing experience may be affected.
- According to the present disclosure, detection and correction of pseudoscopic images is of significant importance: if a pseudoscopic image cannot be detected and corrected, the user experience will be significantly degraded. Detecting whether there is pseudoscopy between a left view and a right view often requires corners respectively detected in the left view and the right view.
- The disclosed methods and systems are directed to solve one or more problems set forth above and other problems.
- One aspect of the present disclosure includes a method for detecting three-dimensional (3D) pseudoscopic images. The method includes extracting corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image; calculating an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view; based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, determining whether the current 3D image is pseudoscopic or not; and processing the current 3D image when it is determined that the current 3D image is pseudoscopic.
- Another aspect of the present disclosure includes a display device for detecting 3D pseudoscopic images. The display device includes a feature extraction module configured to extract corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image, an average value calculation module configured to calculate an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view, and a decision module configured to determine whether the current 3D image is pseudoscopic or not based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view.
- Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.
- The following drawings are merely examples for illustrative purposes according to various disclosed embodiments and are not intended to limit the scope of the present disclosure.
-
FIG. 1 illustrates a flow chart of an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments; -
FIG. 2a andFIG. 2b illustrate exemplary feature points in an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments, -
FIG. 3 illustrates a flow chart of another exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments; -
FIG. 4 illustrates a structural schematic diagram of an exemplary display device consistent with disclosed embodiments; -
FIG. 5 illustrates a block diagram of an exemplary display device consistent with disclosed embodiments; -
FIG. 6 illustrates a structural schematic diagram of another exemplary display device consistent with disclosed embodiments; and -
FIG. 7 illustrates an exemplary display device consistent with disclosed embodiments. - Reference will now be made in detail to exemplary embodiments of the invention, which are illustrated in the accompanying drawings. Hereinafter, embodiments consistent with the disclosure will be described with reference to drawings. It is apparent that the described embodiments are some but not all of the embodiments of the present invention. Based on the disclosed embodiments, persons of ordinary skill in the art may derive other embodiments consistent with the present disclosure, all of which are within the scope of the present invention.
- A 3D display device is usually based on the parallax principle, in which a left view for a left eye and a right view for a right eye are separated by a lens or a grating and then received by the user's left eye and right eye, respectively. The user's brain fuses the left view and the right view to generate a correct visual perception of 3D display, i.e., a 3D image with a correct depth perception.
- However, it is possible that the left eye receives the right view while the right eye receives the left view, for example, when two images with an incorrect relative order are arranged into a 3D image, or two images with an incorrect relative position are synthesized into a 3D image, etc. In such cases, the user may experience a pseudoscopic (reversed stereo/3D) image showing an incorrect depth perception. For example, a box on a floor may appear as a box-shaped hole in the floor. Thus, the viewing experience may be greatly degraded.
- The present disclosure provides a method for detecting 3D pseudoscopic images, which may be implemented in a display device.
FIG. 7 illustrates an exemplary display device consistent with disclosed embodiments. The display device 700 may be an electronic device which is capable of capturing 3D images, such as a smartphone, a tablet, and a digital camera, etc., or an electronic device which is capable of playing and/or generating 3D images, such as a notebook, a TV, and a smartwatch, etc. Although the display device 700 is shown as a smartphone, any device with computing power may be used. -
FIG. 5 illustrates a block diagram of an exemplary display device consistent with disclosed embodiments. As shown in FIG. 5, the display device 500 may include a processor 502, a display 504, a camera 506, a system memory 508, a system bus 510, an input/output module 512, and a mass storage device 514. Other components may be added and certain devices may be removed without departing from the principles of the disclosed embodiments. - The
processor 502 may include any appropriate type of central processing unit (CPU), graphics processing unit (GPU), general-purpose microprocessor, digital signal processor (DSP) or microcontroller, and application-specific integrated circuit (ASIC). The processor 502 may execute sequences of computer program instructions to perform various processes associated with the display device. - The
display 504 may be any appropriate type of display, such as plasma display panel (PDP) display, field emission display (FED), cathode ray tube (CRT) display, liquid crystal display (LCD), organic light emitting diode (OLED) display, light emitting diode (LED) display, or other types of displays. - The
camera 506 may be an internal camera in the display device 500 or may be an external camera connected to the display device 500 over a network. The camera 506 may provide images and videos to the display device. - The
system memory 508 may include read-only memory (ROM), random access memory (RAM), etc. The ROM may store necessary software for a system, such as system software. The RAM may store real-time data, such as images for displaying. - The system bus 510 may provide communication connections, such that the display device may be accessed remotely and/or communicate with other systems via various communication protocols, such as transmission control protocol/internet protocol (TCP/IP), hypertext transfer protocol (HTTP), etc.
- The input/output module 512 may be provided for users to input information into the display device or for the users to receive information from the display device. For example, the input/output module 512 may include any appropriate input device, such as a remote control, a keyboard, a mouse, an electronic tablet, voice communication devices, or any other optical or wireless input devices.
- Further, the
mass storage device 514 may include any appropriate type of mass storage medium, such as a CD-ROM, a hard disk, an optical storage, a DVD drive, or other types of storage devices. - During an operating process, the
processor 502, executing various software modules, may perform certain processes to display images or videos to one or more users. -
FIG. 1 illustrates a flow chart of an exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments. As shown in FIG. 1, at the beginning, corresponding feature points in a first view and corresponding feature points in a second view are extracted respectively, in which the first view and the second view may form a current 3D image (S101). - In particular, the feature points in the first view and the feature points in the second view may also be called interest points, significant points, key points, etc., which may exhibit a certain pattern in a local image, such as intersections of different objects, intersections of different regions with different colors, and points of discontinuity on a boundary of an object, etc.
- A display device may extract the feature points in the first view and the feature points in the second view based on the Harris corner detection algorithm or the SUSAN corner detection algorithm, etc. For example, the display device may extract feature parameters of the feature points in the first view and feature parameters of the feature points in the second view; the feature parameters may include a Speeded-Up Robust Features (SURF) feature, a Scale-Invariant Feature Transform (SIFT) feature, a Binary Robust Invariant Scalable Keypoints (BRISK) feature, and a Binary Robust Independent Elementary Features (BRIEF) feature, etc.
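As a concrete illustration of corner detection, the following is a minimal pure-Python sketch of the Harris corner response (not the patent's implementation; a production system would use an optimized library with Gaussian-weighted windows). The response is positive at a corner, near zero in flat regions, and negative along straight edges:

```python
# Minimal sketch of the Harris corner response on a tiny synthetic image.

def harris_response(img, y, x, k=0.04, win=1):
    """Harris response R = det(M) - k * trace(M)^2 at pixel (y, x),
    where M is the structure tensor summed over a (2*win+1)^2 window."""
    h, w = len(img), len(img[0])

    def ix(r, c):  # central-difference gradient along x
        return (img[r][min(c + 1, w - 1)] - img[r][max(c - 1, 0)]) / 2.0

    def iy(r, c):  # central-difference gradient along y
        return (img[min(r + 1, h - 1)][c] - img[max(r - 1, 0)][c]) / 2.0

    sxx = syy = sxy = 0.0  # structure tensor entries over the window
    for r in range(y - win, y + win + 1):
        for c in range(x - win, x + win + 1):
            gx, gy = ix(r, c), iy(r, c)
            sxx += gx * gx
            syy += gy * gy
            sxy += gx * gy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Bright 6x6 square in the top-left of a dark 12x12 image.
image = [[1.0 if r < 6 and c < 6 else 0.0 for c in range(12)] for r in range(12)]

r_corner = harris_response(image, 5, 5)  # at the square's corner: positive
r_edge = harris_response(image, 5, 2)    # on a straight edge: negative
r_flat = harris_response(image, 2, 2)    # in a uniform region: zero
```

Pixels where both gradient directions vary inside the window (corners) yield a large determinant and a positive response, which is the property any of the corner detectors named above exploits.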
- Based on the feature parameters of the first view and the feature parameters of the second view, the display device may match the feature points in the first view and the feature points in the second view, and then determine the corresponding feature points in the first view and feature points in the second view.
- For example, the feature parameters of the first view and the feature parameters of the second view may be matched based on a k-nearest-neighbor algorithm. Once matched, the corresponding feature points in the first view and in the second view may be defined. For example, the feature points may be placed in an X-Y coordinate system, and each given an X-axis coordinate value and a Y-axis coordinate value.
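The matching step can be sketched as a nearest-neighbor search over descriptors. The 8-bit binary descriptors below are made up for illustration (real SURF/SIFT/BRISK/BRIEF descriptors are far longer), and the ratio test that discards ambiguous matches is one common choice, not necessarily the one used in the disclosure:

```python
# Sketch of nearest-neighbor descriptor matching with a Lowe-style ratio test.

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def match_features(desc_first, desc_second, max_ratio=0.8):
    """Match each first-view descriptor to its nearest neighbor in the
    second view, keeping it only if clearly better than the runner-up."""
    matches = []
    for i, d1 in enumerate(desc_first):
        dists = sorted((hamming(d1, d2), j) for j, d2 in enumerate(desc_second))
        best, second_best = dists[0], dists[1]
        if best[0] <= max_ratio * second_best[0]:
            matches.append((i, best[1]))  # (index in first, index in second)
    return matches

# Hypothetical binary descriptors for three feature points per view.
first_desc = ["10110010", "01101100", "11100001"]
second_desc = ["10110011", "01101000", "00011110"]

pairs = match_features(first_desc, second_desc)  # ambiguous matches dropped
```

Here the third descriptor has two equally distant candidates, so it fails the ratio test and produces no correspondence; only unambiguous pairs survive to the averaging step.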
- After extracting the corresponding feature points in the first view and the corresponding feature points in the second view, an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view are calculated, respectively (S102).
- The average coordinate value may be an average of X-axis coordinate values of all the feature points in each view (i.e. the first view or the second view), or an average of Y-axis coordinate values of all the feature points in each view (i.e. the first view or the second view).
- For example, in one embodiment, a left view and a right view may be adopted to form a current 3D image; the left view may be a first view and the right view may be a second view. An average coordinate value in the first view may be the sum of X-axis coordinate values of all feature points in the first view divided by the number of all the feature points in the first view. An average coordinate value in the second view may be the sum of X-axis coordinate values of all feature points in the second view divided by the number of all the feature points in the second view.
- In another embodiment, an upper view and a lower view may be adopted to form a current 3D image; the upper view may be a first view and the lower view may be a second view. An average coordinate value in the first view may be the sum of Y-axis coordinate values of all feature points in the first view divided by the number of all the feature points in the first view. An average coordinate value in the second view may be the sum of Y-axis coordinate values of all feature points in the second view divided by the number of all the feature points in the second view.
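The averaging step above can be sketched in a few lines. The feature points are hypothetical (x, y) tuples; axis 0 (X) would be used for a left/right pair and axis 1 (Y) for an upper/lower pair:

```python
# Minimal sketch of the average coordinate value of a view's feature points.

def average_coordinate(points, axis=0):
    """Average of the axis-th coordinate over all feature points in a view;
    axis=0 averages X values, axis=1 averages Y values."""
    return sum(p[axis] for p in points) / len(points)

# Hypothetical matched feature points in a left/right view pair.
left_points = [(10, 8), (14, 3), (18, 12)]   # feature points in the first view
right_points = [(12, 8), (16, 3), (20, 12)]  # corresponding points, second view

avg_left_x = average_coordinate(left_points, axis=0)    # (10+14+18)/3
avg_right_x = average_coordinate(right_points, axis=0)  # (12+16+20)/3
avg_left_y = average_coordinate(left_points, axis=1)    # (8+3+12)/3
```

Only the matched feature points contribute, so both averages are taken over the same number of correspondences.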
- Based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, whether the current 3D image is pseudoscopic or not may be determined (S103).
- In particular, in one embodiment, a left view and a right view may be used to form a current 3D image, and the left view may be a first view and the right view may be a second view. If an average coordinate value of feature points in the first view is larger than or equal to an average coordinate value of the feature points in the second view, the current 3D image may be determined to be pseudoscopic. On the contrary, if the average coordinate value of the feature points in the first view is smaller than the average coordinate value of the feature points in the second view, the current 3D image may not be pseudoscopic.
- In another embodiment, a left view and a right view may be used to form a current 3D image, and the right view may be a first view and the left view may be a second view. If an average coordinate value of feature points in the second view is larger than or equal to an average coordinate value of feature points in the first view, the current 3D image may be determined to be pseudoscopic. On the contrary, if the average coordinate value of the feature points in the second view is smaller than the average coordinate value of the feature points in the first view, the current 3D image may not be pseudoscopic.
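The two decision rules above can be sketched as a small predicate. The averages passed in are hypothetical values, and the predicate simply follows the disclosed ordering test for whichever view is intended as the left one:

```python
# Sketch of the pseudoscopic decision for a left/right pair. Following the
# rule in the text, when the first view is intended as the left view the
# pair is flagged if its average X coordinate is larger than or equal to
# the second view's; the test is mirrored when the first view is the right.

def is_pseudoscopic(avg_first, avg_second, first_is_left=True):
    if first_is_left:
        return avg_first >= avg_second
    return avg_second >= avg_first

# Hypothetical averages: a left-view average of 3.0 below a right-view
# average of 4.0 satisfies the expected ordering, so no correction is needed.
normal = is_pseudoscopic(3.0, 4.0)         # False: not pseudoscopic
reversed_pair = is_pseudoscopic(4.0, 3.0)  # True: views are reversed
```

Note that equal averages are also flagged, matching the "larger than or equal to" wording of the rule.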
- Further, if the current 3D image is determined to be pseudoscopic, the display device may be able to correct the pseudoscopic image through processing the first view and the second view.
- For example, in one embodiment, a left view and a right view may be used to form a current 3D image, and the left view may be a first view and the right view may be a second view. When the current 3D image is determined to be pseudoscopic, the display device may correct the pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the right view while the second view may become the left view.
- In another embodiment, a left view and a right view may be used to form the current 3D image, and the right view may be a first view and the left view may be a second view. When the current 3D image is determined to be pseudoscopic, the display device may correct the pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the left view while the second view may become the right view.
- In another embodiment, an upper view and a lower view may be adopted to form the current 3D image; the upper view may be a first view and the lower view may be a second view. When the current 3D image is determined to be pseudoscopic, the display device may correct the current pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the lower view while the second view may become the upper view.
- In another embodiment, an upper view and a lower view may be adopted to form the current 3D image; the lower view may be a first view and the upper view may be a second view. When the current 3D image is determined to be pseudoscopic, the display device may correct the current pseudoscopic image through adjusting the relative order or the relative position of the first view and the second view. That is, the first view may become the upper view while the second view may become the lower view.
-
FIG. 2a and FIG. 2b illustrate exemplary feature points in an exemplary method for detecting 3D pseudoscopic views consistent with disclosed embodiments. As shown in FIG. 2a, corners (i.e., feature points) are detected in a first view. Six corners may be detected and coordinate values of the six corners in an X-Y coordinate system are L1 (1,2), L2 (1,4), L3 (2,5), L4 (3,4), L5 (3,2), L6 (2,1), respectively. - As shown in
FIG. 2b, six corners may be detected in a second view and coordinate values of the six corners in the X-Y coordinate system are R1 (2,3), R2 (2,5), R3 (3,6), R4 (4,5), R5 (4,3), R6 (3,2), respectively. The six corners (i.e., feature points) in the first view may correspond to the six corners (i.e., feature points) in the second view. The first view and the second view may be adopted to form a current 3D image. - An average coordinate value in the first view may be the sum of X-axis coordinate values of all the feature points in the first view divided by the number of all the feature points in the first view. That is, A=(1+1+2+3+3+2)/6=2. An average coordinate value in the second view may be the sum of X-axis coordinate values of all the feature points in the second view divided by the number of all the feature points in the second view. That is, B=(2+2+3+4+4+3)/6=3.
- Because the average coordinate value of the feature points in the first view is smaller than the average coordinate value of the corresponding feature points in the second view, the current 3D image may not be pseudoscopic, and a correction may not be required.
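The arithmetic of this example can be checked with a short sketch, reading each listed corner as an (x, y) pair and averaging the X coordinates of each view:

```python
# Recomputing the FIG. 2 example: average X coordinate of the six corners
# detected in each view, with each corner read as an (x, y) pair.

fig2a_corners = [(1, 2), (1, 4), (2, 5), (3, 4), (3, 2), (2, 1)]  # first view
fig2b_corners = [(2, 3), (2, 5), (3, 6), (4, 5), (4, 3), (3, 2)]  # second view

a = sum(x for x, _ in fig2a_corners) / len(fig2a_corners)
b = sum(x for x, _ in fig2b_corners) / len(fig2b_corners)

# The first view's average lies to the left of the second view's, so with
# the first view taken as the left view the pair is not flagged.
pseudoscopic = a >= b
```

With this reading the first-view average stays below the second-view average, so the decision rule from S103 reports a correctly ordered pair.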
- If the current 3D image is pseudoscopic, the pseudoscopic image may need to be corrected. In particular, the pseudoscopic image may be corrected through adjusting the relative order or relative position of the first view and the second view when generating the current 3D image or displaying the current 3D image. For example, if
FIG. 2b shows a first view and FIG. 2a shows a second view, the relative order of the first view and the second view may be exchanged to correct the pseudoscopic image. - Further, a user may utilize a 3D video player implemented in a display device to play a 3D video. However, before playing the 3D video, detecting 3D pseudoscopic images (i.e., the pseudoscopic image detection) in the 3D video may be highly desired, in case the 3D video includes any pseudoscopic video frames (i.e., pseudoscopic images) which may affect the viewing experience. The display device may correct the pseudoscopic video frames as described above, and then play the corrected 3D video. Thus, 3D video frames may have to be selected before extracting the corresponding feature points in the first view and the corresponding feature points in the second view.
-
FIG. 3 illustrates a flow chart of another exemplary method for detecting 3D pseudoscopic images consistent with disclosed embodiments. As shown in FIG. 3, at the beginning, a plurality of 3D video frames (i.e., 3D images) in a 3D video are selected based on a predetermined rule (S300). Each 3D image may include a first view and a second view. As used in the present disclosure, two views are used to form the 3D image. However, any appropriate number of views may be used. - In particular, the predetermined rule, i.e., the selection of the 3D video frames, may be random without repetition, or may be based on a certain interval, which is only for illustrative purposes and is not intended to limit the scope of the present invention. Further, for one 3D video, a maximum number of detecting 3D pseudoscopic images (indicating maximum times of detecting 3D pseudoscopic images) and a minimum number of detecting 3D pseudoscopic images (indicating minimum times of detecting 3D pseudoscopic images) may be determined. For example, the maximum number of detecting 3D pseudoscopic images may be Q, and the minimum number of detecting 3D pseudoscopic images may be P. Q and P are positive integers, respectively.
- After the plurality of 3D video frames (i.e. 3D images) are selected, each 3D video frame (i.e. 3D image) may be detected for pseudoscopic images. For a current 3D video frame (i.e. 3D image) having a first view and a second view, as shown in
FIG. 3, at the beginning, corresponding feature points in the first view and corresponding feature points in the second view forming the current 3D image are extracted, respectively (S301). Then an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view are calculated respectively (S302). Based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, whether the current 3D image is pseudoscopic or not may be determined (S303). The steps of S301, S302 and S303 may be similar to those of S101, S102 and S103 in FIG. 1, details of which are not repeated here while certain differences are explained. - The display device may further record the number S of pseudoscopic image detections in the 3D video and the corresponding detection results of the multiple 3D video frames (i.e., 3D images). Then, based on the number S and the corresponding detection results of the multiple 3D video frames (i.e., 3D images), the display device may determine whether the 3D video is pseudoscopic or not.
- For example, S number of pseudoscopic image detections may be performed in the 3D video, among which N number of 3D video frames (i.e., 3D images) may be determined to be pseudoscopic, M number of 3D video frames (i.e., 3D images) may be determined to be not pseudoscopic, and T number of pseudoscopic image detections may have failed. In particular, S=M+N+T, where S, M, N and T are positive integers, respectively. P denotes the minimum number of detecting 3D pseudoscopic images in the 3D video, where P is a positive integer.
- If the value of M/S is larger than a predetermined threshold value, the 3D video may be determined to be not pseudoscopic. If the value of N/S is larger than the predetermined threshold value, the 3D video may be determined to be pseudoscopic. The predetermined threshold value may be determined according to requirements of the 3D viewing experience. Different requirements may correspond to different threshold values.
- For example, according to certain requirements of 3D viewing experience, the predetermined threshold value may be set as approximately 0.7. That is, if a condition of S>=P and (M/S)>0.7 is satisfied, the 3D video may not be determined to be pseudoscopic and the pseudoscopic image detection in the 3D video may end. If the condition of S>=P and (M/S)>0.7 is not satisfied, the pseudoscopic image detection in the 3D video may have to continue.
- If a condition of S>=P and (N/S)>0.7 is satisfied, the 3D video may be determined to be pseudoscopic and the pseudoscopic image detection in the 3D video may end. If the condition of S>=P and (N/S)>0.7 is not satisfied, the pseudoscopic image detection in the 3D video may have to continue.
- If S>Q, the pseudoscopic image detection in the 3D video may be determined to have failed and the pseudoscopic image detection in the 3D video may end; otherwise, the pseudoscopic image detection in the 3D video may have to continue. Q denotes the maximum number of detecting 3D pseudoscopic images in the 3D video, and P denotes the minimum number of detecting 3D pseudoscopic images in the 3D video. In particular, Q>P; a preferred value of P may be 5 and a preferred value of Q may be 10.
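The video-level decision loop described above can be sketched as follows. Per-frame results are encoded as True (pseudoscopic), False (not pseudoscopic), or None (detection failed); P, Q, and the 0.7 threshold follow the example values in the text:

```python
# Sketch of the video-level decision: keep detecting selected frames until
# a ratio test fires (after at least P detections) or more than Q
# detections have been performed.

def classify_video(frame_results, p=5, q=10, threshold=0.7):
    s = m = n = 0  # detections performed / not-pseudoscopic / pseudoscopic
    for result in frame_results:
        s += 1
        if result is True:
            n += 1          # frame judged pseudoscopic
        elif result is False:
            m += 1          # frame judged not pseudoscopic
        if s >= p and m / s > threshold:
            return "not pseudoscopic"
        if s >= p and n / s > threshold:
            return "pseudoscopic"
        if s > q:
            return "failed"
    return "failed"  # ran out of selected frames without a confident verdict

verdict = classify_video([True, True, True, True, True])
```

With five consecutive pseudoscopic frames, S reaches P with N/S = 1.0 > 0.7, so the loop stops early and the whole video is flagged for correction.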
- If the 3D video is determined to be pseudoscopic, the pseudoscopic video frames (i.e., pseudoscopic images) in the 3D video may need to be corrected through adjusting the relative order or relative position of the first view and the second view in the pseudoscopic video frames (i.e., pseudoscopic images), respectively.
- Through respectively calculating the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, the display device may be able to determine whether the current 3D image is pseudoscopic or not. Further, based on the results of the pseudoscopic image detection, the display device may determine whether the relative order or the relative position of the first view and the second view forming the current 3D image needs to be changed. Thus, the display device may enable the user to watch 3D images/3D videos with correct depth perceptions, and enhance the viewing experience.
-
FIG. 4 illustrates a structural schematic diagram of an exemplary display device consistent with disclosed embodiments. The display device 400 may be an electronic device which is capable of capturing 3D images, such as a smartphone, a tablet, and a digital camera, etc., or an electronic device which is capable of playing and/or generating 3D images, such as a notebook, a TV, and a smartwatch, etc. (e.g., FIG. 7) - As shown in
FIG. 4, the display device 400 may include a feature extraction module 401, an average value calculation module 402 and a decision module 403. All of the modules may be implemented in hardware, software, or a combination of hardware and software. Software programs may be stored in the system memory 508, which may be called and executed by the processor 502 to complete corresponding functions/steps. - The
feature extraction module 401 may be configured to extract corresponding feature points in a first view and in a second view, in which the first view and the second view may form a current 3D image. The average value calculation module 402 may be configured to calculate an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view. Based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, the decision module 403 may be configured to determine whether the current 3D image is pseudoscopic or not. - For example, in one embodiment, a left view and a right view may be adopted to form a current 3D image; a first view may be the left view and a second view may be the right view. If an average coordinate value of feature points in the first view is larger than or equal to an average coordinate value of feature points in the second view, the
decision module 403 may determine the current 3D image to be pseudoscopic. - In particular, the average coordinate value of the feature points in the first view may be the sum of all X-axis coordinate values of all the feature points in the first view divided by the number of all the feature points in the first view. The average coordinate value of the feature points in the second view may be the sum of all X-axis coordinate values of all the feature points in the second view divided by the number of all the feature points in the second view.
- In another embodiment, a left view and a right view may be adopted to form a current 3D image; a first view may be the right view and a second view may be the left view. If an average coordinate value of feature points in the second view is larger than or equal to an average coordinate value of feature points in the first view, the
decision module 403 may determine the current 3D image to be pseudoscopic. - In particular, the average coordinate value of the feature points in the first view may be the sum of all X-axis coordinate values of all the feature points in the first view divided by the number of all the feature points in the first view. The average coordinate value of the feature points in the second view may be the sum of all X-axis coordinate values of all the feature points in the second view divided by the number of all the feature points in the second view.
-
FIG. 6 illustrates a structural schematic diagram of another exemplary display device consistent with disclosed embodiments. As shown in FIG. 6, the display device 600 may include a feature extraction module 601, an average value calculation module 602 and a decision module 603, which may perform similar functions as the modules in FIG. 4. The similarities between FIG. 4 and FIG. 6 are not repeated here, while certain differences are explained. - The
display device 600 may further include a pseudoscopic image processing module 604. If the current 3D image is pseudoscopic, the pseudoscopic image processing module may be configured to correct the pseudoscopic image through processing the first view and the second view forming the current 3D image. In particular, the pseudoscopic image processing module may adjust the relative order or relative position of the first view and the second view. - The
display device 600 may also include a selecting module 605 configured to select 3D video frames (i.e., 3D images) in a 3D video based on a predetermined rule. Each 3D video frame (i.e., 3D image) may include a first view and a second view. - The
display device 600 may also include a recording module 606 configured to record the number of detecting 3D pseudoscopic images in the 3D video (i.e., times of detecting 3D pseudoscopic images in the 3D video) and the corresponding detection results of detecting 3D pseudoscopic images in the 3D video. Based on the number of detecting 3D pseudoscopic images in the 3D video and the corresponding detection results of detecting 3D pseudoscopic images in the 3D video, the decision module 603 may determine whether the 3D video is pseudoscopic or not. If the 3D video is determined to be pseudoscopic, the pseudoscopic image processing module may adjust the relative order or relative position of the first view and the second view in the pseudoscopic video frames (i.e., pseudoscopic images) in the 3D video, respectively. - The display device consistent with disclosed embodiments may be an electronic device implemented with various software modules of detecting 3D pseudoscopic images consistent with disclosed embodiments; for the details of detecting 3D pseudoscopic images, reference may be made to the previous description. It should be noted that names of the software modules are only for illustrative purposes, which are not intended to limit the scope of the present invention.
- The method for detecting 3D pseudoscopic images may be used in applications such as capturing 3D images and playing 3D videos. A smartphone or a digital camera capable of capturing 3D images usually takes two images of the same scene, in which the two images have a correct relative order and a parallax between them, and then arranges the two images into a 3D image. In addition, a user may upload 2D images and use 3D image-creation software to generate corresponding 3D images of the uploaded 2D images. The user may further generate a 3D video based on multiple 3D video frames (i.e., 3D images), which can be played back on a video player.
- However, the user may not notice the relative order of the two images forming the 3D image or 3D video frame, and thus may arrange them in an incorrect order, resulting in an incorrect 3D image. That is, a pseudoscopic image or pseudoscopic view may be generated, degrading the viewing experience.
- By respectively calculating the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, the display device may determine whether the current 3D image is pseudoscopic. Further, based on the results of the pseudoscopic-image detection, the display device may determine whether the relative order or the relative position of the first view and the second view forming the current 3D image needs to be changed. Thus, the display device may enable the user to watch 3D images and 3D videos with correct depth perception, enhancing the viewing experience.
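The per-image detection described above can be sketched as follows. This is a minimal Python sketch, assuming matched feature points are given as (x, y) tuples and that, for a correctly ordered pair captured by near-parallel cameras, the average x-coordinate of the first (left) view is not smaller than that of the second (right) view, so the reverse ordering indicates a pseudoscopic pair. The sign convention and the function name are assumptions for illustration, not taken from the patent text.

```python
def is_pseudoscopic(first_view_pts, second_view_pts):
    """Decide whether a stereo pair is pseudoscopic by comparing the
    average x-coordinate of matched feature points in the two views.

    Assumed convention: with near-parallel cameras, matched points in
    the left view normally lie at x-coordinates greater than or equal
    to their counterparts in the right view (non-negative disparity);
    the reverse suggests the two views were swapped.
    """
    avg_first = sum(x for x, _ in first_view_pts) / len(first_view_pts)
    avg_second = sum(x for x, _ in second_view_pts) / len(second_view_pts)
    return avg_first < avg_second
```

If the pair is judged pseudoscopic, swapping the two views (or their relative positions) restores the correct depth perception, as described above.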
- Those of skill in the art would further appreciate that the various illustrative modules and algorithm steps disclosed in the embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative modules and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
- The steps of a method or algorithm disclosed in the embodiments may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- The description of the disclosed embodiments is provided to illustrate the present invention to those skilled in the art. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/648,598 US20170310942A1 (en) | 2015-01-22 | 2017-07-13 | Method and apparatus for processing three-dimensional (3d) pseudoscopic images |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510033241.0A CN104639934B (en) | 2015-01-22 | 2015-01-22 | Stereoscopic image pseudoscopic-view processing method and display device |
CN201510033241.0 | 2015-01-22 | ||
US15/004,514 US9743063B2 (en) | 2015-01-22 | 2016-01-22 | Method and apparatus for processing three-dimensional (3D) pseudoscopic images |
US15/648,598 US20170310942A1 (en) | 2015-01-22 | 2017-07-13 | Method and apparatus for processing three-dimensional (3d) pseudoscopic images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/004,514 Continuation US9743063B2 (en) | 2015-01-22 | 2016-01-22 | Method and apparatus for processing three-dimensional (3D) pseudoscopic images |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170310942A1 true US20170310942A1 (en) | 2017-10-26 |
Family
ID=53218175
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/004,514 Active 2036-02-19 US9743063B2 (en) | 2015-01-22 | 2016-01-22 | Method and apparatus for processing three-dimensional (3D) pseudoscopic images |
US15/648,598 Abandoned US20170310942A1 (en) | 2015-01-22 | 2017-07-13 | Method and apparatus for processing three-dimensional (3d) pseudoscopic images |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/004,514 Active 2036-02-19 US9743063B2 (en) | 2015-01-22 | 2016-01-22 | Method and apparatus for processing three-dimensional (3D) pseudoscopic images |
Country Status (3)
Country | Link |
---|---|
US (2) | US9743063B2 (en) |
JP (1) | JP6339601B2 (en) |
CN (1) | CN104639934B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106231293B (en) * | 2015-10-30 | 2018-01-26 | 深圳超多维光电子有限公司 | Pseudoscopic-view detection method and device for a three-dimensional film source |
CN105635717A (en) * | 2016-02-17 | 2016-06-01 | 广东未来科技有限公司 | Image display method and device for multi-view stereo display |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120154394A1 (en) * | 2009-06-15 | 2012-06-21 | Ntt Docomo Inc | Apparatus for evaluating optical properties of three-dimensional display, and method for evaluating optical properties of three-dimensional display |
US20120206445A1 (en) * | 2011-02-14 | 2012-08-16 | Sony Corporation | Display device and display method |
US20130044101A1 (en) * | 2011-08-18 | 2013-02-21 | Lg Display Co., Ltd. | Three-Dimensional Image Display Device and Driving Method Thereof |
US20130266207A1 (en) * | 2012-04-05 | 2013-10-10 | Tao Zhang | Method for identifying view order of image frames of stereo image pair according to image characteristics and related machine readable medium thereof |
US20130336539A1 (en) * | 2012-06-15 | 2013-12-19 | Ryusuke Hirai | Position estimation device, position estimation method, and computer program product |
US20140035907A1 (en) * | 2012-07-31 | 2014-02-06 | Nlt Technologies, Ltd. | Stereoscopic image display device, image processing device, and stereoscopic image processing method |
US20150092030A1 (en) * | 2013-09-27 | 2015-04-02 | Samsung Electronics Co., Ltd. | Display apparatus and method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010028485A1 (en) | 1997-07-08 | 2001-10-11 | Stanley Kremen | Methods of preparing holograms |
JP2000003446A (en) * | 1998-06-15 | 2000-01-07 | Ricoh Co Ltd | Missing value estimating method, three-dimensional data input device and recording medium |
JP2012053165A (en) * | 2010-08-31 | 2012-03-15 | Sony Corp | Information processing device, program, and information processing method |
JP5766019B2 (en) * | 2011-05-11 | 2015-08-19 | シャープ株式会社 | Binocular imaging device, control method thereof, control program, and computer-readable recording medium |
CN102395037B (en) * | 2011-06-30 | 2014-11-05 | 深圳超多维光电子有限公司 | Format recognition method and device |
TWI514849B (en) * | 2012-01-11 | 2015-12-21 | Himax Tech Ltd | Calibration device used in stereoscopic display system and calibration method of the same |
JP5493055B2 (en) * | 2012-01-18 | 2014-05-14 | パナソニック株式会社 | Stereoscopic image inspection apparatus, stereoscopic image processing apparatus, and stereoscopic image inspection method |
JP5713939B2 (en) * | 2012-03-05 | 2015-05-07 | 株式会社東芝 | Target detection apparatus and target detection method |
- 2015
- 2015-01-22 CN CN201510033241.0A patent/CN104639934B/en active Active
- 2016
- 2016-01-21 JP JP2016009733A patent/JP6339601B2/en not_active Expired - Fee Related
- 2016-01-22 US US15/004,514 patent/US9743063B2/en active Active
- 2017
- 2017-07-13 US US15/648,598 patent/US20170310942A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
CN104639934B (en) | 2017-11-21 |
CN104639934A (en) | 2015-05-20 |
JP6339601B2 (en) | 2018-06-06 |
US20160219259A1 (en) | 2016-07-28 |
US9743063B2 (en) | 2017-08-22 |
JP2016134922A (en) | 2016-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10659769B2 (en) | Image processing apparatus, image processing method, and storage medium | |
CN105100770B (en) | Three-dimensional source images calibration method and equipment | |
EP3097690B1 (en) | Multi-view display control | |
US9154762B2 (en) | Stereoscopic image system utilizing pixel shifting and interpolation | |
US9948913B2 (en) | Image processing method and apparatus for processing an image pair | |
US20160150222A1 (en) | Simulated 3d image display method and display device | |
EP3286601B1 (en) | A method and apparatus for displaying a virtual object in three-dimensional (3d) space | |
US20190269881A1 (en) | Information processing apparatus, information processing method, and storage medium | |
US20130106841A1 (en) | Dynamic depth image adjusting device and method thereof | |
US20170150212A1 (en) | Method and electronic device for adjusting video | |
KR102362345B1 (en) | Method and apparatus for processing image | |
US20170310942A1 (en) | Method and apparatus for processing three-dimensional (3d) pseudoscopic images | |
US9678991B2 (en) | Apparatus and method for processing image | |
JP2014072809A (en) | Image generation apparatus, image generation method, and program for the image generation apparatus | |
US20140362197A1 (en) | Image processing device, image processing method, and stereoscopic image display device | |
JP5765418B2 (en) | Stereoscopic image generation apparatus, stereoscopic image generation method, and stereoscopic image generation program | |
JP2012141753A5 (en) | ||
KR20160056132A (en) | Image conversion apparatus and image conversion method thereof | |
EP4319150A1 (en) | 3d format image detection method and electronic apparatus using the same method | |
US11902502B2 (en) | Display apparatus and control method thereof | |
JP5459231B2 (en) | Pseudo stereoscopic image generation apparatus, pseudo stereoscopic image generation program, and pseudo stereoscopic image display apparatus | |
US10880533B2 (en) | Image generation apparatus, image generation method, and storage medium, for generating a virtual viewpoint image | |
CN117635684A (en) | Stereo format image detection method and electronic device using same | |
US9852352B2 (en) | System and method for determining colors of foreground, and computer readable recording medium therefor | |
JP4777193B2 (en) | Stereoscopic image synthesizing apparatus, shape data generation method and program thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SUPERD CO. LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LUO, YANQING;REEL/FRAME:042994/0367 Effective date: 20160113 |
AS | Assignment |
Owner name: SUPERD TECHNOLOGY CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SUPERD CO. LTD.;REEL/FRAME:046278/0480 Effective date: 20180629 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |