US8749548B2 - Display system with image conversion mechanism and method of operation thereof
- Publication number: US8749548B2 (application US13/224,271)
- Authority: US (United States)
- Prior art keywords: segment, original image, depth, focus, mean
- Prior art date: 2011-09-01
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/50—Depth or shape recovery
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
        - H04N13/20—Image signal generators
          - H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
Description
- the present invention relates generally to a display system and more particularly to a system for image conversion.
- Modern consumer and industrial electronics especially devices such as graphical display systems, televisions, projectors, cellular phones, portable digital assistants, and combination devices, are providing increasing levels of functionality to support modern life including three-dimensional display services.
- Research and development in the existing technologies can take a myriad of different directions.
- Three-dimensional display based services allow users to create, transfer, store, and consume information for use in the “real world”.
- One such use of three-dimensional display based services is to efficiently present three-dimensional images on a display.
- Three-dimensional display systems have been incorporated in projectors, televisions, notebooks, handheld devices, and other portable products. Today, these systems aid users by displaying available relevant information, such as diagrams, maps, or videos. The display of three-dimensional images provides invaluable relevant information.
- the present invention provides a method of operation of a display system, including: calculating a focus measure for an original image; calculating a segment mean based on the focus measure for a segment; generating an ordered segment based on the segment mean; generating a segment depth based on the ordered segment; and generating a three-dimensional image with the segment depth for displaying on a device.
- the present invention provides a display system, including: a focus calculation module for calculating a focus measure for an original image; a mean calculation module, coupled to the focus calculation module, for calculating a segment mean based on the focus measure for a segment; a segment order module, coupled to the mean calculation module, for generating an ordered segment based on the segment mean; a depth assignment module, coupled to the segment order module, for generating a segment depth based on the ordered segment; and a three-dimensional generation module, coupled to the depth assignment module, for generating a three-dimensional image with the segment depth for displaying on a device.
- FIG. 1 is a display system with image conversion mechanism in an embodiment of the present invention.
- FIG. 2 is an exemplary block diagram of the device.
- FIG. 3 is an example of an operation of the display system.
- FIG. 4 is a control flow of the display system.
- FIG. 5 is an example of the three-dimensional image.
- FIG. 6 is a flow chart of a method of operation of the display system in a further embodiment of the present invention.
- the modules referred to herein can include software, hardware, or a combination thereof.
- the software can be machine code, firmware, embedded code, and application software.
- the hardware can be circuitry, processor, computer, integrated circuit, integrated circuit cores, a camera, a camcorder, a microelectromechanical system (MEMS), passive devices, or a combination thereof.
- Some approaches utilize segmentation for depth assignment, estimating the relative motion of segments across successive images by image matching; such an approach can be considered a depth-assignment algorithm that combines segmentation and motion.
- the display system 100 can include a device 104 .
- the device 104 is defined as an electronic machine capable of storing and computing digital data.
- the device 104 can be any of a variety of mobile devices, such as a cellular phone, a personal digital assistant, a tablet, a notebook computer, a tablet PC, a tabletop computer, a smart surface, or another multi-functional mobile communication or entertainment device.
- the device 104 can be an electronic machine, such as a camera, a mainframe, a server, a cluster server, a rack-mounted server, or a blade server, or as more specific examples, an IBM System z10™ Business Class mainframe or an HP ProLiant ML™ server.
- the device 104 can be a specialized machine, such as a streaming entertainment device, a portable computing device, a digital camera, a thin client, a notebook, a netbook, a smartphone, a personal digital assistant, or a cellular phone, and as specific examples, a Samsung Galaxy Tab™, a Samsung 55″ Class LED 8000 Series Smart TV, a Samsung 3D Blu-ray Disc™ Player, an Apple iPad™, an Apple iPhone™, a Palm® Centro™, or a MOTO Q™ global.
- the device 104 can be a standalone device, or can be incorporated with a larger electronic system, for example a home theatre system, a personal computer, or a vehicle.
- the device 104 can be coupled to a communication path 106 to communicate with external devices, such as an external display 108 and a capture device 110 .
- the communication path 106 is defined as an interconnection between electronic terminals.
- the communication path 106 can be a variety of networks.
- the communication path 106 can include wireless communication, wired communication, optical, ultrasonic, or the combination thereof.
- Satellite communication, cellular communication, Bluetooth, Infrared Data Association standard (IrDA), wireless fidelity (WiFi), and worldwide interoperability for microwave access (WiMAX) are examples of wireless communication that can be included in the communication path 106 .
- Ethernet, digital subscriber line (DSL), fiber to the home (FTTH), and plain old telephone service (POTS) are examples of wired communication that can be included in the communication path 106 .
- the communication path 106 can traverse a number of network topologies and distances.
- the communication path 106 can include direct connection, personal area network (PAN), local area network (LAN), metropolitan area network (MAN), wide area network (WAN) or any combination thereof.
- the external display 108 is defined as a device for displaying stored images of the display system 100 .
- the external display 108 can be, for example, a 3D TV, a pair of goggles, an LCD screen, or a touch screen.
- the external display 108 can display observable depths of images and motion images, and is capable of displaying three-dimensionally.
- the capture device 110 is defined as a device for recording images for the display system 100 .
- the capture device 110 can be, for example, a digital camera, a camcorder, a webcam, or an array of sensors.
- the display system 100 is described with the device 104 as a mobile computing device, although it is understood that the device 104 can be different types of computing devices.
- the device 104 can also be a non-mobile computing device, such as a server, a server farm, or a desktop computer.
- the device 104 can include a user interface 202 , a control unit 204 , and a storage unit 206 .
- the user interface 202 can include a display interface 208 .
- the control unit 204 can include a control interface 210 .
- the storage unit 206 can include a storage interface 212 .
- the user interface 202 allows a user to interface and interact with the device 104 .
- the user interface 202 can include an input device and an output device. Examples of the input device of the user interface 202 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, a camera, a webcam, or a combination thereof to provide data and communication inputs.
- Examples of the output device of the user interface 202 can include the display interface 208.
- the display interface 208 can include a display, a projector, a video screen, a speaker, or a combination thereof.
- the display interface 208 can also be a touch screen, such that inputs can be received from the display interface 208 .
- the control unit 204 can execute a software 214 to provide the intelligence of the device 104 .
- the control unit 204 can operate the user interface 202 to display information generated by the device 104 .
- the control unit 204 can also execute the software 214 for the other functions of the device 104 , including receiving image information from the capture device 110 of FIG. 1 .
- the control unit 204 can further execute the software 214 for adjusting and updating the image information to display on or through the display interface 208 .
- the control unit 204 can be implemented in a number of different manners.
- the control unit 204 can be a processor, an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine, a digital signal processor, or a combination thereof.
- the control unit 204 can include the control interface 210 .
- the control interface 210 can be used for communication between the control unit 204 and other modules in the device 104 .
- the control interface 210 can also be used for communication that is external to the device 104 .
- the control interface 210 can receive information from the other modules or from external sources, or can transmit information to the other modules or to external destinations.
- the external sources and the external destinations refer to sources and destinations external to the device 104 .
- the control interface 210 can be implemented in different ways and can include different implementations, depending on which modules or external units are interfacing with the control interface 210 .
- the control interface 210 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system, optical circuitry, waveguides, wireless circuitry, wireline circuitry, or a combination thereof.
- the storage unit 206 can store the software 214 .
- the storage unit 206 can also store the relevant information, such as advertisements, preferred settings, operating system, previous adjustments and updates, or a combination thereof.
- the storage unit 206 can be a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof.
- the storage unit 206 can be a nonvolatile storage such as non-volatile random access memory, Flash memory, disk storage, or a volatile storage such as static random access memory.
- the storage unit 206 can include the storage interface 212 .
- the storage interface 212 can be used for communication between the control unit 204 and other modules in the device 104 .
- the storage interface 212 can also be used for communication that is external to the device 104 .
- the storage interface 212 can receive information from the other modules or from external sources, or can transmit information to the other modules or to external destinations.
- the storage interface 212 can be implemented differently depending on which modules or external units are being interfaced with the storage unit 206 .
- the storage interface 212 can be implemented with technologies and techniques similar to the implementation of the control interface 210 .
- the display system 100 can process an original image 302 to generate a segmentation map 304 and an edge map 306 .
- the original image 302 depicts a landscape 312 and mountains 314 with a sky 316 over the landscape 312 and clouds 318 in the sky 316 .
- Generation of the segmentation map 304 and the edge map 306 will be further described in a subsequent section.
- the segmentation map 304 is defined as a representation of the original image 302 that has been partitioned into multiple portions or sets of pixels.
- the edge map 306 is defined as a representation of the original image 302 that identifies points in the original image 302 at which image brightness changes sharply or has discontinuities.
- the display system 100 can generate a depth map 310 , which is defined as a representation of the original image 302 with depths assigned to the portions of the original image 302 .
- the depths are defined as distances from a planar surface providing a three-dimensional image. Generation of the depth map 310 will be further described in a subsequent section.
- the original image 302 can include multiple features, which are defined as objects that are shown in the original image 302 .
- the features can include human faces, printed characters, vehicles, landscapes, or any other objects that are captured when the original image 302 was generated.
- the features in the original image 302 are shown to include the landscape 312 , the mountains 314 , the sky 316 , and the clouds 318 .
- the depths in the depth map 310 are shown to include a landscape depth 320 , a mountain depth 322 , a sky depth 324 , and a cloud depth 326 , which correspond to the landscape 312 , the mountains 314 , the sky 316 , and the clouds 318 , respectively.
- the order of the depths can be increasing from the sky depth 324, through the cloud depth 326 and the mountain depth 322, to the landscape depth 320, with the landscape depth 320 having the largest value, indicating that the landscape 312 is closest to a viewer, and the sky depth 324 having the smallest value, indicating that the sky 316 is farthest from the viewer.
- FIG. 3 depicts an x-axis and a y-axis that are lines in a plane of the original image 302 , the segmentation map 304 , the edge map 306 , and the depth map 310 .
- the x-axis and the y-axis represent a horizontal line and a vertical line, respectively, along a longer side and a shorter side, respectively, of the original image 302 , the segmentation map 304 , the edge map 306 , and the depth map 310 .
- the display system 100 can include a segmentation module 402 , which is defined as a module that partitions the original image 302 into multiple segments 404 .
- the segments 404 are defined as groups of pixels of the original image 302 .
- the segments 404 can be generated from the original image 302 .
- each of the segments 404 can be denoted as R.
- the original image 302 can be partitioned by dividing or segmenting the original image 302 into N of the segments 404, denoted as R_1 . . . R_N.
- the segmentation module 402 can generate the segmentation map 304 of FIG. 3 with a number of the segments 404 .
- the segmentation module 402 can generate the segments 404 in a number of ways.
- the segmentation module 402 can include histogram-based methods.
- the segments 404 can be generated using a histogram computed from all of pixels in the original image 302 with peaks and valleys in the histogram used to locate clusters of the pixels in the original image 302 . Color or intensity can be used for measurement and identification of the peaks and the valleys.
- the segmentation module 402 can include clustering methods.
- the segments 404 can be generated using a K-means method, which is an iterative technique that is used to partition an image into K clusters, by assigning each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster center and re-computing the cluster centers by averaging all of the pixels in the cluster.
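- As an illustrative sketch only, the K-means clustering described above can be written in a few lines of Python/NumPy; clustering on raw RGB color, the cluster count k, the iteration count, and the function name are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def kmeans_segment(image, k=4, iterations=10, seed=0):
    """Partition an image into k segments by clustering pixel colors.

    image: (H, W, C) array. Returns an (H, W) integer label map,
    the segmentation-map analog of R_1 . . . R_N.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    rng = np.random.default_rng(seed)
    # Initialize the cluster centers from randomly chosen pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to the nearest cluster center.
        distances = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Re-compute each center as the mean of the pixels assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```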
- the segmentation module 402 can generate the segments 404 using a dynamic template generation.
- a dynamic template is a set of segmentations, which are generated by segmentation logic.
- One of the segmentations in the dynamic template can share similar depths and thus can share the same degrees of focus.
- the display system 100 can include a focus estimation module 406 , which is defined as a module that determines image clarity for pixels or positions on the original image 302 .
- the focus estimation module 406 can include a filter module 408 , which is defined as a module that measures magnitudes of frequency changes of the original image 302 .
- the filter module 408 can generate a filter response 409 , which is defined as a frequency response having predetermined frequency components of the original image 302 .
- the predetermined frequency components are defined as frequencies above a cutoff frequency, which is defined as the frequency at which the output voltage amplitude is −3 dB relative to the input voltage amplitude.
- the filter response 409 can represent an arbitrary focus measure function of the filter module 408 .
- a focus measure function can include any methods of measuring a degree of focus.
- an out-of-focus area can be blurry, and an in-focus area can be sharp.
- the degree of focus can be estimated by measuring high frequency components in a target area.
- the filter module 408 measures magnitudes of frequency changes of pixel characteristics at or near image pixels.
- the pixel characteristics are measurable qualities of pixels of the original image 302 .
- the pixel characteristics can include color, intensity, texture, tone, saturation, or a combination thereof.
- the filter module 408 can include a high pass filter to measure magnitudes of high frequency changes of the pixel characteristics.
- the filter response 409 can include a representation of the original image 302 with texture information.
- Texture information refers to high frequency areas or high frequency components of the original image 302 that are passed through the filter module 408 .
- the high frequency areas include details resulting in a sharpened image.
- the display system 100 can include a focus calculation module 410 , which is defined as a module that determines clarity of an image or portions of the image by calculating focus measures 412 for positions 414 on the original image 302 .
- Clarity is defined as lack of blurriness or a degree of focus of an object in an image.
- the focus measures 412 are defined as magnitudes determined by assigning each image point to a quantifiable magnitude of clarity.
- the focus measures 412 are edge strengths, which are degrees of focus, of the original image 302 .
- the focus measures 412 can include measures of how closely light rays originating from a surface point of an object converge to the image point.
- the focus calculation module 410 can generate the edge map 306 of FIG. 3 based on the filter response 409 and the focus measures 412 .
- the positions 414 are defined as specific pixels within an image.
- the positions 414 can be represented by coordinates along an x-axis and a y-axis, as depicted in FIG. 3 , for a two-dimensional (2D) image including the original image 302 .
- a method of generating the focus measures 412 for the positions 414, one of which is denoted as (x,y), in the original image 302 can be expressed by the following equation:
- H(x,y) = F(x,y) ● I(x,y) (1)
- I(x,y) is the original image 302
- F(x,y) is the filter response 409 of the filter module 408
- ● denotes a general operation between functions, including convolution
- H(x,y) is a function that describes a degree of focus at the point of (x,y).
- the focus measures 412 can be generated with the function H(x,y).
- the focus measures 412 can be generated by calculating a convolution of the original image 302 and the filter response 409 at each of the positions 414 .
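- As a sketch of equation (1), assuming a 3×3 Laplacian kernel for the filter response F(x,y) and the absolute convolution response as H(x,y); the patent leaves the focus measure function arbitrary, so both choices are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

# An example high-pass filter F(x,y): a 3x3 Laplacian kernel.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def focus_measure(image):
    """H(x,y) = F(x,y) (*) I(x,y): convolve the image with a high-pass filter
    and take the response magnitude as the per-pixel degree of focus."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return np.abs(convolve(gray, LAPLACIAN, mode="nearest"))
```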
- the display system 100 can include a mean estimation module 415 , which is defined as a module that calculates an average of the focus measures 412 for each of the segments 404 in the segmentation map 304 .
- the display system 100 can process averages of the focus measures 412 of the segments to generate the depth map 310 of FIG. 3 .
- the depth map 310 can include the landscape depth 320 of FIG. 3 , the mountain depth 322 of FIG. 3 , the sky depth 324 of FIG. 3 , and the cloud depth 326 of FIG. 3 .
- the mean estimation module 415 can include a sum-of-degree module 416 , which is defined as a module that determines a sum of degrees of focus at each position in a segment of the two-dimensional image.
- the sum-of-degree module 416 can generate a focus degree sum 418, which is defined as a sum of degrees of focus at each of the positions 414 in one of the segments 404, and which can be expressed by the following equation:
- focus_degree_sum = Σ_{(x,y)∈R_k} H(x,y) (2)
- H(x,y) is one of the focus measures 412
- (x,y) is one of the positions 414
- R_k is the k-th of the segments 404
- Σ denotes a summation operation
- the focus degree sum 418 of one of the segments 404 can be calculated based on the focus measures 412 and the segments 404 .
- the focus degree sum 418 of one of the segments 404 can be calculated by calculating a summation of the focus measures 412 at the positions 414 in the one of the segments 404 .
- the mean estimation module 415 can include a sum-of-pixel module 420 , which is defined as a module that determines a sum of pixels in a segment of the two-dimensional image.
- the sum-of-pixel module 420 can generate a segment pixel sum 422, which is defined as a sum of a number of pixels in one of the segments 404, and which can be expressed by the following equation:
- segment_pixel_sum = Σ_{(x,y)∈R_k} 1 (3)
- the segment pixel sum 422 can be calculated based on the segments 404 .
- the segment pixel sum 422 can be calculated by calculating a summation of a number of the positions 414 in the one of the segments 404 .
- the mean estimation module 415 can include a mean calculation module 424 , which is defined as a module that determines an average of a number of the focus degree sum 418 of all of the positions 414 in each of the segments 404 .
- the mean calculation module 424 can generate a segment mean 426 , which is defined as an average of a number of the focus degree sum 418 of all of the positions 414 in each of the segments 404 .
- the segment mean 426 can be expressed by the following equation.
- S_k = (1 / segment_pixel_sum) × focus_degree_sum (4)
- S_k is the segment mean 426 in the k-th region, which is one of the segments 404.
- the segment mean 426 can be calculated by dividing the focus degree sum 418 by the segment pixel sum 422 .
- the segment mean 426 can be calculated by substituting the focus degree sum 418 and the segment pixel sum 422 in equation 4 with equations 2 and 3, respectively, as expressed by the following equation:
- S_k = Σ_{(x,y)∈R_k} H(x,y) / Σ_{(x,y)∈R_k} 1 (5)
- H(x,y) is one of the focus measures 412
- (x,y) is one of the positions 414
- R_k is the k-th of the segments 404
- Σ denotes a summation operation.
- the segment mean 426 can represent a mean of a high frequency component or a focus measure component for each segmented area.
- the segment mean 426 can represent a mean of focus measure in the k th region, denoted as S k .
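- Equations (2) through (5) reduce to a per-segment average of H(x,y). A compact NumPy sketch, assuming the label map and focus map produced by the earlier sketches:

```python
import numpy as np

def segment_means(labels, focus):
    """S_k: the mean of the focus measure H(x,y) over each segment R_k,
    per equations (2) through (5)."""
    n = int(labels.max()) + 1
    # Equation (2): sum of degrees of focus over each segment.
    focus_degree_sum = np.bincount(labels.ravel(), weights=focus.ravel(), minlength=n)
    # Equation (3): number of pixels in each segment.
    segment_pixel_sum = np.bincount(labels.ravel(), minlength=n)
    # Equations (4)/(5): the per-segment mean (guarding against empty segments).
    return focus_degree_sum / np.maximum(segment_pixel_sum, 1)
```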
- the display system 100 can include a segment order module 428 , which is defined as a module that determines an order of means of focus measure values of segmented areas.
- the segment order module 428 can generate ordered segments 430 , which are defined as an arrangement or an order of means of focus measure values of segmented areas.
- the ordered segments 430 can be generated by arranging all of the segments 404 in a segment order 432 based on a value of the segment mean 426 of each of the segments 404 .
- the segment order 432 is defined as an arrangement of segmented areas of the two-dimensional image.
- the segment order 432 can be predetermined by configuring the segment order 432 to a known or fixed state prior to generating the ordered segments 430 .
- the segment order 432 can be stored in the storage unit 206 of FIG. 2 and read by the segment order module 428 .
- the segment order 432 can preferably include an increasing order of focus measure values, starting with the lowest focus measure value and ending with the highest focus measure value, although the segment order 432 can include any order.
- the segment order 432 can include a decreasing order.
- the display system 100 can include a depth assignment module 434 , which is defined as a module that determines a depth value of each segmented area.
- the depth assignment module 434 can generate segment depths 436 , which are defined as values of depth of segmented areas.
- the segment depths 436 can be generated based on the ordered segments 430 .
- the depth assignment module 434 can generate the depth map 310 with the segment depths 436 .
- One of the useful cues for depth estimation from a 2D image can include focus information, such as the focus measures 412 .
- focus information such as the focus measures 412 .
- an in-focus area can include a high depth value
- an out-of-focus area can include a low depth value.
- Depth assignment can be implemented by ordering each of segment patches, such as the segments 404 , to generate the ordered segments 430 based on mean values of high frequency components of each of the segment patches.
- the ordered segments 430 can be generated based on the segment mean 426 of a high frequency component of each of the segments 404 .
- the segment depths 436 can be generated based on the ordered segments 430 .
- the segment depths 436 can be generated based on the segment order 432 by which the segment mean 426 of each of the segments 404 is arranged.
- the higher the value of the segment mean 426, the higher the value of the corresponding one of the segment depths 436.
- one of the segments 404 having one of the segment depths 436 with the highest value can be closest to the viewer.
- the depth assignment method is not constrained to any particular method; it can include any method that assigns a value to each of the segment depths 436 for each of the ordered segments 430.
- one of the segment depths 436 at pixel (x,y), which is one of the positions 414, can be expressed by the following equation:
- D(x,y)|_{(x,y)∈R_k} = D_k (6)
- S_k is the segment mean 426 of the k-th of the segments 404
- (x,y) is one of the positions 414
- ∈ denotes membership in a set
- R_k is the k-th of the segments 404
- D(x,y) is one of the segment depths 436 at one of the positions 414.
- let S_(i) denote the i-th smallest focus measure value among {S_1, S_2, . . . , S_N}, where S_(1) ≤ S_(2) ≤ . . . ≤ S_(N).
- the segment depths 436 can be assigned to each of the ordered segments 430, denoted as S_(i).
- the segment depths 436 can be expressed by the following equation: D_1 < D_2 < . . . < D_N (7)
- {D_1, D_2, . . . , D_N} represents a predetermined set of depth values for {S_1, S_2, . . . , S_N}.
- the predetermined set of depth values is configured to known or fixed values prior to assigning the segment depths 436 to each of the ordered segments 430, denoted as S_(i).
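- Equations (6) and (7) amount to ranking the segments by their means and mapping each rank onto a predetermined increasing set of depth values. The sketch below assumes evenly spaced values in [0, 1] for that predetermined set:

```python
import numpy as np

def assign_depths(labels, means):
    """D(x,y) = D_k for (x,y) in R_k, with D_1 < D_2 < . . . < D_N assigned
    in increasing order of the segment means S_(1) <= . . . <= S_(N)."""
    n = len(means)
    order = np.argsort(means)                # segment indices, least focused first
    depth_values = np.linspace(0.0, 1.0, n)  # assumed predetermined depth set
    segment_depth = np.empty(n)
    segment_depth[order] = depth_values      # higher mean -> higher (closer) depth
    return segment_depth[labels]             # per-pixel depth map D(x,y)
```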
- Assignment of the segment depths 436 provides a novel way to assign depth to two-dimensional images.
- the features in the original image 302 depicted in FIG. 3 include, in order, the sky 316 of FIG. 3, the clouds 318 of FIG. 3, the mountains 314 of FIG. 3, and the landscape 312 of FIG. 3, with the landscape 312 representing the object that is more in focus than the other features.
- the landscape 312 also represents the object that is closer to a viewer than the other features.
- the depth assignment module 434 can generate the segment depths 436 , one of which has the largest value and is assigned to the landscape depth 320 of FIG. 3 and another of which has the smallest value and is assigned to the sky depth 324 of FIG. 3 .
- the display system 100 can include a three-dimensional generation module 438 , which is defined as a module that determines a three-dimensional image 440 .
- the three-dimensional image 440 is defined as an image generated with information from a two-dimensional image and depth values.
- the display system 100 can include a method for ordering the segment depths 436 of segmented objects, such as the ordered segments 430 , with a scene in the original image 302 having texture information to generate the three-dimensional image 440 .
- the three-dimensional generation module 438 can generate the three-dimensional image 440 with the original image 302 and the segment depths 436 .
- the three-dimensional image 440 can be processed and stored in the storage unit 206 for displaying on the device 104 of FIG. 1 , the external display 108 of FIG. 1 , or a combination thereof.
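- The patent does not prescribe how the three-dimensional generation module 438 combines the original image 302 with the segment depths 436. One common technique is simple depth-image-based rendering, which synthesizes a stereo pair by shifting pixels horizontally in proportion to depth; the sketch below assumes that technique, with the disparity scale and hole handling chosen arbitrarily.

```python
import numpy as np

def render_stereo_pair(image, depth, max_disparity=8):
    """Naive depth-image-based rendering: shift each pixel horizontally in
    proportion to its depth to synthesize left and right views. Unfilled
    positions simply keep the original pixel."""
    h, w = depth.shape
    left, right = image.copy(), image.copy()
    xs = np.arange(w)
    for y in range(h):
        d = (depth[y] * max_disparity / 2).astype(int)  # per-pixel half disparity
        left[y, np.clip(xs + d, 0, w - 1)] = image[y]   # closer pixels shift right
        right[y, np.clip(xs - d, 0, w - 1)] = image[y]  # and left, respectively
    return left, right
```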
- the display system 100 can include any ordering method. It is also understood that the display system 100 can include any depth assigning method.
- the control flow can optionally include a check for a number of conditions at the beginning of the control flow.
- the control flow can include a check for the original image 302 having a scene full of texture or full of high frequency areas.
- the control flow can include a check for the original image 302 having a scene with an out-of-focus object closer to a viewer than an in-focus object.
- the display system 100 can optionally bypass the modules described above and generate the three-dimensional image 440 as identical to the original image 302, without generating any of the segment depths 436.
- the segment order module 428 generating the ordered segments 430 with the segment order 432, for the depth assignment module 434 to assign the segment depths 436, provides improved 2D-to-3D conversion.
- distinctive depth information, such as the segment depths 436, can be generated with segmentation information, such as the segments 404 extracted from the original image 302 and generated with the segmentation module 402, thereby eliminating problems caused by focus measures that are not distinctive from object to object in cases where depth perception is crucial.
- utilizing a good focus measure, such as the edge map 306 and the focus measures 412 generated by the focus calculation module 410, provides an effective method of generating and assigning depths, such as the segment depths 436, to generate the three-dimensional image 440 with the original image 302 and the segment depths 436 for 2D-to-3D conversion.
- the focus degree sum 418 generated by the sum-of-degree module 416 and the segment pixel sum 422 generated by the sum-of-pixel module 420 provide generation of the segment mean 426 to effectively calculate the ordered segments 430 .
- the filter module 408 allows accurate calculation of the focus measures 412 by the focus calculation module 410 .
- the focus calculation module 410 can be coupled to the filter module 408 .
- the focus calculation module 410 and the segmentation module 402 can be coupled to the sum-of-degree module 416 .
- the sum-of-pixel module 420 can be coupled to the segmentation module 402 .
- the sum-of-pixel module 420 and the sum-of-degree module 416 can be coupled to the mean calculation module 424 .
- the segment order module 428 can be coupled to the mean calculation module 424 and the depth assignment module 434 .
- the depth assignment module 434 can be coupled to the three-dimensional generation module 438 .
- the segmentation module 402 can be implemented with the user interface 202 of FIG. 2 , the control unit 204 of FIG. 2 , the control interface 210 of FIG. 2 , the storage unit 206 , the storage interface 212 of FIG. 2 , and the software 214 of FIG. 2 .
- the user interface 202 , the control interface 210 , or a combination thereof can be implemented to receive the original image 302 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the segmentation map 304 and the segments 404 .
- the filter module 408 can be implemented with the user interface 202 , the control unit 204 , the control interface 210 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the user interface 202 , the control interface 210 , or a combination thereof can be implemented to receive the original image 302 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to measure magnitudes of frequency changes of the pixel characteristics of the original image 302 .
- the focus calculation module 410 can be implemented with the user interface 202 , the control unit 204 , the control interface 210 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the user interface 202 , the control interface 210 , or a combination thereof can be implemented to receive the original image 302 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the edge map 306 and the focus measures 412 .
- the sum-of-degree module 416 can be implemented with the control unit 204 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the focus degree sum 418 .
- the sum-of-pixel module 420 can be implemented with the control unit 204 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the segment pixel sum 422 .
- the mean calculation module 424 can be implemented with the control unit 204 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the segment mean 426 .
- the segment order module 428 can be implemented with the user interface 202 , the control unit 204 , the control interface 210 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the user interface 202 , the control interface 210 , or a combination thereof can be implemented to preset or pre-configure the segment order 432 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the ordered segments 430 .
- the depth assignment module 434 can be implemented with the control unit 204 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the segment depths 436 .
- the three-dimensional generation module 438 can be implemented with the user interface 202 , the control unit 204 , the display interface 208 of FIG. 2 , the control interface 210 , the storage unit 206 , the storage interface 212 , and the software 214 .
- the user interface 202 , the display interface 208 , the control interface 210 , or a combination thereof can be implemented to display the three-dimensional image 440 .
- the control unit 204 , the storage interface 212 , the software 214 , or a combination thereof can be implemented to generate the three-dimensional image 440 .
- the physical transformation from displaying the three-dimensional image 440 results in movement in the physical world, such as people moving in response to the three-dimensional image 440 when playing games or viewing the three-dimensional image 440 .
- the display interface 208 can display the three-dimensional image 440 by manipulating pixels at one of the positions 414 on the device 104 , thus resulting in movement in the physical world.
- the display system 100 describes the module functions or order as an example.
- the modules can be partitioned differently.
- the segment order module 428 and the depth assignment module 434 can be implemented together in one module.
- Each of the modules can operate individually and independently of the other modules.
- data generated in one module can be used by another module without being directly coupled to each other.
- the modules can be implemented in a number of different fashions, but the term “module” refers to a hardware implementation depending on context, as used in this application including the claims that follow.
- the “modules” can be hardware implementations as specialized hardware blocks separate from those shown in FIG. 2 or can be part of those shown in FIG. 2, such as the control unit 204 or the display interface 208.
- Referring now to FIG. 5, therein is shown an example of the three-dimensional image 440.
- the three-dimensional image 440 is depicted with the features in the original image 302 of FIG. 3 including the landscape 312 , the mountains 314 , the sky 316 , and the clouds 318 .
- the landscape 312 , the mountains 314 , the sky 316 , and the clouds 318 are shown in the three-dimensional image 440 based on the landscape depth 320 of FIG. 3 , the mountain depth 322 of FIG. 3 , the sky depth 324 of FIG. 3 , and the cloud depth 326 of FIG. 3 , respectively.
- depths of the features are represented by a density of horizontal lines in the three-dimensional image 440 as an example.
- the density of the horizontal lines is a number of the horizontal lines per unit area in the three-dimensional image 440.
- a feature having the lowest depth value can be considered farthest from the viewer, and thus, its depth can be represented by the highest density of the horizontal lines.
- a feature having the highest depth value can be considered closest to the viewer, and thus, its depth can be represented by the lowest density of the horizontal lines.
- an order of the depths can be in an increasing order of the sky depth 324 , the cloud depth 326 , the mountain depth 322 , and the landscape depth 320 .
- the sky 316 is shown having the highest density of the horizontal lines for being farthest from the viewer, and the landscape 312 is shown having the lowest density of the horizontal lines for being closest to the viewer.
- the method 600 includes: calculating a focus measure for an original image in a block 602 ; calculating a segment mean based on the focus measure for a segment in a block 604 ; generating an ordered segment based on the segment mean in a block 606 ; generating a segment depth based on the ordered segment in a block 608 ; and generating a three-dimensional image with the segment depth for displaying on a device in a block 610 .
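- For orientation only, the earlier sketches can be chained in the order of the blocks of the method 600; the random test image and the cluster count are placeholders.

```python
import numpy as np

# Assumes kmeans_segment, focus_measure, segment_means, assign_depths, and
# render_stereo_pair from the earlier sketches are in scope.
original_image = np.random.rand(120, 160, 3)   # placeholder for a real photo

focus = focus_measure(original_image)          # block 602: focus measure
labels = kmeans_segment(original_image, k=5)   # segmentation (FIG. 4)
means = segment_means(labels, focus)           # block 604: segment means
depth_map = assign_depths(labels, means)       # blocks 606 and 608: order, depth
left, right = render_stereo_pair(original_image, depth_map)  # block 610: 3D image
```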
- the display system of the present invention furnishes important and heretofore unknown and unavailable solutions, capabilities, and functional aspects for a display system with image conversion mechanism.
- the resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization.
- Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/224,271 US8749548B2 (en) | 2011-09-01 | 2011-09-01 | Display system with image conversion mechanism and method of operation thereof |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20130057537A1 (en) | 2013-03-07 |
| US8749548B2 (en) | 2014-06-10 |
Family
ID=47752785
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/224,271 (US8749548B2, Expired - Fee Related) | Display system with image conversion mechanism and method of operation thereof | 2011-09-01 | 2011-09-01 |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US8749548B2 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5674023B2 (en) | 2011-01-27 | 2015-02-18 | ソニー株式会社 | Light source device and display device |
| JP4973794B1 (en) | 2011-04-06 | 2012-07-11 | ソニー株式会社 | Display device |
| JP2012237961A (en) * | 2011-04-28 | 2012-12-06 | Sony Corp | Display device and electronic apparatus |
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1091786A (en) | 1996-09-17 | 1998-04-10 | Iwane Kenkyusho:Kk | Three-dimensional image information extraction method and image forming method using the extraction method |
| US20060165315A1 (en) * | 2003-01-06 | 2006-07-27 | Ernst Fabian E | Method and apparatus for depth ordering of digital images |
| US7324687B2 (en) * | 2004-06-28 | 2008-01-29 | Microsoft Corporation | Color segmentation-based stereo 3D reconstruction system and process |
| US20080303894A1 (en) * | 2005-12-02 | 2008-12-11 | Fabian Edgar Ernst | Stereoscopic Image Display Method and Apparatus, Method for Generating 3D Image Data From a 2D Image Data Input and an Apparatus for Generating 3D Image Data From a 2D Image Data Input |
| KR20090080556A (en) | 2006-12-22 | 2009-07-24 | 퀄컴 인코포레이티드 | Complexity-adaptive 2d-to-3d video sequence conversion |
| US8330801B2 (en) * | 2006-12-22 | 2012-12-11 | Qualcomm Incorporated | Complexity-adaptive 2D-to-3D video sequence conversion |
| KR20100034789A (en) | 2008-09-25 | 2010-04-02 | 삼성전자주식회사 | Method and apparatus for generating depth map for conversion two dimensional image to three dimensional image |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130063424A1 (en) * | 2011-07-12 | 2013-03-14 | Nobuo Ueki | Image processing device, image processing method, and image processing program |
| US9071832B2 (en) * | 2011-07-12 | 2015-06-30 | Sony Corporation | Image processing device, image processing method, and image processing program |
| US20150379720A1 (en) * | 2013-01-31 | 2015-12-31 | Threevolution Llc | Methods for converting two-dimensional images into three-dimensional images |
| CN105787429A (en) * | 2015-01-08 | 2016-07-20 | 通用汽车环球科技运作有限责任公司 | Method and apparatus for inspecting an object employing machine vision |
| CN105787429B (en) * | 2015-01-08 | 2019-05-31 | 通用汽车环球科技运作有限责任公司 | The method and apparatus for being used to check object using machine vision |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HONG, HUNSOP;KIM, YEONG-TAEG;SIGNING DATES FROM 20110831 TO 20110901;REEL/FRAME:026847/0881 |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551). Year of fee payment: 4 |
| | FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| | FP | Lapsed due to failure to pay maintenance fee | Effective date: 20220610 |