WO2011155698A2 - Method and Apparatus for Improving Stereoscopic Image Errors - Google Patents
Method and Apparatus for Improving Stereoscopic Image Errors
- Publication number
- WO2011155698A2 (PCT/KR2011/002473)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- error
- depth map
- map information
- image data
- threshold
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
Definitions
- One embodiment of the present invention relates to a method and apparatus for improving stereoscopic image errors.
- More particularly, the invention determines errors in the depth map information and corrects them, so that the output stereoscopic image is not degraded by those errors.
- Stereoscopic image processing is a core technology for next-generation information and communication services, and competition to develop it has intensified as the information society has advanced.
- Stereoscopic image processing is an essential element for providing high-quality video services in multimedia applications.
- Accordingly, an embodiment of the present invention provides a method and apparatus for improving stereoscopic image errors, so that errors in the depth map information do not degrade the stereoscopic effect when stereoscopic image content is output.
- A spatial histogram generator that generates spatial histogram information using the depth map of the input image data;
- a peak frequency generator that generates a peak frequency using the 2D image data of the input image data;
- an object analyzer that determines an error for each frame of the input image data based on the spatial histogram and the peak frequency;
- a depth map error corrector that modifies the depth map information when an error is determined for a frame, so that the error is corrected;
- and a rendering processor that generates a left-view image and a right-view image from the modified depth map information to construct a stereoscopic image.
- The corresponding method comprises: a spatial histogram generation step of generating spatial histogram information using the depth map information of the input image data; a peak frequency generation step of generating a peak frequency using the 2D image data of the input image data; an object analysis step of determining an error for each frame of the input image data based on the spatial histogram and the peak frequency; a depth map error correction step of modifying the depth map information so that the error is corrected when an error is determined for the frame; and a rendering step of generating a left-view image and a right-view image from the modified depth map information to construct a stereoscopic image.
- The object analysis step comprises: identifying partial values that deviate from the standard deviation of the spatial histogram information generated for each object, where objects are classified according to the depth map information in each frame of the 2D image data;
- an error prediction area classification step of classifying the area corresponding to such a partial value as an error prediction area when the value exceeds the first threshold in the positive (+) or negative (−) direction of the vertical (Y) axis;
- a direction result confirmation step of confirming, when the peak frequency of the error prediction area exceeds a second threshold in the + or − direction, whether the direction in which the first threshold was exceeded matches the direction in which the second threshold was exceeded;
- and an error determination step of determining that the depth map information of the error prediction area contains an error when the confirmation shows that the directions do not match.
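The direction-matching rule above can be sketched as follows; the function names and the sign convention for "direction" are illustrative assumptions, not the patent's notation:

```python
def sign_exceeds(value, threshold):
    """Return +1 or -1 if value exceeds the threshold in that direction
    along the vertical (Y) axis, else 0."""
    if value > threshold:
        return 1
    if value < -threshold:
        return -1
    return 0

def depth_map_error(histogram_deviation, peak_frequency,
                    first_threshold, second_threshold):
    """An error is determined when both measures exceed their thresholds
    but the directions of exceedance do not match."""
    d1 = sign_exceeds(histogram_deviation, first_threshold)
    d2 = sign_exceeds(peak_frequency, second_threshold)
    return d1 != 0 and d2 != 0 and d1 != d2
```

When either measure stays within its threshold, or both exceed in the same direction, no error is flagged; only a mismatch of directions marks the region as erroneous.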
- The corresponding error improvement method comprises: in the process of providing a stereoscopic image by applying depth map information to input 2D image data, confirming the partial values that deviate from the standard deviation of the spatial histogram information for the error determination region, i.e. the region determined to contain an error;
- identifying partial values that deviate from the standard deviation of the spatial histogram information generated for each object classified according to the depth map information in each frame of the 2D image data;
- an error prediction area classification step of classifying the area corresponding to such a partial value as an error prediction area when the value exceeds a first threshold;
- a direction result confirmation step of confirming, when the peak frequency of the error prediction area exceeds a second threshold in the + or − direction, whether the direction in which the first threshold was exceeded matches the direction in which the second threshold was exceeded;
- an error determination step of determining that the depth map information of the error prediction area contains an error when the directions do not match;
- and an error improvement step of increasing or decreasing the partial value to the standard deviation value of the spatial histogram, according to the direction in which the partial value exceeded the threshold.
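A minimal sketch of the error improvement step, clamping outlying depth values back toward the standard deviation band; the band width and function name are assumptions for illustration:

```python
import statistics

def correct_outliers(depth_values, num_std=1.0):
    """Clamp values lying outside mean +/- num_std standard deviations
    back to the nearest band edge -- the 'increase or decrease to the
    standard deviation value' of the error improvement step."""
    mean = statistics.mean(depth_values)
    sd = statistics.pstdev(depth_values)
    lo, hi = mean - num_std * sd, mean + num_std * sd
    return [min(max(v, lo), hi) for v in depth_values]
```

Values inside the band are left untouched, so only the suspected error values are adjusted.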
- According to an embodiment of the present invention, errors in the depth map information are determined and corrected, so that the stereoscopic effect is not degraded by such errors.
- According to the present invention, errors can be determined and corrected both in depth map information estimated from the 2D image data and in separate depth map information received together with the 2D image data. By correcting these errors before the stereoscopic image is output, the viewer perceives an improved three-dimensional effect.
- FIG. 1 is a block diagram schematically illustrating an apparatus for improving stereoscopic error according to an embodiment of the present invention
- FIG. 2 is a flowchart illustrating a method for improving stereoscopic error according to an embodiment of the present invention
- FIG. 3 is an exemplary diagram of a spatial histogram according to an embodiment of the present invention.
- FIG. 4 is an exemplary diagram of input 2D image data according to an embodiment of the present invention.
- FIG. 5 is an exemplary diagram of normal depth map information of a specific frame according to an embodiment of the present invention.
- FIG. 6 is an exemplary view illustrating an error predicted area of a specific frame according to an embodiment of the present invention.
- FIG. 7 is an exemplary diagram illustrating an error predicted area of a specific object according to an embodiment of the present invention.
- FFT: fast Fourier transform
- 100: stereoscopic image error improving apparatus
- 112: image receiving unit
- 140: object analysis unit
- 150: depth map error correction unit
- FIG. 1 is a block diagram schematically illustrating an apparatus for improving stereoscopic error according to an embodiment of the present invention.
- The stereoscopic image error improving apparatus 100 includes an image receiver 112, a depth map estimator 114, a depth map receiver 120, a spatial histogram generator 122, a 2D image receiver 130, a peak frequency generator 132, an object analyzer 140, a depth map error corrector 150, and a rendering processor 160.
- The stereoscopic image error improving apparatus 100 converts input image data into stereoscopic image data and, during this conversion, determines and corrects errors.
- the 3D video error improving apparatus 100 refers to a device capable of receiving 2D video data from a video content provider such as a broadcasting station and converting the 2D video data to a 3D video before displaying it.
- the 3D video error improving apparatus 100 may receive depth map information provided separately from the 2D video data from a video content provider such as a broadcasting station.
- the 3D video error improving apparatus 100 may determine and improve an error of the depth map information provided or estimated from an image content provider.
- the 3D video error improving device 100 may be implemented in a form mounted on a display device such as a TV or a monitor, or implemented as a separate device such as a set-top box to be linked to a display device.
- The present description covers the stereoscopic image error improving apparatus 100 only up to the conversion of input image data into the left-view and right-view images that make up the stereoscopic image; the stereoscopic or auto-stereoscopic display on which the result is then shown is not discussed separately.
- A stereoscopic (3D) image may be defined as an image in which, by applying depth map information, part of the scene appears to the user to protrude from the screen.
- It may also be defined as an image that provides the user with various viewpoints, and thereby a sense of reality. That is, the stereoscopic image described in the present invention is an image that gives the viewer an audiovisual sense of depth, making the scene lively and realistic.
- Stereoscopic images may be classified into binocular, multi-ocular, integral photography, multiview (omni, panorama), hologram, and similar types, according to the acquisition method, depth impression, and display method.
- a method of representing a stereoscopic image includes image-based representation and mesh-based representation.
- Two images taken from the left and the right, like the two human eyes, are required: the image taken from the left is shown to the left eye and the image taken from the right to the right eye, so that the viewer perceives a three-dimensional effect.
- Accordingly, two cameras are attached or two lenses are provided. Since a stereoscopic image requires both a left image and a right image, the image data is twice the size of a general broadcast, and transmitting it as a broadcast signal occupies twice the frequency bandwidth.
- The left and right images can each be reduced to half size and combined so that they fit the data size of a general broadcast signal, but because the data is compressed in the horizontal direction, data loss occurs.
- Alternatively, 2D image data corresponding to a general image can be transmitted together with depth map information describing the corresponding stereoscopic image. Since depth map information generally amounts to about 10% of the 2D image data, a stereoscopic image can be realized by adding only about 10% more information.
- Finally, the stereoscopic image error improving apparatus 100, acting as a receiver that converts 2D image data into stereoscopic image data, requires only the same amount of information as general image data, so a stereoscopic image can be realized without receiving any additional data.
- The signal formats for viewing stereoscopic images are therefore: transmitting both left and right image data; compressing the left and right images in the horizontal direction and transmitting them; transmitting 2D image data together with depth map information; or having the receiver, the stereoscopic image error improving apparatus 100, convert 2D image data into stereoscopic image data. When the image content provider transmits both left and right images, or the horizontally compressed pair, the receiver cannot modify the stereoscopic effect of the image.
- the 3D image error improving apparatus 100 may determine an error of depth map information capable of expressing 3D, and provide the viewer with an improved 3D image.
- The three-dimensional effect is a subjective judgment that varies greatly with the age and expectations of the viewer, but differences in the perceived effect can be attributed to the following objective conditions.
- Objects with very different depth map values exist in one image at the same time; the left and right images differ greatly from the angles seen by the human eyes; the three-dimensional effect felt in a specific object or area differs from that of the natural scene it represents; or the effect is intentionally exaggerated to emphasize depth.
- the stereoscopic image error improving apparatus 100 may reduce the degree of error by determining an error of the depth map information and improving the error.
- When the stereoscopic image error improving apparatus 100, acting as a receiver, estimates depth map information from the 2D image data and converts it into the left-view and right-view images of a stereoscopic image, the conversion involves image recognition and analysis, object separation, and deciding which objects lie in front of or behind others.
- Errors can occur during these steps, so the depth map information may contain errors, and a viewer watching the resulting stereoscopic image may feel dizzy. That is, whether the apparatus 100 receives separate depth map information from an image content provider or estimates it from the 2D image data, the depth map information may contain errors.
- The stereoscopic image error improving apparatus 100 can therefore determine and correct the errors included in the depth map information.
- Depth image-based rendering refers to a method of generating frames at different viewpoints using reference images that carry per-pixel information such as depth or disparity.
- Depth image-based rendering not only handles shapes that are difficult and complex to render as 3D models, but also allows general image signal processing such as filtering to be applied, and can generate high-quality stereoscopic images.
- the depth image based rendering uses a depth image and a texture image obtained through a depth camera and a multi-view camera.
- A depth image represents, in grayscale, the distance between an object located in 3D space and the camera photographing it.
- the depth image is used for 3D reconstruction or 3D warping through depth map information and camera parameters.
- depth images are applied to free-view TVs and 3D TVs.
- a free view TV refers to a TV that enables a user to watch an image at an arbitrary point of time according to a user's selection without viewing the image only at a predetermined point in time.
- A 3D TV realizes images by adding a depth image to existing 2D TV content. In free-view TVs and 3D TVs, intermediate images must be generated for smooth viewpoint switching, which requires accurate estimation of depth map information. The method of estimating depth map information in the present invention is described in detail with reference to the depth map estimator 114.
- The image receiver 112 checks whether depth map information is input separately from the input image data and, if it is, forwards the depth map information to the depth map receiver 120.
- If no separate depth map information is input, the depth map estimator 114 estimates depth map information for each frame of the input image data and forwards the estimate to the depth map receiver 120. That is, the depth map estimator 114 estimates depth map information for each pixel in each frame of the input image data.
- each pixel may include R, G, and B subpixels.
- the input image data refers to 2D image data.
- The depth map estimator 114 may use a stereo matching algorithm as a general method of estimating depth map information. A stereo matching algorithm searches only in the horizontal direction between neighboring images to obtain a disparity value, and takes as input only images captured with a parallel camera configuration, or images rectified to that geometry.
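A minimal horizontal-only block-matching sketch of such a stereo matcher; the window size, cost penalty, and names are illustrative assumptions (real matchers add cost aggregation and consistency checks):

```python
def disparity_at(left_row, right_row, x, window=1, max_disp=3):
    """Find the disparity d (horizontal-only search) minimizing the sum
    of absolute differences between a small window around x in the left
    scan line and the same window shifted left by d in the right one."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = 0
        for w in range(-window, window + 1):
            xl, xr = x + w, x + w - d
            if 0 <= xl < len(left_row) and 0 <= xr < len(right_row):
                cost += abs(left_row[xl] - right_row[xr])
            else:
                cost += 255  # penalize windows falling off the image
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Larger disparities correspond to closer objects, which is why the disparity map can serve directly as a depth map estimate.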
- The depth map information described in the present invention is information indicating a sense of depth, also referred to as a Z-buffer.
- The depth map estimator 114 analyzes each frame and estimates depth map information using at least one of the tilt of the screen, the shadows of objects, the screen focus, and object patterns. For example, using the tilt of the frame, the estimator may judge that objects near the bottom of the screen are close and objects near the top are far. Using shadows, it may judge that dark parts of an object are far and bright parts are near, on the principle that shadows always fall behind objects.
- Using the screen focus, the depth map estimator 114 may judge that sharp objects are in front and blurry objects are behind. Using object patterns, when a pattern of the same type repeats, it may judge that larger instances of the pattern are closer than smaller ones.
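The simplest of these cues, vertical position, can be sketched as follows; the linear mapping and the 0–255 depth range are assumptions for illustration:

```python
def depth_from_vertical_position(frame_height, y):
    """Vertical-position cue: rows nearer the bottom of the frame are
    assumed closer and get larger depth values (0..255 scale assumed)."""
    return int(255 * (y + 1) / frame_height)

# Build a tiny depth map for a 4-row, 3-column frame from this cue alone.
depth_map = [[depth_from_vertical_position(4, y) for _ in range(3)]
             for y in range(4)]
```

A practical estimator would blend several cues (shadow, focus, pattern) rather than rely on position alone.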
- The depth map receiver 120 receives depth map information for the image data input through the image receiver 112 or the depth map estimator 114. That is, when separate depth map information is received from the image content provider along with the image data, the depth map receiver 120 receives that depth map information through the image receiver 112; when only image data is received without depth map information, it receives the depth map information estimated by the depth map estimator 114.
- the spatial histogram generator 122 generates spatial histogram information using depth map information on the input image data.
- The spatial histogram generator 122 generates spatial histogram information by histogramming the depth map information along the horizontal (X) axis and the vertical (Y) axis for each object classified according to the depth map information in each frame of the input image data.
- The spatial histogram generator 122 records the per-object histograms in a spatial histogram table. That is, the spatial histogram information described in the present invention is obtained by histogramming, along the horizontal (X) and vertical (Y) axes, the objects classified according to the depth map information.
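A sketch of per-object spatial histograms along both axes; treating each distinct depth value as one object is a simplifying assumption standing in for the patent's object classification:

```python
from collections import defaultdict

def spatial_histograms(depth_map):
    """For each 'object' (here: each distinct depth value), count its
    pixels along the horizontal (X) and vertical (Y) axes, giving one
    X-histogram and one Y-histogram per object."""
    hist_x = defaultdict(lambda: defaultdict(int))
    hist_y = defaultdict(lambda: defaultdict(int))
    for y, row in enumerate(depth_map):
        for x, depth in enumerate(row):
            hist_x[depth][x] += 1
            hist_y[depth][y] += 1
    return hist_x, hist_y
```

The two nested mappings correspond to the rows of a spatial histogram table: object, axis position, pixel count.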
- The 2D image receiver 130 receives the 2D image data of the image data input through the image receiver 112 or the depth map estimator 114. That is, when separate depth map information is received from the image content provider, the 2D image receiver 130 receives the remaining 2D image data through the image receiver 112; when only image data is received without depth map information, it receives the 2D image data through the depth map estimator 114.
- the peak frequency generator 132 generates a peak frequency using 2D image data of the input image data.
- The peak frequency generator 132 scans each frame of the input image data in units of a predetermined macro block, divides the frame into a plurality of regions, and calculates a peak frequency for each region using the pixel values present in that region.
- To calculate the peak frequency, the peak frequency generator 132 separates the high-frequency and low-frequency components using a fast Fourier transform (FFT) and takes the ratio corresponding to the high-frequency coefficients as the peak frequency.
- the peak frequency generator 132 generates a peak frequency calculated for each region as a peak frequency table.
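A pure-Python sketch of this high-frequency ratio; the naive 1-D DFT over the flattened block and the choice of "high-frequency" bins are simplifying assumptions standing in for the 2-D FFT the patent describes:

```python
import cmath

def peak_frequency(block):
    """Ratio of high-frequency DFT energy to total energy for one
    macro block, used as the block's peak frequency."""
    samples = [float(p) for row in block for p in row]
    n = len(samples)
    coeffs = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n)) for k in range(n)]
    energy = [abs(c) ** 2 for c in coeffs]
    high = sum(energy[n // 4: n - n // 4])  # central bins = highest frequencies
    total = sum(energy)
    return high / total if total else 0.0
```

A flat block yields a ratio near zero, while a finely textured block yields a large ratio, which is the property the object analyzer relies on.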
- The macro block described in the present invention may have various sizes, such as 8×8, 16×16, 32×32, 8×16, 16×8, 16×32, and 32×16.
- A frame may also be scanned with macro blocks that contain sub blocks.
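Macro-block scanning can be sketched as a generator over fixed-size tiles; the block size and names are illustrative, and divisibility of the frame by the block size is assumed:

```python
def iter_blocks(frame, block_h=2, block_w=2):
    """Yield block_h x block_w tiles of the frame, scanning
    left-to-right, top-to-bottom (frame dimensions assumed divisible
    by the block size)."""
    for y in range(0, len(frame), block_h):
        for x in range(0, len(frame[0]), block_w):
            yield [row[x:x + block_w] for row in frame[y:y + block_h]]
```

Each yielded tile is what the peak frequency generator would analyze as one region.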
- the object analyzer 140 determines an error of each frame of the input image data based on the spatial histogram received from the spatial histogram generator 122 and the peak frequency received from the peak frequency generator 132.
- The object analyzer 140 classifies the corresponding area of an object as an error prediction area 410.
- That is, when a partial value deviating from the standard deviation of the spatial histogram information of the object exceeds the first threshold in the + or − direction of the vertical (Y) axis, the object analyzer 140 classifies the area corresponding to that value as an error prediction area 410.
- The object analyzer 140 then scans each frame of the input image data in units of the predetermined macro block and selects the region corresponding to the error prediction area 410. If the peak frequency of the selected region exceeds the second threshold, the analyzer determines that the depth map information of the error prediction area 410 contains an error and recognizes the area as an error determination area.
- More precisely, even when the peak frequency of the error prediction area 410 exceeds the second threshold in the + or − direction, the object analyzer 140 determines that the depth map information contains an error, and recognizes the error prediction area 410 as an error confirmation area, only if the direction in which the first threshold was exceeded and the direction in which the second threshold was exceeded do not match.
- the depth map error corrector 150 modifies the depth map information when the error for the frame is determined, so that the error is improved.
- the depth map error correction unit 150 corrects an error of the depth map information of the error determination region.
- The depth map error corrector 150 may be implemented either to correct the depth map information of the entire frame or to correct only the depth map information of the error determination region.
- When a partial value deviating from the standard deviation of the spatial histogram information for the error confirmation region exceeds the first threshold in the + direction of the vertical (Y) axis, the depth map error corrector 150 decreases that value to the standard deviation value of the spatial histogram.
- When the partial value exceeds the first threshold in the − direction, the depth map error corrector 150 increases it to the standard deviation value of the spatial histogram.
- By default, the spatial histogram generator 122 generates overall spatial histogram information for one frame; when a spatial histogram covers more than a certain area, a spatial histogram table is formed, and each element of the table can be separated out as one object.
- For each separated object, the object analyzer 140 obtains the object's peak frequency through peak frequency analysis of the corresponding region of the 2D image data by the peak frequency generator 132. The object analyzer 140 thus holds spatial histogram information and a peak frequency for each object, obtained through the spatial histogram generator 122 and the peak frequency generator 132; the acquired spatial histogram table and peak frequency table are as shown in [Table 1].
- Using these tables, the object analyzer 140 can identify, for each object, the objects that satisfy the conditions of [Table 2], expressed there as a C program.
- For such objects, the depth map error corrector 150 of the stereoscopic image error improving apparatus 100 determines that the corresponding depth map information contains an error and decreases the depth map information of the corresponding area.
- In the opposite case, the depth map error corrector 150 determines that the corresponding depth map information contains an error and increases the depth map information of the corresponding area. That is, the depth map information of the object is adjusted according to these variables.
- The above conditions may not hold in every case, but even if only about 80% of the errors that occur are corrected, most errors in a typical image are improved.
- The rendering processor 160 generates the left-view image and the right-view image of the stereoscopic pair by applying the modified depth map information to the 2D image data.
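A minimal depth-image-based rendering sketch for one scan line; the shift-direction convention, hole filling by the nearest pixel to the left, and all names are simplifying assumptions, not the patent's rendering method:

```python
def render_stereo_row(image_row, depth_row, max_shift=1, depth_max=255):
    """Shift each pixel horizontally in proportion to its depth to form
    left-view and right-view scan lines; holes are filled with the
    nearest pixel to the left (0 if none)."""
    width = len(image_row)
    left, right = [None] * width, [None] * width
    for x, (pix, d) in enumerate(zip(image_row, depth_row)):
        shift = round(max_shift * d / depth_max)
        if 0 <= x + shift < width:
            left[x + shift] = pix
        if 0 <= x - shift < width:
            right[x - shift] = pix
    for view in (left, right):  # naive hole filling
        prev = 0
        for i in range(width):
            if view[i] is None:
                view[i] = prev
            else:
                prev = view[i]
    return left, right
```

Depth-zero pixels are copied unshifted into both views, so a flat depth map reproduces the 2D image in each eye.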
- FIG. 2 is a flowchart illustrating a method of improving a stereoscopic image error according to an embodiment of the present invention.
- First, the stereoscopic image error improving apparatus 100 receives image data from an image content provider (S210). If depth map information for the input image data is input separately, the image receiver 112 forwards it to the depth map receiver 120. If no separate depth map information is input, the depth map estimator 114 estimates depth map information for each frame of the input image data and forwards the estimate to the depth map receiver 120.
- the depth map estimator 114 may analyze the respective frames to estimate the depth map information using at least one or more information among the tilt of the screen, the shadow of the object, the screen focus, and the object pattern.
- The depth map receiver 120 receives depth map information for the image data input through the image receiver 112 or the depth map estimator 114 (S212). That is, when separate depth map information is received from the image content provider along with the image data, it is received through the image receiver 112; when only image data is received, the depth map information estimated by the depth map estimator 114 is received.
- the spatial histogram generator 122 of the 3D image error improving apparatus 100 generates spatial histogram information by using depth map information on the input image data (S220).
- The spatial histogram generator 122 generates spatial histogram information by histogramming the depth map information along the horizontal (X) axis and the vertical (Y) axis for each object classified according to the depth map information in each frame of the input image data.
- the spatial histogram generator 122 of the 3D image error improving apparatus 100 generates a spatial histogram table in which the depth map information is histogrammed for each object (S222).
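Steps S220 and S222 can be sketched as follows. This is a minimal interpretation, not the patent's implementation: the function name, the use of NumPy, and the choice to tabulate each object's area (the X quantity) against the standard deviation of its depth values (the Y quantity) are assumptions drawn from the later description of FIG. 3.

```python
import numpy as np

def spatial_histogram_table(depth_map, labels):
    """Build a per-object table of (area, depth std-dev), one row per object.

    depth_map: 2-D array of depth values (larger = nearer, per the patent).
    labels:    2-D integer array assigning each pixel to an object id.
    Returns a dict: object id -> (area in pixels, std-dev of its depths).
    """
    table = {}
    for obj_id in np.unique(labels):
        depths = depth_map[labels == obj_id]
        # X: object area in pixels; Y: spread of its depth map values.
        table[obj_id] = (depths.size, float(depths.std()))
    return table
```

A small object whose depth values vary wildly would show a large Y value here, which is exactly the outlier condition the patent screens for.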
- the 2D image receiver 130 of the 3D image error improving apparatus 100 receives 2D image data for the image data input from the image receiver 112 or the depth map estimator 114 (S230). That is, when the image content provider supplies depth map information separately from the image data, the 2D image receiver 130 receives the 2D image data, i.e., the image data excluding the depth map information, through the image receiver 112; when only image data is supplied without separate depth map information, it receives only the 2D image data through the depth map estimator 114.
- the peak frequency generator 132 of the 3D image error improving apparatus 100 generates the peak frequency using the 2D image data of the input image data (S232). That is, the peak frequency generator 132 scans each frame of the input image data in units of predetermined macroblocks, divides the frame into a plurality of regions, and calculates the peak frequency of each divided region using the pixel values present in that region.
- to calculate the peak frequency, the peak frequency generator 132 separates high-frequency and low-frequency components using an FFT and determines the ratio corresponding to the high-frequency coefficients as the peak frequency.
- the peak frequency generator 132 compiles the peak frequencies calculated for each region into a peak frequency table (S234).
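Steps S232 and S234 can be sketched along these lines. The FFT-based split into low- and high-frequency components follows the description, but the exact low-frequency cutoff (`low_cut`), the energy-ratio formulation, and the function names are assumptions made for illustration.

```python
import numpy as np

def peak_frequency(block, low_cut=0.25):
    """Fraction of a macroblock's spectral energy in the high-frequency band.

    Applies a 2-D FFT and reports the share of energy lying outside the
    low-frequency corner (the upper-left of the unshifted spectrum).
    low_cut is the fraction of each axis treated as 'low frequency'.
    """
    spectrum = np.abs(np.fft.fft2(block))
    h, w = spectrum.shape
    lh, lw = max(1, int(h * low_cut)), max(1, int(w * low_cut))
    total = spectrum.sum()
    low = spectrum[:lh, :lw].sum()
    return float((total - low) / total) if total else 0.0

def peak_frequency_table(frame, block=8):
    """Scan a frame in block-sized macroblocks and tabulate peak frequencies."""
    h, w = frame.shape
    return {(y, x): peak_frequency(frame[y:y + block, x:x + block])
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)}
```

A flat block yields 0 (all energy at DC), while a fine checkerboard pattern concentrates energy at high frequencies, matching the patent's link between sharp detail and large peak frequency.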
- the object analyzer 140 of the 3D image error improving apparatus 100 checks whether the spatial histogram information and the peak frequencies have been generated for all frames of the input image data (S240). If the check in step S240 shows that they have been generated for all frames, the object analyzer 140 checks whether the spatial histogram information contained in each object classified according to the depth map information exceeds a first threshold (S250). If the check in step S240 shows that they have not yet been generated for all frames, the 3D image error improving apparatus 100 performs steps S220 through S234 again.
- if the first threshold is exceeded, the object analyzer 140 of the 3D image error improving apparatus 100 classifies the corresponding region of the object as an error prediction region 410 (S252). That is, when some values deviating from the standard deviation value of the spatial histogram information contained in the object exceed the first threshold in the + direction or − direction along the vertical (Y) axis, the object analyzer 140 classifies the region corresponding to those values as an error prediction region 410.
- the object analyzer 140 of the 3D image error improving apparatus 100 scans each frame of the input image data in units of predetermined macroblocks and selects, from the plurality of divided regions, the regions corresponding to the error prediction region 410 (S254).
- the object analyzer 140 of the 3D image error improving apparatus 100 checks whether the peak frequency of the region selected in step S254 exceeds a second threshold (S256).
- if the second threshold is exceeded, the object analyzer 140 of the 3D image error improving apparatus 100 determines that there is an error in the depth map information of the error prediction region 410 and recognizes the error prediction region 410 as an error confirmation region. That is, when the peak frequency corresponding to the error prediction region 410 exceeds the second threshold in the + direction or − direction, but the + or − direction in which the first threshold was exceeded does not match the + or − direction in which the second threshold was exceeded, the object analyzer 140 determines that the depth map information of the error prediction region 410 contains an error and recognizes the error prediction region 410 as an error confirmation region.
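The direction-matching decision of steps S252 and S256 can be expressed as a small predicate. The signed `hist_excess`/`peak_excess` inputs and the strict sign-mismatch rule are an interpretation of the text, not the patent's exact formulation.

```python
def is_error_confirmed(hist_excess, peak_excess, threshold1, threshold2):
    """Confirm an error when both thresholds are exceeded in opposite directions.

    hist_excess: signed deviation of the spatial histogram value (+/− direction).
    peak_excess: signed deviation of the region's peak frequency (+/− direction).
    """
    if abs(hist_excess) <= threshold1 or abs(peak_excess) <= threshold2:
        return False  # at least one threshold not exceeded: no confirmed error
    return (hist_excess > 0) != (peak_excess > 0)  # direction results disagree
```

The intuition: a region claimed to be very near (large histogram excess) should also look sharp (large positive peak-frequency excess); when the two signals disagree, the depth value is suspect.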
- the depth map error corrector 150 of the 3D image error improving apparatus 100 corrects the error in the depth map information of the error confirmation region (S260). That is, when some values deviating from the standard deviation value of the spatial histogram information of the error confirmation region exceed the first threshold in the + direction along the vertical (Y) axis, the depth map error corrector 150 decreases those values to the standard deviation value of the spatial histogram.
- likewise, when some values deviating from the standard deviation value of the spatial histogram information of the error confirmation region exceed the first threshold in the − direction along the vertical (Y) axis, the depth map error corrector 150 of the 3D image error improving apparatus 100 increases those values to the standard deviation value of the spatial histogram.
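The correction of step S260 — pulling + outliers down and pushing − outliers up toward the standard deviation value — can be sketched as below. Treating "the standard deviation value" as the region's mean ± one standard deviation is an assumption, as are the function and parameter names.

```python
import numpy as np

def correct_depth_errors(depth_map, mask, threshold1):
    """Clamp outlier depths in a confirmed-error region toward the mean ± std band.

    depth_map:  array of depth values.
    mask:       boolean array marking the error confirmation region.
    threshold1: the first threshold (allowed deviation from the region mean).
    """
    region = depth_map[mask]
    mean, std = region.mean(), region.std()
    corrected = depth_map.astype(float).copy()
    vals = corrected[mask]
    vals[vals > mean + threshold1] = mean + std   # + outliers decreased
    vals[vals < mean - threshold1] = mean - std   # − outliers increased
    corrected[mask] = vals
    return corrected
```

Only values outside the threshold band move; the rest of the region is left untouched, so a single bad depth spike is flattened without disturbing the object's overall depth.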
- the depth map error corrector 150 of the 3D image error improving apparatus 100 checks whether the errors in the depth map information of all error confirmation regions have been corrected (S270). If the check in step S270 shows that they have, the rendering processor 160 of the 3D image error improving apparatus 100 generates a left-view image and a right-view image, which constitute the stereoscopic image, using the corrected depth map information (S280). That is, the rendering processor 160 generates the left-view image and the right-view image by applying the corrected depth map information to the 2D image data.
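The rendering of step S280 can be illustrated with a naive depth-image-based rendering pass that shifts pixels horizontally by a depth-derived disparity. Real renderers also fill the disoccluded holes this leaves; the linear depth-to-disparity mapping, the shift directions, and `max_disparity` are assumptions for this sketch.

```python
import numpy as np

def render_stereo_pair(image, depth_map, max_disparity=16):
    """Generate a (left, right) view pair by horizontal per-pixel shifting.

    image:     2-D array (one channel for simplicity).
    depth_map: 2-D array, larger value = nearer = larger shift.
    """
    h, w = depth_map.shape
    disparity = (depth_map / max(depth_map.max(), 1e-9) * max_disparity).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if 0 <= x - d < w:
                left[y, x - d] = image[y, x]   # shift toward the left eye
            if 0 <= x + d < w:
                right[y, x + d] = image[y, x]  # shift toward the right eye
    return left, right
```

With a uniform (all-zero) depth map the two views degenerate to copies of the input, which is the expected flat, no-parallax case.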
- although steps S210 through S280 are described above as being executed sequentially, this merely illustrates the technical idea of an embodiment of the present invention. Those of ordinary skill in the art to which the embodiment belongs will appreciate that various modifications and variations are possible, such as changing the order described in FIG. 2 or executing one or more of steps S210 through S280 in parallel, without departing from the essential characteristics of the embodiment; FIG. 2 is therefore not limited to a time-series order.
- the stereoscopic image error correction method according to the embodiment of the present invention described in FIG. 2 may be implemented as a program and recorded on a computer-readable recording medium.
- the computer-readable recording medium on which a program implementing the stereoscopic image error improvement method according to an embodiment of the present invention is recorded includes all kinds of recording devices in which data readable by a computer system is stored. Examples of such computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and they also include media implemented in the form of carrier waves (for example, for transmission over the Internet).
- the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
- functional programs, codes, and code segments for implementing an embodiment of the present invention may be easily deduced by programmers in the art to which an embodiment of the present invention belongs.
- FIG. 3 is an exemplary diagram of a spatial histogram according to an embodiment of the present invention.
- when general image data are analyzed, certain depth map patterns are found to occur only rarely; statistically, such patterns are the most likely places for errors introduced by image analysis to appear.
- in a natural scene, it is rare for very small objects to be located far away from, or close to, the viewer in isolation from their surroundings, and depth maps reflecting such configurations are correspondingly rare.
- depth map information has low spatial frequency characteristics, corresponding to roughly half the resolution of the image, and expresses the degree of distance in a single channel, that is, as a monochrome image. Accordingly, when depth map information is transmitted separately by the image content provider, it amounts to about 10% of the information volume of the 2D image data once compression is taken into account.
- regions corresponding to errors can therefore be found by taking these characteristics of a typical depth map as preconditions.
- in FIG. 3, the horizontal (X) axis is a cross section obtained by cutting the 2D space of the screen horizontally, and the vertical (Y) axis is the depth map value, where a larger value indicates an object located nearer to the viewer.
- the object analyzer 140 of the 3D image error improving apparatus 100 determines errors for each frame of the input image data based on the spatial histogram received from the spatial histogram generator 122 and the peak frequency received from the peak frequency generator 132.
- when the spatial histogram information contained in an object exceeds the first threshold (the middle dotted line), the object analyzer 140 of the 3D image error improving apparatus 100 classifies the corresponding region of the object as an error prediction region 410.
- the value of the graph shown in FIG. 3A is y1 / x1, and the value of the graph shown in FIG. 3B is y2 / x2; here, y2 / x2 is much larger than y1 / x1.
- regions exceeding the first threshold in the + direction or − direction are separated into individual objects, and a spatial histogram information table for each object can be formed by obtaining, for each separated region, the area of the region (corresponding to X) and the standard deviation value of the depth map values within it (corresponding to Y).
- 2D image data for the image data input from the image receiver 112 or the depth map estimator 114 may be received through the 2D image receiver 130. That is, when the image content provider supplies depth map information separately from the image data, the 2D image receiver 130 receives the 2D image data, i.e., the image data excluding the depth map information, through the image receiver 112; when only image data is supplied without separate depth map information, it receives only the 2D image data through the depth map estimator 114. Meanwhile, the 2D image data shown in FIG. 4 is preferably color image data, but is not necessarily limited thereto.
- FIG. 5 is an exemplary diagram of normal depth map information for a specific frame according to an embodiment of the present invention, that is, the result of applying depth map information to the image data shown in FIG. 4. The spatial histogram generator 122 of the 3D image error improving apparatus 100 generates spatial histogram information by histogramming the depth map information along the horizontal (X) axis and the vertical (Y) axis for each object classified according to the depth map information within each frame of the input image data.
- FIG. 6 is an exemplary diagram illustrating an error predicted area of a specific frame according to an embodiment of the present invention.
- the object analyzer 140 of the 3D image error improving apparatus 100 determines errors for each frame of the input image data based on the spatial histogram received from the spatial histogram generator 122 and the peak frequency received from the peak frequency generator 132.
- the object analyzer 140 classifies the corresponding region of the object as an error prediction region 410, as shown in FIG. 6. That is, as shown in FIG. 6, the hat brim of the front penguin, the face of the rear penguin (+ direction), and parts of the background are determined to be obvious errors.
- such errors can be extracted by analyzing the spatial histogram information. The front pillar and the penguin's body lie at close range over a relatively large area, so their spatial histogram information will be small, whereas the errors in the error prediction region 410 of FIG. 6 will have very large spatial histogram information.
- FIG. 7 is an exemplary view illustrating an error predicted area of a specific object according to an embodiment of the present invention.
- the object analyzer 140 classifies a region as an error prediction region 410 when some values deviating from the standard deviation value of the spatial histogram information contained in the object exceed the first threshold in the + direction or − direction along the vertical (Y) axis.
- the object analyzer 140 scans each frame of the input image data in units of predetermined macroblocks, selects from the divided regions those corresponding to the error prediction region 410, and examines the peak frequency of each selected region.
- when the peak frequency corresponding to the error prediction region 410 exceeds the second threshold in the + direction or − direction, but the + or − direction in which the first threshold was exceeded does not match the + or − direction in which the second threshold was exceeded, the object analyzer 140 determines that the depth map information of the error prediction region 410 contains an error and recognizes the error prediction region 410 as an error confirmation region.
- when some values deviating from the standard deviation value of the spatial histogram information of the error confirmation region exceed the first threshold in the + direction along the vertical (Y) axis, the depth map error corrector 150 of the 3D image error improving apparatus 100 decreases those values to the standard deviation value of the spatial histogram.
- likewise, when some values deviating from the standard deviation value of the spatial histogram information of the error confirmation region exceed the first threshold in the − direction along the vertical (Y) axis, the depth map error corrector 150 increases those values to the standard deviation value of the spatial histogram.
- FIG. 8 is an exemplary diagram of peak frequencies according to FFT application according to an embodiment of the present invention.
- FIG. 8 illustrates an example of obtaining the peak frequency of the error prediction region 410 of the penguin's face shown in FIG. 7. The peak frequency generator 132 of the 3D image error improving apparatus 100 scans each frame of the input image data in units of predetermined macroblocks, divides it into a plurality of regions, and obtains the pixel values present in each divided region, as shown in FIG. 8A.
- the peak frequency generator 132 of the 3D image error improving apparatus 100 applies an FFT to FIG. 8A to separate the high-frequency and low-frequency components, as shown in FIG. 8B, and may determine the ratio corresponding to the high-frequency coefficients as the peak frequency.
- the peak frequency generator 132 of the 3D image error improving apparatus 100 may compile the peak frequencies calculated for each region into a peak frequency table.
- the macro block may be applied in various sizes such as 8 ⁇ 8, 16 ⁇ 16, 32 ⁇ 32, 8 ⁇ 16, 16 ⁇ 8, 16 ⁇ 32, 32 ⁇ 16, and the like.
- frames may also be scanned using macroblocks that contain sub-blocks.
- within an 8×8 block, the upper-left area corresponds to the low-frequency components, and areas toward the lower right correspond to progressively higher-frequency components.
- the sum of the proportions of the selected regions may be defined as the peak frequency component.
- a large peak frequency value indicates a sharp portion of the image, which, given the characteristics of typical images, means that the object is located relatively near the viewer.
- this serves as a criterion for evaluating whether an object with a large value in the preceding spatial histogram information is actually a near-distance object.
- each frame of the input image data is scanned in units of predetermined macroblocks and divided into a plurality of regions; after the peak frequency values of the divided regions are calculated, the average of the peak frequency values of the regions belonging to an object may be selected as the representative value of that object.
- Equations 1 and 2 may be used to apply the FFT to FIG. 8A.
- [Equation 1] and [Equation 2] show the conversion relationship between the time function f(n) and the frequency function F(x); as written, they are one-dimensional. They are general expressions of the FFT, and the constants contained in [Equation 1] and [Equation 2] may be adjusted to calculate the high-frequency components.
- when f(n) is extended to the spatial pixel coordinate system f(m, n), where the pixel position is (m, n), F(x, y) becomes the spatial frequency component.
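The patent text does not reproduce [Equation 1] and [Equation 2]; a standard pair of discrete Fourier transforms matching the description (the 1-D relation f(n) ↔ F(x), extended to the 2-D pixel coordinate system f(m, n) ↔ F(x, y)) would be:

```latex
F(x) = \sum_{n=0}^{N-1} f(n)\, e^{-j 2\pi x n / N},
\qquad
F(x, y) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m, n)\,
          e^{-j 2\pi \left( \frac{x m}{M} + \frac{y n}{N} \right)}
```

These generic forms are offered only as a plausible reconstruction; the patent's actual equations may differ in normalization constants.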
- the present invention can be applied to various fields in which errors in depth map information are detected and corrected so that degradation of the stereoscopic effect does not appear in the process of outputting 3D image content; it is thus a useful invention that produces such an effect.
Claims (24)
- An apparatus for improving stereoscopic image errors, comprising: a spatial histogram generator configured to generate spatial histogram information using depth map (Depth Map) information for input image data; a peak frequency generator configured to generate a peak (Peak) frequency using 2D image data for the input image data; an object (Object) analyzer configured to determine an error for each frame (Frame) of the input image data based on the spatial histogram and the peak frequency; a depth map error corrector configured to, when the error for the frame is determined, modify the depth map information so that the error is improved; and a rendering (Rendering) processor configured to generate a left-view image and a right-view image for constructing a stereoscopic image using the modified depth map information.
- The apparatus of claim 1, wherein the spatial histogram generator generates the spatial histogram information by histogramming the depth map information along a horizontal (X) axis and a vertical (Y) axis for each object classified according to the depth map information within each frame of the input image data.
- The apparatus of claim 2, wherein the spatial histogram generator generates a spatial histogram table in which the depth map information is histogrammed for each object.
- The apparatus of claim 1, wherein the peak frequency generator scans (Scan) each frame of the input image data in units of a predetermined macroblock (Macro Block) to divide it into a plurality of regions, and calculates the peak frequency using pixel values present in each of the divided regions.
- The apparatus of claim 4, wherein the peak frequency generator separates high-frequency and low-frequency components using an FFT (Fast Fourier Transform) to calculate the peak frequency, and determines a ratio corresponding to coefficients of the high-frequency components as the peak frequency.
- The apparatus of claim 5, wherein the peak frequency generator compiles the peak frequencies calculated for each of the regions into a peak frequency table.
- The apparatus of claim 2, wherein, when the spatial histogram information contained in the object exceeds a first threshold (Threshold), the object analyzer classifies the corresponding region of the object as an error prediction region.
- The apparatus of claim 7, wherein, when some values deviating from a standard deviation value of the spatial histogram information contained in the object exceed the first threshold in a + direction or a − direction along a vertical (Y) axis of the first threshold, the object analyzer classifies a region corresponding to the some values as the error prediction region.
- The apparatus of claim 7, wherein the object analyzer scans each frame of the input image data in units of a predetermined macroblock, selects a region corresponding to the error prediction region from among a plurality of divided regions, and, when the peak frequency of the selected region exceeds a second threshold, determines that there is an error in the depth map information of the error prediction region and recognizes the error prediction region as an error confirmation region.
- The apparatus of claim 9, wherein, when the peak frequency corresponding to the error prediction region exceeds the second threshold in a + direction or a − direction, but the + or − direction result of exceeding the first threshold does not match the + or − direction result of exceeding the second threshold, the object analyzer determines that the error exists in the depth map information of the error prediction region and recognizes the error prediction region as the error confirmation region.
- The apparatus of claim 9, wherein the depth map error corrector corrects the error in the depth map information of the error confirmation region.
- The apparatus of claim 9, wherein, when some values deviating from a standard deviation value of the spatial histogram information for the error confirmation region exceed the first threshold in a + direction along a vertical (Y) axis of the first threshold, the depth map error corrector decreases the some values exceeding the first threshold in the + direction to the standard deviation value of the spatial histogram.
- The apparatus of claim 9, wherein, when some values deviating from a standard deviation value of the spatial histogram information for the error confirmation region exceed the first threshold in a − direction along a vertical (Y) axis of the first threshold, the depth map error corrector increases the some values exceeding the first threshold in the − direction to the standard deviation value of the spatial histogram.
- The apparatus of claim 1, further comprising: a depth map receiver configured to receive the depth map information for the input image data; and a 2D image receiver configured to receive the 2D image data for the input image data.
- The apparatus of claim 14, further comprising: an image receiver configured to check whether the depth map information for the input image data is input separately and, when the depth map information is input separately based on a result of the check, transmit the depth map information to the depth map receiver; and a depth map estimator configured to, when the depth map information is not input separately based on the result of the check, estimate the depth map information for each frame of the input image data and transmit the estimated depth map information to the depth map receiver.
- The apparatus of claim 15, wherein the depth map estimator analyzes each of the frames and estimates the depth map information using at least one of screen tilt, object shadow, screen focus, and object pattern information.
- A method for improving stereoscopic image errors, comprising: a spatial histogram generation step of generating spatial histogram information using depth map (Depth Map) information for input image data; a peak frequency generation step of generating a peak (Peak) frequency using 2D image data for the input image data; an object analysis step of determining an error for each frame (Frame) of the input image data based on the spatial histogram and the peak frequency; a depth map error correction step of, when the error for the frame is determined, modifying the depth map information so that the error is improved; and a rendering processing step of generating a left-view image and a right-view image for constructing a stereoscopic image using the modified depth map information.
- The method of claim 17, wherein the spatial histogram generation step comprises generating the spatial histogram information by histogramming the depth map information along a horizontal (X) axis and a vertical (Y) axis for each object classified according to the depth map information within each frame of the input image data.
- The method of claim 17, wherein the peak frequency generation step comprises: scanning each frame of the input image data in units of a predetermined macroblock to divide it into a plurality of regions; applying an FFT (Fast Fourier Transform) to frequency components of pixels present in each of the divided regions to separate them into high-frequency and low-frequency components; and determining a ratio corresponding to coefficients of the high-frequency components as the peak frequency.
- The method of claim 17, wherein the object analysis step comprises: classifying, when the spatial histogram information contained in the object exceeds a first threshold (Threshold), the corresponding region of the object as an error prediction region; scanning each frame of the input image data in units of a predetermined macroblock and selecting a region corresponding to the error prediction region from among a plurality of divided regions; and determining, when the peak frequency of the selected region exceeds a second threshold, that there is an error in the depth map information of the error prediction region and that the error prediction region contains the error.
- A method for determining an error in depth map information in a process of providing a stereoscopic image by applying the depth map information to input 2D image data, the method comprising: checking some values deviating from a standard deviation value of spatial histogram information generated for each object classified according to the depth map information within each frame of the 2D image data; an error prediction region classification step of classifying, when the some values exceed a first threshold in a + direction or a − direction along a vertical (Y) axis of the first threshold, a region corresponding to the some values as an error prediction region; a direction result matching check step of checking, when a peak frequency for the error prediction region exceeds a second threshold in a + direction or a − direction, whether a direction result of exceeding the first threshold matches a direction result of exceeding the second threshold; and an error determination step of determining, when the direction results do not match as a result of the direction result matching check step, that the error exists in the depth map information of the error prediction region.
- A method for improving depth map information in which an error has been determined in a process of providing a stereoscopic image by applying the depth map information to input 2D image data, the method comprising: checking some values deviating from a standard deviation value of spatial histogram information for an error confirmation region in which the depth map information has been determined to contain the error; a direction check step of checking whether the some values exceed a first threshold in a + direction or a − direction along a vertical (Y) axis of the first threshold; and an error improvement step of increasing or decreasing the some values to the standard deviation value of the spatial histogram based on the direction in which the some values exceeded the first threshold, according to a result of the direction check step.
- The method of claim 22, wherein the error improvement step comprises, based on the result of the direction check step, decreasing the some values to the standard deviation value of the spatial histogram when the some values exceed the first threshold in the + direction, and increasing the some values to the standard deviation value of the spatial histogram when the some values exceed the first threshold in the − direction along the vertical (Y) axis.
- A method for determining and improving an error in depth map information in a process of providing a stereoscopic image by applying the depth map information to input 2D image data, the method comprising: checking some values deviating from a standard deviation value of spatial histogram information generated for each object classified according to the depth map information within each frame of the 2D image data; an error prediction region classification step of classifying, when the some values exceed a first threshold, a region corresponding to the some values as an error prediction region; a direction result check step of checking, when a peak frequency for the error prediction region exceeds a second threshold, whether the direction results of exceeding the first threshold and the second threshold in the + direction or the − direction match; an error determination step of determining, when the direction results do not match as a result of the direction result check step, that the error exists in the depth map information of the error prediction region; and an error improvement step of increasing or decreasing the some values to the standard deviation value of the spatial histogram based on the direction result in which the some values exceeded the first threshold.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013501200A JP5750505B2 (ja) | 2010-06-08 | 2011-04-08 | 立体映像エラー改善方法及び装置 |
EP11792609.7A EP2560398B1 (en) | 2010-06-08 | 2011-04-08 | Method and apparatus for correcting errors in stereo images |
US13/636,998 US8503765B2 (en) | 2010-06-08 | 2011-04-08 | Method and apparatus for correcting errors in stereo images |
CN201180028346.7A CN103119947B (zh) | 2010-06-08 | 2011-04-08 | 用于校正立体图像中的误差的方法和设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2010-0053971 | 2010-06-08 | ||
KR1020100053971A KR101291071B1 (ko) | 2010-06-08 | 2010-06-08 | 입체 영상 오류 개선 방법 및 장치 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011155698A2 true WO2011155698A2 (ko) | 2011-12-15 |
WO2011155698A3 WO2011155698A3 (ko) | 2012-02-02 |
Family
ID=45098485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/002473 WO2011155698A2 (ko) | 2010-06-08 | 2011-04-08 | 입체 영상 오류 개선 방법 및 장치 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8503765B2 (ko) |
EP (1) | EP2560398B1 (ko) |
JP (2) | JP5750505B2 (ko) |
KR (1) | KR101291071B1 (ko) |
CN (1) | CN103119947B (ko) |
WO (1) | WO2011155698A2 (ko) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130003128A1 (en) * | 2010-04-06 | 2013-01-03 | Mikio Watanabe | Image generation device, method, and printer |
KR101803571B1 (ko) * | 2011-06-17 | 2017-11-30 | 엘지디스플레이 주식회사 | 입체영상표시장치와 이의 구동방법 |
CN103959340A (zh) * | 2011-12-07 | 2014-07-30 | 英特尔公司 | 用于自动立体三维显示器的图形呈现技术 |
KR101393621B1 (ko) * | 2012-01-10 | 2014-05-12 | 에스케이플래닛 주식회사 | 3차원 입체영상의 품질 분석 장치 및 방법 |
ITTO20120208A1 (it) * | 2012-03-09 | 2013-09-10 | Sisvel Technology Srl | Metodo di generazione, trasporto e ricostruzione di un flusso video stereoscopico |
KR101913321B1 (ko) | 2012-05-10 | 2018-10-30 | 삼성전자주식회사 | 깊이 센서 기반 반사 객체의 형상 취득 방법 및 장치 |
TWI594616B (zh) * | 2012-06-14 | 2017-08-01 | 杜比實驗室特許公司 | 用於立體及自動立體顯示器之深度圖傳遞格式 |
US9544612B2 (en) * | 2012-10-04 | 2017-01-10 | Intel Corporation | Prediction parameter inheritance for 3D video coding |
US9894269B2 (en) * | 2012-10-31 | 2018-02-13 | Atheer, Inc. | Method and apparatus for background subtraction using focus differences |
KR101970563B1 (ko) * | 2012-11-23 | 2019-08-14 | 엘지디스플레이 주식회사 | 3차원 입체 영상용 깊이지도 보정장치 및 보정방법 |
TWI502545B (zh) * | 2013-06-25 | 2015-10-01 | 儲存3d影像內容的方法 | |
KR102106135B1 (ko) * | 2013-10-01 | 2020-05-04 | 한국전자통신연구원 | 행동 인식 기반의 응용 서비스 제공 장치 및 그 방법 |
CN103792950B (zh) * | 2014-01-06 | 2016-05-18 | 中国航空无线电电子研究所 | 一种使用基于压电陶瓷的立体拍摄光学误差纠偏装置进行误差纠偏的方法 |
KR102224716B1 (ko) * | 2014-05-13 | 2021-03-08 | 삼성전자주식회사 | 스테레오 소스 영상 보정 방법 및 장치 |
KR102249831B1 (ko) | 2014-09-26 | 2021-05-10 | 삼성전자주식회사 | 3d 파노라마 이미지 생성을 위한 영상 생성 장치 및 방법 |
JP5874077B1 (ja) * | 2015-03-02 | 2016-03-01 | ブレステクノロジー株式会社 | フィルタ装置、酸素濃縮装置 |
PL411602A1 (pl) * | 2015-03-17 | 2016-09-26 | Politechnika Poznańska | System do estymacji ruchu na obrazie wideo i sposób estymacji ruchu na obrazie wideo |
KR20160114983A (ko) * | 2015-03-25 | 2016-10-06 | 한국전자통신연구원 | 영상 변환 장치 및 방법 |
US10346950B2 (en) | 2016-10-05 | 2019-07-09 | Hidden Path Entertainment, Inc. | System and method of capturing and rendering a stereoscopic panorama using a depth buffer |
CN108072663B (zh) * | 2017-08-03 | 2020-09-08 | 安徽省徽腾智能交通科技有限公司泗县分公司 | 工件缺陷在线分析装置 |
CN107492107B (zh) * | 2017-08-10 | 2020-09-22 | 昆山伟宇慧创智能科技有限公司 | 基于平面与空间信息融合的物体识别与重建方法 |
US10735707B2 (en) * | 2017-08-15 | 2020-08-04 | International Business Machines Corporation | Generating three-dimensional imagery |
US10679368B2 (en) * | 2017-12-21 | 2020-06-09 | Intel IP Corporation | Methods and apparatus to reduce depth map size in collision avoidance systems |
EP3903500A4 (en) | 2018-12-26 | 2022-10-19 | Snap Inc. | CREATION AND USER INTERACTIONS WITH THREE-DIMENSIONAL WALLPAPER ON COMPUTER DEVICES |
US11276166B2 (en) * | 2019-12-30 | 2022-03-15 | GE Precision Healthcare LLC | Systems and methods for patient structure estimation during medical imaging |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100053971A (ko) | 2008-11-13 | 2010-05-24 | 삼성에스디아이 주식회사 | 유기전해액 및 이를 채용한 리튬전지 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2802034B2 (ja) * | 1994-02-23 | 1998-09-21 | 松下電工株式会社 | 三次元物体の計測方法 |
US6163337A (en) * | 1996-04-05 | 2000-12-19 | Matsushita Electric Industrial Co., Ltd. | Multi-view point image transmission method and multi-view point image display method |
JP3454684B2 (ja) * | 1997-09-22 | 2003-10-06 | 三洋電機株式会社 | 2次元映像を3次元映像に変換する装置 |
US6765618B1 (en) * | 1998-10-19 | 2004-07-20 | Pentax Corporation | Subject selection device and in-focus portion indicating device |
JP2001238230A (ja) * | 2000-02-22 | 2001-08-31 | Nippon Hoso Kyokai <Nhk> | 多眼式立体テレビシステムにおける3次元構造情報を抽出する装置 |
JP2003061116A (ja) * | 2001-08-09 | 2003-02-28 | Olympus Optical Co Ltd | 立体映像表示装置 |
WO2008041178A2 (en) * | 2006-10-04 | 2008-04-10 | Koninklijke Philips Electronics N.V. | Image enhancement |
WO2008053417A1 (en) * | 2006-10-30 | 2008-05-08 | Koninklijke Philips Electronics N.V. | Video depth map alignment |
KR20080076628A (ko) * | 2007-02-16 | 2008-08-20 | 삼성전자주식회사 | 영상의 입체감 향상을 위한 입체영상 표시장치 및 그 방법 |
KR100888459B1 (ko) * | 2007-03-14 | 2009-03-19 | 전자부품연구원 | 피사체의 깊이 정보 검출 방법 및 시스템 |
JP4706068B2 (ja) * | 2007-04-13 | 2011-06-22 | 国立大学法人名古屋大学 | 画像情報処理方法及び画像情報処理システム |
EP2153669B1 (en) | 2007-05-11 | 2012-02-01 | Koninklijke Philips Electronics N.V. | Method, apparatus and system for processing depth-related information |
EP2353298B1 (en) * | 2008-11-07 | 2019-04-03 | Telecom Italia S.p.A. | Method and system for producing multi-view 3d visual contents |
KR101580275B1 (ko) * | 2008-11-25 | 2015-12-24 | 삼성전자주식회사 | 멀티 레이어 디스플레이에 3차원 영상을 표현하기 위한 영상 처리 장치 및 방법 |
- 2010-06-08 KR KR1020100053971 patent/KR101291071B1/ko active IP Right Grant
- 2011-04-08 EP EP11792609.7A patent/EP2560398B1/en active Active
- 2011-04-08 US US13/636,998 patent/US8503765B2/en active Active
- 2011-04-08 WO PCT/KR2011/002473 patent/WO2011155698A2/ko active Application Filing
- 2011-04-08 CN CN201180028346.7A patent/CN103119947B/zh active Active
- 2011-04-08 JP JP2013501200A patent/JP5750505B2/ja active Active
- 2014-01-14 JP JP2014004400A patent/JP6027034B2/ja active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100053971A (ko) | 2008-11-13 | 2010-05-24 | 삼성에스디아이 주식회사 | 유기전해액 및 이를 채용한 리튬전지 |
Also Published As
Publication number | Publication date |
---|---|
CN103119947B (zh) | 2014-12-31 |
CN103119947A (zh) | 2013-05-22 |
JP6027034B2 (ja) | 2016-11-16 |
US20130009955A1 (en) | 2013-01-10 |
EP2560398A2 (en) | 2013-02-20 |
KR101291071B1 (ko) | 2013-08-01 |
US8503765B2 (en) | 2013-08-06 |
EP2560398B1 (en) | 2019-08-14 |
KR20110134147A (ko) | 2011-12-14 |
JP5750505B2 (ja) | 2015-07-22 |
WO2011155698A3 (ko) | 2012-02-02 |
JP2014103689A (ja) | 2014-06-05 |
JP2013527646A (ja) | 2013-06-27 |
EP2560398A4 (en) | 2014-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011155698A2 (ko) | Method and apparatus for correcting stereoscopic image errors | |
JP2013527646A5 (ko) | ||
WO2011155697A2 (ko) | Method and apparatus for converting stereoscopic images using depth map information | |
KR101185870B1 (ko) | Apparatus and method for processing 3D stereoscopic images | |
US8488869B2 (en) | Image processing method and apparatus | |
KR101340911B1 (ko) | Efficient method for encoding multiple views | |
Huynh-Thu et al. | Video quality assessment: From 2D to 3D—Challenges and future trends | |
KR100838351B1 (ko) | Method and apparatus for generating 3D images | |
US20110080466A1 (en) | Automated processing of aligned and non-aligned images for creating two-view and multi-view stereoscopic 3d images | |
EP2384009A2 (en) | Display device and method of outputting audio signal | |
KR100902353B1 (ko) | Depth map estimation apparatus and method, and intermediate image generation and multi-view video encoding methods using the same | |
US8659644B2 (en) | Stereo video capture system and method | |
US20120133733A1 (en) | Three-dimensional video image processing device, three-dimensional display device, three-dimensional video image processing method and receiving device | |
KR20100008677A (ko) | Depth map estimation apparatus and method, and intermediate image generation and multi-view video encoding methods using the same | |
WO2013133627A1 (ko) | Video signal processing method | |
US20120050477A1 (en) | Method and System for Utilizing Depth Information for Providing Security Monitoring | |
KR20110130845A (ko) | Illumination compensation method for multi-view images using histogram matching on depth-separated layers, and recording medium therefor | |
US20130050413A1 (en) | Video signal processing apparatus, video signal processing method, and computer program | |
CN114449303A (zh) | Live-streaming picture generation method and apparatus, storage medium, and electronic apparatus | |
WO2013061810A1 (ja) | Image processing device, image processing method, and recording medium | |
JP2014072809A (ja) | Image generation device, image generation method, and program for image generation device | |
CN104982038B (zh) | Method and device for processing video signals | |
JP5076002B1 (ja) | Image processing device and image processing method | |
CN115278189A (zh) | Image tone mapping method and apparatus, computer-readable medium, and electronic device | |
US12008776B2 (en) | Depth map processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | WIPO information: entry into national phase | Ref document number: 201180028346.7; Country of ref document: CN |
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 11792609; Country of ref document: EP; Kind code of ref document: A2 |
| WWE | WIPO information: entry into national phase | Ref document number: 2013501200; Country of ref document: JP |
| WWE | WIPO information: entry into national phase | Ref document number: 13636998; Country of ref document: US |
| WWE | WIPO information: entry into national phase | Ref document number: 2011792609; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |