WO2014057989A1 - Method, program, and apparatus for reducing the data size of a plurality of images containing mutually similar information - Google Patents
Method, program, and apparatus for reducing the data size of a plurality of images containing mutually similar information
- Publication number
- WO2014057989A1 (PCT/JP2013/077516)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information
- generating
- target image
- images
- Prior art date
Classifications
- H04N13/363 — Image reproducers using image projection screens
- G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T5/70 — Denoising; Smoothing
- H04N13/211 — Image signal generators using stereoscopic image cameras using a single 2D image sensor using temporal multiplexing
- H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N13/275 — Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/31 — Image reproducers for viewing without the aid of special glasses (autostereoscopic displays) using parallax barriers
- H04N19/395 — Coding using hierarchical techniques involving distributed video coding [DVC], e.g. Wyner-Ziv or Slepian-Wolf video coding
- H04N19/46 — Embedding additional information in the video signal during the compression process
- H04N19/503 — Predictive coding involving temporal prediction
- H04N19/597 — Predictive coding specially adapted for multi-view video sequence encoding
- G06T2219/2004 — Aligning objects, relative positioning of parts
- G06T2219/2012 — Colour editing, changing, or manipulating; use of colour codes
Definitions
- the present invention relates to a method, a program, and an apparatus for reducing the data size of a plurality of images including information similar to each other.
- Non-Patent Document 1 discloses a method called adaptive distributed coding of multi-view images. More specifically, this method is based on modulo arithmetic, and the images obtained from the respective viewpoints are encoded without any exchange of information between the encoders; exchange of information is permitted only on the decoder side.
- Because the method disclosed in Non-Patent Document 1 mainly assumes application to distributed source coding, distributed video frame coding, and the like, linkage between viewpoints is not considered in the encoding process. This is because the method disclosed in Non-Patent Document 1 is mainly directed at low-power devices (for example, portable terminals) that do not have high processing capability.
- In the method of Non-Patent Document 1, side information is used in both the encoding process and the decoding process. As the side information, the original image is used in the encoder, and a reduced image, a virtual image, or a combination thereof is used in the decoder.
- Tanimoto et al., "Real-time free viewpoint image rendering by using fast multi-pass dynamic programming," in Proc. 3DTV-CON, June 2010. A. Smolic, P. Kauff, S. Knorr, A. Hornung, M. Kunter, M. Muller, and M. Lang, "Three-Dimensional Video Postproduction and Processing," Proc. IEEE, vol. 99, no. 4, pp. 607-625, Apr. 2011.
- the inventors of the present application have obtained new knowledge that the image quality after decoding can be improved and the application range can be further expanded by exchanging information between images similar to each other.
- In the prior art, the decoder and the encoder do not exchange information between mutually similar images, and as a result, nothing is known about how to optimize such processing.
- The present invention has been made to solve the above problems, and an object thereof is to provide a method, a program, and an apparatus for more efficiently reducing the data size of a plurality of images containing mutually similar information.
- According to an aspect of the present invention, a method for reducing the data size of a plurality of images containing mutually similar information includes: acquiring a plurality of images and selecting, from the plurality of images, a target image and a first reference image and a second reference image similar to the target image; generating a composite image corresponding to the target image based on the first reference image and the second reference image; generating side information, which is virtual visual field information at the position of the target image, based on at least one of the target image and the composite image; generating a gradient intensity image from the side information; determining a coefficient corresponding to the gradient intensity for each pixel position of the gradient intensity image, and performing on the luminance value at each pixel position of the target image a modulo operation modulo the corresponding coefficient, thereby generating a remainder image composed of the remainders calculated at the respective pixel positions; and outputting, as information representing the target image, the first reference image, and the second reference image, the first reference image, the second reference image, and the remainder image.
- the step of generating side information includes a step of generating side information by combining the reduced image of the target image and the composite image.
- Preferably, the step of generating the side information includes determining an error distribution based on the difference between an image obtained by upsampling the reduced image and the composite image, assigning information of the image obtained by upsampling the reduced image to regions where the error is relatively high, and assigning information of the composite image to regions where the error is relatively low.
- Alternatively, the step of generating the side information includes determining an error distribution based on the difference between an image obtained by upsampling the reduced image and the composite image, assigning more information of the image obtained by upsampling the reduced image to regions where the error is relatively high, and assigning more information of the composite image to regions where the error is relatively low.
- the step of generating the gradient intensity image includes a step of generating an image in which a region having a larger texture change in the side information has a higher luminance.
- the step of generating the gradient intensity image includes a step of generating a gradient intensity image for each color component constituting the side information.
- More preferably, the step of generating the gradient intensity image includes sequentially applying edge detection processing, smoothing processing, a series of morphological processings, and smoothing processing to the grayscale image of each color component constituting the side information.
- the step of generating the remainder image includes a step of selecting a coefficient corresponding to the gradient strength with reference to a predetermined correspondence relationship.
- a coefficient is determined for each color component for each pixel position of the gradient intensity image.
- Preferably, the selecting step includes selecting the target image and the first and second reference images based on the baseline distance when the plurality of images are multi-viewpoint images, and selecting them based on the frame rate when the plurality of images are a video frame sequence.
- Preferably, the method further includes: acquiring the output first reference image, second reference image, and remainder image; generating a composite image corresponding to the target image based on the first reference image and the second reference image; generating side information from the acquired information; generating a gradient intensity image from the side information; determining a coefficient corresponding to the gradient intensity for each pixel position of the gradient intensity image; and determining, among the candidate values calculated by an inverse modulo operation that uses the determined coefficient as the modulus and the value of the corresponding pixel position of the remainder image as the remainder, the candidate having the smallest difference from the value of the corresponding pixel position of the side information as the luminance value of the corresponding pixel position of the target image.
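- As a concrete sketch of this inverse modulo step (a minimal illustration, assuming 8-bit luminance values; the function and variable names are ours, not the patent's):

```python
import numpy as np

def inverse_modulo(remainder, modulus, side_info, max_value=255):
    """For each pixel, enumerate the candidates q * D + m (q = 0, 1, ...)
    and keep the one closest to the side information value."""
    h, w = remainder.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            m, d, s = int(remainder[y, x]), int(modulus[y, x]), int(side_info[y, x])
            candidates = range(m, max_value + 1, d)  # m, m + D, m + 2D, ...
            out[y, x] = min(candidates, key=lambda c: abs(c - s))
    return out
```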
- According to another aspect of the present invention, a program for reducing the data size of a plurality of images containing mutually similar information causes a computer to execute: acquiring a plurality of images and selecting, from the plurality of images, a target image and a first reference image and a second reference image similar to the target image; generating a composite image corresponding to the target image based on the first reference image and the second reference image; generating side information, which is virtual visual field information at the position of the target image, based on at least one of the target image and the composite image; generating a gradient intensity image from the side information; determining a coefficient corresponding to the gradient intensity for each pixel position of the gradient intensity image, and performing on the luminance value at each pixel position of the target image a modulo operation modulo the corresponding coefficient, thereby generating a remainder image composed of the remainders calculated at the respective pixel positions; and outputting, as information representing the target image, the first reference image, and the second reference image, the first reference image, the second reference image, and the remainder image.
- According to still another aspect of the present invention, an apparatus for reducing the data size of a plurality of images containing mutually similar information includes: means for acquiring a plurality of images and selecting, from the plurality of images, a target image and a first reference image and a second reference image similar to the target image; means for generating a composite image corresponding to the target image based on the first reference image and the second reference image; means for generating side information, which is virtual visual field information at the position of the target image, based on at least one of the target image and the composite image; means for generating a gradient intensity image from the side information; means for determining a coefficient corresponding to the gradient intensity for each pixel position of the gradient intensity image and performing on the luminance value at each pixel position of the target image a modulo operation modulo the corresponding coefficient, thereby generating a remainder image composed of the remainders calculated at the respective pixel positions; and means for outputting, as information representing the target image, the first reference image, and the second reference image, the first reference image, the second reference image, and the remainder image.
- the data size of a plurality of images including information similar to each other can be more efficiently reduced.
- FIG. 1 is a diagram showing a stereoscopic video reproduction system 1 to which the data size reduction method according to the present embodiment is applied. FIGS. 2 and 3 are schematic diagrams showing examples of a plurality of images containing mutually similar information according to the present embodiment (a multi-view image and a video frame sequence, respectively). Further figures show block diagrams of the functional configurations of the encoding and decoding processes, an example of the target image input to the encoding process, the remainder image generated from that target image, and schematic diagrams for explaining the outline of the decoding process.
- FIG. 1 is a diagram showing a stereoscopic video reproduction system 1 to which the data size reduction method according to the present embodiment is applied.
- In the stereoscopic video reproduction system 1, a multi-viewpoint image is generated by capturing the subject 2 from a plurality of different viewpoints using a plurality of cameras 10 (a camera array), and a stereoscopic image is displayed on the stereoscopic display device 300 using the generated multi-viewpoint image.
- The stereoscopic video reproduction system 1 includes an information processing apparatus 100 that functions as an encoder to which images (parallax images) are input from the plurality of cameras 10, and an information processing apparatus 200 that functions as a decoder, decoding the data transmitted from the information processing apparatus 100 and outputting a multi-viewpoint image to the stereoscopic display device 300.
- the information processing apparatus 100 generates data suitable for storage and / or transmission by performing data compression processing as will be described later together with encoding processing.
- the information processing apparatus 100 wirelessly transmits data (compressed data) including information on the generated multi-viewpoint image using the connected wireless transmission apparatus 102.
- the wirelessly transmitted data is received by the wireless transmission device 202 connected to the information processing device 200 via the wireless base station 400 or the like.
- The stereoscopic display device 300 includes a projection screen mainly composed of a diffusion film 306 and a condenser lens 308, a projector array 304 that projects the multi-viewpoint image onto the display screen, and a controller 302 that controls the image projected by each projector of the projector array 304. The controller 302 causes the corresponding projector to project each parallax image included in the multi-viewpoint image output from the information processing apparatus 200.
- a viewer in front of the display screen is provided with a reproduced stereoscopic image of the subject 2.
- At this time, the parallax image that enters the viewer's field of view changes according to the relative position of the viewer and the display screen, so the viewer can have the experience of standing in front of the subject 2.
- Such a stereoscopic video reproduction system 1 is expected to be used in movie theaters and amusement facilities for general purposes, and, for industrial purposes, in telemedicine systems, industrial design systems, and electronic advertising systems such as public viewing.
- [B. Overview] Considering a multi-viewpoint image or a moving image generated by imaging the subject 2 with a camera array as shown in FIG. 1, redundant information may be included between images constituting the image.
- the data size reduction method according to the present embodiment considers such redundant information and generates data excluding it. That is, the data size reduction method according to the present embodiment is intended to reduce the data size of a plurality of images including information similar to each other.
- The data size reduction method according to this embodiment can be applied to multi-view data representation as described above, and can also be applied to distributed source coding. Likewise, it can be applied not only to video frame representation but also to distributed video frame coding. Note that the data size reduction method according to the present embodiment may be used alone or as part of preprocessing before data transmission.
- the virtual field of view at the position of the image to be converted to the remainder image is synthesized (estimated) using the image and the distance image that are maintained as they are.
- This distance image can also be used in decoding processing (processing for inversely converting the converted image / processing for returning to the original image format).
- the distance image for the image that is maintained as it is may be reconstructed using the image that is maintained as it is in the inverse transformation process.
- In the data size reduction method according to the present embodiment, side information, which is virtual visual field information at the position of the image to be converted, is used in generating the remainder image. As the side information, a synthesized virtual image (virtual field of view) may be used; that is, a virtual image may be synthesized using the images and distance images that are maintained as they are, and the synthesized virtual image may be used as the side information.
- the target image itself that is to be converted to the remainder image may be used as side information before the conversion to the remainder image.
- a synthesized virtual image and / or an image obtained by reducing the target image is used as side information.
- When the input is a video frame sequence, a frame obtained by interpolating or extrapolating adjacent frames can be used as the side information.
- From the side information, a gradient intensity image is generated. The value of each gradient intensity is an integer, and the modulo operation and the inverse modulo operation are executed using these integer values.
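- A tiny worked example may make this round trip concrete (the values below are invented for illustration):

```python
# Encode one 8-bit luminance value and recover it via inverse modulo.
P, D = 100, 16                 # original value and the integer coefficient (modulus)
m = P % D                      # remainder stored in the remainder image -> 4
s = 97                         # side information's estimate of P at this pixel
candidates = range(m, 256, D)  # inverse modulo candidates: 4, 20, ..., 244
P_rec = min(candidates, key=lambda c: abs(c - s))
assert P_rec == 100            # the candidate closest to 97 recovers the original
```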
- FIGS. 2 and 3 are schematic diagrams showing examples of a plurality of images containing mutually similar information according to the present embodiment: FIG. 2 shows a multi-view image and FIG. 3 shows a video frame sequence.
- As shown in FIG. 2, a subject is imaged using a plurality of cameras (a camera array) arranged close to one another, thereby generating a group of parallax images, each having a parallax corresponding to its camera position. The field of view of an image of interest (hereinafter also referred to as the "target image") partially overlaps the fields of view of other images (hereinafter also referred to as "reference images") captured by cameras at nearby positions, and because of this overlap, redundant information exists between the target image 170 and the reference images 172 and 182.
- information included in the target image 170 can be reconstructed from information included in the reference images 172 and 182 and some additional information.
- Therefore, the data size reduction method according to the present embodiment generates a remainder image 194 from which the information of the target image 170 can be reconstructed together with the information of the adjacent reference images 172 and 182, and outputs the remainder image 194 in place of the target image 170. The remainder image 194 supplements the part of the information in the target image 170 that is missing from the reference images 172 and 182, so redundancy can be eliminated compared with outputting the target image 170 as it is. Therefore, the data size can be reduced compared with outputting the target image 170 and the reference images 172 and 182 as they are.
- the target image 170 and the reference images 172 and 182 can be selected at arbitrary intervals as long as they include similar information.
- remainder images 194-1, 194-2, and 194-3 may be generated for each of the target images 170-1, 170-2, and 170-3. That is, for a pair of reference images, one or a plurality of target images can be converted into a remainder image.
- The same logic can be applied to a video frame sequence. That is, since the frame period of a normal moving image is sufficiently short, an appropriately selected adjacent frame will partly overlap the frame of interest in information content. Therefore, the data size can be reduced by taking the image of a certain frame as the target image 170 and generating a remainder image 194 with reference to the reference images 172 and 182 in adjacent frames.
- Here too, the target image 170 and the reference images 172 and 182 can be selected at arbitrary frame intervals as long as they contain mutually similar information. For example, as shown in FIG. 3, remainder images 194-1, 194-2, and 194-3 may be generated for the target images 170-1, 170-2, and 170-3, respectively. That is, for one pair of reference images, one or more target images can be converted into remainder images.
- the data size reduction method according to this embodiment can be used alone or as part of pre-processing before data transmission.
- In this specification, "imaging" refers not only to the process of obtaining an image of a subject using a real camera, but may also include the process of arranging an object in a virtual space such as computer graphics and rendering an image from a viewpoint set arbitrarily with respect to the arranged object (that is, virtual imaging in a virtual space).
- In the camera array that images the subject, the cameras can be arranged arbitrarily: for example, a one-dimensional array (cameras arranged on a straight line), a two-dimensional array (cameras arranged in a matrix), a circular array (cameras arranged along all or part of a circumference), a helical array (cameras arranged on a spiral), or a random arrangement (cameras arranged without any rule).
- FIG. 4 is a schematic diagram illustrating a hardware configuration of the information processing apparatus 100 that functions as the encoder illustrated in FIG. 1.
- FIG. 5 is a schematic diagram illustrating a hardware configuration of the information processing apparatus 200 that functions as the decoder illustrated in FIG. 1.
- the information processing apparatus 100 includes a processor 104, a memory 106, a camera interface 108, a hard disk 110, an input unit 116, a display unit 118, and a communication interface 120. Each of these components is configured to be able to perform data communication with each other via a bus 122.
- the processor 104 reads out a program stored in the hard disk 110 or the like, develops it in the memory 106 and executes it, thereby realizing the encoding process according to the present embodiment.
- the memory 106 functions as a working memory for the processor 104 to execute processing.
- the camera interface 108 is connected to a plurality of cameras 10 and acquires images captured by the respective cameras 10.
- the acquired image may be stored in the hard disk 110 or the memory 106.
- the hard disk 110 holds the image data 112 including the acquired image and the encoding program 114 for realizing the encoding process and the data compression process in a nonvolatile manner.
- the encoding program 114 is read and executed by the processor 104, thereby realizing an encoding process to be described later.
- the input unit 116 typically includes a mouse, a keyboard, etc., and accepts an operation from the user.
- the display unit 118 notifies the user of processing results and the like.
- the communication interface 120 is connected to the wireless transmission device 102 and the like, and outputs data output as a result of processing by the processor 104 to the wireless transmission device 102.
- information processing device 200 includes a processor 204, a memory 206, a projector interface 208, a hard disk 210, an input unit 216, a display unit 218, and a communication interface 220. Each of these components is configured to be able to perform data communication with each other via a bus 222.
- The processor 204, the memory 206, the input unit 216, and the display unit 218 are the same as the processor 104, the memory 106, the input unit 116, and the display unit 118 shown in FIG. 4, and therefore detailed description thereof will not be repeated.
- the projector interface 208 is connected to the stereoscopic display device 300 and outputs the multi-viewpoint image decoded by the processor 204 to the stereoscopic display device 300.
- the communication interface 220 is connected to the wireless transmission apparatus 202 or the like, receives image data transmitted from the information processing apparatus 100, and outputs it to the processor 204.
- the hard disk 210 holds the image data 212 including the decoded image and the decoding program 214 for realizing the decoding process in a nonvolatile manner.
- the decoding program 214 is read and executed by the processor 204, whereby a decoding process described later is realized.
- The hardware itself and the operating principles of the information processing apparatuses 100 and 200 shown in FIGS. 4 and 5 are general; the essential part for realizing the encoding and decoding processes according to the present embodiment is software (instruction codes) such as the encoding program 114 and the decoding program 214.
- The encoding program 114 and/or the decoding program 214 may be configured to execute processing using modules provided by an OS (Operating System). In that case, the encoding program 114 and/or the decoding program 214 do not themselves include some of those modules, but even such a case is included in the technical scope of the present invention.
- All or some of the functions of the information processing apparatus 100 and/or the information processing apparatus 200 may be realized using a dedicated integrated circuit such as an ASIC (Application Specific Integrated Circuit), or using programmable hardware such as an FPGA (Field-Programmable Gate Array) or a DSP (Digital Signal Processor).
- In a data storage application described later, a single information processing apparatus may execute both the encoding process and the decoding process.
- FIG. 6 is a flowchart showing an overall processing procedure of the data size reduction method according to the present embodiment.
- the data size reduction method shown in FIG. 6 mainly includes an encoding process, but practically includes a decoding process for reconstructing the original image from the encoded data.
- the encoding process and the decoding process are executed by different information processing apparatuses.
- a single information processing apparatus executes encoding processing and decoding processing. That is, encoding processing is executed as preprocessing before data storage, and decoding processing is executed at the time of data reconstruction. In any case, typically, the processing of each step is realized by the processor executing the program.
- steps S100 to S110 are executed as the encoding process.
- the processor 104 acquires a plurality of images including information similar to each other, stores the acquired images in a predetermined storage area, and sets one of the acquired images as a target image. Then, at least two images similar to the target image are set as reference images (step S100). That is, the processor 104 acquires a plurality of images including information similar to each other and selects a target image and two reference images similar to the target image from among the plurality of images. Subsequently, the processor 104 generates a composite image corresponding to the target image from the set two reference images (step S102).
- the processor 104 generates side information based on part or all of the target image and the composite image (step S104).
- the processor 104 generates side information that is virtual visual field information at the position of the target image based on at least one of the target image and the composite image.
- the side information includes information necessary for reconstructing the target image from the remainder image and the reference image.
- the processor 104 generates a gradient intensity image from the generated side information (step S106). Then, the processor 104 generates a remainder image of the target image from the generated gradient intensity image (step S108).
- the processor 104 outputs at least a remainder image and a reference image as information corresponding to the target image and the reference image (step S110). That is, the processor 104 outputs two reference images and a remainder image as information representing the target image and the two reference images.
- As the decoding process, steps S200 to S210 are executed. Specifically, the processor 204 acquires the information output as a result of the encoding process (step S200). That is, the processor 204 acquires at least the two output reference images and the remainder image.
- the processor 204 generates a composite image corresponding to the target image from the reference image included in the acquired information (step S202).
- the processor 204 generates side information from the acquired information (step S204). Then, the processor 204 generates a gradient intensity image from the generated side information (step S206).
- Subsequently, the processor 204 reconstructs the target image from the side information, the gradient intensity image, and the remainder image (step S208). Finally, the processor 204 outputs the reconstructed target image and the reference images (step S210).
- FIG. 7 is a block diagram showing a functional configuration related to the encoding process of the data size reduction method according to the present embodiment.
- The information processing apparatus 100 includes, as its functional configuration, an input image buffer 150, a distance information estimation unit 152, a distance information buffer 154, a subsampling unit 156, an image synthesis unit 158, a side information selection unit 160, a gradient intensity image generation unit 162, a coefficient selection unit 164, a Lookup table 166, and a modulo calculation unit 168.
- the image acquisition process shown in step S100 of FIG. 6 is realized by the input image buffer 150, the distance information estimation unit 152, and the distance information buffer 154 of FIG.
- the information processing apparatus 100 receives a multi-viewpoint image including a plurality of parallax images captured by a plurality of cameras 10 (camera array), and stores the multi-viewpoint image in the input image buffer 150.
- the information processing apparatus 100 may receive a series of videos including images arranged in frame order and store them in the input image buffer 150. These input images are processed.
- Hereinafter, the description focuses on a set consisting of one target image 170 and two reference images 172 and 182; however, depending on the required data size reduction rate, the processing capability of the information processing apparatus 100, and the like, the data size reduction method according to the present embodiment may be applied to an arbitrary number of such sets.
- For multi-viewpoint images, the target image 170 and the reference images 172 and 182 are preferably selected based on the baseline distance; that is, they are selected according to the parallax arising between them. For a video frame sequence (moving image), the target frame is selected based on the frame rate. That is, the process of step S100 in FIG. 6 includes selecting the target image 170 and the reference images 172 and 182 based on the baseline distance when the plurality of images are multi-viewpoint images (see FIG. 2), and selecting them based on the frame rate when the plurality of images are a video frame sequence (see FIG. 3).
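- As a simple illustration of such a selection, the following sketch assumes the input images are ordered by camera position (or frame order) and keeps every n-th image as a reference; the pattern and the names are illustrative, not prescribed by the patent:

```python
def split_targets_and_references(images, n=2):
    """Keep every n-th image as a reference; the images in between become
    target images to be converted into remainder images."""
    references = images[::n]
    targets = [im for i, im in enumerate(images) if i % n != 0]
    return targets, references
```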
- In the following, the target image 170 is denoted "VT", meaning the target view for representation; the reference image 172 located on the right side of the target image 170 is denoted "VR", meaning the original view at the right side of VT; and the reference image 182 located on the left side of the target image 170 is denoted "VL", meaning the original view at the left side of VT. Note that "right side" and "left side" are used for convenience of explanation and may not always match the actual camera arrangement.
- In the encoding process described later, a composite image 176 corresponding to the target image is generated using the distance images of the reference images 172 and 182; therefore, the distance image 174 of the reference image 172 and the distance image 184 of the reference image 182 are acquired by an arbitrary method. For example, when cameras that can acquire a distance image at the same time as an ordinary image are used, the field of view does not differ between a reference image and its corresponding distance image, so it is preferable, if possible, to acquire each distance image using such a camera array. In that case, the reference image and the corresponding distance image are input to the information processing apparatus simultaneously, and the distance information estimation unit 152 shown in FIG. 7 need not estimate them.
- In the following, the distance image 174 corresponding to the reference image 172 is denoted "DR", meaning the depth map at the location of VR, and the distance image 184 corresponding to the reference image 182 is denoted "DL", meaning the depth map at the location of VL.
- When the distance images are not input, the distance information estimation unit 152 generates the distance images 174 and 184 corresponding to the reference images 172 and 182, respectively.
- various methods based on stereo matching using energy optimization as disclosed in Non-Patent Document 2 can be employed. For example, optimization can be performed using a graph cut as disclosed in Non-Patent Document 3.
- the distance images 174 and 184 generated by the distance information estimation unit 152 are stored in the distance information buffer 154.
- Hereinafter, as a typical example, the description mainly concerns one set of input data consisting of the target image 170, the reference image 172 with its corresponding distance image 174, and the reference image 182 with its corresponding distance image 184.
- The composite image generation process shown in step S102 of FIG. 6 is realized by the image composition unit 158 of FIG. 7. More specifically, the image composition unit 158 uses the reference image 172 with its corresponding distance image 174 and the reference image 182 with its corresponding distance image 184 to generate a composite image 176 representing the virtual field of view at the position of the target image 170.
- The composite image 176 is expressed as "VT (virtual)", meaning the virtual view of the target field of view.
- the composite image 176 can be generated by using an interpolation process as disclosed in Non-Patent Document 6 and Non-Patent Document 7.
- FIG. 8 is a diagram showing the result of the composite image generation process according to the present embodiment. As illustrated in FIG. 8, a composite image 176 corresponding to the target image 170 is generated from the reference image 172 and the corresponding distance image 174, and the reference image 182 and the corresponding distance image 184.
- In the case of a video frame sequence, an image obtained by interpolation or extrapolation from the frames corresponding to the two reference images 172 and 182 can be used as the composite image 176.
- the side information generation process shown in step S104 of FIG. 6 is realized by the sub-sampling unit 156 and the side information selection unit 160 of FIG.
- The side information 190 is information on the virtual field of view at the position of the target image 170, and is generated using the target image 170, a reduced image of the target image 170, the composite image 176, an image obtained by combining the reduced image of the target image 170 with the composite image 176, or the like.
- the side information selection unit 160 appropriately selects input information (image) and outputs side information 190.
- the side information 190 is represented as “VT (side information)”.
- the subsampling unit 156 generates a reduced image 178 from the target image 170.
- the reduced image 178 is expressed as “VT (sub-sampled)” which means that it is obtained by sub-sampling the target image 170.
- Any method can be adopted for generating the reduced image 178 in the subsampling unit 156. For example, pixel information can be extracted from the target image 170 at predetermined intervals and output as the reduced image 178. Alternatively, the reduced image 178 may be generated using an arbitrary filtering process (for example, the nearest neighbor method, an interpolation method, the bicubic method, or a bilateral filter). Further, the target image 170 may be divided into regions of a predetermined size (for example, 2 × 2 pixels or 3 × 3 pixels), and a linear or non-linear interpolation process over the information of the plurality of pixels in each region can generate a reduced image 178 of any size.
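- A minimal sketch of one such reduction, assuming block averaging over non-overlapping regions (the block size and names are illustrative):

```python
import numpy as np

def subsample(target, block=2):
    """Average each block x block region of the target image into one pixel."""
    h, w = target.shape
    t = target[:h - h % block, :w - w % block].astype(np.float32)
    t = t.reshape(h // block, block, w // block, block)
    return t.mean(axis=(1, 3)).astype(np.uint8)
```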
- The method of generating the side information 190 can be arbitrarily selected from the following four methods (a) to (d).
- (a) When the target image 170 itself is used: the side information selection unit 160 outputs the input target image 170 as the side information 190 as it is.
- (b) When a reduced image of the target image 170 is used: the side information selection unit 160 outputs the reduced image 178 generated by the subsampling unit 156 as it is.
- (c) When a composite image generated from the reference images is used: the side information selection unit 160 outputs the composite image 176 generated by the image composition unit 158 as it is.
- (d) When a combination of the reduced image 178 and the composite image 176 is used as the side information 190: the side information selection unit 160 generates the side information 190 according to the method described below. That is, the side information generation process shown in step S104 of FIG. 6 includes a process of generating the side information 190 by combining the reduced image 178 of the target image 170 and the composite image 176.
- the side information selection unit 160 first calculates a weighting coefficient used for the combination.
- This weight coefficient is associated with the reliability distribution of the composite image 176 with respect to the reduced image 178 of the target image 170. That is, the weighting coefficient is determined based on the error (or the degree of coincidence) between the composite image 176 and the reduced image 178 (target image 170).
- the calculated error distribution corresponds to an inverted version of the reliability distribution, and it can be considered that the smaller the error is, the higher the reliability is. That is, since it is considered that the region having a larger error has a lower reliability of the composite image 176, more information of the reduced image 178 (target image 170) is assigned to such a region. On the other hand, since the region with a smaller error is considered to have higher reliability of the composite image 176, more information of the composite image 176 with lower redundancy is assigned.
- FIG. 9 is a schematic diagram for explaining an error distribution calculation process used for side information selection in the data size reduction method according to the present embodiment.
- More specifically, the side information selection unit 160 determines the error distribution R by taking, between corresponding pixels of the enlarged image 179 obtained by upsampling the reduced image 178 (VT (sub-sampled)) of the target image 170 and the composite image 176 (VT (virtual)), the absolute difference of the luminance values.
- The reason for upsampling the reduced image 178 is to make its size match that of the composite image 176 and to calculate the error under the same conditions as the processing performed when reconstructing the target image 170.
- That is, the side information selection unit 160 determines the error distribution based on the difference between the enlarged image 179 obtained by upsampling the reduced image 178 and the composite image 176.
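- A minimal sketch of this error distribution computation, assuming nearest-neighbor upsampling of the reduced image (an illustrative choice; any upsampling consistent with the reconstruction side would do):

```python
import numpy as np

def error_distribution(reduced, synthesized, block=2):
    """R = per-pixel absolute luminance difference between the upsampled
    reduced image and the composite image."""
    enlarged = np.repeat(np.repeat(reduced, block, axis=0), block, axis=1)
    enlarged = enlarged[:synthesized.shape[0], :synthesized.shape[1]]
    return np.abs(enlarged.astype(np.int32) - synthesized.astype(np.int32))
```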
- the side information selection unit 160 generates side information 190 by combining the reduced image 178 (or the enlarged image 179) and the composite image 176 based on the determined error distribution R.
- Various methods are conceivable as a method for generating the side information 190 using the calculated error distribution R. For example, the following processing examples can be adopted.
- (i) Processing example 1: binary weighted combination method
- the calculated error distribution R is classified into two regions using an arbitrary threshold value. Typically, a region where the error is higher than the threshold is a Hi region, and a region where the error is lower than the threshold is a Lo region.
- Then, information of the reduced image 178 (substantially, of the enlarged image 179) or of the composite image 176 is assigned to each pixel of the side information 190 according to whether it falls in the Hi region or the Lo region of the error distribution R. More specifically, the value at the corresponding pixel position of the enlarged image 179 obtained by upsampling the reduced image 178 is assigned to pixel positions of the side information 190 corresponding to the Hi region of the error distribution R, and the value at the corresponding pixel position of the composite image 176 is assigned to pixel positions corresponding to the Lo region.
- When the enlarged image 179 (the image obtained by upsampling the reduced image 178) is denoted SS and the composite image 176 is denoted SY, the value at pixel position (x, y) of the side information 190 (SI) is obtained using a predetermined threshold TH as follows: SI(x, y) = SS(x, y) if R(x, y) ≥ TH, and SI(x, y) = SY(x, y) otherwise.
- In other words, the side information selection unit 160 assigns information of the enlarged image 179 obtained by upsampling the reduced image 178 to regions where the error is relatively high, and assigns information of the composite image 176 to regions where the error is relatively low.
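- A minimal sketch of this binary combination (the threshold TH is an illustrative value):

```python
import numpy as np

def combine_binary(SS, SY, R, TH=10):
    """SI takes the upsampled reduced image SS where the error R is at or
    above TH, and the composite image SY elsewhere."""
    return np.where(R >= TH, SS, SY)
```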
- (ii) Processing example 2: discrete weighted combination method
- In this processing example, the calculated error distribution R is classified into n types of regions using (n − 1) threshold values. Numbering the classified regions k = 1, 2, ..., n from the lowest error, the value at pixel position (x, y) of the side information 190 (SI) is obtained using the region number k, for example as SI(x, y) = ((k − 1) · SS(x, y) + (n − k) · SY(x, y)) / (n − 1). In other words, the side information selection unit 160 assigns more information of the enlarged image 179 obtained by upsampling the reduced image 178 to regions where the error is relatively high, and more information of the composite image 176 to regions where the error is relatively low: the larger the error, the more dominant the enlarged image 179 (reduced image 178) becomes, and the smaller the error, the more dominant the composite image 176 becomes.
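- A minimal sketch under the same linear-weighting assumption as the example formula above (the thresholds are illustrative):

```python
import numpy as np

def combine_discrete(SS, SY, R, thresholds=(5, 15, 30)):
    """Blend SS and SY with a weight that grows with the error-region number k."""
    n = len(thresholds) + 1
    k = np.digitize(R, thresholds) + 1   # region number 1..n, from lowest error
    w = (k - 1) / (n - 1)                # 0 for k = 1 (pure SY), 1 for k = n (pure SS)
    return (w * SS + (1.0 - w) * SY).astype(np.uint8)
```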
- the gradient intensity image generation process shown in step S106 of FIG. 6 is realized by the gradient intensity image generation unit 162 of FIG. More specifically, the gradient intensity image generation unit 162 generates a gradient intensity image 192 indicating a change in the image space from the side information 190.
- the gradient intensity image 192 means an image in which a region having a larger texture change in the side information 190 has a higher luminance.
- the gradient intensity image 192 is represented as “VT (gradient)”.
- Arbitrary filtering processing can be used as the generation processing of the gradient intensity image 192.
- the value of each pixel of the gradient intensity image 192 is normalized so as to take any integer value within a predetermined range (for example, 0 to 255).
- the gradient intensity image 192 is generated by the following processing procedure.
- the side information 190 is resized to the image size of the output remainder image.
- Then, a gradient intensity image is generated for each color component constituting the side information 190. That is, the generation process of the gradient intensity image 192 shown in step S106 of FIG. 6 includes sequentially applying edge detection processing, smoothing processing, a series of morphological processings, and smoothing processing to the grayscale image of each color component constituting the side information 190. By such processing, as many grayscale images as there are color components in the side information 190 are generated, and a gradient intensity image is generated for each grayscale image.
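- A minimal sketch of one such pipeline, assuming scipy is available; the specific filters (Sobel, Gaussian, grey dilation) and their parameters are illustrative stand-ins for the edge detection, smoothing, and morphological steps named above:

```python
import numpy as np
from scipy import ndimage

def gradient_intensity(channel):
    """channel: one color component of the side information, as a 2D float array."""
    gx = ndimage.sobel(channel, axis=1)               # edge detection (x direction)
    gy = ndimage.sobel(channel, axis=0)               # edge detection (y direction)
    grad = np.hypot(gx, gy)
    grad = ndimage.gaussian_filter(grad, sigma=1.0)   # smoothing
    grad = ndimage.grey_dilation(grad, size=(3, 3))   # one morphological operation
    grad = ndimage.gaussian_filter(grad, sigma=1.0)   # smoothing again
    span = grad.max() - grad.min()                    # normalize to integers in 0..255
    return (255 * (grad - grad.min()) / (span + 1e-9)).astype(np.uint8)
```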
- the processing procedure shown here is an example, and the processing content and processing procedure of Gaussian smoothing processing and morphological processing can be designed as appropriate.
- a process for generating a pseudo gradient intensity image may be employed. That is, any filtering process may be adopted as long as an image in which a region having a larger texture change in the side information 190 has a higher luminance can be generated.
- the remainder image generation process shown in step S108 of FIG. 6 is realized by the coefficient selection unit 164, the Lookup table 166, and the modulo calculation unit 168 of FIG.
- The remainder image 194 stores, at each pixel position, the remainder obtained by a modulo operation on the luminance value of the target image 170 at that position.
- a modulus coefficient D is selected according to the value of each pixel position in the gradient intensity image 192.
- the coefficient selection unit 164 selects the coefficient D according to the value of each pixel position in the gradient intensity image 192.
- That is, the remainder image generation process shown in step S108 of FIG. 6 includes determining the coefficient D corresponding to the gradient intensity for each pixel position of the gradient intensity image 192, and performing on the luminance value at each pixel position of the target image 170 a modulo operation modulo the corresponding coefficient D, thereby generating a remainder image 194 composed of the remainders calculated at the respective pixel positions.
- the coefficient D is determined nonlinearly with respect to the gradient intensity image 192 in the present embodiment. Specifically, the coefficient D corresponding to each pixel position of the gradient intensity image 192 is selected with reference to the Lookup table 166. Here, the coefficient D is determined for each pixel position of each color component included in the gradient intensity image 192.
- the remainder image generation process shown in step S108 of FIG. 6 includes a process of selecting a coefficient D corresponding to the gradient strength with reference to a predetermined correspondence relationship. At this time, for each pixel position of the gradient intensity image 192, a coefficient D is determined for each color component.
- FIG. 10 is a diagram showing an example of the Lookup table 166 used for generating the remainder image according to the present embodiment.
- the coefficient D is discretized in a plurality of stages, and a coefficient D corresponding to the value of each pixel position in the gradient intensity image 192 is selected.
- The Lookup table 166 shown in FIG. 10(a) is designed so that the modulus used in the modulo operation is a power of 2. Assigning the coefficient D in this way allows the modulo operation to be sped up.
- the Lookup table 166 can be arbitrarily designed. For example, a Lookup table 166 having a smaller number of stages as shown in FIG. 10B may be adopted. Further, the Lookup table is not necessarily used, and the coefficient D may be determined using a predetermined function or the like.
- the coefficient selection unit 164 selects the coefficient D for each color component for each pixel position of the gradient intensity image 192.
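- A minimal sketch of the per-pixel coefficient selection; the stage boundaries and power-of-2 moduli below are illustrative assumptions in the spirit of FIG. 10(a), not the actual table of the patent:

```python
import numpy as np

GRAD_BOUNDS = np.array([16, 32, 64, 128])  # assumed stage boundaries on the gradient intensity
COEFFS = np.array([2, 4, 8, 16, 32])       # assumed moduli D, powers of 2 for fast modulo

def select_coefficients(grad):
    """grad: 2D uint8 gradient intensity image -> per-pixel modulus D."""
    return COEFFS[np.digitize(grad, GRAD_BOUNDS)]
```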
- The modulo operation unit 168 uses the coefficient D determined according to the gradient intensity image 192 to perform a modulo operation on each pixel of the target image 170, thereby generating the remainder image 194. Writing the luminance value P at a pixel as P = q × D + m, where q is the quotient and m is the remainder (0 ≤ m < D), the remainder m = P mod D becomes the value at the corresponding pixel position of the remainder image 194.
- the remainder image 194 is expressed as “VT (Remainder)” or “Rem”.
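- A minimal sketch of the remainder computation itself (names are illustrative):

```python
import numpy as np

def remainder_image(target, coeffs):
    """Per pixel: with P = q * D + m, store the remainder m = P mod D."""
    return (target.astype(np.int32) % coeffs).astype(np.uint8)
```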
- If necessary, the remainder image 194 may be resized to an arbitrary size using a known downsampling or upsampling method. FIGS. 11 and 12 are diagrams showing the results of the remainder image generation process according to the present embodiment.
- FIG. 11 shows an example in which the gradient intensity image 192 is generated from the composite image 176; based on the gradient intensity image 192, the coefficient D at each pixel position for each color component is selected with reference to the Lookup table 166. Then, as shown in FIG. 12, a modulo operation modulo the selected coefficient is performed on the target image 170, and as a result the remainder image 194 is generated.
- the reference images 172 and 182 that have been input and the remainder image 194 that is the processing result are stored.
- the distance image 174 of the reference image 172 and the distance image 184 of the reference image 182 may be output.
- a reduced image 178 may be output together with the remainder image 194.
- Information (image) added as these options is appropriately selected according to the processing content in the decoding process.
- The above description has focused on a set consisting of one target image 170 and two reference images 172 and 182, but the same processing is executed for all the target images set for the plurality of input images (multi-viewpoint images or video frame sequences) and for the reference images corresponding to the respective target images.
- FIG. 13 shows an example of the target image 170 input to the encoding process of the data size reduction method according to the present embodiment.
- FIG. 14 shows an example of the remainder image 194 generated from the target image 170 shown in FIG. Even in the case of a high-definition target image 170 as shown in FIG. 13, as shown in FIG. 14, many portions of the remainder image 194 are black, and it can be seen that the amount of information is reduced.
- FIG. 15 is a block diagram showing a functional configuration related to the decoding process of the data size reduction method according to the present embodiment.
- FIG. 16 is a schematic diagram for explaining the outline of the decoding process of the data size reduction method according to the present embodiment. The notation in FIG. 15 follows the notation in FIG. 7.
- The information processing apparatus 200 includes, as its functional configuration, an input data buffer 250, a distance information estimation unit 252, a distance information buffer 254, an image composition unit 258, a side information selection unit 260, a gradient intensity image generation unit 262, a coefficient selection unit 264, a Lookup table 266, and an inverse modulo calculation unit 268.
- The information processing apparatus 200 reconstructs the original target image 170 using the encoded information (the reference images 172 and 182 and the remainder image 194). For example, as illustrated in FIG. 16, the reference images 172 and 182 and the remainder images 194 are arranged alternately, and the information processing apparatus 200 uses, for each remainder image 194, the corresponding reference images 172 and 182 in the decoding process to restore a reconstructed image 294 corresponding to the original target image. As shown in FIG. 16, one reference image may be associated with a plurality of target images.
- The data acquisition process shown in step S200 of FIG. 6 is realized by the input data buffer 250, the distance information estimation unit 252, and the distance information buffer 254 of FIG. 15. Specifically, the information processing apparatus 200 receives at least the reference images 172 and 182 and the remainder image 194 generated by the encoding process described above. As described above, when the distance images 174 and 184 corresponding to the reference images 172 and 182 are transmitted together, these distance images are also used in the decoding process.
- When the distance images 174 and 184 are not input, the distance information estimation unit 252 generates the distance images 174 and 184 corresponding to the reference images 172 and 182, respectively. Since the distance image estimation method in the distance information estimation unit 252 is the same as that in the distance information estimation unit 152 (FIG. 7) described above, its detailed description will not be repeated.
- The distance images 174 and 184 generated by the distance information estimation unit 252 are stored in the distance information buffer 254.
- The composite image generation process shown in step S202 of FIG. 6 is realized by the image composition unit 258 of FIG. 15. More specifically, the image composition unit 258 uses the reference image 172 and the corresponding distance image 174, as well as the reference image 182 and the corresponding distance image 184, to generate a composite image 276 representing a virtual field of view at the position of the target image 170.
- The method of generating a composite image in the image composition unit 258 is the same as the method in the image composition unit 158 (FIG. 7) described above, and its detailed description will not be repeated.
- When the received plurality of images form a video frame sequence, the frame information corresponding to the target image 170 can be generated by performing interpolation or extrapolation from the information of the frames corresponding to the two reference images 172 and 182.
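- As a sketch of the simplest frame case (assuming plain linear blending; the text itself leaves the choice of interpolation or extrapolation method open), a virtual frame between the two reference frames could be approximated as follows.

```python
import numpy as np

def interpolate_frame(ref_a: np.ndarray, ref_b: np.ndarray,
                      t: float = 0.5) -> np.ndarray:
    # Linear blend of two reference frames; t is the temporal position of
    # the target frame between them (0 = ref_a, 1 = ref_b). A practical
    # system would use motion-compensated interpolation instead.
    blend = (1.0 - t) * ref_a.astype(np.float32) + t * ref_b.astype(np.float32)
    return blend.astype(ref_a.dtype)
```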
- The side information generation process shown in step S204 of FIG. 6 is realized by the side information selection unit 260 of FIG. 15. More specifically, the side information selection unit 260 generates the side information 290 based on the reduced image 178 (when included in the input data), the composite image 276, or a combination thereof.
- The side information selection unit 260 generates the side information 290 based on the composite image 276 generated by the image composition unit 258.
- Alternatively, the side information selection unit 260 may use the reduced image 178 as the side information 290, or may generate the side information 290 from a combination of the reduced image 178 and the composite image 276.
- As the combination method, the binarized weighted combination method, the discretized weighted combination method, the continuous weighted combination method, and the like using the error distribution described above can be adopted. Since these processes have been described above, their detailed description will not be repeated.
- The gradient intensity image generation process shown in step S206 of FIG. 6 is realized by the gradient intensity image generation unit 262 of FIG. 15. More specifically, the gradient intensity image generation unit 262 generates, from the side information 290, a gradient intensity image 292 indicating changes in the image space. Since the gradient intensity image generation method in the gradient intensity image generation unit 262 is the same as that in the gradient intensity image generation unit 162 (FIG. 7) described above, its detailed description will not be repeated.
- The target image reconstruction process shown in step S208 of FIG. 6 is realized by the coefficient selection unit 264, the Lookup table 266, and the inverse modulo calculation unit 268 of FIG. 15.
- The luminance value at each pixel position of the target image is estimated by inverse modulo calculation from the value (remainder m) at the corresponding pixel position of the remainder image 194 included in the input data and the coefficient D used when the remainder image 194 was generated.
- The coefficient D used when generating the remainder image 194 in the encoding process is estimated (selected) based on the gradient intensity image 292. That is, the coefficient selection unit 264 selects the coefficient D according to the value at each pixel position of the gradient intensity image 292.
- The coefficient D at each pixel position is selected with reference to the Lookup table 266.
- The Lookup table 266 is the same as the Lookup table 166 (FIG. 10) used in the encoding process.
- The coefficient selection unit 264 refers to the Lookup table 266 and selects the coefficient D for each color component at each pixel position of the gradient intensity image 292.
- The candidate values C(q′) are calculated as C(q′) = q′ × D + m for successive integers q′.
- Among these candidates, the candidate value C(1), which has the smallest difference from the corresponding value SI of the side information 290, is selected, and the corresponding luminance value of the reconstructed image 294 is determined to be “11”. In this way, the luminance value at each pixel position of the reconstructed image 294 is determined for each color component.
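- A minimal sketch of this candidate search (plain Python with illustrative names) reproduces the worked numbers above: with D = 8, remainder m = 3, and side-information value SI = 8, the candidate closest to SI is 11.

```python
def reconstruct_value(m: int, D: int, si: float, max_value: int = 255) -> int:
    # Inverse modulo: enumerate the candidates C(q') = q' * D + m and keep
    # the one whose difference from the side-information value si is smallest.
    candidates = [q * D + m for q in range((max_value - m) // D + 1)]
    return min(candidates, key=lambda c: abs(c - si))

assert reconstruct_value(m=3, D=8, si=8) == 11
```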
- That is, the target image reconstruction process shown in step S208 of FIG. 6 includes determining, for each pixel position of the gradient intensity image 292, the coefficient D corresponding to the gradient intensity, and determining, among the candidate values C(q′) calculated by inverse modulo calculation with the determined coefficient D as the modulus and the value at the corresponding pixel position of the remainder image 194 as the remainder m, the candidate value with the smallest difference from the value at the corresponding pixel position of the side information 290 as the luminance value at the corresponding pixel position of the target image 170.
- The reconstructed image 294 obtained as a result of this process, together with the input reference images 172 and 182, is output and/or stored.
- The distance image 174 of the reference image 172 and the distance image 184 of the reference image 182 may also be output.
- The reconstructed image 294 may be resized to an arbitrary size according to the difference in size from the original target image 170 and/or the remainder image 194.
- The description has focused on a set of one target image 170 and two reference images 172 and 182; however, the same processing is executed for all target images set for the plurality of input images (multi-viewpoint images or a video frame sequence) and for the reference images corresponding to each target image.
- The present embodiment can be applied to various image processing system applications, such as a data representation for multi-viewpoint images or a new data format applied before image compression.
- According to the present embodiment, a more efficient representation is possible by using a remainder-based data format for large-scale multi-viewpoint images.
- The converted data format can also be used on devices with limited power capacity, such as mobile devices. According to the present embodiment, therefore, the possibility of providing three-dimensional images more easily on mobile devices and low-power devices can be increased.
Abstract
Description
First, to facilitate understanding of the data size reduction method according to the present embodiment, a typical application example will be described. Note that the application range of the data size reduction method according to the present embodiment is not limited to the configuration shown below, and the method can be applied to any configuration.
Considering multi-viewpoint images or video generated by imaging a subject 2 with a camera array as shown in FIG. 1, the images constituting them can contain redundant information. The data size reduction method according to the present embodiment takes such redundant information into account and generates data from which it has been eliminated. That is, the data size reduction method according to the present embodiment aims to reduce the data size of a plurality of images containing mutually similar information.
Next, a hardware configuration example for realizing the data size reduction method according to the present embodiment will be described. FIG. 4 is a schematic diagram showing the hardware configuration of the information processing apparatus 100 functioning as the encoder shown in FIG. 1. FIG. 5 is a schematic diagram showing the hardware configuration of the information processing apparatus 200 functioning as the decoder shown in FIG. 1.
Next, the overall processing procedure of the data size reduction method according to the present embodiment will be described. FIG. 6 is a flowchart showing the overall processing procedure of the data size reduction method according to the present embodiment. The data size reduction method shown in FIG. 6 consists mainly of an encoding process, but in practice it also includes a decoding process for reconstructing the original images from the encoded data. In a stereoscopic video reproduction system 1 such as that shown in FIG. 1, the encoding process and the decoding process are executed by different information processing apparatuses. In a server system for storing images, on the other hand, a single information processing apparatus executes both: the encoding process is executed as preprocessing before data storage, and the decoding process is executed when the data is reconstructed. In either case, the processing of each step is typically realized by a processor executing a program.
Next, details of the encoding process (steps S100 to S110 in FIG. 6) of the data size reduction method according to the present embodiment will be described.
FIG. 7 is a block diagram showing the functional configuration related to the encoding process of the data size reduction method according to the present embodiment. Referring to FIG. 7, the information processing apparatus 100 includes, as its functional configuration, an input image buffer 150, a distance information estimation unit 152, a distance information buffer 154, a subsampling unit 156, an image composition unit 158, a side information selection unit 160, a gradient intensity image generation unit 162, a coefficient selection unit 164, a Lookup table 166, and a modulo operation unit 168.
The image acquisition process shown in step S100 of FIG. 6 is realized by the input image buffer 150, the distance information estimation unit 152, and the distance information buffer 154 of FIG. 7. Specifically, the information processing apparatus 100 receives a multi-viewpoint image consisting of a plurality of parallax images captured by a plurality of cameras 10 (camera array) and stores it in the input image buffer 150. Alternatively, the information processing apparatus 100 may receive a series of video frames arranged in frame order and store it in the input image buffer 150. These input images become the processing targets. For simplicity, the description focuses on a set of one target image 170 and two reference images 172 and 182, but the data size reduction method according to the present embodiment may be applied to an arbitrary number of sets according to the required data size reduction rate, the processing capacity of the information processing apparatus 100, and the like.
The composite image generation process shown in step S102 of FIG. 6 is realized by the image composition unit 158 of FIG. 7. More specifically, the image composition unit 158 uses the reference image 172 and the corresponding distance image 174, as well as the reference image 182 and the corresponding distance image 184, to generate a composite image 176 representing a virtual field of view at the position of the target image 170. In FIG. 7, this composite image 176 is denoted “VT (virtual)”, meaning the virtual field of view of the target view. For such image composition, methods such as those disclosed in Non-Patent Documents 4 and 5 can be adopted. When the accuracy of the distance images is low, the composite image 176 can be generated by using interpolation processing such as that disclosed in Non-Patent Documents 6 and 7.
The side information generation process shown in step S104 of FIG. 6 is realized by the subsampling unit 156 and the side information selection unit 160 of FIG. 7. As described above, the side information 190 is information on a virtual field of view at the position of the target image 170, and is generated using the target image 170, a reduced image of the target image 170, the composite image 176, an image combining the reduced image of the target image 170 with the composite image 176, or the like. The side information selection unit 160 appropriately selects from the input information (images) and outputs the side information 190. In FIG. 7, the side information 190 is denoted “VT (side information)”.
The side information selection unit 160 outputs the input target image 170 as the side information 190 as it is. Since the target image 170 itself cannot be used in the decoding process, however, a composite image generated from the reference images is used as the side information there.
Alternatively, the side information selection unit 160 outputs the reduced image 178 generated by the subsampling unit 156 as it is.
Alternatively, the side information selection unit 160 outputs the composite image 176 generated by the image composition unit 158 as it is.
Alternatively, the side information selection unit 160 generates the side information 190 according to a method described below. That is, the side information generation process shown in step S104 of FIG. 6 includes a process of generating the side information 190 by combining the reduced image 178 of the target image 170 with the composite image 176.
In this processing example, the calculated error distribution R is classified into two regions using an arbitrary threshold. Typically, a region where the error is higher than the threshold is set as the Hi region, and a region where the error is lower than the threshold is set as the Lo region. Each pixel of the side information 190 is then assigned information from the reduced image 178 (substantially, the enlarged image 179) or the composite image 176 according to the Hi and Lo regions of the error distribution R. More specifically, a pixel position of the side information 190 corresponding to the Hi region of the error distribution R is assigned the value at the corresponding pixel position of the enlarged image 179 obtained by up-sampling the reduced image 178, and a pixel position corresponding to the Lo region of the error distribution R is assigned the value at the corresponding pixel position of the composite image 176.
SI(x, y) = value at (x, y) of the enlarged image 179 {if R(x, y) ≥ TH}
SI(x, y) = SY(x, y) {if R(x, y) < TH}
(where SY denotes the composite image 176 and TH the threshold)
In this way, in this processing example, the side information selection unit 160 assigns the information of the enlarged image 179 obtained by up-sampling the reduced image 178 to regions where the error is relatively high, and assigns the information of the composite image 176 to regions where the error is relatively low.
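A sketch of this binarized weighted combination (NumPy; the array names and the threshold th are illustrative) could look like the following.

```python
import numpy as np

def side_info_binarized(enlarged: np.ndarray, composite: np.ndarray,
                        error: np.ndarray, th: float) -> np.ndarray:
    # Hi region (error >= th): take the enlarged (up-sampled) image 179.
    # Lo region (error <  th): take the composite image 176.
    return np.where(error >= th, enlarged, composite)
```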
In this processing example, the calculated error distribution R is classified into n types of regions using (n − 1) thresholds. If the classified regions are numbered k = 1, 2, …, n from the lowest error, the value of the side information 190 (SI) at pixel position (x, y) is given as follows using the region number k.
In this way, in this processing example, the side information selection unit 160 assigns the information of the enlarged image 179 obtained by up-sampling the reduced image 178 to regions where the error is relatively high, and assigns the information of the composite image 176 to regions where the error is relatively low.
In this processing example, the reciprocal of the error at each pixel position is regarded as a weighting coefficient, and the side information 190 is calculated using it. Specifically, the value SI(x, y) of the side information 190 at pixel position (x, y) is as follows.
In this way, in this processing example, the side information selection unit 160 assigns the information of the enlarged image 179 obtained by up-sampling the reduced image 178 to regions where the error is relatively high, and assigns the information of the composite image 176 to regions where the error is relatively low. In this processing example, the higher the error, the more dominant the enlarged image 179 (reduced image 178) becomes, and the lower the error, the more dominant the composite image 176 becomes.
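One plausible reading of this continuous weighting (a sketch, assuming the reciprocal of the error weights the composite image, which matches the stated behavior that a low error favors the composite image 176 and a high error favors the enlarged image 179):

```python
import numpy as np

def side_info_continuous(enlarged: np.ndarray, composite: np.ndarray,
                         error: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # The reciprocal of the per-pixel error weights the composite image:
    # a small error makes the composite image dominate, a large error
    # makes the enlarged image dominate (eps avoids division by zero).
    w = 1.0 / (error.astype(np.float64) + eps)
    return (enlarged + w * composite) / (1.0 + w)
```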
The gradient intensity image generation process shown in step S106 of FIG. 6 is realized by the gradient intensity image generation unit 162 of FIG. 7. More specifically, the gradient intensity image generation unit 162 generates, from the side information 190, a gradient intensity image 192 indicating changes in the image space. The gradient intensity image 192 is an image in which regions with larger texture change in the side information 190 have larger luminance. In FIG. 7, the gradient intensity image 192 is denoted “VT (gradient)”. Any filtering process can be used to generate the gradient intensity image 192. The value of each pixel of the gradient intensity image 192 is normalized to an integer value within a predetermined range (for example, 0 to 255).
(a) Resize the side information 190 to the image size of the remainder image to be output.
(d2) Gaussian smoothing (one or more passes) (or median filtering)
(d3) A series of morphological operations (for example, dilation (one or more passes), erosion (one or more passes), dilation (one or more passes))
(d4) Gaussian smoothing (one or more passes)
Through the processing described above, a gradient intensity image is generated for each color component constituting the side information 190. That is, the generation process of the gradient intensity image 192 shown in step S106 of FIG. 6 includes applying, in order, edge detection, smoothing, a series of morphological operations, and smoothing to the grayscale image of each color component constituting the side information 190. Through such processing, as many grayscale images as there are color components in the side information 190 are generated, and a gradient intensity image is generated for each grayscale image.
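A hedged sketch of this per-component pipeline, using OpenCV as one possible toolkit (the choice of edge detector, kernel sizes, and iteration counts are illustrative assumptions, not values from the patent):

```python
import cv2
import numpy as np

def gradient_intensity(component: np.ndarray) -> np.ndarray:
    # component: the grayscale image of one color component of the side
    # information. Steps follow the order in the text: edge detection,
    # smoothing, a series of morphological operations, then smoothing.
    g = cv2.Canny(component, 50, 150)
    g = cv2.GaussianBlur(g, (5, 5), 0)
    kernel = np.ones((3, 3), np.uint8)
    g = cv2.dilate(g, kernel)
    g = cv2.erode(g, kernel)
    g = cv2.dilate(g, kernel)
    g = cv2.GaussianBlur(g, (5, 5), 0)
    # Normalize to integer values in a predetermined range (here 0-255).
    return cv2.normalize(g, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```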
The remainder image generation process shown in step S108 of FIG. 6 is realized by the coefficient selection unit 164, the Lookup table 166, and the modulo operation unit 168 of FIG. 7. The remainder image 194 represents the remainders obtained by the modulo operation, in which the coefficient D serving as the modulus is selected according to the value at each pixel position of the gradient intensity image 192. The coefficient selection unit 164 selects the coefficient D according to the value at each pixel position of the gradient intensity image 192.
A processing example of the encoding process of the data size reduction method according to the present embodiment is shown below.
Next, details of the decoding process (steps S200 to S210 in FIG. 6) of the data size reduction method according to the present embodiment will be described. Since the decoding process is basically the reverse of the encoding process, detailed description of similar processing will not be repeated.
FIG. 15 is a block diagram showing the functional configuration related to the decoding process of the data size reduction method according to the present embodiment. FIG. 16 is a schematic diagram for explaining the outline of the decoding process of the data size reduction method according to the present embodiment. The notation in FIG. 15 follows the notation in FIG. 7.
The data acquisition process shown in step S200 of FIG. 6 is realized by the input data buffer 250, the distance information estimation unit 252, and the distance information buffer 254 of FIG. 15. Specifically, the information processing apparatus 200 receives at least the reference images 172 and 182 and the remainder image 194 generated by the encoding process described above. As described above, when the distance images 174 and 184 corresponding to the reference images 172 and 182 are transmitted together, these distance images are also used in the decoding process.
The composite image generation process shown in step S202 of FIG. 6 is realized by the image composition unit 258 of FIG. 15. More specifically, the image composition unit 258 uses the reference image 172 and the corresponding distance image 174, as well as the reference image 182 and the corresponding distance image 184, to generate a composite image 276 representing a virtual field of view at the position of the target image 170. Since the composite image generation method in the image composition unit 258 is the same as that in the image composition unit 158 (FIG. 7) described above, its detailed description will not be repeated. When the received plurality of images are a video frame sequence (moving image), the frame information corresponding to the target image 170 can be generated by performing interpolation or extrapolation from the information of the frames corresponding to the two reference images 172 and 182.
The side information generation process shown in step S204 of FIG. 6 is realized by the side information selection unit 260 of FIG. 15. More specifically, the side information selection unit 260 generates the side information 290 based on the reduced image 178 (when included in the input data), the composite image 276, or a combination thereof.
The gradient intensity image generation process shown in step S206 of FIG. 6 is realized by the gradient intensity image generation unit 262 of FIG. 15. More specifically, the gradient intensity image generation unit 262 generates, from the side information 290, a gradient intensity image 292 indicating changes in the image space. Since the gradient intensity image generation method in the gradient intensity image generation unit 262 is the same as that in the gradient intensity image generation unit 162 (FIG. 7) described above, its detailed description will not be repeated.
The target image reconstruction process shown in step S208 of FIG. 6 is realized by the coefficient selection unit 264, the Lookup table 266, and the inverse modulo calculation unit 268 of FIG. 15. The luminance value at each pixel position of the target image is estimated by inverse modulo calculation from the value (remainder m) at the corresponding pixel position of the remainder image 194 included in the input data and the coefficient D used when the remainder image 194 was generated.
Candidate value C(1) = 1 × 8 + 3 = 11 (difference from SI = 3)
Candidate value C(2) = 2 × 8 + 3 = 19 (difference from SI = 11)
…
Among these candidate values C(q′), the candidate value C(1), which has the smallest difference from the corresponding value SI of the side information 290, is selected, and the corresponding luminance value of the reconstructed image 294 is determined to be “11”. In this way, the luminance value at each pixel position of the reconstructed image 294 is determined for each color component.
According to the present embodiment, more appropriate side information can be generated than in conventional methods, and using the side information according to the present embodiment improves the quality of the reconstructed image.
Claims (8)
- A method of reducing the data size of a plurality of images containing mutually similar information, the method comprising the steps of:
acquiring the plurality of images, and selecting, from the plurality of images, a target image as well as a first reference image and a second reference image similar to the target image;
generating a composite image corresponding to the target image based on the first reference image and the second reference image;
generating side information, which is information on a virtual field of view at the position of the target image, based on at least one of the target image and the composite image;
generating a gradient intensity image from the side information;
determining, for each pixel position of the gradient intensity image, a coefficient according to the gradient intensity, and performing a modulo operation on the luminance value at each pixel position of the target image with the corresponding coefficient as the modulus, thereby generating a remainder image consisting of the remainders calculated at each pixel position by the modulo operation; and
outputting the first reference image, the second reference image, and the remainder image as information representing the target image, the first reference image, and the second reference image.
- The method according to claim 1, wherein the step of generating the side information includes the step of generating the side information by combining a reduced image of the target image with the composite image.
- The method according to claim 1 or 2, wherein the step of generating the gradient intensity image includes the step of generating an image in which regions with larger texture change in the side information have larger luminance.
- The method according to any one of claims 1 to 3, wherein the step of generating the remainder image includes the step of selecting the coefficient corresponding to the gradient intensity with reference to a predetermined correspondence relationship.
- The method according to any one of claims 1 to 4, wherein the selecting step includes the steps of:
selecting the target image as well as the first reference image and the second reference image based on a baseline distance when the plurality of images are multi-viewpoint images; and
selecting the target image as well as the first reference image and the second reference image based on a frame rate when the plurality of images are a video frame sequence.
- The method according to any one of claims 1 to 5, further comprising the steps of:
acquiring the output first reference image, second reference image, and remainder image;
generating a composite image corresponding to the target image based on the first reference image and the second reference image;
generating side information from the acquired information, and generating a gradient intensity image from the side information; and
determining, for each pixel position of the gradient intensity image, a coefficient according to the gradient intensity, and determining, among the candidate values calculated by an inverse modulo operation with the determined coefficient as the modulus and the value at the corresponding pixel position of the remainder image as the remainder, the candidate value having the smallest difference from the value at the corresponding pixel position of the side information as the luminance value at the corresponding pixel position of the target image.
- A program for reducing the data size of a plurality of images containing mutually similar information, the program causing a computer to execute the steps of:
acquiring the plurality of images, and selecting, from the plurality of images, a target image as well as a first reference image and a second reference image similar to the target image;
generating a composite image corresponding to the target image based on the first reference image and the second reference image;
generating side information, which is information on a virtual field of view at the position of the target image, based on at least one of the target image and the composite image;
generating a gradient intensity image from the side information;
determining, for each pixel position of the gradient intensity image, a coefficient according to the gradient intensity, and performing a modulo operation on the luminance value at each pixel position of the target image with the corresponding coefficient as the modulus, thereby generating a remainder image consisting of the remainders calculated at each pixel position by the modulo operation; and
outputting the first reference image, the second reference image, and the remainder image as information representing the target image, the first reference image, and the second reference image.
- An apparatus for reducing the data size of a plurality of images containing mutually similar information, the apparatus comprising:
means for acquiring the plurality of images and selecting, from the plurality of images, a target image as well as a first reference image and a second reference image similar to the target image;
means for generating a composite image corresponding to the target image based on the first reference image and the second reference image;
means for generating side information, which is information on a virtual field of view at the position of the target image, based on at least one of the target image and the composite image;
means for generating a gradient intensity image from the side information;
means for determining, for each pixel position of the gradient intensity image, a coefficient according to the gradient intensity, and performing a modulo operation on the luminance value at each pixel position of the target image with the corresponding coefficient as the modulus, thereby generating a remainder image consisting of the remainders calculated at each pixel position by the modulo operation; and
means for outputting the first reference image, the second reference image, and the remainder image as information representing the target image, the first reference image, and the second reference image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/433,820 US20150256819A1 (en) | 2012-10-12 | 2013-10-09 | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information |
KR1020157012240A KR20150070258A (ko) | 2012-10-12 | 2013-10-09 | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information |
EP13844654.7A EP2908527A4 (en) | 2012-10-12 | 2013-10-09 | DEVICE, PROGRAM, AND METHOD FOR REDUCING THE DATA SIZE OF MULTIPLE IMAGES CONTAINING SIMILAR INFORMATION |
CN201380053494.3A CN104737539A (zh) | 2012-10-12 | 2013-10-09 | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-227262 | 2012-10-12 | ||
JP2012227262A JP2014082541A (ja) | 2012-10-12 | 2012-10-12 | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014057989A1 true WO2014057989A1 (ja) | 2014-04-17 |
Family
ID=50477456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/077516 WO2014057989A1 (ja) | 2013-10-09 | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information |
Country Status (6)
Country | Link |
---|---|
US (1) | US20150256819A1 (ja) |
EP (1) | EP2908527A4 (ja) |
JP (1) | JP2014082541A (ja) |
KR (1) | KR20150070258A (ja) |
CN (1) | CN104737539A (ja) |
WO (1) | WO2014057989A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150063464A1 (en) * | 2013-08-30 | 2015-03-05 | Qualcomm Incorporated | Lookup table coding |
CN103945208B (zh) * | 2014-04-24 | 2015-10-28 | Xi'an Jiaotong University | Parallel synchronous scaling engine and method for multi-viewpoint naked-eye 3D display |
CN111340706B (zh) * | 2020-02-26 | 2021-02-02 | Shanghai Anlogic Information Technology Co., Ltd. | Image reduction method and image reduction system |
KR102434428B1 (ko) * | 2022-03-23 | 2022-08-19 | Agency for Defense Development | Method for generating a composite image, apparatus for generating a composite image, and computer program stored in a recording medium to execute the method |
2012
- 2012-10-12 JP JP2012227262A patent/JP2014082541A/ja not_active Withdrawn
2013
- 2013-10-09 CN CN201380053494.3A patent/CN104737539A/zh active Pending
- 2013-10-09 KR KR1020157012240A patent/KR20150070258A/ko not_active Application Discontinuation
- 2013-10-09 EP EP13844654.7A patent/EP2908527A4/en not_active Withdrawn
- 2013-10-09 US US14/433,820 patent/US20150256819A1/en not_active Abandoned
- 2013-10-09 WO PCT/JP2013/077516 patent/WO2014057989A1/ja active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04343576A (ja) * | 1991-05-21 | 1992-11-30 | Matsushita Electric Ind Co Ltd | High-efficiency encoding method and decoding method for high-efficiency codes |
JP2006261835A (ja) * | 2005-03-15 | 2006-09-28 | Toshiba Corp | Image transmitting apparatus, image receiving apparatus, and image transmission system |
Non-Patent Citations (10)
Title |
---|
A. SMOLIC; P. KAUFF; S. KNORR; A. HORNUNG; M. KUNTER; M. MULLER; M. LANG: "Three-Dimensional Video Postproduction and Processing", PROC. IEEE, vol. 99, no. 4, April 2011 (2011-04-01), pages 607 - 625 |
HISAYOSHI FURIHATA ET AL.: "Residual Prediction for Free Viewpoint Image Generation", IEICE TECHNICAL REPORT, THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 109, no. 63, 21 May 2009 (2009-05-21), pages 199 - 203, XP008179072 * |
L. YANG; T. YENDO; M. PANAHPOUR TEHRANI; T. FUJII; M. TANIMOTO: "Probabilistic reliability based view synthesis for FTV", PROC. ICIP, September 2010 (2010-09-01), pages 1785 - 1788 |
MEHRDAD PANAHPOUR TEHRANI; TOSHIAKI FUJII; MASAYUKI TANIMOTO: "The Adaptive Distributed Source Coding of Multi-View Images in Camera Sensor Networks", IEICE TRANS, E88-A, 2005, pages 2835 - 2843 |
N. FUKUSHIMA; T. FUJII; Y. ISHIBASHI; T. YENDO; M. TANIMOTO: "Real-time free viewpoint image rendering by using fast multi-pass dynamic programming", PROC. 3DTV-CON, June 2010 (2010-06-01) |
R. SZELISKI; R. ZABIH; D. SCHARSTEIN; O. VEKSLER; V. KOLMOGOROV; A. AGARWALA; M. TAPPEN; C. ROTHER: "A comparative study of energy minimization methods for Markov random fields with smoothness-based priors", IEEE TRANS. PATTERN ANAL. MACHINE INTELL., vol. 30, no. 6, 2008, pages 1068 - 1080 |
See also references of EP2908527A4 |
Y. BOYKOV; O. VEKSLER; R. ZABIH: "Fast approximate energy minimization via graph cuts", IEEE TRANS. PATTERN ANAL. MACHINE INTELL., vol. 23, November 2001 (2001-11-01), pages 1222 - 1239 |
Y. MORI; N. FUKUSHIMA; T. YENDO; T. FUJII; M. TANIMOTO: "View generation with 3D warping using depth information for FTV", SIGNAL PROCESS.: IMAGE COMMUN., vol. 24, January 2009 (2009-01-01), pages 65 - 72 |
YUTA HIGUCHI ET AL.: "N-View N-depth Coding for Free View Generation", IEICE TECHNICAL REPORT, THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 111, no. 349, 8 December 2011 (2011-12-08), pages 1 - 6, XP008179069 * |
Also Published As
Publication number | Publication date |
---|---|
CN104737539A (zh) | 2015-06-24 |
KR20150070258A (ko) | 2015-06-24 |
EP2908527A1 (en) | 2015-08-19 |
US20150256819A1 (en) | 2015-09-10 |
JP2014082541A (ja) | 2014-05-08 |
EP2908527A4 (en) | 2016-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014057988A1 (ja) | Method, program, and apparatus for reducing data size of a plurality of images containing mutually similar information, and data structure representing a plurality of images containing mutually similar information | |
CN108475330B (zh) | 用于有伪像感知的视图合成的辅助数据 | |
KR102165147B1 (ko) | 계층형 신호 디코딩 및 신호 복원 | |
JP6094863B2 (ja) | Image processing apparatus, image processing method, program, and integrated circuit | |
EP4171039A1 (en) | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device and point cloud data reception method | |
Graziosi et al. | Depth assisted compression of full parallax light fields | |
CN112017228A (zh) | Method for three-dimensional reconstruction of an object, and related device | |
Li et al. | A real-time high-quality complete system for depth image-based rendering on FPGA | |
Jantet et al. | Object-based layered depth images for improved virtual view synthesis in rate-constrained context | |
US20230290006A1 (en) | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method | |
WO2014057989A1 (ja) | Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information | |
US20230059625A1 (en) | Transform-based image coding method and apparatus therefor | |
US20220337872A1 (en) | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method | |
Chiang et al. | High-dynamic-range image generation and coding for multi-exposure multi-view images | |
CN116503551A (zh) | Three-dimensional reconstruction method and apparatus | |
Lu et al. | A survey on multiview video synthesis and editing | |
JP7440546B2 (ja) | Point cloud data processing apparatus and method | |
JP6979290B2 (ja) | Image encoding device and image decoding device, and image encoding program and image decoding program | |
JP7389565B2 (ja) | Encoding device, decoding device, and program | |
JP5024962B2 (ja) | Multi-view distance information encoding method, decoding method, encoding device, decoding device, encoding program, decoding program, and computer-readable recording medium | |
CN112806015A (zh) | Encoding and decoding of omnidirectional video | |
CN104350748A (zh) | View synthesis using low-resolution depth maps | |
Hobloss et al. | Hybrid dual stream blender for wide baseline view synthesis | |
US20240020885A1 (en) | Point cloud data transmission method, point cloud data transmission device, point cloud data reception method, and point cloud data reception device | |
US20240029312A1 (en) | Point cloud data transmission method, point cloud data transmission device, point cloud data reception method, and point cloud data reception device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13844654 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 14433820 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
REEP | Request for entry into the european phase |
Ref document number: 2013844654 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 2013844654 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 20157012240 Country of ref document: KR Kind code of ref document: A |