WO2014013695A1 - Image encoding method, image decoding method, image encoding device, and image decoding device - Google Patents
Image encoding method, image decoding method, image encoding device, and image decoding device
- Publication number
- WO2014013695A1 (application PCT/JP2013/004192)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- viewpoint
- viewpoints
- identification information
- image decoding
- Prior art date
Classifications
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/128—Adjusting depth or disparity
- H04N13/178—Metadata, e.g. disparity information
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Description
- the present invention relates to an image encoding method and an image decoding method.
- a multi-view image including two or more images is typically captured and encoded to generate image data compliant with, for example, the video coding standard H.264/MVC (Multi View Coding). Note that depth information may be included in the multi-viewpoint image.
- the encoded data is transmitted to the image decoding device.
- the image decoding device decodes the encoded data and displays a multi-viewpoint image obtained by the decoding.
- the image encoding device captures a multi-viewpoint image using two cameras arranged at a predetermined distance from each other (see, for example, Non-Patent Document 1).
- an object of the present invention is to provide an image encoding method or an image decoding method capable of providing a viewer with an optimum sense of depth without depending on an image decoding device.
- An image encoding method is an image encoding method for encoding a multi-viewpoint image shot from a plurality of shooting viewpoints, and includes: a generation step of generating viewpoint identification information that is associated with each of a plurality of screen sizes used in an image decoding device and specifies a plurality of display viewpoints, which are a plurality of viewpoints used for display in the image decoding device; and an encoding step of encoding the viewpoint identification information.
- the present invention can provide an image encoding method or an image decoding method capable of providing a viewer with an optimal sense of depth without depending on an image decoding device.
- FIG. 1 is a block diagram of an image encoding device and an image decoding device according to a reference example.
- FIG. 2 is a block diagram of an image encoding device and an image decoding device according to a reference example.
- FIG. 3 is a block diagram of an image encoding device and an image decoding device according to a reference example.
- FIG. 4 is a diagram illustrating an example of the SEI syntax configuration according to the reference example.
- FIG. 5 is a block diagram of an image encoding device and an image decoding device according to Embodiment 1.
- FIG. 6A is a flowchart of image coding processing according to Embodiment 1.
- FIG. 6B is a flowchart of image decoding processing according to Embodiment 1.
- FIG. 7 is a block diagram of an image encoding device and an image decoding device according to Embodiment 1.
- FIG. 8 is a block diagram of an image encoding device and an image decoding device according to Embodiment 1.
- FIG. 9 is a block diagram of an image encoding device and an image decoding device according to Embodiment 1.
- FIG. 10 is a block diagram of an image encoding device and an image decoding device according to another example of the first embodiment.
- FIG. 11 is a diagram illustrating an example of a syntax configuration of SEI according to the first embodiment.
- FIG. 12 is an overall configuration diagram of a content supply system that realizes a content distribution service.
- FIG. 13 is an overall configuration diagram of a digital broadcasting system.
- FIG. 14 is a block diagram illustrating a configuration example of a television.
- FIG. 15 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
- FIG. 16 is a diagram illustrating a structure example of a recording medium that is an optical disk.
- FIG. 17A is a diagram illustrating an example of a mobile phone.
- FIG. 17B is a block diagram illustrating a configuration example of a mobile phone.
- FIG. 18 is a diagram showing a structure of multiplexed data.
- FIG. 19 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
- FIG. 20 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
- FIG. 21 is a diagram showing the structure of TS packets and source packets in multiplexed data.
- FIG. 22 is a diagram illustrating a data structure of the PMT.
- FIG. 23 is a diagram showing an internal configuration of multiplexed data information.
- FIG. 24 shows the internal structure of stream attribute information.
- FIG. 25 is a diagram showing steps for identifying video data.
- FIG. 26 is a block diagram illustrating a configuration example of an integrated circuit that realizes the moving picture coding method and the moving picture decoding method according to each embodiment.
- FIG. 27 is a diagram illustrating a configuration for switching the driving frequency.
- FIG. 28 is a diagram illustrating steps for identifying video data and switching between driving frequencies.
- FIG. 29 is a diagram illustrating an example of a lookup table in which video data standards are associated with drive frequencies.
- FIG. 30A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
- FIG. 30B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
- as described in Non-Patent Document 1, when the distance between the cameras is constant, the sense of depth that the viewer feels depends on the screen size of the display.
- SEI (Supplemental Enhancement Information) “depth_acquisition_info” conveys information relating to the actual distance between the cameras at the time of shooting to the image decoding device (display device).
- SEI “3d_reference_displays_info” conveys, for each viewing condition, for example an optimum inter-camera distance that can realize an optimum sense of depth under that viewing condition.
- the viewing condition is specifically the screen size of the display.
- the image decoding apparatus can adjust the sense of depth by displaying images corresponding to the optimum distance, based on the relationship between the actual distance between the cameras and the optimum distance between the cameras.
- each viewpoint to be displayed may be generated by combining two viewpoint images.
- FIGS. 1 to 3 are diagrams showing configurations of an image encoding device 100 and an image decoding device 200 according to a reference example of the present embodiment.
- the image encoding device 100 captures a subject (scene) from a plurality of viewpoints, and encodes a multi-viewpoint image obtained by the capturing to generate an encoded bitstream.
- the image encoding device 100 includes a first camera 111, a second camera 112, a first encoder 121, a second encoder 122, an SEI generator 131, and an SEI encoder 132.
- the first camera 111 and the second camera 112 take a multi-viewpoint image. Specifically, the first camera 111 generates the first image 151 by photographing a subject (scene) from the first viewpoint. The second camera 112 generates the second image 152 by photographing the subject from the second viewpoint.
- the first encoder 121 generates the first encoded image 161 by encoding the first image 151.
- the second encoder 122 generates the second encoded image 162 by encoding the second image 152.
- the SEI generation unit 131 generates an optimum distance 171 associated with each screen size. That is, the SEI generation unit 131 generates a plurality of optimum distances 171 associated with a plurality of screen sizes. Each optimum distance 171 is a distance between cameras (between viewpoints) that can give the viewer an optimal sense of depth when a multi-viewpoint image is displayed on a display of the associated screen size.
- the SEI encoder 132 encodes the plurality of optimum distances 171 to generate encoded optimum distances 172.
- the image encoding device 100 generates an encoded bitstream including the first encoded image 161, the second encoded image 162, and the encoded optimum distances 172. Then, the encoded bitstream is transmitted to the image decoding apparatus 200 via a channel.
- the image decoding apparatus 200 decodes the encoded bitstream generated by the image encoding apparatus 100 and displays a multi-viewpoint image.
- the image decoding device 200 includes a first decoder 211, a second decoder 212, an SEI decoder 221, and a display device 222.
- the first decoder 211 generates the first decoded image 251 by decoding the first encoded image 161.
- the second decoder 212 generates the second decoded image 252 by decoding the second encoded image 162.
- the SEI decoder 221 generates a plurality of optimum distances 262 by decoding the encoded optimum distances 172.
- the display device 222 uses the first decoded image 251 and the second decoded image 252 to display a multi-viewpoint image (stereoscopic image). Specifically, the display device 222 acquires the optimum distance 262 associated with its own screen size 261 among the plurality of optimum distances 262. Then, the display device 222 displays a multi-viewpoint image according to the acquired optimum distance 262.
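- As an illustration of this lookup, the following Python sketch models the decoded SEI as a table from screen size to optimum inter-camera distance and has the display device pick the entry for its own screen size. All names and values (OPTIMAL_DISTANCES, pick_optimal_distance, the distances themselves) are hypothetical and not part of the embodiment or of any standard.

```python
# Hypothetical model of the reference example: the decoded SEI carries one
# optimum inter-camera distance per screen size (all values are made up).
OPTIMAL_DISTANCES = {
    32: 0.045,  # screen diagonal in inches -> optimum baseline in metres
    50: 0.060,
    65: 0.070,
}

def pick_optimal_distance(screen_size_inch, table):
    """Return the optimum inter-camera distance for this display.

    Falls back to the closest listed screen size when the exact size is not
    present in the decoded SEI.
    """
    if screen_size_inch in table:
        return table[screen_size_inch]
    closest = min(table, key=lambda size: abs(size - screen_size_inch))
    return table[closest]

if __name__ == "__main__":
    # A 50-inch display (screen size 261) picks the matching optimum distance 262.
    print(pick_optimal_distance(50, OPTIMAL_DISTANCES))  # 0.06
```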
- the display device 222 generates a composite image corresponding to a viewpoint between the first viewpoint and the second viewpoint by performing viewpoint synthesis using the first decoded image 251 and the second decoded image 252.
- the display device 222 displays the generated composite image and the image of the first camera 111 (first decoded image 251) as a multi-viewpoint image.
- the distance between the viewpoints of the composite image and the image of the first camera 111 is equal to the optimum distance 262.
- the display device 222 may display the composite image and the image of the second camera 112 (second decoded image 252). As shown in FIG. 3, the display device 222 may display two composite images.
- FIG. 4 is a diagram showing the syntax of SEI “3d_reference_displays_info”. The syntax elements exponent_ref_baseline[i] and mantissa_ref_baseline[i] shown in FIG. 4 correspond to the optimum distance 171 (262). The meaning of each parameter shown in FIG. 4 is described in Non-Patent Document 1, for example.
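- For reference, the optimum distance is carried as an exponent/mantissa pair. The rough Python sketch below reconstructs a value under the simplifying assumption that it equals mantissa × 2^exponent; the exact reconstruction rule (offsets, mantissa length, units) is defined in Non-Patent Document 1 and may differ.

```python
def ref_baseline(exponent: int, mantissa: int) -> float:
    """Illustrative reconstruction of an optimum baseline from an
    exponent/mantissa pair.

    Simplifying assumption: value = mantissa * 2 ** exponent. The precise
    formula (including any offsets, mantissa length, and units) is specified
    in Non-Patent Document 1 and may differ from this sketch.
    """
    return mantissa * (2.0 ** exponent)

# Arbitrary example values: mantissa 3, exponent -6 -> 3 / 64 = 0.046875
print(ref_baseline(-6, 3))
```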
- the image decoding apparatus 200 decodes information regarding the actual distance between the cameras used for encoding the images of a plurality of viewpoints from the encoded bitstream. Further, the image decoding device 200 decodes information indicating one or more viewing conditions from the encoded bitstream. Here, the viewing condition is, for example, the screen size of the display device 222 included in the image decoding device 200. The image decoding apparatus 200 further decodes a plurality of optimum distances 262 that are information indicating the optimum inter-camera distance for each viewing condition from the encoded bit stream.
- the optimal inter-camera distance is the inter-camera distance that should have been used to capture an image of each viewpoint in order to give the viewer an optimal sense of depth.
- the image decoding apparatus 200 selects the viewpoints (views) used for display that can realize a desired sense of depth, using the actual viewing condition, the decoded optimal inter-camera distance, and the actual inter-camera distance.
- the image decoding apparatus 200 may further generate an image to be displayed by viewpoint synthesis so that the distance between the two viewpoints used for display is the optimal inter-camera distance.
- the image encoding device 100 transmits parameters for calculating the optimum inter-camera distance to the image decoding device 200 so that the image decoding device 200 can calculate the viewpoint position for viewpoint synthesis.
- the image decoding apparatus 200 grasps the relationship between the actual inter-camera distance and the optimum inter-camera distance from the SEI, and selects the viewpoint used for display. That is, the image decoding apparatus 200 determines the positions of the two viewpoints so that the distance between the two viewpoints becomes the optimal inter-camera distance.
- the image decoding apparatus 200 can set the two viewpoints arbitrarily as long as the distance between the two viewpoints equals the optimal inter-camera distance. For example, the image decoding apparatus 200 can arbitrarily select the two viewpoints used for display as shown in FIGS. 1 to 3. That is, since which viewpoints are selected depends on the image decoding apparatus 200, the same content is not displayed for all viewers.
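- The ambiguity can be made concrete with a small sketch (hypothetical names and normalized positions): given the two camera positions and the optimum inter-viewpoint distance, every pair of positions separated by that distance is equally valid, so two decoders may legitimately display different pairs.

```python
# Two decoded camera positions, normalized so camera 1 is at 0.0 and camera 2 at 1.0.
CAM1, CAM2 = 0.0, 1.0
OPTIMAL = 0.4  # optimum inter-viewpoint distance in the same normalized units

def some_valid_pairs(optimal, step=0.2):
    """Enumerate a few viewpoint pairs that all satisfy |right - left| == optimal.

    Under the reference example every such pair is equally acceptable, which is
    exactly why different decoders may end up displaying different content.
    """
    pairs = []
    left = CAM1
    while left + optimal <= CAM2 + 1e-9:
        pairs.append((round(left, 2), round(left + optimal, 2)))
        left += step
    return pairs

print(some_valid_pairs(OPTIMAL))
# [(0.0, 0.4), (0.2, 0.6), (0.4, 0.8), (0.6, 1.0)] -- all valid, all different
```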
- the present inventor has found that the above technique has a problem that the displayed contents may differ depending on the image decoding apparatus.
- An image encoding method is an image encoding method for encoding a multi-viewpoint image shot from a plurality of shooting viewpoints, and includes: a generation step of generating viewpoint identification information that is associated with each of a plurality of screen sizes used in an image decoding device and specifies a plurality of display viewpoints, which are a plurality of viewpoints used for display in the image decoding device; and an encoding step of encoding the viewpoint identification information.
- the image encoding method transmits viewpoint identification information for specifying a viewpoint to be selected by the image decoding apparatus to the image decoding apparatus in accordance with the screen size of the image decoding apparatus.
- the image decoding apparatus displays images using the viewpoints specified by the viewpoint identification information associated with its own screen size. In this way, the viewpoints used for display in the image decoding apparatus are uniquely determined. Therefore, the image encoding method can provide an optimal sense of depth to the viewer without depending on the image decoding device.
- the viewpoint identification information may indicate one shooting viewpoint among the plurality of shooting viewpoints.
- the viewpoint identification information may indicate a viewpoint of a composite image generated by combining images shot at two shooting viewpoints among the plurality of shooting viewpoints.
- the viewpoint identification information may be an identifier for identifying the plurality of shooting viewpoints.
- An image decoding method is an image decoding method for decoding a bitstream generated by encoding a multi-viewpoint image shot from a plurality of shooting viewpoints, and includes: a decoding step of decoding, from the bitstream, viewpoint identification information for identifying a plurality of viewpoints associated with each of a plurality of screen sizes; and a determination step of determining a plurality of display viewpoints, which are a plurality of viewpoints used for display by the image decoding apparatus, using the viewpoint identification information associated with the screen size of the display device included in the image decoding apparatus among the decoded viewpoint identification information.
- the image decoding method displays an image using the viewpoint specified by the viewpoint identification information associated with the screen size of the image decoding apparatus.
- In this way, the viewpoints used for display in the image decoding apparatus are uniquely determined. Therefore, the image decoding method can provide the viewer with an optimal sense of depth that does not depend on the image decoding device.
- the viewpoint identification information may indicate one shooting viewpoint among the plurality of shooting viewpoints, and in the determination step, the one shooting viewpoint indicated by the viewpoint identification information may be determined as one of the plurality of display viewpoints.
- the viewpoint identification information may indicate a viewpoint of a composite image generated by combining images shot at two shooting viewpoints among the plurality of shooting viewpoints.
- the viewpoint identification information may be an identifier for identifying the plurality of shooting viewpoints.
- An image encoding device is an image encoding device that encodes a multi-viewpoint image shot from a plurality of shooting viewpoints, and includes: a viewpoint identification information generation unit that generates viewpoint identification information that is associated with each of a plurality of screen sizes used in an image decoding device and specifies a plurality of display viewpoints, which are a plurality of viewpoints used for display in the image decoding device; and a viewpoint identification information encoding unit that encodes the viewpoint identification information.
- the image encoding apparatus transmits viewpoint identification information for specifying a viewpoint to be selected by the image decoding apparatus to the image decoding apparatus in accordance with the screen size of the image decoding apparatus.
- the image decoding apparatus displays images using the viewpoints specified by the viewpoint identification information associated with its own screen size. In this way, the viewpoints used for display in the image decoding apparatus are uniquely determined. Therefore, the image encoding apparatus can provide an optimal sense of depth to the viewer without depending on the image decoding apparatus.
- An image decoding apparatus is an image decoding apparatus that decodes a bitstream generated by encoding a multi-viewpoint image shot from a plurality of shooting viewpoints, and includes: a viewpoint identification information decoding unit that decodes, from the bitstream, viewpoint identification information for identifying a plurality of viewpoints associated with each of a plurality of screen sizes; and a viewpoint determination unit that determines a plurality of display viewpoints, which are a plurality of viewpoints used for display by the image decoding apparatus, using the viewpoint identification information associated with the screen size of the display device included in the image decoding apparatus among the decoded viewpoint identification information.
- the image decoding apparatus displays an image using the viewpoint specified by the viewpoint identification information associated with its own screen size.
- In this way, the viewpoints used for display in the image decoding apparatus are uniquely determined. Therefore, the image decoding apparatus can provide the viewer with an optimal sense of depth that does not depend on the apparatus itself.
- an image encoding / decoding device may include the image encoding device and the image decoding device.
- the image encoding device selects viewpoint positions corresponding to the optimal inter-camera distance for each of one or more viewing conditions (specifically, screen sizes), and encodes information about the selected viewpoint positions in the bitstream.
- the image decoding apparatus decodes information indicating one or more viewing conditions from the bit stream.
- the image decoding apparatus further decodes information regarding the viewpoint position for each decoded viewing condition from the bit stream.
- These viewpoint positions correspond to the optimal inter-camera distance that should have been used to capture the image of each viewpoint in order to give the viewer an optimal sense of depth. That is, the image decoding apparatus does not need to decode information indicating the actual inter-camera distance from the bitstream.
- the image decoding apparatus selects a plurality of viewpoints that can achieve a desired sense of depth using actual viewing conditions and decoded viewpoint positions.
- FIG. 5 is a block diagram showing a configuration of the image encoding device 300 and the image decoding device 400 according to the present embodiment.
- the image coding apparatus 300 shoots a subject (scene) from a plurality of viewpoints (shooting viewpoints) and encodes a multi-view image (multi-view video) obtained by shooting to generate an encoded bitstream.
- the image encoding device 300 includes a first camera 311, a second camera 312, a first encoder 321, a second encoder 322, an SEI generator 331, and an SEI encoder 332.
- the first camera 311 and the second camera 312 take multi-viewpoint images. Specifically, the first camera 311 generates a first image 351 by photographing a subject (scene) from the first viewpoint. The second camera 312 generates the second image 352 by photographing the subject from the second viewpoint. The first image 351 and the second image 352 are included in the multi-viewpoint image.
- the first encoder 321 generates the first encoded image 361 by encoding the first image 351.
- the second encoder 322 generates the second encoded image 362 by encoding the second image 352.
- the SEI generation unit 331 is a viewpoint identification information generation unit that generates viewpoint positions 371 associated with each screen size. That is, the SEI generation unit 331 generates a plurality of viewpoint positions 371 associated with a plurality of screen sizes.
- Each viewpoint position 371 indicates the positions of two viewpoints (display viewpoints) that can give the viewer an optimal sense of depth when a multi-viewpoint image (stereoscopic image) is displayed on a display of the associated screen size.
- the SEI encoder 332 is a viewpoint identification information encoding unit that generates an encoded viewpoint position 372 by encoding a plurality of viewpoint positions 371.
- the image encoding device 300 generates an encoded bitstream including the first encoded image 361, the second encoded image 362, and the encoded viewpoint positions 372. Then, the encoded bitstream is transmitted to the image decoding apparatus 400 via a channel.
- the image decoding apparatus 400 decodes the encoded bitstream generated by the image encoding apparatus 300 and displays a multi-viewpoint image.
- the image decoding device 400 includes a first decoder 411, a second decoder 412, an SEI decoder 421, and a display device 422.
- the first decoder 411 generates the first decoded image 451 by decoding the first encoded image 361.
- the second decoder 412 generates the second decoded image 452 by decoding the second encoded image 362.
- the SEI decoder 421 is a viewpoint identification information decoding unit that generates a plurality of viewpoint positions 462 by decoding the encoded viewpoint position 372.
- the display device 422 displays a multi-viewpoint image (stereoscopic image) using the first decoded image 451 and the second decoded image 452. Specifically, the display device 422 acquires a viewpoint position 462 associated with its own screen size 461 among the plurality of viewpoint positions 462. Then, the display device 422 determines a plurality of display viewpoints, which are a plurality of viewpoints used for display, according to the acquired viewpoint position 462, and displays an image from the determined display viewpoint as a multi-viewpoint image. The determination of the display viewpoint is performed by a viewpoint determination unit included in the display device 422.
- the display device 422 generates composite images corresponding to viewpoints between the first viewpoint and the second viewpoint by performing viewpoint synthesis using the first decoded image 451 and the second decoded image 452.
- the viewpoint positions of the two synthesized images generated by the viewpoint synthesis correspond to the acquired two viewpoint positions 462.
- the viewpoint positions of the two composite images are the same as the two viewpoint positions 462.
- the display device 422 displays the generated two composite images as a multi-viewpoint image.
- at least one of the two viewpoint positions 462 may be equal to the viewpoint position of the first camera or the viewpoint position of the second camera. In this case, viewpoint synthesis is not performed, and the first decoded image 451 or the second decoded image 452 is used for display.
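- A minimal sketch of this decision follows (hypothetical names; synthesize_view stands in for an actual view-synthesis routine): if a requested viewpoint position coincides with a camera position, the decoded image is reused, otherwise a view is synthesized.

```python
def image_for_viewpoint(position, decoded_views, synthesize_view, tol=1e-6):
    """Return the image to display for one requested viewpoint position.

    decoded_views maps a camera position to its decoded image. If the requested
    position coincides with a camera position, the decoded image is reused
    directly; otherwise a synthesized view is generated.
    """
    for cam_pos, image in decoded_views.items():
        if abs(cam_pos - position) < tol:
            return image  # no view synthesis needed
    return synthesize_view(position, decoded_views)  # viewpoint in between

views = {0.0: "first_decoded_image", 1.0: "second_decoded_image"}
fake_synth = lambda pos, v: f"synthesized@{pos}"
print(image_for_viewpoint(0.0, views, fake_synth))  # reuses the decoded image
print(image_for_viewpoint(0.3, views, fake_synth))  # synthesized@0.3
```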
- FIG. 6A is a flowchart showing an overview of image encoding processing by the image encoding device 300.
- First, the image encoding device 300 generates viewpoint identification information that is associated with each of a plurality of screen sizes used in the image decoding device and specifies a plurality of display viewpoints, which are a plurality of viewpoints used for display in the image decoding device (S101).
- the viewpoint identification information corresponds to the viewpoint position 371 described above.
- the image encoding device 300 encodes the above-described viewpoint identification information (S102). Then, a bitstream including the encoded viewpoint identification information is transmitted to the image decoding device 400.
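- As a rough illustration of steps S101 and S102, the viewpoint identification information can be modeled as a table keyed by screen size. The serialization below uses JSON purely to keep the example self-contained; it is not the SEI syntax of the embodiment, and all values are made up.

```python
import json

def generate_viewpoint_identification_info():
    """S101 (sketch): one pair of display-viewpoint positions per screen size.

    Positions are normalized so that 0.0 and 1.0 are the two shooting
    viewpoints; all entries are made-up example values.
    """
    return {
        32: (0.30, 0.70),
        50: (0.20, 0.80),
        65: (0.00, 1.00),
    }

def encode_viewpoint_identification_info(info):
    """S102 (sketch): serialize the table so it can travel with the bitstream.

    A real encoder would write SEI syntax elements; JSON is used here only to
    keep the example self-contained.
    """
    return json.dumps({str(size): pos for size, pos in info.items()}).encode("utf-8")

payload = encode_viewpoint_identification_info(generate_viewpoint_identification_info())
print(payload)
```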
- FIG. 6B is a flowchart showing an overview of image decoding processing by the image decoding device 400.
- First, the image decoding apparatus 400 decodes, from the bitstream, viewpoint identification information for identifying a plurality of viewpoints associated with each of a plurality of screen sizes (S201).
- Next, the image decoding apparatus 400 determines a plurality of display viewpoints, which are a plurality of viewpoints used for display, using the viewpoint identification information associated with the screen size 461 of the display device 422 included in the image decoding apparatus 400 among the plurality of pieces of decoded viewpoint identification information (S202).
- the image decoding apparatus 400 displays a multi-viewpoint image using a plurality of determined display viewpoints.
- For example, the image decoding apparatus 400 generates a plurality of images viewed from the plurality of display viewpoints using the first decoded image 451 of the first viewpoint and the second decoded image 452 of the second viewpoint, and displays the generated images.
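- The decoder-side counterpart of the sketch above might look as follows (again purely illustrative, mirroring the JSON serialization rather than real SEI parsing): step S201 recovers the table and step S202 picks the entry for the display's own screen size.

```python
import json

def decode_viewpoint_identification_info(payload):
    """S201 (sketch): recover the screen-size -> display-viewpoints table.

    Mirrors the illustrative JSON serialization used in the encoder sketch; a
    real decoder would parse SEI syntax elements instead.
    """
    raw = json.loads(payload.decode("utf-8"))
    return {int(size): tuple(positions) for size, positions in raw.items()}

def determine_display_viewpoints(table, own_screen_size):
    """S202 (sketch): pick the display viewpoints for this display's screen size."""
    return table[own_screen_size]

# Example: a 50-inch display selects its own entry from the decoded table.
table = decode_viewpoint_identification_info(
    b'{"32": [0.3, 0.7], "50": [0.2, 0.8], "65": [0.0, 1.0]}')
print(determine_display_viewpoints(table, 50))  # (0.2, 0.8)
```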
- In this way, the viewpoint identification information of the display viewpoints is directly included in the bitstream.
- the image decoding apparatus 400 does not need to know the optimal inter-camera distance. That is, the optimal distance between cameras can be omitted.
- the image decoding apparatus 400 does not need to automatically select a viewpoint used for display. Therefore, in different image decoding devices, if the viewing conditions are the same, the same viewpoint is selected as the viewpoint used for display. Therefore, even when the image decoding devices are different, the same sense of depth can be guaranteed.
- information directly indicating the viewpoint position is sent from the image coding apparatus 300 to the image decoding apparatus 400, not the distance between the viewpoints (relative position between the viewpoints).
- the image encoding device 300 can control which viewpoint is used for display in the image decoding device 400.
- the content creator can influence the displayed content, for example according to individual requirements such as optimal image quality or preferred content. This is because the multiple viewpoints are images of the original scene taken from slightly different viewpoints, so the displayed content differs depending on which encoded viewpoints are selected.
- By simply specifying the viewpoints to be displayed by the image decoding device, this embodiment not only ensures that the displayed content is the same for all viewers, but also allows the image encoding device to select the viewpoints so that, for example, the subjective image quality is maximized.
- the multi-view image may include images of three or more viewpoints. Further, the number of viewpoints of the multi-view image generated by the image encoding device 300 may be different from the number of viewpoints of the multi-view image displayed by the image decoding device 400.
- the viewpoint identification information is, for example, the viewpoint positions 462 and the like described above.
- FIG. 7 is a block diagram illustrating a configuration of the image encoding device 300A and the image decoding device 400A when a viewpoint identifier is used as the viewpoint identification information. Elements similar to those in FIG. 5 are denoted by the same reference numerals, and differences will be mainly described below.
- the image encoding device 300A shown in FIG. 7 includes a third camera 313 and a third encoder 323 in addition to the configuration of the image encoding device 300.
- the third camera 313 generates a third image 353 by photographing a subject (scene) from the third viewpoint. That is, the first camera 311, the second camera 312, and the third camera 313 generate a multi-viewpoint image including the first image 351, the second image 352, and the third image 353.
- the first camera 311, the second camera 312, and the third camera 313 are assigned identifiers (ID 1, ID 2, and ID 3) for uniquely identifying each camera.
- the third encoder 323 generates the third encoded image 363 by encoding the third image 353.
- the SEI generation unit 331A is a viewpoint identification information generation unit that generates a viewpoint identifier 373 associated with each screen size. That is, the SEI generation unit 331A generates a plurality of viewpoint identifiers 373 associated with a plurality of screen sizes.
- Each viewpoint identifier 373 is an identifier for identifying a plurality of photographing viewpoints, and indicates, for example, identifiers (ID1, ID2, and ID3) assigned to the camera.
- the viewpoint identifier 373 is an identifier for identifying the first image 351, the second image 352, and the third image 353, and is also an identifier for identifying the first decoded image 451, the second decoded image 452, and the third decoded image 453 described later.
- the viewpoint identifier 373 indicates the shooting viewpoints corresponding to two viewpoints (display viewpoints) that can give the viewer an optimum sense of depth when a multi-viewpoint image (stereoscopic image) is displayed on a display of the associated screen size.
- this identifier is a viewpoint ID (view ID) or a viewpoint order index (view order index).
- the SEI encoder 332A is a viewpoint identification information encoding unit that generates an encoded viewpoint identifier 374 by encoding a plurality of viewpoint identifiers 373.
- the image encoding device 300A generates an encoded bitstream including the first encoded image 361, the second encoded image 362, the third encoded image 363, and the encoded viewpoint identifier 374. Then, the encoded bitstream is transmitted to the image decoding device 400A via a channel.
- the image decoding device 400A decodes the encoded bitstream generated by the image encoding device 300A and displays a multi-view image.
- the image decoding device 400A further includes a third decoder 413 in addition to the configuration of the image decoding device 400.
- the functions of the SEI decoder 421A and the display device 422A are different from those of the SEI decoder 421 and the display device 422.
- the third decoder 413 generates the third decoded image 453 by decoding the third encoded image 363.
- the SEI decoder 421A is a viewpoint identification information decoding unit that generates a plurality of viewpoint identifiers 463 by decoding the encoded viewpoint identifier 374.
- the display device 422A displays a multi-viewpoint image (stereoscopic image) using the first decoded image 451, the second decoded image 452, and the third decoded image 453. Specifically, the display device 422A acquires the viewpoint identifier 463 associated with its screen size 461 among the plurality of viewpoint identifiers 463. Then, the display device 422A determines a plurality of display viewpoints, which are a plurality of viewpoints used for display, according to the acquired viewpoint identifier 463, and displays an image from the determined display viewpoint as a multi-viewpoint image. The determination of the display viewpoint is performed by a viewpoint determination unit included in the display device 422A.
- the display device 422A determines one shooting viewpoint indicated by the viewpoint identification information among the plurality of shooting viewpoints as one of the plurality of display viewpoints. That is, the display device 422A displays, among the plurality of decoded images (the first decoded image 451, the second decoded image 452, and the third decoded image 453), the decoded images corresponding to the viewpoint identifiers 463. For example, as shown in FIG. 7, when the viewpoint identifier 463 indicates ID1 and ID2, the display device 422A displays the first decoded image 451 and the second decoded image 452. Further, as illustrated in FIG. 8, when the viewpoint identifier 463 indicates ID1 and ID3, the display device 422A displays the first decoded image 451 and the third decoded image 453. As illustrated in FIG. 9, when the viewpoint identifier 463 indicates ID2 and ID3, the display device 422A displays the second decoded image 452 and the third decoded image 453.
- the image decoding device 400A can easily select an image to be displayed.
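- A minimal sketch of this selection (hypothetical names): the decoded images are indexed by their viewpoint identifiers, and the display device simply looks up the identifiers decoded for its screen size.

```python
def select_decoded_images(decoded_by_id, viewpoint_identifiers):
    """Sketch: pick the decoded images whose IDs match the decoded identifiers.

    decoded_by_id maps a camera/view ID to its decoded image;
    viewpoint_identifiers are the IDs decoded from the SEI for this screen size.
    """
    return [decoded_by_id[view_id] for view_id in viewpoint_identifiers]

decoded_by_id = {1: "first_decoded_image", 2: "second_decoded_image", 3: "third_decoded_image"}
print(select_decoded_images(decoded_by_id, (1, 2)))  # FIG. 7 case: ID1 and ID2
print(select_decoded_images(decoded_by_id, (2, 3)))  # FIG. 9 case: ID2 and ID3
```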
- the viewpoint identification information may indicate the viewpoint of a composite image generated by combining images shot at two shooting viewpoints among a plurality of shooting viewpoints.
- In this case, the display device 422A generates a composite image of the viewpoint indicated by the viewpoint identification information by synthesizing the decoded images corresponding to two of the plurality of shooting viewpoints, and displays the generated composite image.
- the viewpoint identification information includes one or more viewpoint identifiers for specifying the viewpoint position of the composite image, and information indicating the distance from the viewpoint indicated by the one or more viewpoint identifiers to the viewpoint position of the composite image.
- one viewpoint identifier and the distance are defined for one display viewpoint.
- the distance may be indicated by a fraction (for example, a) indicating where the viewpoint of the composite image is located between the first viewpoint and the second viewpoint.
- the distance is represented by “a × (actual distance between the first viewpoint and the second viewpoint)”.
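- In other words, the viewpoint position of the composite image is a linear interpolation between the two shooting viewpoints. A small sketch (hypothetical names and example values):

```python
def composite_viewpoint_position(pos_first, pos_second, a):
    """Position of a synthesized viewpoint specified by a fraction a in [0, 1].

    a = 0 gives the first shooting viewpoint, a = 1 the second; the offset from
    the first viewpoint is a * (actual distance between the two viewpoints).
    """
    return pos_first + a * (pos_second - pos_first)

# Example values only: cameras 65 mm apart, a = 0.25 -> 16.25 mm from the first camera.
print(composite_viewpoint_position(0.0, 65.0, 0.25))  # 16.25
```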
- FIG. 10 is a block diagram illustrating the configuration of the image encoding device 300B and the image decoding device 400B when viewpoint coordinates are used as viewpoint identification information. Elements similar to those in FIG. 5 are denoted by the same reference numerals, and differences will be mainly described below.
- the SEI generation unit 331B is a viewpoint identification information generation unit that generates viewpoint coordinates 375 associated with each screen size. That is, the SEI generation unit 331B generates a plurality of viewpoint coordinates 375 associated with a plurality of screen sizes.
- Each viewpoint coordinate 375 indicates the coordinates of a plurality of display viewpoints.
- the viewpoint coordinates 375 indicate the coordinates of the coordinate system in the display device 422B.
- the viewpoint coordinates 375 indicate the coordinates of two viewpoints (display viewpoints) that can give the viewer an optimum sense of depth when a multi-viewpoint image (stereoscopic image) is displayed on a display of the associated screen size.
- the SEI encoder 332B is a viewpoint identification information encoding unit that generates encoded viewpoint coordinates 376 by encoding a plurality of viewpoint coordinates 375.
- the image encoding device 300B generates an encoded bitstream including the first encoded image 361, the second encoded image 362, and the encoded viewpoint coordinates 376. Then, the encoded bitstream is transmitted to the image decoding device 400B via a channel.
- the image decoding device 400B decodes the encoded bitstream generated by the image encoding device 300B and displays a multi-view image.
- Compared with the configuration of the image decoding device 400, the image decoding device 400B differs in the functions of the SEI decoder 421B and the display device 422B, which replace the SEI decoder 421 and the display device 422.
- the SEI decoder 421B is a viewpoint identification information decoding unit that generates a plurality of viewpoint coordinates 464 by decoding the encoded viewpoint coordinates 376.
- the display device 422B displays a multi-viewpoint image (stereoscopic image) using the first decoded image 451 and the second decoded image 452. Specifically, the display device 422B acquires the viewpoint coordinates 464 associated with its own screen size 461 among the plurality of viewpoint coordinates 464. Then, the display device 422B determines a plurality of display viewpoints, which are a plurality of viewpoints used for display, according to the acquired viewpoint coordinates 464, and displays an image from the determined display viewpoint as a multi-viewpoint image. The determination of the display viewpoint is performed by a viewpoint determination unit included in the display device 422B.
- the display device 422B generates a synthesized image corresponding to the viewpoint between the first viewpoint and the second viewpoint by performing viewpoint synthesis using the first decoded image 451 and the second decoded image 452.
- the viewpoint positions of the two composite images generated by the viewpoint synthesis correspond to the two viewpoint coordinates 464 acquired.
- the viewpoint positions of two composite images are located at two viewpoint coordinates 464.
- the display device 422B displays the two generated composite images as a multi-viewpoint image.
- At least one of the two viewpoint coordinates 464 may be equal to the viewpoint position of the first camera or the viewpoint position of the second camera. In this case, viewpoint synthesis is not performed, and the first decoded image 451 or the second decoded image 452 is used for display.
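- Putting the coordinate variant together, a rough sketch (hypothetical names) looks up the two display-viewpoint coordinates for the display's screen size and, for each coordinate, reuses a decoded image when it coincides with a camera position or synthesizes a view otherwise.

```python
def display_views_for_coordinates(coords_by_size, screen_size, decoded_by_position, synthesize_view):
    """Sketch of the viewpoint-coordinate variant (hypothetical names).

    Looks up the two display-viewpoint coordinates for this screen size and, for
    each coordinate, reuses the decoded image when it coincides with a camera
    position, or synthesizes a view otherwise.
    """
    images = []
    for coord in coords_by_size[screen_size]:
        image = decoded_by_position.get(coord)
        images.append(image if image is not None else synthesize_view(coord, decoded_by_position))
    return images

coords = {50: (0.0, 0.6)}  # per-screen-size display-viewpoint coordinates (made up)
decoded = {0.0: "first_decoded_image", 1.0: "second_decoded_image"}
synth = lambda c, d: f"synthesized@{c}"
print(display_views_for_coordinates(coords, 50, decoded, synth))
# ['first_decoded_image', 'synthesized@0.6']
```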
- FIG. 11 is a diagram illustrating a syntax example of SEI “3d_reference_displays_info” according to the present embodiment.
- a syntax element preferred_left_view_id_flag shown in FIG. 11 indicates whether or not viewpoint identification information (preferred_left_view_id) of a viewpoint used for display for the left eye by the image decoding apparatus is included in the bitstream.
- the syntax element “preferred_left_view_id” is viewpoint identification information of the viewpoint used by the image decoding apparatus for display for the left eye, and corresponds to, for example, the viewpoint identifier.
- the syntax element preferred_right_view_id_flag indicates whether or not viewpoint identification information (preferred_right_view_id) of the viewpoint used by the image decoding apparatus for display for the right eye is included in the bitstream.
- the syntax element preferred_right_view_id is viewpoint identification information of the viewpoint used by the image decoding apparatus for display for the right eye, and corresponds to, for example, the viewpoint identifier described above.
- the optimal inter-camera distance need not be included in the bitstream.
- When only one of the left-eye and right-eye viewpoint identification information is included in the bitstream, the image decoding apparatus uses that viewpoint identification information to determine the first viewpoint used for display for the corresponding eye. Further, in this case, the optimal inter-camera distance (exponent_ref_baseline and mantissa_ref_baseline) is encoded in the bitstream. Then, the image decoding apparatus identifies the position of the second viewpoint (which may be a synthesized viewpoint) corresponding to the first viewpoint using this inter-camera distance.
- the position of the second (right) viewpoint is determined to the right of the position of the left viewpoint.
- the position of the second (left) viewpoint is determined to the left of the position of the right viewpoint.
- When the viewpoint identification information is not encoded for either the left or right viewpoint, only the optimal inter-camera distance is encoded. Note that the operations of the image encoding device and the image decoding device in this case are the same as the operations of the reference example described above.
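- A rough sketch of interpreting these syntax elements (assuming, for illustration only, that the flags and IDs have already been parsed into a dictionary rather than read from the actual bitstream):

```python
def parse_preferred_views(sei):
    """Sketch of interpreting the FIG. 11 elements (element names from the text).

    sei is assumed to be a dict of already-parsed syntax element values; a real
    decoder would read the flags and IDs from the bitstream. Returns the left
    and right view IDs, or None for a side whose flag is 0.
    """
    left = sei["preferred_left_view_id"] if sei["preferred_left_view_id_flag"] else None
    right = sei["preferred_right_view_id"] if sei["preferred_right_view_id_flag"] else None
    return left, right

# Only the left viewpoint is identified; the right viewpoint would then be derived
# from the optimal inter-camera distance as described above.
sei = {"preferred_left_view_id_flag": 1, "preferred_left_view_id": 2,
       "preferred_right_view_id_flag": 0}
print(parse_preferred_views(sei))  # (2, None)
```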
- the viewpoint identification information may be an operating point ID indicating an operating point encoded in the VUI (Video Usability Information) of an SPS (sequence parameter set).
- each processing unit included in the image encoding device and the image decoding device according to the above embodiment is typically realized as an LSI, which is an integrated circuit. These processing units may be made into individual chips, or may be integrated into a single chip that includes some or all of them.
- the circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- In other words, the image encoding device and the image decoding device include a processing circuit and a storage device (storage) that is electrically connected to the processing circuit (accessible from the processing circuit).
- the processing circuit includes at least one of dedicated hardware and a program execution unit. Further, when the processing circuit includes a program execution unit, the storage device stores a software program executed by the program execution unit. The processing circuit executes the image encoding method or the image decoding method according to the above embodiment using the storage device.
- the present invention may be the software program or a non-transitory computer-readable recording medium on which the program is recorded.
- the program can be distributed via a transmission medium such as the Internet.
- the division of functional blocks in the block diagrams is merely an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality of blocks, or some functions may be transferred to other functional blocks.
- the functions of a plurality of functional blocks having similar functions may be processed by single hardware or software in parallel or in a time-division manner.
- the order in which the steps included in the image encoding method or the image decoding method are executed is given to illustrate the present invention specifically, and orders other than the above may be used. Also, some of the above steps may be executed simultaneously (in parallel) with other steps.
- the image encoding device and the image decoding device according to one or more aspects of the present invention have been described based on the embodiment, but the present invention is not limited to this embodiment. Forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects of the present invention without departing from the gist of the present invention.
- the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
- the system has an image encoding / decoding device including an image encoding device using an image encoding method and an image decoding device using an image decoding method.
- Other configurations in the system can be appropriately changed according to circumstances.
- FIG. 12 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
- a communication service providing area is divided into desired sizes, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
- In the content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider ex102, the telephone network ex104, and the base stations ex106 to ex110.
- each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
- the devices may be directly connected to each other via short-range wireless or the like.
- the camera ex113 is a device capable of shooting moving images, such as a digital video camera, and the camera ex116 is a device capable of shooting still images and moving images, such as a digital still camera.
- the mobile phone ex114 may be a mobile phone conforming to the GSM (registered trademark) (Global System for Mobile Communications) system, the CDMA (Code Division Multiple Access) system, the W-CDMA (Wideband-Code Division Multiple Access) system, the LTE (Long Term Evolution) system, or the HSPA (High Speed Packet Access) system, or a PHS (Personal Handyphone System) terminal, or the like.
- the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
- In live distribution, content shot by a user using the camera ex113 (for example, live music video) is encoded as described in each of the above embodiments (that is, the camera ex113 functions as an image encoding device according to one aspect of the present invention) and transmitted to the streaming server ex103.
- the streaming server ex103 distributes the transmitted content data as a stream to clients that have made requests. Examples of the client include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115 that can decode the encoded data.
- Each device that receives the distributed data decodes the received data and reproduces it (that is, functions as an image decoding device according to one embodiment of the present invention).
- the captured data may be encoded by the camera ex113 or by the streaming server ex103 that performs the data transmission processing, or the encoding processing may be shared between them.
- the decoding processing of the distributed data may be performed by the client or by the streaming server ex103, or may be shared between them.
- still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
- these encoding / decoding processes are generally performed in the LSI ex500 included in the computer ex111 or in each device.
- the LSI ex500 may be configured as a single chip or a plurality of chips.
- Software for encoding / decoding moving images may be incorporated into some recording medium (such as a CD-ROM, a flexible disk, or a hard disk) readable by the computer ex111 or the like, and the encoding / decoding processing may be performed using that software.
- Furthermore, when the mobile phone ex114 is equipped with a camera, moving image data acquired by that camera may be transmitted.
- the moving image data at this time is data encoded by the LSI ex500 included in the mobile phone ex114.
- the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
- the encoded data can be received and reproduced by the client.
- the information transmitted by the user can be received, decoded, and reproduced by the client in real time, and even a user who does not have special rights or facilities can realize personal broadcasting.
- At least either the moving image encoding device (image encoding device) or the moving image decoding device (image decoding device) according to each of the above embodiments can also be incorporated into the digital broadcasting system ex200.
- In the broadcasting station ex201, multiplexed data obtained by multiplexing music data and the like with video data is transmitted to a communication or broadcasting satellite ex202 via radio waves.
- This video data is data encoded by the moving image encoding method described in each of the above embodiments (that is, data encoded by the image encoding apparatus according to one aspect of the present invention).
- the broadcasting satellite ex202 transmits a radio wave for broadcasting, and this radio wave is received by a home antenna ex204 capable of receiving satellite broadcasting.
- the received multiplexed data is decoded and reproduced by an apparatus such as the television (receiver) ex300 or the set top box (STB) ex217 (that is, functions as an image decoding apparatus according to one embodiment of the present invention).
- the moving picture decoding apparatus or the moving picture encoding apparatus described in each of the above embodiments can also be mounted in a reader / recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes the result. In this case, the reproduced video signal is displayed on the monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
- a moving picture decoding apparatus may also be mounted in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite / terrestrial broadcasting, and the video may be displayed on the monitor ex219 of the television.
- the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
- FIG. 14 is a diagram showing a television (receiver) ex300 that uses the moving picture decoding method and the moving picture encoding method described in the above embodiments.
- the television ex300 acquires or outputs, via the antenna ex204, the cable ex203, or the like that receives the broadcast, multiplexed data in which audio data is multiplexed with video data. The modulation / demodulation unit ex302 demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside, and the multiplexing / demultiplexing unit ex303 demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes the video data and audio data encoded by the signal processing unit ex306.
- the television ex300 also includes the signal processing unit ex306, which has an audio signal processing unit ex304 and a video signal processing unit ex305 (functioning as the image encoding device or the image decoding device according to one embodiment of the present invention) that decode audio data and video data or encode the respective data, and an output unit ex309, which has a speaker ex307 that outputs the decoded audio signal and a display unit ex308 such as a display that displays the decoded video signal. Furthermore, the television ex300 includes an interface unit ex317 having an operation input unit ex312 that receives an input of a user operation, a control unit ex310 that performs overall control of each unit, and a power supply circuit unit ex311 that supplies power to each unit.
- In addition to the operation input unit ex312, the interface unit ex317 may include a bridge unit ex313 connected to an external device such as the reader / recorder ex218, a connection for a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
- the recording medium ex216 is a medium, such as a nonvolatile / volatile semiconductor memory element, on which information can be electrically recorded.
- Each part of the television ex300 is connected to each other via a synchronous bus.
- the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in each of the above embodiments.
- the decoded audio signal and video signal are output from the output unit ex309 to the outside. At the time of output, these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization. Also, the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or to a recording medium will be described.
- the television ex300 receives a user operation from the remote controller ex220 or the like, and based on the control of the control unit ex310, encodes an audio signal with the audio signal processing unit ex304 and encodes a video signal with the video signal processing unit ex305 using the encoding method described in each of the above embodiments.
- the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320, ex321, etc. so that the audio signal and the video signal are synchronized.
- a plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Furthermore, beyond the illustrated example, data may also be buffered, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as a cushion that prevents system overflow and underflow.
- the television ex300 may also have a configuration for receiving AV input from a microphone and a camera, and may encode the data acquired from them.
- although the television ex300 has been described here as a configuration capable of the above encoding processing, multiplexing, and external output, it may instead be a configuration that cannot perform these processes and is capable only of the above reception, decoding processing, and external output.
- the decoding process or the encoding process may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing with each other.
- FIG. 15 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
- the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
- the optical head ex401 irradiates a laser spot onto the recording surface of the recording medium ex215, which is an optical disk, to write information, and detects light reflected from the recording surface of the recording medium ex215 to read information.
- the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
- the reproduction demodulation unit ex403 amplifies the reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information.
- the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
- the disk motor ex405 rotates the recording medium ex215.
- the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
- the system control unit ex407 controls the entire information reproduction / recording unit ex400.
- the system control unit ex407 uses various kinds of information held in the buffer ex404 and, when necessary, generates and adds new information; recording and reproduction of information through the optical head ex401 are realized by operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner.
- the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
- although the optical head ex401 has been described above as irradiating a laser spot, it may be configured to perform higher-density recording using near-field light.
- FIG. 16 shows a schematic diagram of a recording medium ex215 that is an optical disk.
- guide grooves are formed on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disk is recorded in advance on the information track ex230 through changes in the shape of the grooves.
- This address information includes information for specifying the position of the recording block ex231 that is a unit for recording data, and the recording block is specified by reproducing the information track ex230 and reading the address information in a recording or reproducing apparatus.
- the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
- the area used for recording user data is the data recording area ex233, and the inner circumference area ex232 and the outer circumference area ex234 arranged on the inner or outer circumference of the data recording area ex233 are used for specific purposes other than user data recording. Used.
- the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
- although an optical disk such as a single-layer DVD or BD has been described here as an example, the disk may instead have a multi-dimensional recording/reproducing structure, for example one that records information using light of different wavelengths in the same place on the disk or records different layers of information from various angles.
- the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
- the configuration of the car navigation ex211 may be, for example, the configuration shown in FIG. 14 with the addition of a GPS receiver, and the same may be considered for the computer ex111, the mobile phone ex114, and the like.
- FIG. 17A is a diagram showing the mobile phone ex114 using the moving picture decoding method and the moving picture encoding method described in the above embodiment.
- the mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358, such as a liquid crystal display, for displaying decoded data such as video captured by the camera unit ex365 or video received via the antenna ex350.
- the mobile phone ex114 further includes a main body unit having an operation key unit ex366, an audio output unit ex357 such as a speaker for outputting audio, an audio input unit ex356 such as a microphone for inputting audio, a memory unit ex367 for storing encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and mail, and a slot unit ex364 serving as an interface with a recording medium that likewise stores data.
- in the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected to one another via a bus ex370, together with a main control unit ex360 that comprehensively controls each unit of the main body including the display unit ex358 and the operation key unit ex366.
- the power supply circuit unit ex361 starts up the mobile phone ex114 in an operable state by supplying power from the battery pack to each unit.
- the cellular phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. Then, this is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing are performed by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
- in the voice call mode, the mobile phone ex114 also amplifies the received data received via the antenna ex350, performs frequency conversion processing and analog-to-digital conversion processing, performs spectrum despreading processing in the modulation/demodulation unit ex352, converts the result into an analog audio signal in the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
- the text data of the e-mail input by operating the operation key unit ex366 of the main unit is sent to the main control unit ex360 via the operation input control unit ex362.
- the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
- when an e-mail is received, almost the reverse process is performed on the received data, and the result is output to the display unit ex358.
- the video signal processing unit ex355 compression-encodes the video signal supplied from the camera unit ex365 using the moving picture encoding method described in each of the above embodiments (that is, it functions as the image encoding device according to one aspect of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353.
- meanwhile, the audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is capturing video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
- the multiplexing/demultiplexing unit ex353 multiplexes, by a predetermined method, the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354; the resulting multiplexed data is subjected to spread spectrum processing by the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 and to digital-to-analog conversion processing and frequency conversion processing by the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
- when receiving data of a moving image file linked to a home page or the like, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bitstream and an audio data bitstream, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
- the video signal processing unit ex355 decodes the video signal using the moving picture decoding method corresponding to the moving picture encoding method described in each of the above embodiments (that is, it functions as the image decoding device according to one aspect of the present invention), and video and still images included in the moving image file linked to the home page are displayed on the display unit ex358 via the LCD control unit ex359.
- the audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
- in addition, a terminal such as the mobile phone ex114 can be implemented not only as a transmitting/receiving terminal having both an encoder and a decoder, but also as a transmission terminal having only an encoder or a receiving terminal having only a decoder.
- in the above description, multiplexed data in which music data or the like is multiplexed with video data is received and transmitted; however, the data may be data in which text data or the like related to the video is multiplexed in addition to the audio data, or may be video data itself instead of multiplexed data.
- as described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, and thereby the effects described in each of the above embodiments can be obtained.
- multiplexed data obtained by multiplexing audio data or the like with video data is configured to include identification information indicating which standard the video data conforms to; the specific structure of the multiplexed data, including this identification information, is described below.
- FIG. 18 is a diagram showing a structure of multiplexed data.
- the multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
- the video stream represents the main video and sub-video of the movie, the audio stream represents the main audio of the movie and the sub-audio to be mixed with the main audio, and the presentation graphics stream represents the subtitles of the movie.
- the main video indicates a normal video displayed on the screen
- the sub-video is a video displayed on a small screen in the main video.
- the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
- the video stream is encoded by the moving picture encoding method or apparatus described in each of the above embodiments, or by a moving picture encoding method or apparatus compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
- each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the main video of the movie, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for the sub-video, and 0x1A00 to 0x1A1F to audio streams used for the sub-audio to be mixed with the main audio.
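- as a minimal sketch only, the C fragment below classifies an elementary stream from its PID using the ranges listed above; the type names and the helper itself are illustrative assumptions rather than part of any standard API.

```c
#include <stdint.h>

/* Illustrative stream kinds matching the PID ranges listed above.
 * The names and this helper are assumptions made for clarity only. */
typedef enum {
    STREAM_VIDEO_MAIN,       /* 0x1011          */
    STREAM_AUDIO_MAIN,       /* 0x1100 - 0x111F */
    STREAM_PRESENTATION_GFX, /* 0x1200 - 0x121F */
    STREAM_INTERACTIVE_GFX,  /* 0x1400 - 0x141F */
    STREAM_VIDEO_SUB,        /* 0x1B00 - 0x1B1F */
    STREAM_AUDIO_SUB,        /* 0x1A00 - 0x1A1F */
    STREAM_OTHER
} stream_kind_t;

static stream_kind_t classify_pid(uint16_t pid)
{
    if (pid == 0x1011)                  return STREAM_VIDEO_MAIN;
    if (pid >= 0x1100 && pid <= 0x111F) return STREAM_AUDIO_MAIN;
    if (pid >= 0x1200 && pid <= 0x121F) return STREAM_PRESENTATION_GFX;
    if (pid >= 0x1400 && pid <= 0x141F) return STREAM_INTERACTIVE_GFX;
    if (pid >= 0x1B00 && pid <= 0x1B1F) return STREAM_VIDEO_SUB;
    if (pid >= 0x1A00 && pid <= 0x1A1F) return STREAM_AUDIO_SUB;
    return STREAM_OTHER;                /* PAT, PMT, PCR, or other PIDs */
}
```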
- FIG. 19 is a diagram schematically showing how multiplexed data is multiplexed.
- a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
- the data of the presentation graphics stream ex241 and interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
- the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
- FIG. 20 shows in more detail how the video stream is stored in the PES packet sequence.
- the first level in FIG. 20 shows a video frame sequence of the video stream.
- the second level shows a PES packet sequence.
- the video stream, which consists of a plurality of Video Presentation Units, is divided picture by picture and stored in the payloads of PES packets.
- Each PES packet has a PES header, and a PTS (Presentation Time-Stamp) that is a display time of a picture and a DTS (Decoding Time-Stamp) that is a decoding time of a picture are stored in the PES header.
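- a minimal sketch of the two timestamps carried in the PES header is shown below; interpreting them as 90 kHz clock values is an assumption drawn from common MPEG-2 systems practice, not from this description.

```c
#include <stdint.h>

/* The two timestamps carried in a PES header, as described above.
 * Treating them as 90 kHz clock values is an assumption. */
typedef struct {
    uint64_t pts;  /* Presentation Time-Stamp: display time of the picture */
    uint64_t dts;  /* Decoding Time-Stamp: decoding time of the picture    */
} pes_timestamps_t;

/* Convert a 90 kHz timestamp to seconds for logging or A/V sync checks. */
static double timestamp_to_seconds(uint64_t ticks_90khz)
{
    return (double)ticks_90khz / 90000.0;
}
```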
- FIG. 21 shows the format of the TS packet that is finally written in the multiplexed data.
- the TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header having information such as a PID for identifying a stream and a 184-byte TS payload for storing data.
- the PES packet is divided and stored in the TS payload.
- when a TS packet is written into the multiplexed data, a 4-byte TP_Extra_Header is added to it, forming a 192-byte source packet, and the source packets are written into the multiplexed data.
- information such as an ATS (Arrival_Time_Stamp) is described in the TP_Extra_Header.
- ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
- source packets are arranged as shown in the lower part of FIG. 21, and the number incremented from the head of the multiplexed data is called SPN (source packet number).
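- the layout described above can be summarized by the following illustrative C sketch; the byte-array view of the TP_Extra_Header and the SPN helper are assumptions made only for clarity.

```c
#include <stdint.h>
#include <stddef.h>

#define TS_PACKET_SIZE       188   /* 4-byte TS header + 184-byte TS payload */
#define TP_EXTRA_HEADER_SIZE   4   /* carries the ATS                         */
#define SOURCE_PACKET_SIZE   (TP_EXTRA_HEADER_SIZE + TS_PACKET_SIZE)  /* 192 bytes */

/* One source packet: TP_Extra_Header followed by the fixed-length TS packet. */
typedef struct {
    uint8_t tp_extra_header[TP_EXTRA_HEADER_SIZE];  /* includes the ATS (Arrival_Time_Stamp) */
    uint8_t ts_packet[TS_PACKET_SIZE];              /* TS header (with PID) + TS payload     */
} source_packet_t;

/* The SPN is the packet's ordinal number counted from the head of the
 * multiplexed data, so it can be derived from the byte offset. */
static uint32_t spn_from_offset(size_t byte_offset)
{
    return (uint32_t)(byte_offset / SOURCE_PACKET_SIZE);
}
```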
- TS packets included in the multiplexed data include PAT (Program Association Table), PMT (Program Map Table), PCR (Program Clock Reference), and the like in addition to each stream such as video / audio / caption.
- PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0.
- the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
- the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
- the PCR carries information on the STC (System Time Clock) time corresponding to the ATS at which the PCR packet is transferred to the decoder, so that the decoder can synchronize its clocks.
- FIG. 22 is a diagram for explaining the data structure of the PMT in detail.
- a PMT header describing the length of data included in the PMT is arranged at the head of the PMT.
- a plurality of descriptors related to multiplexed data are arranged.
- the copy control information and the like are described as descriptors.
- a plurality of pieces of stream information regarding each stream included in the multiplexed data are arranged.
- the stream information includes a stream descriptor in which a stream type, a stream PID, and stream attribute information (frame rate, aspect ratio, etc.) are described to identify a compression codec of the stream.
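- purely as an illustration of the PMT contents just described, a simplified C representation might look as follows; the field names, fixed array bound, and the particular attribute fields chosen here are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* One stream-information entry of the PMT: stream type, PID, and a stream
 * descriptor with attribute information such as frame rate and aspect ratio. */
typedef struct {
    uint8_t  stream_type;     /* identifies the compression codec of the stream */
    uint16_t pid;             /* PID of the elementary stream                   */
    double   frame_rate;      /* from the stream descriptor (video streams)     */
    char     aspect_ratio[8]; /* e.g. "16:9"                                    */
} pmt_stream_info_t;

/* Simplified PMT: header (data length), descriptors such as copy-control
 * information, then the per-stream information entries. */
typedef struct {
    uint16_t          section_length;  /* length of the data contained in the PMT  */
    uint8_t           copy_control;    /* one of the descriptors                    */
    size_t            num_streams;
    pmt_stream_info_t streams[16];     /* fixed bound chosen only for illustration */
} pmt_t;
```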
- the multiplexed data is recorded together with the multiplexed data information file.
- the multiplexed data information file is management information of multiplexed data, has one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
- the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time.
- the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
- the ATS interval included in the multiplexed data is set to be equal to or less than the system rate.
- the playback start time is the PTS of the first video frame of the multiplexed data
- the playback end time is set by adding the playback interval for one frame to the PTS of the video frame at the end of the multiplexed data.
- attribute information about each stream included in the multiplexed data is registered for each PID.
- the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
- the video stream attribute information carries information such as which compression codec was used to compress the video stream, and the resolution, aspect ratio, and frame rate of the individual picture data constituting the video stream.
- the audio stream attribute information carries information such as which compression codec was used to compress the audio stream, how many channels the audio stream contains, which language it supports, and the sampling frequency. These pieces of information are used, for example, to initialize the decoder before playback by the player.
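- the management information just described can be pictured with the following C sketch; the units and field names are assumptions used only to make the structure concrete.

```c
#include <stdint.h>

/* Multiplexed data information (clip information): system rate and the
 * playback start/end times described above. */
typedef struct {
    uint32_t system_rate_bps;     /* max transfer rate to the system target decoder  */
    uint64_t playback_start_pts;  /* PTS of the first video frame                    */
    uint64_t playback_end_pts;    /* PTS of the last video frame + one frame period  */
} multiplexed_data_info_t;

/* Per-PID stream attribute information; video- and audio-specific fields
 * simply coexist here for brevity. */
typedef struct {
    char     codec[16];           /* compression codec of the stream */
    uint32_t width, height;       /* resolution (video streams)       */
    double   frame_rate;          /* video streams                    */
    char     aspect_ratio[8];     /* video streams                    */
    uint32_t channels;            /* audio streams                    */
    uint32_t sampling_frequency;  /* audio streams                    */
    char     language[4];         /* audio streams                    */
} stream_attribute_t;
```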
- in the present embodiment, the stream type included in the PMT is used among the multiplexed data; when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used.
- specifically, unique information indicating video data generated by the moving picture encoding method or apparatus described in each of the above embodiments is set in the stream type included in the PMT or in the video stream attribute information, so that such video data can be distinguished from video data compliant with other standards.
- FIG. 25 shows steps of the moving picture decoding method according to the present embodiment.
- in step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is acquired from the multiplexed data. Next, in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates multiplexed data generated by the moving picture encoding method or apparatus described in each of the above embodiments.
- when it is determined that the stream type or the video stream attribute information indicates multiplexed data generated by the moving picture encoding method or apparatus described in each of the above embodiments, decoding is performed in step exS102 by the moving picture decoding method described in each of the above embodiments.
- when the stream type or the video stream attribute information indicates multiplexed data compliant with a conventional standard, decoding is performed by a moving picture decoding method compliant with that conventional standard.
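- the decision flow of FIG. 25 can be sketched as follows; the function names are hypothetical placeholders for the parsing and decoding steps, declared here only so the fragment is self-contained.

```c
#include <stdint.h>
#include <stddef.h>

typedef enum { STD_PRESENT_EMBODIMENT, STD_MPEG2, STD_MPEG4_AVC, STD_VC1 } video_standard_t;

/* Hypothetical helpers: parsing the stream type / attribute information and
 * the two decoding paths. Declared only so the fragment is self-contained. */
video_standard_t read_stream_type_or_attributes(const uint8_t *mux, size_t len);
int decode_with_embodiment_method(const uint8_t *mux, size_t len);
int decode_with_conventional_method(const uint8_t *mux, size_t len, video_standard_t std);

int decode_multiplexed_data(const uint8_t *mux, size_t len)
{
    /* exS100: acquire the stream type or the video stream attribute information */
    video_standard_t std = read_stream_type_or_attributes(mux, len);

    /* exS101: does it indicate data generated by the method of the embodiments? */
    if (std == STD_PRESENT_EMBODIMENT)
        return decode_with_embodiment_method(mux, len);    /* exS102 */

    /* otherwise: decode with a method compliant with the conventional standard */
    return decode_with_conventional_method(mux, len, std);
}
```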
- FIG. 26 shows the configuration of an LSI ex500 that is made into one chip.
- the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
- the power supply circuit unit ex505 is activated to an operable state by supplying power to each unit when the power supply is on.
- under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, and the like through the AV I/O ex509.
- the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
- the accumulated data is sent to the signal processing unit ex507, divided into portions as appropriate according to the processing amount and the processing speed, and the signal processing unit ex507 encodes the audio signal and/or the video signal.
- the encoding process of the video signal is the encoding process described in the above embodiments.
- the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data according to circumstances, and outputs the result from the stream I / Oex 506 to the outside.
- the output multiplexed data is transmitted to the base station ex107 or written to the recording medium ex215. It should be noted that data should be temporarily stored in the buffer ex508 so as to be synchronized when multiplexing.
- although the memory ex511 has been described here as being external to the LSI ex500, it may instead be included within the LSI ex500.
- the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
- the LSI ex500 may be made into one chip or a plurality of chips.
- control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
- for example, the signal processing unit ex507 may further include a CPU. Also, the CPU ex502 may be configured to include the signal processing unit ex507 or, for example, an audio signal processing unit that is a part of the signal processing unit ex507. In such a case, the control unit ex501 is configured to include the signal processing unit ex507 or a CPU ex502 that includes a part of it.
- although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- such a programmable logic device can typically execute the moving picture encoding method or the moving picture decoding method described in each of the above embodiments by loading or reading, from a memory or the like, a program constituting software or firmware.
- FIG. 27 shows a configuration ex800 in the present embodiment.
- the drive frequency switching unit ex803 sets the drive frequency high when the video data has been generated by the moving picture encoding method or apparatus described in each of the above embodiments, and instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data. On the other hand, when the video data is compliant with a conventional standard, the drive frequency switching unit ex803 sets the drive frequency lower than when the video data has been generated by the moving picture encoding method or apparatus described in each of the above embodiments, and instructs the decoding processing unit ex802, which is compliant with the conventional standard, to decode the video data.
- the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 in FIG.
- the decoding processing unit ex801 that executes the moving picture decoding method described in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
- the CPU ex502 identifies which standard the video data conforms to. Then, based on the signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Further, based on the signal from the CPU ex502, the signal processing unit ex507 decodes the video data.
- for identifying the video data, for example, it is conceivable to use the identification information described in Embodiment 3.
- the identification information is not limited to that described in Embodiment 3, and any information that can identify which standard the video data conforms to may be used. For example, it is possible to identify which standard the video data conforms to based on an external signal that identifies whether the video data is used for a television or a disk. In some cases, identification may be performed based on such an external signal. In addition, the selection of the driving frequency in the CPU ex502 may be performed based on, for example, a lookup table in which video data standards and driving frequencies are associated with each other as shown in FIG. The look-up table is stored in the buffer ex508 or the internal memory of the LSI, and the CPU ex502 can select the drive frequency by referring to the look-up table.
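- a lookup table of this kind might be represented as in the following sketch; the specific frequencies are assumptions chosen only to show that a higher drive frequency is associated with the larger decoding workload.

```c
#include <stddef.h>

typedef enum { STD_PRESENT_EMBODIMENT, STD_MPEG2, STD_MPEG4_AVC, STD_VC1 } video_standard_t;

typedef struct {
    video_standard_t standard;
    unsigned         drive_frequency_mhz;  /* value the CPU asks ex512 to set */
} freq_entry_t;

/* FIG. 29-style association between standard and drive frequency; the
 * frequencies here are placeholder values, not figures from the document. */
static const freq_entry_t k_freq_table[] = {
    { STD_PRESENT_EMBODIMENT, 500 },  /* larger decoding workload -> higher frequency */
    { STD_MPEG2,              350 },
    { STD_MPEG4_AVC,          350 },
    { STD_VC1,                350 },
};

static unsigned lookup_drive_frequency(video_standard_t std)
{
    for (size_t i = 0; i < sizeof k_freq_table / sizeof k_freq_table[0]; ++i)
        if (k_freq_table[i].standard == std)
            return k_freq_table[i].drive_frequency_mhz;
    return 350;  /* conservative default for unidentified data */
}
```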
- FIG. 28 shows steps for executing the method of the present embodiment.
- the signal processing unit ex507 acquires identification information from the multiplexed data.
- next, the CPU ex502 identifies, based on the identification information, whether or not the video data has been generated by the encoding method or apparatus described in each of the above embodiments.
- when the video data has been generated by the encoding method or apparatus described in each of the above embodiments, the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a high drive frequency.
- on the other hand, when the identification information indicates video data compliant with a conventional standard, in step exS203 the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a drive frequency lower than when the video data has been generated by the encoding method or apparatus described in each of the above embodiments.
- the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency. For example, when the drive frequency is set low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
- the method of setting the drive frequency only needs to set the drive frequency high when the processing amount for decoding is large and set it low when the processing amount for decoding is small; it is not limited to the setting method described above. For example, when the processing amount for decoding video data compliant with the MPEG4-AVC standard is larger than the processing amount for decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, it is conceivable to reverse the drive frequency settings from the case described above.
- furthermore, the method of setting the drive frequency is not limited to a configuration that lowers the drive frequency. For example, when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the voltage applied to the LSI ex500 or a device including the LSI ex500 may be set high, and when the identification information indicates video data compliant with a conventional standard, the voltage applied to the LSI ex500 or a device including the LSI ex500 may be set low.
- as another example, when the identification information indicates video data compliant with a conventional standard, driving of the CPU ex502 may be temporarily stopped because there is a margin in processing. Even when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, driving of the CPU ex502 may be temporarily stopped if there is a margin in processing; in this case, it is conceivable to set the stop time shorter than when the video data is compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
- the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
- however, if signal processing units ex507 corresponding to the respective standards are used individually, there is a problem that the circuit scale of the LSI ex500 increases and the cost also increases.
- to address this, a configuration is adopted in which the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments and a decoding processing unit compliant with a standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
- An example of this configuration is shown as ex900 in FIG. 30A.
- the moving picture decoding method described in each of the above embodiments and a moving picture decoding method compliant with the MPEG4-AVC standard share some processing content in processes such as entropy decoding, inverse quantization, deblocking filtering, and motion compensation.
- for the common processing content, the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared, while a dedicated decoding processing unit ex901 is used for other processing content specific to one aspect of the present invention that is not supported by the MPEG4-AVC standard.
- in particular, since one aspect of the present invention concerns multi-view image control, it is conceivable, for example, to use the dedicated decoding processing unit ex901 for multi-view image control and to share a decoding processing unit for any or all of the other processes such as inverse quantization, entropy decoding, deblocking filtering, and motion compensation.
- conversely, as for the sharing of the decoding processing unit, a configuration may be used in which the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments is shared for the common processing content, and a dedicated decoding processing unit is used for processing content specific to the MPEG4-AVC standard.
- ex1000 in FIG. 30B shows another example in which processing is partially shared.
- this configuration includes a dedicated decoding processing unit ex1001 corresponding to processing content specific to one aspect of the present invention, a dedicated decoding processing unit ex1002 corresponding to processing content specific to another conventional standard, and a common decoding processing unit ex1003 corresponding to processing content common to the moving picture decoding method according to one aspect of the present invention and the moving picture decoding method of the other conventional standard.
- the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for processing content specific to one aspect of the present invention or to the other conventional standard, respectively, and may be capable of executing other general-purpose processing.
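- as a rough illustration of the shared/dedicated split in FIG. 30A and FIG. 30B, the sketch below routes a bitstream through a common reconstruction stage and then through one of two dedicated stages; all function names are hypothetical placeholders.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct decoded_picture decoded_picture;  /* opaque picture handle (illustrative) */

/* Common stage (ex1003): processing shared by both decoding methods, e.g.
 * entropy decoding, inverse quantization, deblocking, motion compensation. */
decoded_picture *common_reconstruct(const uint8_t *bitstream, size_t len);

/* Dedicated stage (ex1001): processing specific to one aspect of the present
 * invention, here the multi-view image control. */
void dedicated_multiview_control(decoded_picture *pic);

/* Dedicated stage (ex1002): processing specific to the other, conventional standard. */
void dedicated_conventional_postprocess(decoded_picture *pic);

static decoded_picture *decode_access_unit(const uint8_t *bitstream, size_t len,
                                           int is_present_embodiment)
{
    decoded_picture *pic = common_reconstruct(bitstream, len);  /* shared circuitry */
    if (is_present_embodiment)
        dedicated_multiview_control(pic);
    else
        dedicated_conventional_postprocess(pic);
    return pic;
}
```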
- the configuration of the present embodiment can be implemented by LSI ex500.
- in this way, by sharing a decoding processing unit for the processing content common to the moving picture decoding method according to one aspect of the present invention and a conventional moving picture decoding method, the circuit scale of the LSI can be reduced and the cost can also be reduced.
- the present invention can be applied to an image encoding method, an image decoding method, an image encoding device, and an image decoding device.
- the present invention can also be used for high-resolution information display devices or imaging devices such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras that include an image encoding device.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Library & Information Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
The inventors have found that the following problems arise with the image encoding method and the image decoding method described in the "Background Art" section.
The present embodiment describes an efficient technique for defining the viewpoints used for display by the image decoding apparatus so as to guarantee that the displayed content is the same for all viewers.
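As a minimal sketch only, the following C fragment shows one way such viewpoint identification information, carried per screen size (for example in an SEI message), could be used by a decoder to pick its display viewpoints; the field names, the nearest-size matching rule, and the fixed array bound are assumptions for illustration, not part of the defined syntax.

```c
#include <stddef.h>

/* One decoded entry of viewpoint identification information: the screen size
 * it is associated with and the viewpoints to display for that size. */
typedef struct {
    unsigned screen_size;          /* screen size this entry is associated with   */
    unsigned num_display_views;
    unsigned view_ids[8];          /* identifiers of the shooting viewpoints used */
} view_id_info_t;

/* Pick the entry whose associated screen size best matches the decoder's own
 * display, so that every viewer with that class of display sees the same content. */
static const view_id_info_t *select_display_views(const view_id_info_t *entries,
                                                  size_t count,
                                                  unsigned my_screen_size)
{
    const view_id_info_t *best = NULL;
    unsigned best_diff = (unsigned)-1;
    for (size_t i = 0; i < count; ++i) {
        unsigned s = entries[i].screen_size;
        unsigned d = (s > my_screen_size) ? s - my_screen_size : my_screen_size - s;
        if (d < best_diff) {
            best_diff = d;
            best = &entries[i];
        }
    }
    return best;  /* NULL only if count == 0 */
}
```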
By recording, on a storage medium, a program for realizing the configuration of the moving picture encoding method (image encoding method) or the moving picture decoding method (image decoding method) described in each of the above embodiments, the processing described in each of the above embodiments can easily be carried out on an independent computer system. The storage medium may be any medium capable of recording a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, or a semiconductor memory.
It is also possible to generate video data by switching, as necessary, between the moving picture encoding method or apparatus described in each of the above embodiments and a moving picture encoding method or apparatus compliant with a different standard such as MPEG-2, MPEG4-AVC, or VC-1.
The moving picture encoding method and apparatus and the moving picture decoding method and apparatus described in each of the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 26 shows the configuration of the LSI ex500 formed as a single chip. The LSI ex500 includes the elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and these elements are connected to one another via a bus ex510. The power supply circuit unit ex505 activates each unit into an operable state by supplying power to each unit when the power is on.
When decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the processing amount is expected to be larger than when decoding video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. Therefore, in the LSI ex500, a drive frequency higher than the drive frequency of the CPU ex502 used when decoding video data compliant with a conventional standard needs to be set. However, raising the drive frequency raises the power consumption.
A plurality of video data items compliant with different standards may be input to the devices and systems described above, such as a television and a mobile phone. In order to enable decoding even when a plurality of video data items compliant with different standards are input, the signal processing unit ex507 of the LSI ex500 needs to support the plurality of standards. However, if signal processing units ex507 corresponding to the respective standards are used individually, the circuit scale of the LSI ex500 becomes large and the cost increases.
111, 311 First camera
112, 312 Second camera
121, 321 First encoder
122, 322 Second encoder
131, 331, 331A, 331B SEI generation unit
132, 332, 332A, 332B SEI encoder
151, 351 First image
152, 352 Second image
161, 361 First encoded image
162, 362 Second encoded image
171, 262 Optimal distance
172 Encoded optimal distance
200, 400, 400A, 400B Image decoding apparatus
211, 411 First decoder
212, 412 Second decoder
221, 421, 421A, 421B SEI decoder
222, 422, 422A, 422B Display device
251, 451 First decoded image
252, 452 Second decoded image
261, 461 Screen size
313 Third camera
323 Third encoder
353 Third image
363 Third encoded image
371, 462 Viewpoint position
372 Encoded viewpoint position
373, 463 Viewpoint identifier
374 Encoded viewpoint identifier
375, 464 Viewpoint coordinates
376 Encoded viewpoint coordinates
413 Third decoder
453 Third decoded image
Claims (10)

1. An image encoding method for encoding a multi-view image captured from a plurality of shooting viewpoints, the method comprising: a generation step of generating viewpoint identification information for specifying, in association with each of a plurality of screen sizes used by an image decoding apparatus, a plurality of display viewpoints that are a plurality of viewpoints used for display by the image decoding apparatus; and an encoding step of encoding the viewpoint identification information.
2. The image encoding method according to claim 1, wherein the viewpoint identification information indicates one shooting viewpoint among the plurality of shooting viewpoints.
3. The image encoding method according to claim 1, wherein the viewpoint identification information indicates a viewpoint of a combined image generated by combining images captured at two shooting viewpoints among the plurality of shooting viewpoints.
4. The image encoding method according to claim 1 or 2, wherein the viewpoint identification information is an identifier for identifying the plurality of shooting viewpoints.
5. An image decoding method for decoding a bitstream generated by encoding a multi-view image captured from a plurality of shooting viewpoints, the method comprising: a decoding step of decoding viewpoint identification information that is included in the bitstream, is associated with each of a plurality of screen sizes, and is for specifying a plurality of viewpoints; and a determination step of determining a plurality of display viewpoints, which are a plurality of viewpoints used for display by an image decoding apparatus, by using, among the plurality of decoded items of viewpoint identification information, the viewpoint identification information associated with the screen size of a display device included in the image decoding apparatus.
6. The image decoding method according to claim 5, wherein the viewpoint identification information indicates one shooting viewpoint among the plurality of shooting viewpoints, and in the determination step, the one shooting viewpoint indicated by the viewpoint identification information among the plurality of shooting viewpoints is determined as one of the plurality of display viewpoints.
7. The image decoding method according to claim 5, wherein the viewpoint identification information indicates a viewpoint of a combined image generated by combining images captured at two shooting viewpoints among the plurality of shooting viewpoints.
8. The image decoding method according to claim 5 or 6, wherein the viewpoint identification information is an identifier for identifying the plurality of shooting viewpoints.
9. An image encoding apparatus for encoding a multi-view image captured from a plurality of shooting viewpoints, the apparatus comprising: a viewpoint identification information generation unit that generates viewpoint identification information for specifying, in association with each of a plurality of screen sizes used by an image decoding apparatus, a plurality of display viewpoints that are a plurality of viewpoints used for display by the image decoding apparatus; and a viewpoint identification information encoding unit that encodes the viewpoint identification information.
10. An image decoding apparatus for decoding a bitstream generated by encoding a multi-view image captured from a plurality of shooting viewpoints, the apparatus comprising: a viewpoint identification information decoding unit that decodes viewpoint identification information that is included in the bitstream, is associated with each of a plurality of screen sizes, and is for specifying a plurality of viewpoints; and a viewpoint determination unit that determines a plurality of display viewpoints, which are a plurality of viewpoints used for display by the image decoding apparatus, by using, among the plurality of decoded items of viewpoint identification information, the viewpoint identification information associated with the screen size of a display device included in the image decoding apparatus.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13807884.5A EP2876878B1 (en) | 2012-07-19 | 2013-07-05 | Image encoding method, image decoding method, image encoding device, and image decoding device |
KR1020147000284A KR102058606B1 (ko) | 2012-07-19 | 2013-07-05 | 화상 부호화 방법, 화상 복호 방법, 화상 부호화 장치 및 화상 복호 장치 |
CN201380002054.5A CN103688535B (zh) | 2012-07-19 | 2013-07-05 | 图像编码方法、图像解码方法、图像编码装置及图像解码装置 |
JP2013556917A JP6167906B2 (ja) | 2012-07-19 | 2013-07-05 | 画像符号化方法、画像復号方法、画像符号化装置及び画像復号装置 |
US14/198,942 US10104360B2 (en) | 2012-07-19 | 2014-03-06 | Image encoding method, image decoding method, image encoding apparatus, and image decoding apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261673422P | 2012-07-19 | 2012-07-19 | |
US61/673,422 | 2012-07-19 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/198,942 Continuation US10104360B2 (en) | 2012-07-19 | 2014-03-06 | Image encoding method, image decoding method, image encoding apparatus, and image decoding apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014013695A1 true WO2014013695A1 (ja) | 2014-01-23 |
Family
ID=49948539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/004192 WO2014013695A1 (ja) | 2012-07-19 | 2013-07-05 | 画像符号化方法、画像復号方法、画像符号化装置及び画像復号装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US10104360B2 (ja) |
EP (1) | EP2876878B1 (ja) |
JP (1) | JP6167906B2 (ja) |
KR (1) | KR102058606B1 (ja) |
CN (1) | CN103688535B (ja) |
TW (1) | TWI581606B (ja) |
WO (1) | WO2014013695A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111654644A (zh) * | 2020-05-15 | 2020-09-11 | 西安万像电子科技有限公司 | 图像传输方法及系统 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050146521A1 (en) * | 1998-05-27 | 2005-07-07 | Kaye Michael C. | Method for creating and presenting an accurate reproduction of three-dimensional images converted from two-dimensional images |
JP4520229B2 (ja) * | 2003-07-01 | 2010-08-04 | 株式会社エヌ・ティ・ティ・ドコモ | 通信装置およびプログラム |
DE202007019459U1 (de) * | 2006-03-30 | 2012-09-13 | Lg Electronics Inc. | Vorrichtung zum Decodieren/Codieren eines Videosignals |
US8699583B2 (en) | 2006-07-11 | 2014-04-15 | Nokia Corporation | Scalable video coding and decoding |
US20080095228A1 (en) | 2006-10-20 | 2008-04-24 | Nokia Corporation | System and method for providing picture output indications in video coding |
ES2721506T3 (es) | 2007-01-04 | 2019-08-01 | Interdigital Madison Patent Holdings | Métodos y aparato para la información de vistas múltiples, expresada en sintaxis de alto nivel |
CN101291434A (zh) * | 2007-04-17 | 2008-10-22 | 华为技术有限公司 | 多视编解码方法及装置 |
US8384764B2 (en) * | 2007-12-20 | 2013-02-26 | Samsung Electronics Co., Ltd. | Method and apparatus for generating multiview image data stream and method and apparatus for decoding the same |
CN102067181A (zh) * | 2008-06-23 | 2011-05-18 | 松下电器产业株式会社 | 合成装置及合成方法 |
WO2010010521A2 (en) * | 2008-07-24 | 2010-01-28 | Koninklijke Philips Electronics N.V. | Versatile 3-d picture format |
US8947504B2 (en) * | 2009-01-28 | 2015-02-03 | Lg Electronics Inc. | Broadcast receiver and video data processing method thereof |
JP2011223482A (ja) * | 2010-04-14 | 2011-11-04 | Sony Corp | 画像処理装置、画像処理方法、およびプログラム |
US9497458B2 (en) * | 2010-11-26 | 2016-11-15 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, program, and integrated ciruit |
JP5285682B2 (ja) * | 2010-11-29 | 2013-09-11 | シャープ株式会社 | 画像符号化装置、画像符号化方法 |
JP2012010344A (ja) * | 2011-07-13 | 2012-01-12 | Fujifilm Corp | 画像処理装置、方法およびプログラム |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4564107B2 (ja) * | 2008-09-30 | 2010-10-20 | パナソニック株式会社 | 記録媒体、再生装置、システムlsi、再生方法、記録方法、記録媒体再生システム |
JP2012089906A (ja) * | 2009-02-13 | 2012-05-10 | Panasonic Corp | 表示制御装置 |
WO2012026185A1 (ja) * | 2010-08-24 | 2012-03-01 | 富士フイルム株式会社 | 撮像装置およびその動作制御方法 |
Non-Patent Citations (1)
Title |
---|
A. NORKIN; I. GIRDZIJAUSKAS; Y. ZHAO; Y. LUO: "Show-case and syntax for SEI message on reference display information signaling", MPEG DOCUMENT M26275 |
Also Published As
Publication number | Publication date |
---|---|
EP2876878A1 (en) | 2015-05-27 |
EP2876878A4 (en) | 2015-07-29 |
CN103688535A (zh) | 2014-03-26 |
KR102058606B1 (ko) | 2019-12-23 |
KR20150035685A (ko) | 2015-04-07 |
EP2876878B1 (en) | 2018-12-12 |
US10104360B2 (en) | 2018-10-16 |
US20140184742A1 (en) | 2014-07-03 |
CN103688535B (zh) | 2017-02-22 |
TW201414316A (zh) | 2014-04-01 |
JPWO2014013695A1 (ja) | 2016-06-30 |
JP6167906B2 (ja) | 2017-07-26 |
TWI581606B (zh) | 2017-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5340425B2 (ja) | 画像符号化方法、画像復号方法、画像符号化装置および画像復号装置 | |
JP6562369B2 (ja) | 符号化復号方法および符号化復号装置 | |
JP6112418B2 (ja) | 画像符号化方法、画像復号方法、画像符号化装置、画像復号装置および画像符号化復号装置 | |
JP2015504254A (ja) | 時間動きベクトル予測を用いた、符号化方法、復号方法、符号化装置、及び、復号装置 | |
JP6156648B2 (ja) | 動画像符号化方法、動画像符号化装置、動画像復号化方法、および、動画像復号化装置 | |
WO2014010192A1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置及び画像復号装置 | |
WO2012117722A1 (ja) | 符号化方法、復号方法、符号化装置及び復号装置 | |
WO2013128832A1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置、画像復号装置および画像符号化復号装置 | |
JP6414712B2 (ja) | 多数の参照ピクチャを用いる動画像符号化方法、動画像復号方法、動画像符号化装置、および動画像復号方法 | |
WO2013001749A1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置、画像復号装置および画像符号化復号装置 | |
WO2013001813A1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置および画像復号装置 | |
JP5680812B1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置および画像復号装置 | |
JP6167906B2 (ja) | 画像符号化方法、画像復号方法、画像符号化装置及び画像復号装置 | |
WO2013057938A1 (ja) | システム層処理装置、符号化装置、システム層処理方法、および符号化方法 | |
WO2013076991A1 (ja) | 画像符号化方法、画像符号化装置、画像復号方法、および、画像復号装置 | |
WO2012096157A1 (ja) | 画像符号化方法、画像復号方法、画像符号化装置および画像復号装置 | |
WO2012124300A1 (ja) | 動画像符号化方法、動画像復号方法、動画像符号化装置および動画像復号装置 | |
WO2013153808A1 (ja) | 画像復号方法および画像復号装置 | |
WO2013046616A1 (ja) | 画像符号化装置、画像復号装置、画像符号化方法及び画像復号方法 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | ENP | Entry into the national phase | Ref document number: 2013556917; Country of ref document: JP; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 2013807884; Country of ref document: EP |
 | ENP | Entry into the national phase | Ref document number: 20147000284; Country of ref document: KR; Kind code of ref document: A |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13807884; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |