CN104683813B - Decoding apparatus and decoding method - Google Patents
Decoding apparatus and decoding method
- Publication number
- CN104683813B CN104683813B CN201510050838.6A CN201510050838A CN104683813B CN 104683813 B CN104683813 B CN 104683813B CN 201510050838 A CN201510050838 A CN 201510050838A CN 104683813 B CN104683813 B CN 104683813B
- Authority
- CN
- China
- Prior art keywords
- depth
- unit
- image
- dps
- decoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
- 238000000034 method Methods 0.000 title claims abstract description 53
- 230000008569 process Effects 0.000 claims abstract description 27
- 238000002224 dissection Methods 0.000 claims abstract description 7
- 239000012141 concentrate Substances 0.000 claims abstract description 6
- 238000012545 processing Methods 0.000 claims description 97
- 238000005516 engineering process Methods 0.000 description 32
- 239000002131 composite material Substances 0.000 description 29
- 238000004891 communication Methods 0.000 description 18
- 230000015572 biosynthetic process Effects 0.000 description 15
- 238000012937 correction Methods 0.000 description 14
- 238000003786 synthesis reaction Methods 0.000 description 14
- 238000003384 imaging method Methods 0.000 description 13
- 230000005540 biological transmission Effects 0.000 description 12
- 239000011521 glass Substances 0.000 description 10
- 238000006243 chemical reaction Methods 0.000 description 9
- 238000010586 diagram Methods 0.000 description 8
- 239000000284 extract Substances 0.000 description 8
- 238000009877 rendering Methods 0.000 description 7
- 230000000007 visual effect Effects 0.000 description 6
- 238000003702 image correction Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 230000006870 function Effects 0.000 description 4
- 238000001514 detection method Methods 0.000 description 3
- 230000005055 memory storage Effects 0.000 description 3
- 238000012805 post-processing Methods 0.000 description 3
- 239000004065 semiconductor Substances 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 239000000654 additive Substances 0.000 description 1
- 230000000996 additive effect Effects 0.000 description 1
- 230000003321 amplification Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000006837 decompression Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000005693 optoelectronics Effects 0.000 description 1
- 238000010129 solution processing Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals image signals comprising non-image signal components, e.g. headers or format information
- H04N13/178—Metadata, e.g. disparity information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Abstract
The present invention relates to a decoding apparatus and a decoding method capable of reducing the coding amount of an encoded stream when information about a depth image is included in the encoded stream. The present technology can be applied, for example, to an encoding device for multi-view images. The decoding apparatus includes: an acquiring unit that obtains, from an encoded stream containing a depth parameter set and encoded data of a depth image, the depth parameter set and the encoded data, where depth image information, which is information about the depth image, is set in the depth parameter set, and the depth parameter set is different from the sequence parameter set and the picture parameter set; a parsing unit that parses the depth image information from the depth parameter set obtained by the acquiring unit; and a decoding unit that decodes the encoded data obtained by the acquiring unit.
Description
This application is a divisional application of the patent application for invention with application number 201380006563.5, entitled "Encoding device and encoding method, and decoding device and decoding method" (a PCT application entering the national phase, international application number PCT/JP2013/051265).
Technical field
The present technology relates to an encoding device and encoding method, and a decoding device and decoding method, and in particular to an encoding device and encoding method, and a decoding device and decoding method, configured to reduce the coding amount of an encoded stream when information about a depth image is included in the encoded stream.
Background technology
In recent years, 3D images have attracted attention. As a scheme for viewing 3D images, the following scheme (hereinafter referred to as the glasses-type scheme) has become widespread: the viewer wears glasses that open the left-eye shutter while one of two viewpoint images is displayed and open the right-eye shutter while the other is displayed, and watches the two viewpoint images displayed alternately.
However, in such a glasses-type scheme, the viewer must purchase the glasses separately from the 3D display device, which lowers the viewer's willingness to buy. Moreover, because the viewer has to wear the glasses while watching, the viewer may find this troublesome. Accordingly, demand is increasing for a viewing scheme in which 3D images can be watched without wearing glasses (hereinafter referred to as the glasses-free scheme).
In such a glasses-free scheme, viewpoint images of three or more viewpoints are displayed such that the viewable angle differs for each viewpoint, so the viewer can watch a 3D image without wearing glasses by viewing any two of the viewpoint images with the left and right eyes respectively.
As a method of displaying 3D images in the glasses-free scheme, the following method has been devised: a color image and a depth image of a predetermined viewpoint are obtained; multi-view color images including viewpoints other than the predetermined viewpoint are generated based on the color image and the depth image; and the multi-view color images are displayed. Here, "multi-view" refers to three or more viewpoints.
As a method of encoding multi-view color images and depth images, a method of encoding the color images and the depth images separately has been proposed (see, for example, Non-Patent Literature 1).
Reference List
Non-Patent Literature
NPL 1: "Draft Call for Proposals on 3D Video Coding Technology", INTERNATIONAL ORGANISATION FOR STANDARDISATION / ORGANISATION INTERNATIONALE DE NORMALISATION, ISO/IEC JTC1/SC29/WG11 CODING OF MOVING PICTURES AND AUDIO, MPEG2010/N11679, Guangzhou, China, October 2010
Summary of the Invention
Technical Problem
However, when information about a depth image is included in an encoded stream, no consideration has been given to reducing the coding amount of the encoded stream.
The present technology has been made in view of such circumstances, and provides a technique for reducing the coding amount of an encoded stream when information about a depth image is included in the encoded stream.
Solution to Problem
According to a first aspect of the present technology, there is provided an encoding device including: a setting unit that sets depth image information, which is information about a depth image, in a parameter set different from a sequence parameter set and a picture parameter set; an encoding unit that encodes the depth image to generate encoded data; and a transmission unit that transmits an encoded stream including the parameter set set by the setting unit and the encoded data generated by the encoding unit.
An encoding method according to the first aspect of the present technology corresponds to the encoding device according to the first aspect of the present technology.
According to the first aspect of the present technology, depth image information, which is information about a depth image, is set in a parameter set different from the sequence parameter set and the picture parameter set; the depth image is encoded to generate encoded data; and an encoded stream including the parameter set and the encoded data is transmitted.
According to a second aspect of the present technology, there is provided a decoding device including: an acquiring unit that obtains, from an encoded stream including a parameter set and encoded data of a depth image, the parameter set and the encoded data, where depth image information, which is information about the depth image, is set in the parameter set, and the parameter set is different from the sequence parameter set and the picture parameter set; a parsing unit that parses the depth image information from the parameter set obtained by the acquiring unit; and a decoding unit that decodes the encoded data obtained by the acquiring unit.
A decoding method according to the second aspect of the present technology corresponds to the decoding device according to the second aspect of the present technology.
According to the second aspect of the present technology, the parameter set and the encoded data are obtained from an encoded stream including the parameter set and the encoded data of a depth image, where depth image information, which is information about the depth image, is set in the parameter set, and the parameter set is different from the sequence parameter set and the picture parameter set; the depth image information is parsed from the parameter set; and the encoded data is decoded.
The encoding device according to the first aspect and the decoding device according to the second aspect can be realized by causing a computer to execute a program.
To realize the encoding device according to the first aspect and the decoding device according to the second aspect, the program to be executed by the computer can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.
Advantageous Effects of Invention
According to the first aspect of the present technology, the coding amount of an encoded stream can be reduced when information about a depth image is included in the encoded stream.
According to the second aspect of the present technology, an encoded stream whose coding amount has been reduced, with information about a depth image included in the encoded stream, can be decoded.
Brief description of the drawings
Fig. 1 is a diagram illustrating parallax and depth.
Fig. 2 is a block diagram illustrating a configuration example of an encoding device according to an embodiment to which the present technology is applied.
Fig. 3 is a block diagram illustrating a configuration example of the multi-view image encoding unit in Fig. 2.
Fig. 4 is a diagram illustrating a structure example of an encoded stream.
Fig. 5 is a diagram illustrating a syntax example of a DPS.
Fig. 6 is a diagram illustrating a syntax example of a slice header.
Fig. 7 is a flowchart illustrating encoding processing of the encoding device in Fig. 2.
Fig. 8 is a flowchart illustrating details of the multi-view encoding processing in Fig. 7.
Fig. 9 is a flowchart illustrating details of the DPS generation processing in Fig. 8.
Fig. 10 is a block diagram illustrating a configuration example of a decoding device according to an embodiment to which the present technology is applied.
Fig. 11 is a block diagram illustrating a configuration example of the multi-view image decoding unit in Fig. 10.
Fig. 12 is a flowchart illustrating decoding processing of the decoding device in Fig. 10.
Fig. 13 is a flowchart illustrating details of the multi-view decoding processing in Fig. 12.
Fig. 14 is a flowchart illustrating details of the generation processing in Fig. 13.
Fig. 15 is a diagram illustrating a syntax example of an extended SPS.
Fig. 16 is a diagram illustrating another syntax example of an extended SPS.
Fig. 17 is a diagram illustrating a definition of an extended slice_layer.
Fig. 18 is a diagram illustrating a syntax example of an extended slice_layer.
Fig. 19 is a diagram illustrating a syntax example of an extended slice header.
Fig. 20 is a diagram illustrating another syntax example of an extended slice header.
Fig. 21 is a diagram illustrating a syntax example of a NAL unit.
Fig. 22 is a diagram illustrating a syntax example of slice_layer.
Fig. 23 is a diagram illustrating another structure example of an encoded stream.
Fig. 24 is a diagram illustrating a configuration example of a computer according to an embodiment.
Fig. 25 is a diagram illustrating an overall configuration example of a television device to which the present technology is applied.
Fig. 26 is a diagram illustrating an overall configuration example of a portable phone to which the present technology is applied.
Fig. 27 is a diagram illustrating an overall configuration example of a recording/reproducing device to which the present technology is applied.
Fig. 28 is a diagram illustrating an overall configuration example of an imaging device to which the present technology is applied.
Embodiment
<Description of depth images (parallax-related images) in this specification>
Fig. 1 is a diagram illustrating parallax and depth.
As shown in Fig. 1, when a color image of a subject M is captured by a camera c1 arranged at position C1 and a camera c2 arranged at position C2, the depth Z of the subject M, which is the distance from camera c1 (camera c2) to the subject in the depth direction, is defined by the following equation (a).
[Mathematical expression 1]
Z = (L / d) × f ... (a)
Here, L is the distance in the horizontal direction between position C1 and position C2 (hereinafter referred to as the inter-camera distance). Further, d is the parallax, that is, the value obtained by subtracting u2 from u1, where u1 is the horizontal distance of the position of the subject M on the color image captured by camera c1 from the center of that color image, and u2 is the horizontal distance of the position of the subject M on the color image captured by camera c2 from the center of that color image. Further, f is the focal length of camera c1, and in equation (a) the focal length of camera c1 is assumed to be the same as the focal length of camera c2.
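Equation (a) can be exercised with a short sketch (a hedged Python illustration; the helper name and the sample numbers are invented for the example, not taken from the patent):

```python
def depth_from_disparity(baseline_l: float, focal_f: float, disparity_d: float) -> float:
    """Depth Z of subject M per equation (a): Z = (L / d) * f.

    baseline_l  -- inter-camera distance L between positions C1 and C2
    focal_f     -- common focal length f of cameras c1 and c2
    disparity_d -- parallax d = u1 - u2 (in the same pixel units as f)
    """
    if disparity_d == 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return (baseline_l / disparity_d) * focal_f

# Example: a 0.10 m baseline and a 1000-pixel focal length with a
# 20-pixel disparity give a depth of 5.0 m (units follow the baseline).
z = depth_from_disparity(0.10, 1000.0, 20.0)
```

As the formula suggests, halving the disparity doubles the computed depth, which is why nearby subjects show large disparities.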
As expressed by equation (a), the parallax d and the depth Z can be converted uniquely into each other. Therefore, in this specification, an image indicating the parallax d between the two viewpoint color images captured by cameras c1 and c2 and an image indicating the depth Z are collectively referred to as depth images.
A depth image may be an image indicating the parallax d or the depth Z, and the pixel value of the depth image may be not the parallax d or the depth Z itself but, for example, a value obtained by normalizing the parallax d, or a value obtained by normalizing the reciprocal 1/Z of the depth Z.
A value I obtained by normalizing the parallax d to 8 bits (0 to 255) can be obtained by the following equation (b). The number of bits used to normalize the parallax d is not limited to 8; other bit depths such as 10 or 12 may be used.
[Mathematical expression 2]
I = 255 × (d − Dmin) / (Dmax − Dmin) ... (b)
In equation (b), Dmax is the maximum value of the parallax d, and Dmin is the minimum value of the parallax d. The maximum value Dmax and the minimum value Dmin may be set in units of one screen or in units of multiple screens.
A value y obtained by normalizing the reciprocal 1/Z of the depth Z to 8 bits (0 to 255) can be obtained by the following equation (c). The number of bits used to normalize the reciprocal 1/Z of the depth Z is not limited to 8; other bit depths such as 10 or 12 may be used.
[Mathematical expression 3]
y = 255 × (1/Z − 1/Zfar) / (1/Znear − 1/Zfar) ... (c)
In equation (c), Zfar is the maximum value of the depth Z, and Znear is the minimum value of the depth Z. The maximum value Zfar and the minimum value Znear may be set in units of one screen or in units of multiple screens.
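The two normalizations can be sketched as follows (a minimal Python illustration of the 8-bit mappings of equations (b) and (c) as implied by the surrounding definitions; the function names are invented for this example):

```python
def normalize_disparity(d: float, d_min: float, d_max: float, bits: int = 8) -> int:
    """Equation (b): map parallax d in [Dmin, Dmax] onto an integer
    code I in [0, 2**bits - 1] (0..255 for 8 bits)."""
    levels = (1 << bits) - 1
    return round(levels * (d - d_min) / (d_max - d_min))

def normalize_inverse_depth(z: float, z_near: float, z_far: float, bits: int = 8) -> int:
    """Equation (c): map the reciprocal depth 1/Z onto an integer code y;
    the nearest point (Z = Znear) maps to the largest code."""
    levels = (1 << bits) - 1
    return round(levels * (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far))

# With Dmin = 10 and Dmax = 50, a disparity of 30 sits mid-range:
i = normalize_disparity(30, 10, 50)  # round(255 * 20/40) -> 128
```

Because the inverse-depth mapping in equation (c) works on 1/Z, it allocates more code values to near objects, where depth precision matters most for view synthesis.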
Thus, in this specification, considering the fact that the parallax d and the depth Z can be converted uniquely into each other, an image in which the value I obtained by normalizing the parallax d is set as the pixel value and an image in which the value y obtained by normalizing the reciprocal 1/Z of the depth Z is set as the pixel value are collectively referred to as depth images. Here, the color format of a depth image is assumed to be YUV420 or YUV400, but other color formats may be used.
When attention is paid to the information of the value I or the value y itself rather than to the pixel value of the depth image, the value I or the value y is treated as depth information (parallax-related information). Further, the result of mapping the values I and y is referred to as a depth map.
<Embodiment>
<Configuration example of an encoding device according to an embodiment>
Fig. 2 is a block diagram illustrating a configuration example of an encoding device according to an embodiment to which the present technology is applied.
The encoding device 50 in Fig. 2 is configured to include a multi-view color image capturing unit 51, a multi-view color image correction unit 52, a multi-view depth image generation unit 53, a depth image information generation unit 54, and a multi-view image encoding unit 55. The encoding device 50 transmits depth image information (coding parameters), which is information about depth images.
Specifically, the multi-view color image capturing unit 51 of the encoding device 50 captures color images of multiple viewpoints and supplies them as multi-view color images to the multi-view color image correction unit 52. The multi-view color image capturing unit 51 also generates, for each viewpoint, an external parameter, a depth maximum value (parallax-related maximum value), and a depth minimum value (parallax-related minimum value) (details of which are described below). The multi-view color image capturing unit 51 supplies the external parameter, the depth maximum value, and the depth minimum value to the depth image information generation unit 54, and supplies the depth maximum value and the depth minimum value to the multi-view depth image generation unit 53.
The external parameter is a parameter that defines the position of the multi-view color image capturing unit 51 in the horizontal direction. The depth maximum value is the maximum value Zfar of the depth Z when the depth image generated by the multi-view depth image generation unit 53 is an image indicating the depth Z, and is the maximum value Dmax of the parallax d when the depth image is an image indicating the parallax d. The depth minimum value is the minimum value Znear of the depth Z when the depth image generated by the multi-view depth image generation unit 53 is an image indicating the depth Z, and is the minimum value Dmin of the parallax d when the depth image is an image indicating the parallax d.
The multi-view color image correction unit 52 performs color correction, gamma correction, distortion correction, and the like on the multi-view color images supplied from the multi-view color image capturing unit 51. As a result, the focal length of the multi-view color image capturing unit 51 in the horizontal direction (X direction) in the corrected multi-view color images is common to all viewpoints. The multi-view color image correction unit 52 supplies the corrected multi-view color images as multi-view corrected color images to the multi-view depth image generation unit 53 and the multi-view image encoding unit 55.
The multi-view depth image generation unit 53 generates depth images of multiple viewpoints from the multi-view corrected color images supplied from the multi-view color image correction unit 52, based on the depth maximum value and the depth minimum value supplied from the multi-view color image capturing unit 51. Specifically, for each of the multiple viewpoints, the multi-view depth image generation unit 53 obtains the parallax-related value of each pixel from the multi-view corrected color images, and normalizes the parallax-related values based on the depth maximum value and the depth minimum value. Then, the multi-view depth image generation unit 53 generates, for each of the multiple viewpoints, a depth image in which the normalized parallax-related value of each pixel is set as the pixel value of the corresponding pixel of the depth image.
The multi-view depth image generation unit 53 supplies the generated depth images of the multiple viewpoints as multi-view depth images to the multi-view image encoding unit 55.
The depth image information generation unit 54 generates depth image information for each viewpoint. Specifically, the depth image information generation unit 54 obtains the inter-camera distance of each viewpoint based on the external parameter of each viewpoint supplied from the multi-view color image capturing unit 51. The inter-camera distance is the distance between the horizontal position of the multi-view color image capturing unit 51 when capturing the color image of each viewpoint corresponding to the multi-view depth images and the horizontal position of the multi-view color image capturing unit 51 when capturing the color image having the parallax, with respect to that color image, corresponding to the depth image.
The depth image information generation unit 54 sets the depth maximum value and the depth minimum value of each viewpoint from the multi-view color image capturing unit 51, together with the inter-camera distance of each viewpoint, as the depth image information of each viewpoint. The depth image information generation unit 54 supplies the depth image information of each viewpoint to the multi-view image encoding unit 55.
The multi-view image encoding unit 55 encodes the multi-view corrected color images from the multi-view color image correction unit 52 and the multi-view depth images from the multi-view depth image generation unit 53 according to a scheme conforming to the HEVC (High Efficiency Video Coding) scheme. As of August 2011, "WD3: Working Draft 3 of High-Efficiency Video Coding" by Thomas Wiegand, Woo-jin Han, Benjamin Bross, Jens-Rainer Ohm, and Gary J. Sullivan, JCTVC-E603_d5 (version 5), issued on May 20, 2011, was available as a draft of the HEVC scheme.
The multi-view image encoding unit 55 performs differential encoding, for each viewpoint, on the depth image information of each viewpoint supplied from the depth image information generation unit 54, and generates a DPS (depth parameter set) (DRPS) as a NAL (Network Abstraction Layer) unit including the differential encoding results, and so on. Then, the multi-view image encoding unit 55 transmits a bit stream composed of the encoded multi-view corrected color images and multi-view depth images, the DPS, and the like, as an encoded stream (coded bit stream).
Because the multi-view image encoding unit 55 differentially encodes the depth image information and transmits the encoded depth image information, it can reduce the coding amount of the depth image information. To provide a comfortable 3D image, there is a high probability that the depth image information does not change significantly between pictures, so performing differential encoding is effective in reducing the coding amount.
Further, because the multi-view image encoding unit 55 transmits the depth image information included in the DPS, it is possible to prevent the same depth image information from being transmitted redundantly, as would happen if the depth image information were included in slice headers. As a result, the coding amount of the depth image information can be reduced further.
<Configuration example of the multi-view image encoding unit>
Fig. 3 is a block diagram illustrating a configuration example of the multi-view image encoding unit 55 in Fig. 2.
The multi-view image encoding unit 55 in Fig. 3 is configured to include an SPS encoding unit 61, a PPS encoding unit 62, a DPS encoding unit 63, a slice header encoding unit 64, and a slice encoding unit 65.
The SPS encoding unit 61 of the multi-view image encoding unit 55 generates an SPS in units of sequences and supplies the SPS to the PPS encoding unit 62. The PPS encoding unit 62 generates a PPS in units of pictures, adds the PPS to the SPS supplied from the SPS encoding unit 61, and supplies the result to the slice header encoding unit 64.
The DPS encoding unit 63 performs differential encoding on the depth image information for each slice of the depth image of each viewpoint, based on the depth image information of each viewpoint supplied from the depth image information generation unit 54 in Fig. 2. Specifically, when the type of the slice to be processed is the intra type, the DPS encoding unit 63 sets the depth image information of that slice as the differential encoding result without change. Conversely, when the type of the slice to be processed is the inter type, the DPS encoding unit 63 sets the difference between the depth image information of that slice and the depth image information of the immediately preceding slice as the differential encoding result.
When a DPS including the differential encoding result of the depth image information has not yet been generated, the DPS encoding unit 63 functions as a setting unit and sets the differential encoding result in a DPS. The DPS encoding unit 63 also assigns to the DPS a DPS_id (index identifier), which is an ID (identification number) that uniquely identifies the DPS, and sets the DPS_id in the DPS. Then, the DPS encoding unit 63 supplies the DPS, in which the differential encoding result of the depth image information and the DPS_id are set, to the slice header encoding unit 64.
Conversely, when a DPS including the differential encoding result of the depth image information has already been generated, the DPS encoding unit 63 supplies the DPS_id of that DPS to the slice header encoding unit 64.
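The slice-type rule and the DPS_id reuse described above can be sketched as follows (a hypothetical Python model of the bookkeeping only; `DpsEncoder`, `DepthInfo`, and the method names are invented for this illustration and are not the actual codec API — in the real stream the DPS is a NAL unit with its own syntax, as in Fig. 5):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DepthInfo:
    depth_min: int        # depth/parallax minimum value
    depth_max: int        # depth/parallax maximum value
    camera_distance: int  # inter-camera distance

class DpsEncoder:
    """Per-slice differential encoding of depth image information,
    reusing an existing DPS_id when the same result was already set."""

    def __init__(self):
        self._dps_by_result = {}  # differential-coding result -> DPS_id
        self._prev = None         # depth info of the preceding slice

    def encode_slice(self, info: DepthInfo, is_intra: bool):
        if is_intra or self._prev is None:
            # Intra-type slice: the depth image information itself.
            result = (info.depth_min, info.depth_max, info.camera_distance)
        else:
            # Inter-type slice: difference from the preceding slice.
            result = (info.depth_min - self._prev.depth_min,
                      info.depth_max - self._prev.depth_max,
                      info.camera_distance - self._prev.camera_distance)
        self._prev = info
        if result not in self._dps_by_result:
            # No DPS with this result yet: set a new DPS and allocate a DPS_id.
            self._dps_by_result[result] = len(self._dps_by_result)
        return self._dps_by_result[result], result
```

Keying the table on the differential result is what lets identical depth image information across slices be signalled by a single DPS_id in each slice header instead of being retransmitted.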
The slice header encoding unit 64 functions as a setting unit and sets, in the slice header of the slice of the depth image of the corresponding viewpoint, the DPS_id of the DPS supplied from the DPS encoding unit 63, or sets that DPS_id. The slice header encoding unit 64 also generates slice headers for the multi-view color images. The slice header encoding unit 64 further adds the DPS supplied from the DPS encoding unit 63 and the slice headers of the multi-view depth images and multi-view color images to the PPS, to which the SPS has been added, supplied from the PPS encoding unit 62, and supplies the result to the slice encoding unit 65.
The slice encoding unit 65 functions as an encoding unit and encodes, in units of slices, the multi-view corrected color images from the multi-view color image correction unit 52 and the multi-view depth images from the multi-view depth image generation unit 53 according to a scheme conforming to the HEVC scheme. At this time, the slice encoding unit 65 uses the depth image information included in the DPS with the DPS_id contained in the slice header supplied from the slice header encoding unit 64.
The slice encoding unit 65 generates an encoded stream by adding the encoded data in units of slices, obtained as the encoding result, to the slice headers to which the SPS, PPS, and DPS supplied from the slice header encoding unit 64 have been added. The slice encoding unit 65 functions as a transmission unit and transmits the encoded stream.
<Example of the structure of the encoded stream>
Fig. 4 is a diagram showing an example of the structure of the encoded stream.
In Fig. 4, for convenience of description, only the encoded data of the slices of the multi-view depth image is described. In practice, however, the encoded data of the slices of the multi-view color image is also arranged in the encoded stream.
As shown in Fig. 4, the SPS in units of sequences, the PPS in units of pictures, and the encoded data in units of slices, to which the DPS and the slice header in units of slices have been added, are arranged in the encoded stream in this order.
In the example of Fig. 4, the minimum depth value, the maximum depth value, and the inter-camera distance of the intra-type slice among the slices of the picture corresponding to PPS#0, the 0th PPS, are 10, 50, and 100, respectively. Therefore, the minimum depth value "10", the maximum depth value "50", and the inter-camera distance "100" themselves are generated as the differential coding result of the depth image information of that slice. Then, since a DPS containing this differential coding result has not yet been generated, a DPS containing the differential coding result is set in the encoded stream and, for example, 0 is assigned as its DPS_id. Then, 0 is included in the slice header as the DPS_id.
In the example of Fig. 4, the minimum depth value, the maximum depth value, and the inter-camera distance of the 1st inter-type slice among the slices of the picture corresponding to PPS#0 are 9, 48, and 105, respectively. Therefore, the difference "-1", obtained by subtracting the minimum depth value "10" of the immediately preceding intra-type slice in coding order from the minimum depth value "9" of this slice, is generated as the differential coding result of the depth image information of the slice. Similarly, the difference "-2" between the maximum depth values and the difference "5" between the inter-camera distances are generated as the differential coding result of the depth image information.
Since a DPS containing this differential coding result has not yet been generated, a DPS containing the differential coding result is set in the encoded stream and, for example, 1 is assigned as its DPS_id. Then, 1 is included in the slice header as the DPS_id.
In the example of Fig. 4, the minimum depth value, the maximum depth value, and the inter-camera distance of the 2nd inter-type slice among the slices of the picture corresponding to PPS#0 are 7, 47, and 110, respectively. Therefore, the difference "-2", obtained by subtracting the minimum depth value "9" of the immediately preceding 1st inter-type slice in coding order from the minimum depth value "7" of this slice, is generated as the differential coding result of the depth image information of the slice. Similarly, the difference "-1" between the maximum depth values and the difference "5" between the inter-camera distances are generated as the differential coding result of the depth image information.
Since a DPS containing this differential coding result has not yet been generated, a DPS containing the differential coding result is set in the encoded stream and, for example, 2 is assigned as its DPS_id. Then, 2 is included in the slice header as the DPS_id.
In the example of Fig. 4, the differential coding results of the depth image information of the three inter-type slices of the picture corresponding to PPS#1, the 1st PPS, are identical to the differential coding result of the depth image information of the 2nd inter-type slice among the slices of the picture corresponding to PPS#0. Therefore, no DPS is set for these three inter-type slices, and 2 is included in the slice headers of the slices as the DPS_id.
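The numeric walk-through above can be sketched in a few lines. The sketch below is illustrative only (the table and function names are ours, not part of the scheme): an intra slice emits its depth image information as-is, an inter slice emits the differences from the preceding slice, and a DPS_id is assigned only when no DPS with the same differential coding result is held yet.

```python
# Differential coding of depth image information (min depth, max depth,
# inter-camera distance) with DPS reuse, following the example of Fig. 4.

def diff_encode(info, prev_info, is_intra):
    """Return the differential coding result for one slice."""
    if is_intra:
        return info  # intra slice: the values themselves are the result
    return tuple(cur - prev for cur, prev in zip(info, prev_info))

dps_table = {}      # differential coding result -> DPS_id
slice_dps_ids = []
prev = None
# (min_depth, max_depth, inter_camera_distance), intra flag, per slice
slices = [((10, 50, 100), True),   # intra slice of the PPS#0 picture
          ((9, 48, 105), False),   # 1st inter-type slice
          ((7, 47, 110), False),   # 2nd inter-type slice
          ((5, 46, 115), False)]   # a PPS#1 slice with the same differences
for info, is_intra in slices:
    result = diff_encode(info, prev, is_intra)
    if result not in dps_table:             # set a new DPS only when needed
        dps_table[result] = len(dps_table)  # DPS_ids assigned in setting order
    slice_dps_ids.append(dps_table[result])
    prev = info

print(slice_dps_ids)  # [0, 1, 2, 2] -- the last slice reuses DPS_id 2
```

Only three DPSs are set for the four slices, mirroring how the PPS#1 slices in Fig. 4 carry only the DPS_id 2 in their slice headers.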
<Example of DPS syntax>
Fig. 5 is a diagram showing an example of the syntax of the DPS.
As shown in the 2nd row of Fig. 5, the DPS_id (depth_parameter_set_id) assigned to the DPS is included in the DPS. As shown in the 14th row, the maximum and minimum depth values (depth_ranges) are included in the DPS. As shown in the 17th row, the inter-camera distance (vsp_param) is included in the DPS.
<Example of slice header syntax>
Fig. 6 is a diagram showing an example of the syntax of the slice header.
As shown in the 3rd to 7th rows of Fig. 6, when the NAL unit type nal_unit_type of the encoded data in units of slices to which the slice header has been added is 21, which indicates that encoding is performed according to the 3DVC (3-dimensional video coding) scheme, i.e., when the slice header is the slice header of a depth image, and the slice type slice_type is an inter type, the slice header includes a flag indicating whether weighted prediction is performed on the depth image.
Specifically, when the slice type slice_type is P (slice_type == P), the slice header includes the flag depth_weighted_pred_flag indicating whether forward weighted prediction is performed. On the other hand, when the slice type slice_type is B (slice_type == B), the slice header includes the flag depth_weighted_bipred_flag indicating whether forward and backward weighted prediction is performed.
As shown in the 8th to 10th rows, when weighted prediction is performed, the slice header includes the DPS_id (depth_parameter_set_id). Specifically, when the slice type slice_type is P and the flag depth_weighted_pred_flag is 1, or when the slice type slice_type is B and the flag depth_weighted_bipred_flag is 1, the DPS_id (depth_parameter_set_id) is included.
Although not shown, when the NAL unit type nal_unit_type of the encoded data of the slice has the value 21, the DPS_id is included irrespective of the slice type slice_type being I.
Except for the description of the case where the NAL unit type nal_unit_type is 21 in the 3rd to 10th rows, the syntax of Fig. 6 is identical to the syntax of an existing slice header. That is, the information in the slice header of the depth image, other than the flag depth_weighted_pred_flag or the flag depth_weighted_bipred_flag and the DPS_id, is identical to the information in the slice header of the color image. Therefore, compatibility with existing encoded streams can be fully maintained.
Since the slice header includes the flag depth_weighted_pred_flag and the flag depth_weighted_bipred_flag, the flag depth_weighted_pred_flag or the flag depth_weighted_bipred_flag can be set in units of slices.
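The condition in the 8th to 10th rows of Fig. 6 can be restated as a small predicate. The helper below is our own illustration (not syntax from the figures): a depth slice header (nal_unit_type == 21) carries a DPS_id for an I slice unconditionally, and for a P or B slice only when the corresponding weighted-prediction flag is 1.

```python
# Hypothetical predicate mirroring when a slice header includes a DPS_id.

def slice_header_has_dps_id(nal_unit_type, slice_type,
                            depth_weighted_pred_flag=0,
                            depth_weighted_bipred_flag=0):
    if nal_unit_type != 21:          # not a 3DVC depth slice
        return False
    if slice_type == 'I':            # included irrespective of the flags
        return True
    if slice_type == 'P':            # forward weighted prediction
        return depth_weighted_pred_flag == 1
    if slice_type == 'B':            # forward and backward weighted prediction
        return depth_weighted_bipred_flag == 1
    return False

print(slice_header_has_dps_id(21, 'P', depth_weighted_pred_flag=1))  # True
print(slice_header_has_dps_id(21, 'B'))                              # False
```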
<Description of the processing of the encoding device>
Fig. 7 is a flowchart showing the encoding process of the encoding device 50 in Fig. 2.
In step S10 of Fig. 7, the multi-view color image capturing unit 51 of the encoding device 50 captures color images of multiple viewpoints and supplies them, as a multi-view color image, to the multi-view color image correction unit 52.
In step S11, the multi-view color image capturing unit 51 generates the external parameters, the maximum depth value, and the minimum depth value of each viewpoint. The multi-view color image capturing unit 51 supplies the external parameters, the maximum depth value, and the minimum depth value to the depth image information generation unit 54, and supplies the maximum depth value and the minimum depth value to the multi-view depth image generation unit 53.
In step S12, the multi-view color image correction unit 52 performs color correction, gamma correction, distortion correction, and the like on the multi-view color image supplied from the multi-view color image capturing unit 51. The multi-view color image correction unit 52 supplies the corrected multi-view color image, as a multi-view corrected color image, to the multi-view depth image generation unit 53 and the multi-view image encoding unit 55.
In step S13, the multi-view depth image generation unit 53 generates depth images of multiple viewpoints from the multi-view corrected color image supplied from the multi-view color image correction unit 52, based on the maximum and minimum depth values supplied from the multi-view color image capturing unit 51. Then, the multi-view depth image generation unit 53 supplies the generated depth images of the multiple viewpoints, as a multi-view depth image, to the multi-view image encoding unit 55.
In step S14, the depth image information generation unit 54 generates the depth image information of each viewpoint and supplies it to the multi-view image encoding unit 55.
In step S15, the multi-view image encoding unit 55 performs a multi-view encoding process of encoding the multi-view corrected color image and the multi-view depth image according to a scheme conforming to the HEVC scheme. Details of the multi-view encoding process will be described with reference to Fig. 8, described below.
In step S16, the multi-view image encoding unit 55 transmits the encoded stream generated as the result of step S15, and the process ends.
Fig. 8 is a flowchart showing the details of the multi-view encoding process of step S15 of Fig. 7.
In step S31 of Fig. 8, the SPS encoding unit 61 (Fig. 3) of the multi-view image encoding unit 55 generates the SPS in units of sequences and supplies it to the PPS encoding unit 62.
In step S32, the PPS encoding unit 62 generates the PPS in units of pictures, adds it to the SPS supplied from the SPS encoding unit 61, and supplies the result to the slice header encoding unit 64. The subsequent processing of steps S33 to S37 is performed in units of slices of each viewpoint.
In step S33, the DPS encoding unit 63 performs a DPS generation process of generating the DPS of the slice of the processing target viewpoint (hereinafter referred to as the target viewpoint slice). Details of the DPS generation process will be described with reference to Fig. 9, described below.
In step S34, the slice header encoding unit 64 generates the slice header of the depth image of the target viewpoint slice, including the DPS_id of the DPS supplied from the DPS encoding unit 63 or the supplied DPS_id.
In step S35, the slice header encoding unit 64 generates the slice header of the corrected color image of the target viewpoint slice. Then, the slice header encoding unit 64 adds the DPS and the slice headers of the depth image and the corrected color image to the PPS, to which the SPS supplied from the PPS encoding unit 62 has been added, and supplies the result to the slice encoding unit 65.
In step S36, the slice encoding unit 65 encodes the depth image of the target viewpoint slice supplied from the multi-view depth image generation unit 53 according to a 3DVC scheme conforming to the HEVC scheme, based on the depth image information contained in the DPS having the DPS_id included in the slice header of the depth image of the target viewpoint slice supplied from the slice header encoding unit 64.
In step S37, the slice encoding unit 65 encodes the corrected color image of the target viewpoint slice supplied from the multi-view color image correction unit 52 according to a scheme conforming to the HEVC scheme. The slice encoding unit 65 generates an encoded stream by adding the encoded data in units of slices, obtained as the encoding results of steps S36 and S37, to the slice header, to which the SPS, PPS, and DPS have been added, supplied from the slice header encoding unit 64. Then, the process returns to step S15 of Fig. 7 and proceeds to step S16.
Fig. 9 is a flowchart showing the details of the DPS generation process of step S33 of Fig. 8.
In step S51 of Fig. 9, the DPS encoding unit 63 determines whether the type of the target viewpoint slice is an intra type. When the type of the target viewpoint slice is determined to be an intra type in step S51, the process proceeds to step S52.
In step S52, the DPS encoding unit 63 determines whether a DPS containing the depth image information of the target viewpoint slice supplied from the depth image information generation unit 54 in Fig. 2 has already been generated.
When it is determined in step S52 that the DPS has not yet been generated, the DPS encoding unit 63 generates, in step S53, a DPS containing the depth image information of the target viewpoint slice as the differential coding result, and the process proceeds to step S57.
Conversely, when the type of the target viewpoint slice is determined not to be an intra type in step S51, i.e., when the type of the target viewpoint slice is an inter type, the process proceeds to step S54.
In step S54, the DPS encoding unit 63 performs differential coding by obtaining, as the differential coding result, the difference between the depth image information of the target viewpoint slice and the depth image information of the slice of the same viewpoint immediately preceding the target viewpoint slice in coding order.
In step S55, the DPS encoding unit 63 determines whether a DPS containing the differential coding result obtained in step S54 has already been generated. When it is determined in step S55 that the DPS has not yet been generated, the DPS encoding unit 63 generates, in step S56, a DPS containing the differential coding result obtained in step S54, and the process proceeds to step S57.
In step S57, the DPS encoding unit 63 assigns a DPS_id to the DPS generated in step S53 or step S56 and includes the DPS_id in the DPS. The DPS encoding unit 63 holds the DPS containing the DPS_id. The held DPSs are used for the determinations of steps S52 and S55.
In step S58, the DPS encoding unit 63 outputs the DPS containing the DPS_id to the slice header encoding unit 64. Then, the process returns to step S33 of Fig. 8 and proceeds to step S34.
Conversely, when it is determined in step S52 that the DPS has already been generated, the DPS encoding unit 63 detects the DPS_id of that DPS among the DPSs held in step S57, and outputs the DPS_id to the slice header encoding unit 64 in step S59. Then, the process returns to step S33 of Fig. 8 and proceeds to step S34.
Conversely, when it is determined in step S55 that the DPS has already been generated, the DPS encoding unit 63 detects the DPS_id of that DPS among the DPSs held in step S57, and outputs the DPS_id to the slice header encoding unit 64 in step S60. Then, the process returns to step S33 of Fig. 8 and proceeds to step S34.
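The flowchart of Fig. 9 can be condensed into a single routine. This is a minimal sketch under our own data structures (a dictionary of held results standing in for the held DPSs; the class name is ours): intra slices use the depth image information itself as the differential coding result, inter slices use the difference from the preceding slice, and a new DPS_id is assigned only when the result is not yet held.

```python
# Sketch of the DPS generation process of Fig. 9.

class DpsEncoder:
    def __init__(self):
        self.held = {}          # differential coding result -> DPS_id (step S57)
        self.prev_info = None   # depth image information of the preceding slice

    def generate(self, info, is_intra):
        if is_intra:                                   # steps S51 -> S52/S53
            result = info
        else:                                          # step S54
            result = tuple(c - p for c, p in zip(info, self.prev_info))
        self.prev_info = info
        if result in self.held:                        # steps S52/S55 -> S59/S60
            return self.held[result], False            # output the DPS_id only
        dps_id = len(self.held)                        # steps S56/S57
        self.held[result] = dps_id
        return dps_id, True                            # output a new DPS

enc = DpsEncoder()
print(enc.generate((10, 50, 100), True))   # (0, True): new DPS
print(enc.generate((9, 48, 105), False))   # (1, True): new DPS, diffs (-1, -2, 5)
print(enc.generate((8, 46, 110), False))   # (1, False): same diffs, DPS reused
```

The boolean in the return value distinguishes step S58 (a new DPS is output) from steps S59/S60 (only the DPS_id of a held DPS is output).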
As described above, since the encoding device 50 sets the depth image information in the DPS, includes it in the encoded stream, and transmits the encoded stream, the depth image information can be shared between slices. As a result, compared with the case where the depth image information is included in the slice header or the like for transmission, the redundancy of the depth image information can be further reduced, and thus the amount of coding can be reduced.
Since the depth image information is set in the DPS, which is different from the existing parameter sets SPS and PPS, and the encoded stream is generated accordingly, the encoding device 50 can generate an encoded stream compatible with existing encoded streams.
In addition, when the encoding device 50 assigns the DPS_ids in the order in which the DPSs are set, the decoding side can detect, based on the DPS_ids included in the DPSs, that a DPS was lost during transmission. Therefore, in this case, the encoding device 50 can perform transmission with high error resilience.
In the encoding device 50, the multi-view depth image is generated from the multi-view corrected color image. However, the multi-view depth image may instead be generated by sensors detecting the parallax d and the depth Z when the multi-view color image is captured.
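The error-resilience point above follows from the sequential assignment of DPS_ids. The check below is our own illustration (not a normative procedure from the scheme): when DPS_ids are assigned 0, 1, 2, ... in setting order, a gap in the received sequence reveals a DPS lost in transmission.

```python
# Detecting lost DPSs from gaps in sequentially assigned DPS_ids.

def find_lost_dps_ids(received_dps_ids):
    """Return DPS_ids missing from a stream whose ids were assigned 0, 1, 2, ..."""
    if not received_dps_ids:
        return []
    expected = set(range(max(received_dps_ids) + 1))
    return sorted(expected - set(received_dps_ids))

print(find_lost_dps_ids([0, 1, 3, 4]))  # [2] -- the DPS with DPS_id 2 was lost
```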
<Configuration example of the decoding device of the embodiment>
Fig. 10 is a block diagram showing a configuration example of the decoding device according to the embodiment to which the present technology is applied, which decodes the encoded stream transmitted from the encoding device 50 in Fig. 2.
The decoding device 80 in Fig. 10 is configured to include a multi-view image decoding unit 81, a view synthesis unit 82, and a multi-view image display unit 83.
The multi-view image decoding unit 81 of the decoding device 80 receives the encoded stream transmitted from the encoding device 50 in Fig. 2. The multi-view image decoding unit 81 extracts the SPS, PPS, DPS, slice headers, encoded data in units of slices, and the like from the received encoded stream. Then, for each viewpoint, the multi-view image decoding unit 81 decodes the encoded data of the depth image of the slice corresponding to a slice header, based on the DPS specified by the DPS_id included in that slice header, according to a scheme corresponding to the encoding scheme of the multi-view image encoding unit 55 in Fig. 2, to generate a depth image. The multi-view image decoding unit 81 also decodes the encoded data in units of slices of the multi-view color image according to a scheme corresponding to the encoding scheme of the multi-view image encoding unit 55, to generate a multi-view corrected color image. The multi-view image decoding unit 81 supplies the generated multi-view corrected color image and multi-view depth image to the view synthesis unit 82.
The view synthesis unit 82 performs a warping process (the details of which are described below) on the multi-view depth image from the multi-view image decoding unit 81, warping it to viewpoints (hereinafter referred to as display viewpoints) whose number corresponds to the multi-view image display unit 83. At this time, the depth image information may be used.
The warping process is a process of geometrically transforming an image of a given viewpoint into an image of another viewpoint. The display viewpoints include viewpoints other than the viewpoints corresponding to the multi-view color image.
Based on the depth images of the display viewpoints obtained as the result of the warping process, the view synthesis unit 82 performs a warping process on the multi-view corrected color image supplied from the multi-view image decoding unit 81, warping it to the display viewpoints. At this time, the depth image information may be used. The view synthesis unit 82 supplies the color images of the display viewpoints obtained as the result of the warping process, as a multi-view synthesized color image, to the multi-view image display unit 83.
The multi-view image display unit 83 displays the multi-view synthesized color image supplied from the view synthesis unit 82 so that the viewable angle differs for each viewpoint. A viewer can view the images of any two viewpoints with the left and right eyes, respectively, and thereby watch a 3D image from multiple viewpoints without wearing glasses.
<Configuration example of the multi-view image decoding unit>
Fig. 11 is a block diagram showing a configuration example of the multi-view image decoding unit 81 in Fig. 10.
The multi-view image decoding unit 81 in Fig. 11 is configured to include an SPS decoding unit 101, a PPS decoding unit 102, a DPS decoding unit 103, a slice header decoding unit 104, and a slice decoding unit 105.
The SPS decoding unit 101 of the multi-view image decoding unit 81 receives the encoded stream transmitted from the encoding device 50 in Fig. 2. The SPS decoding unit 101 extracts the SPS from the encoded stream and supplies the extracted SPS and the encoded stream to the PPS decoding unit 102 and the DPS decoding unit 103.
The PPS decoding unit 102 extracts the PPS from the encoded stream supplied from the SPS decoding unit 101. The PPS decoding unit 102 supplies the extracted PPS and the encoded stream supplied from the SPS decoding unit 101 to the slice header decoding unit 104. The DPS decoding unit 103 serves as an acquisition unit and obtains the DPS from the encoded stream supplied from the SPS decoding unit 101. The DPS decoding unit 103 also serves as a parsing unit, parses (extracts) the depth image information from the DPS, and holds the depth image information. The depth image information is supplied to the view synthesis unit 82 when necessary.
The slice header decoding unit 104 extracts the slice header from the encoded stream supplied from the PPS decoding unit 102. The slice header decoding unit 104 reads, from the DPS decoding unit 103, the depth image information of the DPS specified by the DPS_id included in the slice header. The slice header decoding unit 104 supplies the SPS, PPS, slice header, DPS, and encoded stream to the slice decoding unit 105.
The slice decoding unit 105 serves as an acquisition unit and obtains the encoded data in units of slices from the encoded stream supplied from the slice header decoding unit 104. The slice decoding unit 105 also serves as a generation unit and decodes the differential coding result contained in the DPS supplied from the slice header decoding unit 104, based on the slice type of the slice corresponding to the DPS.
Specifically, when the slice type of the slice corresponding to the DPS is an intra type, the slice decoding unit 105 uses the differential coding result contained in the DPS as the decoded result without modification. On the other hand, when the slice type of the slice corresponding to the DPS is an inter type, the slice decoding unit 105 adds the differential coding result contained in the DPS to the held depth image information of the immediately preceding slice in coding order, and sets the sum obtained as the addition result as the decoded result. The slice decoding unit 105 holds the decoded result as depth image information.
The slice decoding unit 105 decodes the encoded data in units of slices according to a scheme corresponding to the encoding scheme of the slice encoding unit 65 (Fig. 3), based on the SPS, PPS, slice header, and depth image information supplied from the slice header decoding unit 104. The slice decoding unit 105 supplies the multi-view corrected color image and the multi-view depth image obtained as the decoded result to the view synthesis unit 82 in Fig. 10.
<Description of the processing of the decoding device>
Fig. 12 is a flowchart showing the decoding process of the decoding device 80 in Fig. 10. The decoding process starts, for example, when the encoded stream is transmitted from the encoding device 50 in Fig. 2.
In step S61 of Fig. 12, the multi-view image decoding unit 81 of the decoding device 80 receives the encoded stream transmitted from the encoding device 50 in Fig. 2.
In step S62, the multi-view image decoding unit 81 performs a multi-view decoding process of decoding the received encoded stream. Details of the multi-view decoding process will be described with reference to Fig. 13, described below.
In step S63, the view synthesis unit 82 generates a multi-view synthesized color image based on the multi-view corrected color image and the multi-view depth image supplied from the multi-view image decoding unit 81.
In step S64, the multi-view image display unit 83 displays the multi-view synthesized color image supplied from the view synthesis unit 82 so that the viewable angle differs for each viewpoint, and the process ends.
Fig. 13 is a flowchart showing the details of the multi-view decoding process of step S62 of Fig. 12.
In step S71 of Fig. 13, the SPS decoding unit 101 of the multi-view image decoding unit 81 extracts the SPS from the received encoded stream. The SPS decoding unit 101 supplies the extracted SPS and the encoded stream to the PPS decoding unit 102 and the DPS decoding unit 103.
In step S72, the PPS decoding unit 102 extracts the PPS from the encoded stream supplied from the SPS decoding unit 101. The PPS decoding unit 102 supplies the extracted PPS, together with the SPS and the encoded stream supplied from the SPS decoding unit 101, to the slice header decoding unit 104.
In step S73, the DPS decoding unit 103 extracts the DPS from the encoded stream supplied from the SPS decoding unit 101, parses the depth image information from the DPS, and holds the depth image information. The subsequent processing of steps S74 to S77 is performed in units of slices of each viewpoint. In step S74, the slice header decoding unit 104 extracts the slice header of the target viewpoint slice from the encoded stream supplied from the PPS decoding unit 102.
In step S75, the slice header decoding unit 104 reads, from the DPS decoding unit 103, the depth image information of the DPS specified by the DPS_id included in the slice header extracted in step S74. The slice header decoding unit 104 supplies the SPS, PPS, slice header, and DPS of the target viewpoint slice, together with the encoded stream, to the slice decoding unit 105.
In step S76, the slice decoding unit 105 decodes the differential coding result contained in the DPS supplied from the slice header decoding unit 104, performing a generation process of generating the depth image information. Details of the generation process will be described with reference to Fig. 14, described below.
In step S77, the slice decoding unit 105 extracts the encoded data of the target viewpoint slice from the encoded stream supplied from the slice header decoding unit 104.
In step S78, the slice decoding unit 105 decodes the encoded data of the target viewpoint slice according to a scheme corresponding to the encoding scheme of the slice encoding unit 65 (Fig. 3), based on the SPS, PPS, slice header, and depth image information supplied from the slice header decoding unit 104. The slice decoding unit 105 supplies the corrected color image and the depth image obtained as the decoded result to the view synthesis unit 82 in Fig. 10. Then, the process returns to step S62 of Fig. 12 and proceeds to step S63.
Fig. 14 is a flowchart showing the details of the generation process of step S76 of Fig. 13.
In step S91, the slice decoding unit 105 determines whether the type of the target viewpoint slice is an intra type. When the type of the target viewpoint slice is determined to be an intra type in step S91, the process proceeds to step S92.
In step S92, the slice decoding unit 105 holds the differential coding result of the minimum depth value contained in the DPS supplied from the slice header decoding unit 104 as the minimum depth value of the depth image information of the decoded result.
In step S93, the slice decoding unit 105 holds the differential coding result of the maximum depth value contained in the DPS supplied from the slice header decoding unit 104 as the maximum depth value of the depth image information of the decoded result.
In step S94, the slice decoding unit 105 holds the differential coding result of the inter-camera distance contained in the DPS supplied from the slice header decoding unit 104 as the inter-camera distance of the depth image information of the decoded result. Then, the process returns to step S76 of Fig. 13 and proceeds to step S77.
Conversely, when the type of the target viewpoint slice is determined not to be an intra type in step S91, i.e., when the type of the target viewpoint slice is an inter type, the process proceeds to step S95.
In step S95, the slice decoding unit 105 performs decoding by adding the differential coding result of the minimum depth value contained in the DPS supplied from the slice header decoding unit 104 to the held minimum depth value of the immediately preceding slice in coding order. The slice decoding unit 105 holds the minimum depth value of the depth image information obtained as the decoded result.
In step S96, the slice decoding unit 105 performs decoding by adding the differential coding result of the maximum depth value contained in the DPS supplied from the slice header decoding unit 104 to the held maximum depth value of the immediately preceding slice in coding order. The slice decoding unit 105 holds the maximum depth value of the depth image information obtained as the decoded result.
In step S97, the slice decoding unit 105 performs decoding by adding the differential coding result of the inter-camera distance contained in the DPS supplied from the slice header decoding unit 104 to the held inter-camera distance of the immediately preceding slice in coding order. The slice decoding unit 105 holds the inter-camera distance of the depth image information obtained as the decoded result. Then, the process returns to step S76 of Fig. 13 and proceeds to step S77.
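The generation process of Fig. 14 is the inverse of the differential coding on the encoder side. The sketch below is illustrative (tuple layout and class name are ours): for an intra slice the differential coding result is held as-is, and for an inter slice it is added to the held values of the preceding slice, recovering the Fig. 4 example values exactly.

```python
# Sketch of the generation process of Fig. 14: recovering (min depth,
# max depth, inter-camera distance) from the differential coding result.

class DepthInfoDecoder:
    def __init__(self):
        self.held = None  # depth image information of the preceding slice

    def generate(self, diff_result, is_intra):
        if is_intra:                      # steps S92-S94: use the result as-is
            decoded = diff_result
        else:                             # steps S95-S97: add to the held values
            decoded = tuple(d + h for d, h in zip(diff_result, self.held))
        self.held = decoded               # keep as depth image information
        return decoded

dec = DepthInfoDecoder()
print(dec.generate((10, 50, 100), True))   # (10, 50, 100)  intra slice
print(dec.generate((-1, -2, 5), False))    # (9, 48, 105)   1st inter slice
print(dec.generate((-2, -1, 5), False))    # (7, 47, 110)   2nd inter slice
```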
As described above, the decoding device 80 can decode an encoded stream whose amount of coding has been reduced by setting the depth image information in the DPS. In addition, since the depth image information is included in the encoded stream, the decoding device 80 can decode an encoded stream encoded using the depth image information.
Since the depth image information is included in the DPS, which is different from the existing parameter sets SPS and PPS, the depth image information can easily be used in post-processing such as the warping process. Furthermore, since the DPSs are arranged collectively before the encoded data in units of slices, the view synthesis unit 82 can obtain the depth image information collectively before decoding.
The encoding and decoding of the multi-view depth image may also be performed without using the depth image information.
In the above-described embodiment, the DPS_id is included in the slice header. However, for example, when the depth image information is set in units of sequences (GOP (group of pictures) units), the existing SPS may be extended, and the DPS_id may be included in the extended SPS (hereinafter referred to as the extension SPS).
In this case, the syntax of the extension SPS is shown, for example, in Fig. 15. That is, as shown in the 2nd row, the extension SPS includes the flag depth_range_present_flag (identification information) identifying whether a DPS is present, and, as shown in the 3rd row, when the flag depth_range_present_flag is 1, the extension SPS includes the DPS_id (depth_parameter_set_id).
In this case, as shown in the 5th and 6th rows of Fig. 16, the flag depth_weighted_pred_flag and the flag depth_weighted_bipred_flag may also be set in units of sequences and included in the extension SPS.
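The conditional structure of Figs. 15 and 16 can be sketched as a small reader. The field layout below is a simplification of ours, not the actual bitstream syntax: the DPS_id and, optionally, the sequence-level weighted-prediction flags are read only when depth_range_present_flag is 1.

```python
# Illustrative reader for the extension SPS fields of Figs. 15 and 16.

def parse_extension_sps(fields):
    sps = {'depth_range_present_flag': fields['depth_range_present_flag']}
    if sps['depth_range_present_flag'] == 1:
        sps['depth_parameter_set_id'] = fields['depth_parameter_set_id']
        # flags set in units of sequences (Fig. 16); default 0 when absent
        sps['depth_weighted_pred_flag'] = fields.get('depth_weighted_pred_flag', 0)
        sps['depth_weighted_bipred_flag'] = fields.get('depth_weighted_bipred_flag', 0)
    return sps

print(parse_extension_sps({'depth_range_present_flag': 1,
                           'depth_parameter_set_id': 3}))
```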
Furthermore it is possible to extend existing slice header and the existing SPS of non-expanding, and DPS_id can be included in extension section
(slice header is hereinafter referred to extended in header).
In this case, for example, slice_layer is expanded.Therefore, as shown in figure 17, the type of NAL units is defined
Nal_unit_type is 21 NAL units, i.e. be extended to according to the NAL units of the coded data of 3DVC scheme codes
Slice_layer slice_layer (slice_layer_3dvc_extension_rbsp) (hereinafter referred to extends
slice_layer).As shown in figure 17, the type nal_unit_type of DPS NAL units is 16, this and existing NAL units
(such as SPS or PPS) is different.
As shown in figure 18, extension slice_layer (slice_layer_3dvc_extension_rbsp) coded data
It is defined to include the coded data of extension slice header (slice_header_3dvc_extension) and unit of cutting into slices
(slice_data)。
The grammer of extension slice header (slice_header_3dvc_extension) is shown in such as Figure 19.I.e., such as
Figure 19 the 2nd row is to shown in the 4th row, as mark depth_weighted_pred_flag or depth_weighted_
When bipred_flag is 1, extension slice header (slice_header_3dvc_extension) not only includes existing section report
Head (slice_header) and including DPS_id (depth_parameter_set_id).
As shown in figure 20, extension slice header (slice_header_3dvc_extension) may include to indicate depth_
Weighted_pred_flag or depth_weighted_bipred_flag.
As shown in Figure 19 or Figure 20, because existing slice header is included in extension slice header, therefore it can tie up completely
Hold the compatibility with existing encoding stream.
Slice_layer (as shown in Figure 17 and Figure 18) is not extended, but it is also possible to defined using existing slice_layer
Extend slice header (slice_header_3dvc_extension).
In this case, as shown in the 15th and 16th rows of Figure 21, when the type nal_unit_type of a NAL unit is 21, the NAL unit includes a flag 3dvc_extension_flag, which indicates whether the NAL unit is a NAL unit for the 3DVC scheme.
As shown in the 6th to 8th rows of Figure 22, when the flag 3dvc_extension_flag is 1, indicating a NAL unit for the 3DVC scheme, the coded data of the slice_layer is defined to include an extension slice header (slice_header_3dvc_extension) and the coded data of a slice unit (slice_data).
In the above-described embodiments, as shown in Figure 4, a DPS is shared between slices, and each slice header includes the DPS_id of the DPS for that slice. However, as shown in Figure 23, a DPS may instead be set for each slice, with the DPS added to the coded data of each slice. In this case, no DPS_id is allocated to the DPS, and the slice header does not include a DPS_id.
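The two arrangements — a shared DPS referenced by DPS_id from the slice header (Figure 4) versus a DPS attached inline to each slice with no DPS_id (Figure 23) — can be contrasted in a small sketch; the data structures here are hypothetical, chosen only to make the lookup explicit:

```python
# Hypothetical sketch contrasting the two DPS arrangements described above.
def resolve_dps(slice_info: dict, dps_table: dict) -> dict:
    """Return the DPS that applies to a slice.

    Shared case (Figure 4): the slice header carries a DPS_id indexing a
    table of previously received DPSs. Per-slice case (Figure 23): the DPS
    is attached to the slice's coded data and no DPS_id exists.
    """
    if "inline_dps" in slice_info:          # Figure 23: DPS added per slice
        return slice_info["inline_dps"]
    return dps_table[slice_info["dps_id"]]  # Figure 4: shared DPS lookup

# Usage: two slices share DPS 0; a third slice carries its own inline DPS.
dps_table = {0: {"depth_min": 10, "depth_max": 200}}
shared = resolve_dps({"dps_id": 0}, dps_table)
inline = resolve_dps({"inline_dps": {"depth_min": 5, "depth_max": 100}}, dps_table)
```

The shared arrangement avoids repeating identical depth parameters across slices, while the inline arrangement makes each slice self-contained at the cost of redundancy.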
<Description of a computer to which the present technology is applied>
The series of processes described above can be performed by hardware or by software. When the series of processes is performed by software, a program constituting the software is installed in a general-purpose computer or the like.
Figure 24 shows a configuration example of an embodiment of a computer in which the program that performs the above-described series of processes is installed.
The program can be recorded in advance in the storage unit 808 or the ROM (read-only memory) 802 serving as a recording medium included in the computer.
Alternatively, the program can be stored (recorded) on removable media 811, which can be provided as so-called packaged software. Examples of the removable media 811 include a floppy disk, a CD-ROM (compact disc read-only memory), an MO (magneto-optical) disk, a DVD (digital versatile disc), a magnetic disk, and a semiconductor memory.
The program can be installed in the computer from the above-described removable media 811 via the drive 810, and can also be downloaded to the computer via a communication network or a broadcasting network to be installed in the included storage unit 808. That is, for example, the program can be transmitted wirelessly to the computer from a download site via an artificial satellite for digital satellite broadcasting, or can be transmitted to the computer in a wired manner via a network such as a LAN (local area network) or the Internet.
The computer includes a CPU (central processing unit) 801. An input/output interface 805 is connected to the CPU 801 via a bus 804.
When the user manipulates the input unit 806 to input an instruction via the input/output interface 805, the CPU 801 executes the program stored in the ROM 802 according to the instruction. Alternatively, the CPU 801 loads the program stored in the storage unit 808 into the RAM (random access memory) 803 and executes the program.
The CPU 801 thus performs the processing according to the above-described flowcharts or the processing performed by the above-described block configurations. Then, as necessary, the CPU 801 outputs the processing result from the output unit 807, transmits the processing result from the communication unit 809, or records the processing result in the storage unit 808 via the input/output interface 805, for example.
The input unit 806 is configured to include a keyboard, a mouse, and a microphone. The output unit 807 is configured to include an LCD (liquid crystal display) or a speaker.
Here, in this specification, the processing performed by the computer according to the program need not necessarily be performed in chronological order following the order described in the flowcharts. That is, the processing performed by the computer according to the program also includes processing performed in parallel or individually (for example, parallel processing or object-based processing).
The program may be processed by a single computer (processor) or may be processed in a distributed manner by a plurality of computers. The program may also be transferred to a remote computer to be executed.
The present technology can be applied to an encoding device or a decoding device used when communication is performed via a network medium such as satellite broadcasting, cable TV (television), the Internet, or a portable telephone, or when processing is performed on a storage medium such as an optical disc, a magnetic disk, or a flash memory.
The above-described encoding device and decoding device can be applied to any electronic apparatus. Examples of such electronic apparatuses are described below.
<Configuration example of a television device>
Figure 25 illustrates the overall configuration of a television device to which the present technology is applied. The television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, a speech signal processing unit 907, a speaker 908, and an external interface unit 909. The television device 900 also includes a control unit 910 and a user interface unit 911.
The tuner 902 tunes to a desired channel from the broadcast wave signals received by the antenna 901, performs demodulation, and outputs the obtained encoded stream to the demultiplexer 903.
The demultiplexer 903 extracts the video and speech packets of the program to be viewed from the encoded stream, and outputs the data of the extracted packets to the decoder 904. The demultiplexer 903 also provides packets of data such as an EPG (electronic program guide) to the control unit 910. When scrambling has been applied, descrambling is performed by the demultiplexer or the like.
The decoder 904 performs decoding processing on the packets, and outputs the video data and speech data generated by the decoding processing to the video signal processing unit 905 and the speech signal processing unit 907, respectively.
The video signal processing unit 905 performs noise removal, video processing, and the like on the video data according to user settings. The video signal processing unit 905 generates the video data of the program to be displayed on the display unit 906, or generates image data by processing based on an application provided via a network. The video signal processing unit 905 also generates video data for displaying a menu screen or the like for item selection, and superimposes that video data on the video data of the program. The video signal processing unit 905 generates a drive signal based on the video data generated in this way, and drives the display unit 906.
The display unit 906 drives a display device (for example, a liquid crystal display element) based on the drive signal from the video signal processing unit 905, and displays the video of the program or the like.
The speech signal processing unit 907 performs predetermined processing such as noise removal on the speech data, performs D/A conversion processing or amplification processing on the processed speech data, and provides the resulting speech data to the speaker 908 to perform speech output.
The external interface unit 909 is an interface for connecting to an external device or a network, and performs transmission and reception of data such as video data and speech data.
The user interface unit 911 is connected to the control unit 910. The user interface unit 911 is configured to include manipulation switches or a remote control signal receiving unit, and provides a manipulation signal according to a user manipulation to the control unit 910.
The control unit 910 is configured using a CPU (central processing unit), a memory, and the like. The memory stores the program to be executed by the CPU, various data necessary for the CPU to perform processing, EPG data, data acquired via a network, and the like. The CPU reads and executes the program stored in the memory at a predetermined timing (such as when the television device 900 is started). By executing the program, the CPU controls each unit so that the television device 900 works according to user manipulations.
In the television device 900, a bus 912 is installed to connect the control unit 910 to the tuner 902, the demultiplexer 903, the video signal processing unit 905, the speech signal processing unit 907, the external interface unit 909, and the like.
In the television device configured in this way, the function of the decoding device (decoding method) of the present application is installed in the decoder 904. Therefore, an encoded stream whose amount of code is reduced in a case where information on a depth image is included in the encoded stream can be decoded.
<Configuration example of a portable telephone>
Figure 26 illustrates the overall configuration of a portable telephone to which the present technology is applied. The portable telephone 920 includes a communication unit 922, a speech codec 923, a camera unit 926, an image processing unit 927, a multiplexing/separating unit 928, a recording/reproducing unit 929, a display unit 930, and a control unit 931. These units are connected to each other via a bus 933.
An antenna 921 is connected to the communication unit 922, and a speaker 924 and a microphone 925 are connected to the speech codec 923. A manipulation unit 932 is also connected to the control unit 931.
The portable telephone 920 performs various operations in various modes (such as a voice call mode or a data communication mode), such as transmission and reception of speech signals, transmission and reception of e-mail or image data, image capture, and data recording.
In the voice call mode, the speech signal generated by the microphone 925 undergoes conversion into speech data and data compression by the speech codec 923, and is provided to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the speech data to generate a transmission signal. The communication unit 922 provides the transmission signal to the antenna 921 to be transmitted to a base station (not shown). The communication unit 922 also performs amplification, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921, and provides the obtained speech data to the speech codec 923. The speech codec 923 performs data decompression on the speech data, converts the speech data into an analog speech signal, and outputs the analog speech signal to the speaker 924.
When mail transmission is performed in the data communication mode, the control unit 931 receives the text data input by manipulation of the manipulation unit 932, and displays the input text on the display unit 930. The control unit 931 generates mail data based on a user instruction on the manipulation unit 932 or the like, and provides the mail data to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the mail data, and transmits the obtained transmission signal from the antenna 921. The communication unit 922 also performs amplification processing, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921 to restore the mail data. The mail data is provided to the display unit 930 to display the mail content.
In the portable telephone 920, the received mail data can also be stored in a storage medium by the recording/reproducing unit 929. The storage medium is any rewritable storage medium. For example, the storage medium is a semiconductor memory (such as a RAM or a built-in flash memory), or removable media such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disc, a USB memory, or a memory card.
When image data is transmitted in the data communication mode, the image data generated by the camera unit 926 is provided to the image processing unit 927. The image processing unit 927 performs encoding processing on the image data to generate encoded data.
The multiplexing/separating unit 928 multiplexes the encoded data generated by the image processing unit 927 and the speech data provided from the speech codec 923 according to a predetermined scheme, and provides the multiplexed data to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the multiplexed data, and transmits the obtained transmission signal from the antenna 921. The communication unit 922 also performs amplification processing, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921 to restore the multiplexed data. The multiplexed data is provided to the multiplexing/separating unit 928, which separates it and provides the encoded data and the speech data to the image processing unit 927 and the speech codec 923, respectively. The image processing unit 927 performs decoding processing on the encoded data to generate image data. The image data is provided to the display unit 930 to display the received image. The speech codec 923 converts the speech data into an analog speech signal, provides the analog speech signal to the speaker 924, and outputs the received speech.
In the portable telephone device configured in this way, the functions of the encoding device (encoding method) and the decoding device (decoding method) of the present application are installed in the image processing unit 927. Therefore, the amount of code of an encoded stream can be reduced in a case where information on a depth image is included in the encoded stream. Furthermore, an encoded stream whose amount of code is reduced in a case where information on a depth image is included in the encoded stream can be decoded.
<Configuration example of a recording/reproducing device>
Figure 27 illustrates the overall configuration of a recording/reproducing device to which the present technology is applied. The recording/reproducing device 940 records, for example, the audio data and video data of a received broadcast program on a recording medium, and provides the recorded data to the user at a timing according to a user instruction. The recording/reproducing device 940 can also acquire audio data and video data from another device, for example, and record them on a recording medium. The recording/reproducing device 940 decodes and outputs the audio data and video data recorded on the recording medium, so that image display or audio output can be performed on a monitor device or the like.
The recording/reproducing device 940 includes a tuner 941, an external interface unit 942, an encoder 943, an HDD (hard disk drive) unit 944, a disk drive 945, a selector 946, a decoder 947, an OSD (on-screen display) unit 948, a control unit 949, and a user interface unit 950.
The tuner 941 tunes to a desired channel from the broadcast signals received by an antenna (not shown). The tuner 941 outputs the encoded bit stream obtained by demodulating the reception signal of the desired channel to the selector 946.
The external interface unit 942 is configured to include at least one of an IEEE 1394 interface, a network interface unit, a USB interface, and a flash memory interface. The external interface unit 942 is an interface for connecting to an external device, a network, a memory card, or the like, and receives data to be recorded, such as video data or speech data.
When the video data or speech data provided from the external interface unit 942 is not encoded, the encoder 943 encodes the video data or speech data according to a predetermined scheme, and outputs the encoded bit stream to the selector 946.
The HDD unit 944 records content data such as video and speech, various programs, and other data on a built-in hard disk, and reads them from the hard disk at the time of reproduction or the like.
The disk drive 945 performs signal recording and reproduction on a mounted optical disc. The optical disc is, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray disc.
When video or speech is recorded, the selector 946 selects an encoded bit stream from the tuner 941 or the encoder 943, and provides the encoded bit stream to one of the HDD unit 944 and the disk drive 945. When video or speech is reproduced, the selector 946 provides the encoded bit stream output from the HDD unit 944 or the disk drive 945 to the decoder 947.
The decoder 947 performs decoding processing on the encoded bit stream. The decoder 947 provides the video data generated by the decoding processing to the OSD unit 948, and outputs the speech data generated by the decoding processing.
The OSD unit 948 generates video data for displaying a menu screen or the like for item selection, superimposes it on the video data output from the decoder 947, and outputs the superimposed video data.
The user interface unit 950 is connected to the control unit 949. The user interface unit 950 is configured to include manipulation switches or a remote control signal receiving unit, and provides a manipulation signal according to a user manipulation to the control unit 949.
The control unit 949 is configured using a CPU, a memory, and the like. The memory stores the program to be executed by the CPU and various data necessary for the CPU to perform processing. The CPU reads and executes the program stored in the memory at a predetermined timing (such as when the recording/reproducing device 940 is started). By executing the program, the CPU controls each unit so that the recording/reproducing device 940 works according to user manipulations.
In the recording/reproducing device configured in this way, the function of the decoding device (decoding method) of the present application is installed in the decoder 947. Therefore, an encoded stream whose amount of code is reduced in a case where information on a depth image is included in the encoded stream can be decoded.
<Configuration example of an imaging device>
Figure 28 illustrates the overall configuration of an imaging device to which the present technology is applied. The imaging device 960 images a subject, displays the image of the subject, or records the image of the subject as image data on a recording medium.
The imaging device 960 includes an optical block 961, an imaging unit 962, a camera signal processing unit 963, an image data processing unit 964, a display unit 965, an external interface unit 966, a storage unit 967, a media drive 968, an OSD unit 969, and a control unit 970. A user interface unit 971 is connected to the control unit 970. The image data processing unit 964, the external interface unit 966, the storage unit 967, the media drive 968, the OSD unit 969, the control unit 970, and the like are connected to each other via a bus 972.
The optical block 961 is configured using a focus lens, an aperture mechanism, and the like. The optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962. The imaging unit 962 is configured using a CCD or CMOS image sensor, generates an electric signal according to the optical image by photoelectric conversion, and provides the electric signal to the camera signal processing unit 963.
The camera signal processing unit 963 performs various kinds of camera signal processing, such as knee correction, gamma correction, and color correction, on the electric signal provided from the imaging unit 962. The camera signal processing unit 963 provides the image data after the camera signal processing to the image data processing unit 964.
The image data processing unit 964 performs encoding processing on the image data provided from the camera signal processing unit 963, and provides the encoded data generated by the encoding processing to the external interface unit 966 or the media drive 968. The image data processing unit 964 also performs decoding processing on encoded data provided from the external interface unit 966 or the media drive 968, and provides the image data generated by the decoding processing to the display unit 965. The image data processing unit 964 further provides the image data provided from the camera signal processing unit 963 to the display unit 965, or superimposes display data acquired from the OSD unit 969 on the image data and provides the superimposed data to the display unit 965.
The OSD unit 969 generates display data (such as a menu screen composed of symbols, text, or figures, and icons), and outputs the display data to the image data processing unit 964.
The external interface unit 966 is configured to include, for example, a USB input/output terminal, and is connected to a printer when an image is printed. A drive is connected to the external interface unit 966 as necessary, removable media such as a magnetic disk or an optical disc are appropriately mounted, and a computer program read from the removable media is installed as necessary. The external interface unit 966 also includes a network interface connected to a predetermined network (such as a LAN or the Internet). For example, the control unit 970 can read encoded data from the storage unit 967 according to an instruction from the user interface unit 971, and can provide the encoded data from the external interface unit 966 to another device connected via the network. The control unit 970 can also acquire, through the external interface unit 966, encoded data or image data provided from another device via the network, and provide the encoded data or image data to the image data processing unit 964.
As the recording medium driven by the media drive 968, any readable/writable removable media, such as a magnetic disk, a magneto-optical disk, an optical disc, or a semiconductor memory, is used. Removable media of any kind may be used as the recording medium: a tape device, a disk, or a memory card may be used. Of course, a non-contact IC card or the like may also be used.
The media drive 968 may be integrated with the recording medium, in which case the media drive 968 is configured by a non-portable storage medium such as a built-in hard disk drive or an SSD (solid-state drive), for example.
The control unit 970 is configured using a CPU, a memory, and the like. The memory stores the program to be executed by the CPU and various data necessary for the CPU to perform processing. The CPU reads and executes the program stored in the memory at a predetermined timing (such as when the imaging device 960 is started). By executing the program, the CPU controls each unit so that the imaging device 960 works according to user manipulations.
In the imaging device configured in this way, the functions of the encoding device (encoding method) and the decoding device (decoding method) of the present application are installed in the image data processing unit 964. Therefore, the amount of code of an encoded stream can be reduced in a case where information on a depth image is included in the encoded stream. Furthermore, an encoded stream whose amount of code is reduced in a case where information on a depth image is included in the encoded stream can be decoded.
The embodiments of the present technology are not limited to the above-described embodiments, but can be modified in various ways without departing from the gist of the present technology.
The present technology can also be configured as follows.
(1) An encoding device including: a setting unit that sets depth image information, which is information on a depth image, as a depth parameter set different from a sequence parameter set and a picture parameter set; an encoding unit that encodes the depth image to generate encoded data; and a transmission unit that transmits an encoded stream including the depth parameter set set by the setting unit and the encoded data generated by the encoding unit.
(2) In the encoding device described in (1) above, the setting unit may set, in the depth parameter set, an ID for uniquely identifying the depth parameter set. The transmission unit may transmit the encoded stream including the ID corresponding to the depth image.
(3) In the encoding device described in (2) above, the setting unit may set the ID corresponding to the depth image in slice units in a slice header of the depth image. The transmission unit may transmit the encoded stream including the slice header set by the setting unit.
(4) In the encoding device described in any one of (1) to (3) above, the setting unit may perform differential encoding on the depth image information, and set the differential encoding result of the depth image information in the depth parameter set.
(5) In the encoding device described in any one of (1) to (4) above, the encoding unit may encode the depth image based on the depth image information.
(6) In the encoding device described in any one of (1) to (5) above, the depth image information may include the maximum and minimum values of the pixel values of the depth image and the distance between the cameras that capture the depth image.
(7) In the encoding device described in any one of (1) to (6) above, a network abstraction layer (NAL) unit type different from those of the sequence parameter set and the picture parameter set may be set in the depth parameter set.
(8) In the encoding device described in any one of (1) to (7) above, the setting unit may set identification information for identifying the presence of the depth image information. The transmission unit may transmit the encoded stream including the identification information set by the setting unit.
(9) An encoding method of an encoding device, including the following steps performed by the encoding device: a setting step of setting depth image information, which is information on a depth image, as a depth parameter set different from a sequence parameter set and a picture parameter set; an encoding step of encoding the depth image to generate encoded data; and a transmission step of transmitting an encoded stream including the depth parameter set set in the processing of the setting step and the encoded data generated in the processing of the encoding step.
(10) A decoding device including: an acquisition unit that acquires, from an encoded stream including a depth parameter set and encoded data of a depth image, the depth parameter set and the encoded data, the depth parameter set having set therein depth image information, which is information on the depth image, and being different from a sequence parameter set and a picture parameter set; a parsing processing unit that parses the depth image information from the depth parameter set acquired by the acquisition unit; and a decoding unit that decodes the encoded data acquired by the acquisition unit.
(11) In the decoding device described in (10) above, an ID for uniquely identifying the depth parameter set may be set in the depth parameter set. The encoded stream may include the ID corresponding to the depth image.
(12) In the decoding device described in (11) above, the encoded stream includes a slice header of the depth image in which the ID corresponding to the depth image is set in slice units.
(13) The decoding device described in any one of (10) to (12) above may further include a generation unit that generates the depth image information by decoding a differential encoding result of the depth image information. The encoded stream may include the depth parameter set in which the differential encoding result of the depth image information is set. The generation unit may generate the depth image information by decoding the differential encoding result of the depth image information set in the depth parameter set.
(14) In the decoding device described in any one of (10) to (13) above, the decoding unit may decode the encoded data based on the depth image information parsed by the parsing processing unit.
(15) In the decoding device described in any one of (10) to (14) above, the depth image information may include the maximum and minimum values of the pixel values of the depth image and the distance between the cameras that capture the depth image.
(16) In the decoding device described in any one of (10) to (15) above, a network abstraction layer (NAL) unit type different from those of the sequence parameter set and the picture parameter set is set in the depth parameter set.
(17) In the decoding device described in any one of (10) to (16) above, the encoded stream may include identification information for identifying the presence of the depth image information.
(18) A decoding method including the following steps performed by a decoding device: an acquisition step of acquiring, from an encoded stream including a depth parameter set and encoded data of a depth image, the depth parameter set and the encoded data, the depth parameter set having set therein depth image information as information on the depth image and being different from a sequence parameter set and a picture parameter set; a parsing processing step of parsing the depth image information from the depth parameter set acquired in the processing of the acquisition step; and a decoding step of decoding the encoded data acquired in the processing of the acquisition step.
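As one way to picture the differential encoding of (4) and the corresponding decoding of (13), here is a sketch under the assumption that the depth image information is the (minimum pixel value, maximum pixel value, camera distance) triple of (6)/(15), with each new depth parameter set coded as an element-wise difference from the previously transmitted one. This is an illustration, not the patent's actual coding scheme:

```python
# Illustrative sketch, not the patent's normative scheme: each DPS carries
# the element-wise difference from the previously transmitted depth image
# information; the decoder accumulates differences to recover it.
FIELDS = ("depth_min", "depth_max", "camera_distance")

def diff_encode(prev, cur):
    """Encode cur as differences from prev (prev=None means send raw)."""
    if prev is None:
        return dict(cur)
    return {k: cur[k] - prev[k] for k in FIELDS}

def diff_decode(prev, delta):
    """Recover the depth image information from a differential result."""
    if prev is None:
        return dict(delta)
    return {k: prev[k] + delta[k] for k in FIELDS}

# Round trip over two consecutive DPSs.
first = {"depth_min": 0, "depth_max": 255, "camera_distance": 10}
second = {"depth_min": 2, "depth_max": 250, "camera_distance": 10}
d0 = diff_encode(None, first)   # raw values
d1 = diff_encode(first, second) # small differences only
```

When consecutive depth parameter sets change little, the differences are small values, which is what allows the amount of code for the depth image information to be reduced.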
Reference Signs List
50 encoding device, 51 multi-view color image imaging unit, 52 multi-view color image correction unit, 53 multi-view depth image generation unit, 54 depth image information generation unit, 55 multi-view image encoding unit, 61 SPS encoding unit, 62 PPS encoding unit, 63 DPS encoding unit, 64 slice header encoding unit, 65 slice encoding unit, 80 decoding device, 81 multi-view image decoding unit, 82 view synthesis unit, 101 SPS decoding unit, 102 PPS decoding unit, 103 DPS decoding unit, 104 slice header decoding unit, 105 slice decoding unit.
Claims (7)
1. a kind of decoding apparatus, including:
Acquiring unit, it obtains the depth parameter collection from including depth parameter collection and the encoding stream of the coded data of depth image
With the coded data, the depth parameter concentrated setting has the depth image letter as the information on the depth image
Breath, and the depth parameter collection is different from sequence parameter set and image parameters collection;
Dissection process unit, its depth parameter obtained from the acquiring unit concentrates the parsing deep image information;And
Decoding unit, its coded data obtained to the acquiring unit is decoded,
Wherein, the depth parameter concentration is arranged on for uniquely identifying the ID of the depth parameter collection, and
Wherein, the encoding stream includes the ID corresponding with the depth image,
Wherein, it is arranged on the depth different from the network abstract layer NAL flat types of sequence parameter set and image parameters collection
In parameter set.
2. decoding apparatus according to claim 1, wherein, the encoding stream includes being provided with the depth with section unit
Spend the corresponding ID of image slice header.
3. decoding apparatus according to claim 1, in addition to:
Generation unit, it is decoded by the differential coding result to the deep image information and generates the depth image
Information,
Wherein, the encoding stream includes the depth parameter collection for being provided with the differential coding result of the deep image information,
And
Wherein, the differential coding that the generation unit passes through the deep image information to being arranged to the depth parameter collection
As a result decoded and generate the deep image information.
4. decoding apparatus according to claim 1, wherein, the decoding unit is based on the institute by dissection process unit resolves
State deep image information and the coded data is decoded.
5. decoding apparatus according to claim 1, wherein, the deep image information includes the pixel of the depth image
The distance between the maximum and minimum value of value and the camera device of the capture depth image.
6. decoding apparatus according to claim 1, wherein, the encoding stream includes being used to identify the deep image information
Presence identification information.
7. A decoding method of a decoding apparatus, comprising:
an obtaining step of obtaining, from an encoding stream that includes a depth parameter set and coded data of a depth image, the depth parameter set and the coded data, wherein depth image information, which is information about the depth image, is set in the depth parameter set, and the depth parameter set is different from a sequence parameter set and a picture parameter set;
a parsing processing step of parsing the depth image information from the depth parameter set obtained in the processing of the obtaining step; and
a decoding step of decoding the coded data obtained in the processing of the obtaining step,
wherein an ID for uniquely identifying the depth parameter set is set in the depth parameter set,
wherein the encoding stream includes the ID corresponding to the depth image, and
wherein the depth parameter set is set in a network abstraction layer (NAL) unit of a type different from those of the sequence parameter set and the picture parameter set.
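The structure the claims recite — a depth parameter set (DPS) carried in its own NAL unit type, holding depth image information (minimum/maximum depth pixel values and inter-camera distance) and a unique ID that coded slices refer back to — can be illustrated with a minimal sketch. All NAL unit type values, field widths, and the byte-level framing below are hypothetical stand-ins, not syntax from the patent or from any codec specification.

```python
# Illustrative sketch only: NAL type numbers and the 1-byte framing are invented.
from dataclasses import dataclass

NAL_TYPE_SPS = 7    # sequence parameter set (hypothetical numbering)
NAL_TYPE_PPS = 8    # picture parameter set
NAL_TYPE_DPS = 16   # depth parameter set: a NAL unit type distinct from SPS/PPS

@dataclass
class DepthParameterSet:
    dps_id: int           # uniquely identifies this DPS
    depth_min: int        # minimum depth-image pixel value
    depth_max: int        # maximum depth-image pixel value
    camera_distance: int  # distance between the capturing imaging devices

def parse_nal_units(stream: bytes):
    """Split a byte stream into (nal_type, payload) pairs.

    Hypothetical framing: 1-byte type, 1-byte payload length, payload.
    """
    units, i = [], 0
    while i < len(stream):
        nal_type, length = stream[i], stream[i + 1]
        units.append((nal_type, stream[i + 2:i + 2 + length]))
        i += 2 + length
    return units

def parse_dps(payload: bytes) -> DepthParameterSet:
    """Parse the depth image information set in a DPS payload."""
    return DepthParameterSet(dps_id=payload[0], depth_min=payload[1],
                             depth_max=payload[2], camera_distance=payload[3])

def decode(stream: bytes):
    """Obtain DPSs and coded data, parse the depth image information, and
    decode each slice using the DPS its slice header refers to by ID."""
    dps_table, decoded = {}, []
    for nal_type, payload in parse_nal_units(stream):
        if nal_type == NAL_TYPE_DPS:
            dps = parse_dps(payload)
            dps_table[dps.dps_id] = dps      # ID uniquely identifies the DPS
        elif nal_type not in (NAL_TYPE_SPS, NAL_TYPE_PPS):
            # Coded depth-image slice: here the first slice-header byte is the DPS ID.
            dps = dps_table[payload[0]]
            decoded.append((dps.depth_min, dps.depth_max, dps.camera_distance))
    return decoded
```

For example, a stream carrying one DPS (ID 0, depth range 10–250, camera distance 17) followed by one slice that references DPS 0 would decode using that DPS's depth image information; the indirection through the ID is what lets many slices share a single out-of-band parameter set.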
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012019025 | 2012-01-31 | ||
JP2012-019025 | 2012-01-31 | ||
CN201380006563.5A CN104067615B (en) | 2012-01-31 | 2013-01-23 | Code device and coding method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380006563.5A Division CN104067615B (en) | 2012-01-31 | 2013-01-23 | Code device and coding method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104683813A CN104683813A (en) | 2015-06-03 |
CN104683813B true CN104683813B (en) | 2017-10-10 |
Family
ID=48905067
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510050838.6A Expired - Fee Related CN104683813B (en) | 2012-01-31 | 2013-01-23 | Decoding apparatus and coding/decoding method |
CN201380006563.5A Expired - Fee Related CN104067615B (en) | 2012-01-31 | 2013-01-23 | Code device and coding method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380006563.5A Expired - Fee Related CN104067615B (en) | 2012-01-31 | 2013-01-23 | Code device and coding method |
Country Status (13)
Country | Link |
---|---|
US (2) | US10085007B2 (en) |
EP (1) | EP2811741A4 (en) |
JP (2) | JP5975301B2 (en) |
KR (1) | KR20140123051A (en) |
CN (2) | CN104683813B (en) |
AU (1) | AU2013216395A1 (en) |
BR (1) | BR112014018291A8 (en) |
CA (1) | CA2860750A1 (en) |
MX (1) | MX2014008979A (en) |
PH (1) | PH12014501683A1 (en) |
RU (1) | RU2014130727A (en) |
TW (1) | TW201342884A (en) |
WO (1) | WO2013115025A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104205819B (en) | 2012-02-01 | 2017-06-30 | 诺基亚技术有限公司 | Method for video encoding and device |
JP2013198059A (en) * | 2012-03-22 | 2013-09-30 | Sharp Corp | Image encoder, image decoder, image encoding method, image decoding method and program |
WO2015053593A1 (en) * | 2013-10-12 | 2015-04-16 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding scalable video for encoding auxiliary picture, method and apparatus for decoding scalable video for decoding auxiliary picture |
WO2015057038A1 (en) * | 2013-10-18 | 2015-04-23 | LG Electronics Inc. | Method and apparatus for decoding multi-view video |
CN108616748A (en) * | 2017-01-06 | 2018-10-02 | 科通环宇(北京)科技有限公司 | A kind of code stream and its packaging method, coding/decoding method and device |
US11348265B1 (en) | 2017-09-15 | 2022-05-31 | Snap Inc. | Computing a point cloud from stitched images |
CN112544084B (en) * | 2018-05-15 | 2024-03-01 | 夏普株式会社 | Image encoding device, encoded stream extracting device, and image decoding device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010525724A (en) * | 2007-04-25 | 2010-07-22 | LG Electronics Inc. | Method and apparatus for decoding / encoding a video signal |
CN102265617A (en) * | 2008-12-26 | 2011-11-30 | Victor Company of Japan, Ltd. (JVC) | Image encoding device, image encoding method, program thereof, image decoding device, image decoding method, and program thereof |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003231510A1 (en) * | 2002-04-25 | 2003-11-10 | Sharp Kabushiki Kaisha | Image data creation device, image data reproduction device, and image data recording medium |
CN101416149A (en) | 2004-10-21 | 2009-04-22 | 索尼电子有限公司 | Supporting fidelity range extensions in advanced video codec file format |
US20070098083A1 (en) | 2005-10-20 | 2007-05-03 | Visharam Mohammed Z | Supporting fidelity range extensions in advanced video codec file format |
RU2007118660A (en) * | 2004-10-21 | 2008-11-27 | Сони Электроникс, Инк. (Us) | SUPPORTING IMAGE QUALITY EXTENSIONS IN THE ENHANCED VIDEO CODEC FILE FORMAT |
KR101244911B1 (en) * | 2005-10-11 | 2013-03-18 | 삼성전자주식회사 | Apparatus for encoding and decoding muti-view image by using camera parameter, and method thereof, a recording medium having a program to implement thereof |
BRPI0708305A2 (en) * | 2006-03-29 | 2011-05-24 | Thomson Licensing | Method and apparatus for use in a multi-view video encoding system |
CN101669367A (en) | 2007-03-02 | 2010-03-10 | Lg电子株式会社 | A method and an apparatus for decoding/encoding a video signal |
WO2008117963A1 (en) * | 2007-03-23 | 2008-10-02 | Lg Electronics Inc. | A method and an apparatus for decoding/encoding a video signal |
JP2010157824A (en) * | 2008-12-26 | 2010-07-15 | Victor Co Of Japan Ltd | Image encoder, image encoding method, and program of the same |
JP2010157826A (en) * | 2008-12-26 | 2010-07-15 | Victor Co Of Japan Ltd | Image decoder, image encoding/decoding method, and program of the same |
KR101619450B1 (en) * | 2009-01-12 | 2016-05-10 | 엘지전자 주식회사 | Video signal processing method and apparatus using depth information |
US8457155B2 (en) * | 2009-09-11 | 2013-06-04 | Nokia Corporation | Encoding and decoding a multi-view video signal |
US20140321546A1 (en) * | 2011-08-31 | 2014-10-30 | Sony Corporation | Image processing apparatus and image processing method |
-
2013
- 2013-01-16 TW TW102101632A patent/TW201342884A/en unknown
- 2013-01-23 JP JP2013556335A patent/JP5975301B2/en not_active Expired - Fee Related
- 2013-01-23 BR BR112014018291A patent/BR112014018291A8/en not_active IP Right Cessation
- 2013-01-23 CN CN201510050838.6A patent/CN104683813B/en not_active Expired - Fee Related
- 2013-01-23 US US14/373,949 patent/US10085007B2/en active Active
- 2013-01-23 CN CN201380006563.5A patent/CN104067615B/en not_active Expired - Fee Related
- 2013-01-23 AU AU2013216395A patent/AU2013216395A1/en not_active Abandoned
- 2013-01-23 WO PCT/JP2013/051265 patent/WO2013115025A1/en active Application Filing
- 2013-01-23 RU RU2014130727A patent/RU2014130727A/en unknown
- 2013-01-23 KR KR1020147020149A patent/KR20140123051A/en not_active Application Discontinuation
- 2013-01-23 MX MX2014008979A patent/MX2014008979A/en not_active Application Discontinuation
- 2013-01-23 EP EP13743221.7A patent/EP2811741A4/en not_active Withdrawn
- 2013-01-23 CA CA2860750A patent/CA2860750A1/en not_active Abandoned
-
2014
- 2014-07-24 PH PH12014501683A patent/PH12014501683A1/en unknown
-
2016
- 2016-07-19 JP JP2016141552A patent/JP6206559B2/en not_active Expired - Fee Related
-
2018
- 2018-07-13 US US16/034,985 patent/US10205927B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010525724A (en) * | 2007-04-25 | 2010-07-22 | LG Electronics Inc. | Method and apparatus for decoding / encoding a video signal |
CN102265617A (en) * | 2008-12-26 | 2011-11-30 | Victor Company of Japan, Ltd. (JVC) | Image encoding device, image encoding method, program thereof, image decoding device, image decoding method, and program thereof |
Non-Patent Citations (1)
Title |
---|
Descriptions of 3D Video Coding Proposal (HEVC-Compatible Category);Yoshitimo Takahashi et al;《Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11》;20111202;Sections 3.1 and 3.5.2 of the main text, Table 15 of Appendix A, Fig. 3 * |
Also Published As
Publication number | Publication date |
---|---|
BR112014018291A8 (en) | 2017-07-11 |
CA2860750A1 (en) | 2013-08-08 |
WO2013115025A1 (en) | 2013-08-08 |
BR112014018291A2 (en) | 2017-06-20 |
CN104067615A (en) | 2014-09-24 |
JP2016195456A (en) | 2016-11-17 |
US10205927B2 (en) | 2019-02-12 |
TW201342884A (en) | 2013-10-16 |
US10085007B2 (en) | 2018-09-25 |
CN104067615B (en) | 2017-10-24 |
AU2013216395A1 (en) | 2014-07-10 |
US20180343437A1 (en) | 2018-11-29 |
JPWO2013115025A1 (en) | 2015-05-11 |
JP5975301B2 (en) | 2016-08-23 |
KR20140123051A (en) | 2014-10-21 |
US20150042753A1 (en) | 2015-02-12 |
RU2014130727A (en) | 2016-02-10 |
PH12014501683A1 (en) | 2014-11-10 |
EP2811741A4 (en) | 2015-06-24 |
MX2014008979A (en) | 2014-08-27 |
EP2811741A1 (en) | 2014-12-10 |
CN104683813A (en) | 2015-06-03 |
JP6206559B2 (en) | 2017-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104683813B (en) | Decoding apparatus and coding/decoding method | |
CN107197227B (en) | Image processing equipment, image processing method and computer readable storage medium | |
WO2012128069A1 (en) | Image processing device and image processing method | |
WO2012147621A1 (en) | Encoding device and encoding method, and decoding device and decoding method | |
US9235749B2 (en) | Image processing device and image processing method | |
CN105979240B (en) | Code device and coding method and decoding apparatus and coding/decoding method | |
WO2012111757A1 (en) | Image processing device and image processing method | |
CN103748883B (en) | Encoding device, coding method, decoding device and coding/decoding method | |
JP6229895B2 (en) | Encoding apparatus and encoding method, and decoding apparatus and decoding method | |
CN104982038A (en) | Method and apparatus for processing video signals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20171010 |