US20160014413A1 - Image encoding device and method and image decoding device and method - Google Patents

Image encoding device and method and image decoding device and method

Info

Publication number
US20160014413A1
Authority
US
United States
Prior art keywords
image
encoding
section
information
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/773,834
Other languages
English (en)
Inventor
Kazushi Sato
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION reassignment SONY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SATO, KAZUSHI
Publication of US20160014413A1 publication Critical patent/US20160014413A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present disclosure relates to an image encoding device and method and an image decoding device and method, and more particularly, to an image encoding device and method and an image decoding device and method, which are capable of suppressing an increase in encoding or decoding workload.
  • MPEG 2 (ISO/IEC 13818-2) is a standard that is defined as a general-purpose image encoding scheme, and covers interlaced scan images, progressive scan images, standard resolution images, and high definition images.
  • MPEG 2 is now widely used in a broad range of applications, including professional and consumer use.
  • For example, in the case of an interlaced scan image of a standard resolution having 720×480 pixels, a coding amount (bit rate) of 4 to 8 Mbps is allocated.
  • Further, using the MPEG 2 compression scheme, in the case of an interlaced scan image of a high resolution having 1920×1088 pixels, a coding amount (bit rate) of 18 to 22 Mbps is allocated.
  • MPEG 2 is mainly intended for high definition coding suitable for broadcasting, but does not support an encoding scheme having a coding amount (bit rate) lower than that of MPEG 1, that is, an encoding scheme of a high compression rate.
  • In response to this need, the MPEG 4 encoding scheme was standardized, and its image encoding scheme was approved as the international standard ISO/IEC 14496-2 in December 1998.
  • In recent years, standardization of an encoding scheme called H.26L has been advanced by the International Telecommunication Union Telecommunication Standardization Sector Q6/16 Video Coding Expert Group (ITU-T Q6/16 VCEG).
  • H.26L requires a larger computation amount for encoding and decoding than existing encoding schemes such as MPEG 2 or MPEG 4, but is known to achieve higher encoding efficiency.
  • Subsequently, as part of MPEG 4 activities, standardization that incorporates functions not supported in H.26L while building on H.26L to achieve higher encoding efficiency has been performed as the Joint Model of Enhanced-Compression Video Coding. This scheme was approved as an international standard under the names H.264 and MPEG-4 Part 10 (Advanced Video Coding; hereinafter referred to as AVC) in March 2003.
  • As an extension thereof, the Fidelity Range Extension (FRExt), which includes encoding tools necessary for professional use such as RGB, 4:2:2, and 4:4:4, as well as the 8×8 DCT and the quantization matrices specified in MPEG-2, was standardized in February 2005.
  • As a result, H.264/AVC has become an encoding scheme capable of expressing even film noise included in movies well, and is being used in a wide range of applications such as Blu-Ray Discs (trademark).
  • Currently, for the purpose of further improving encoding efficiency over H.264/AVC, standardization of an encoding scheme called High Efficiency Video Coding (HEVC) has been advanced by the Joint Collaboration Team-Video Coding (JCTVC), a joint standardization organization of ITU-T and ISO/IEC.
  • the existing image encoding schemes such as MPEG-2 and AVC have a scalability function of dividing an image into a plurality of layers and encoding the plurality of layers.
  • For example, for a terminal having a low processing capability, such as a mobile phone, image compression information of only a base layer is transmitted, and a moving image of low spatial and temporal resolutions or of low quality is reproduced, whereas for a terminal having a high processing capability, such as a television or a personal computer, image compression information of an enhancement layer as well as the base layer is transmitted, and a moving image of high spatial and temporal resolutions or of high quality is reproduced. That is, image compression information according to the capability of a terminal or a network can be transmitted from a server without performing a transcoding process.
  • the present disclosure has been made in light of the foregoing, and it is desirable to suppress an increase in encoding or decoding workload.
  • According to an embodiment of the present disclosure, there is provided an image encoding device including: a generation section configured to generate control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture, is referred to regarding a current layer of image data including a plurality of layers; an encoding section configured to encode the current layer of the image data with reference to the encoding-related information of some areas of the other layer according to control of the control information generated by the generation section; and a transmission section configured to transmit encoded data of the image data generated by the encoding section and the control information generated by the generation section.
  • the control information may be information limiting an area in which the encoding-related information is referred to by designating an area in which reference to the encoding-related information of the other layer is permitted, designating an area in which reference to the encoding-related information is prohibited, or designating an area in which the encoding-related information is referred to.
  • the control information may designate the area using an identification number allocated in a raster scan order, information indicating positions of the area in vertical and horizontal directions in a picture, or information indicating a data position of the area in the encoded data.
  • the transmission section may further transmit information indicating whether or not to control an area in which the encoding-related information is referred to.
  • the encoding-related information may be information used for generation of a prediction image used in encoding of the image data.
  • the information used for the generation of the prediction image may include information used for texture prediction of the image data and information used for syntax prediction of the image data.
  • the control information may be information used to independently control an area in which the information used for the texture prediction is referred to and an area in which the information used for the syntax prediction is referred to.
  • the generation section may generate the control information for each of the plurality of certain areas obtained by dividing the picture of the current layer of the image data.
  • the encoding section may encode the current layer of the image data with reference to the encoding-related information of some areas of the other layer for each of the areas according to control of the control information of each area generated by the generation section.
  • the transmission section may further transmit information indicating whether or not an area division of the current layer is similar to an area division of the other layer.
  • the area may be a slice or a tile of the image data.
  • According to another embodiment of the present disclosure, there is provided an image encoding method including: generating control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture, is referred to regarding a current layer of image data including a plurality of layers; encoding the current layer of the image data with reference to the encoding-related information of some areas of the other layer according to control of the generated control information; and transmitting encoded data generated by encoding the image data and the generated control information.
  • According to another embodiment of the present disclosure, there is provided an image decoding device including: a reception section configured to receive encoded data of a current layer of image data including a plurality of layers and control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture of the image data, is referred to; and a decoding section configured to decode the encoded data with reference to the encoding-related information of some areas of the other layer according to control of the control information received by the reception section.
  • the control information may be information limiting an area in which the encoding-related information is referred to by designating an area in which reference to the encoding-related information of the other layer is permitted, designating an area in which reference to the encoding-related information is prohibited, or designating an area in which the encoding-related information is referred to.
  • the control information may designate the area using an identification number allocated in a raster scan order, information indicating positions of the area in vertical and horizontal directions in a picture, or information indicating a data position of the area in the encoded data.
  • the reception section may further receive information indicating whether or not to control an area in which the encoding-related information is referred to.
  • the encoding-related information may be information used for generation of a prediction image used in decoding of the encoded data.
  • the information used for the generation of the prediction image may include information used for texture prediction of the image data and information used for syntax prediction of the image data.
  • the control information may be information used to independently control an area in which the information used for the texture prediction is referred to and an area in which the information used for the syntax prediction is referred to.
  • the reception section may receive the encoded data encoded for each of the plurality of certain areas obtained by dividing the picture of the current layer of the image data and the control information of each of the areas.
  • the decoding section may decode the encoded data received by the reception section with reference to the encoding-related information of some areas of the other layer for each of the areas according to control of the control information of each area.
  • the reception section may further receive information indicating whether or not an area division of the current layer is similar to an area division of the other layer.
  • the area may be a slice or a tile of the image data.
  • According to another embodiment of the present disclosure, there is provided an image decoding method including: receiving encoded data of a current layer of image data including a plurality of layers and control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture of the image data, is referred to; and decoding the encoded data with reference to the encoding-related information of some areas of the other layer according to control of the received control information.
  • In an embodiment of the present disclosure, control information used to control an area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture, is referred to regarding a current layer of image data including a plurality of layers is generated; the current layer of the image data is encoded with reference to the encoding-related information of some areas of the other layer according to control of the generated control information; and encoded data generated by encoding the image data and the generated control information are transmitted.
  • In another embodiment of the present disclosure, encoded data of a current layer of image data including a plurality of layers and control information used to control an area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture of the image data, is referred to are received, and the encoded data is decoded with reference to the encoding-related information of some areas of the other layer according to control of the received control information.
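The reference control summarized in the bullets above can be sketched briefly. The following is an illustrative sketch only, not the patent's implementation; all names (`ReferenceControl`, `permit`, `encode_tile`) are hypothetical, and the "encoding-related information" of each base-layer area is reduced to an opaque per-tile payload:

```python
class ReferenceControl:
    """Control information: maps each current-layer area (tile ID) to the set
    of base-layer tile IDs whose encoding-related information may be
    referenced (the 'permitted areas' form of the control information)."""

    def __init__(self):
        self.permitted = {}  # current-layer tile ID -> set of base-layer tile IDs

    def permit(self, cur_tile, base_tiles):
        self.permitted.setdefault(cur_tile, set()).update(base_tiles)

    def may_reference(self, cur_tile, base_tile):
        return base_tile in self.permitted.get(cur_tile, set())


def encode_tile(cur_tile, base_info, control):
    """Gather only the base-layer information the control information allows;
    a real encoder would use this subset for texture/syntax prediction."""
    return {t: info for t, info in base_info.items()
            if control.may_reference(cur_tile, t)}
```

Because a decoder for, say, enhancement tile 0 then only needs the permitted base-layer tiles to be decoded, the other base-layer areas can be processed in parallel or skipped, which is the workload reduction the disclosure aims at.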
  • FIG. 1 is a diagram for describing an example of a configuration of a coding unit.
  • FIG. 2 is a diagram illustrating an example of a scalable layered image encoding scheme.
  • FIG. 3 is a diagram for describing an example of spatial scalable coding.
  • FIG. 4 is a diagram for describing an example of temporal scalable coding.
  • FIG. 5 is a diagram for describing an example of scalable coding of a signal to noise ratio.
  • FIG. 6 is a diagram for describing an example of a slice.
  • FIG. 7 is a diagram for describing an example of a tile.
  • FIG. 8 is a diagram for describing an example of base layer reference control.
  • FIG. 9 is a diagram for describing an example of a tile setting.
  • FIG. 10 is a diagram for describing another example of base layer reference control.
  • FIG. 11 is a diagram for describing an example of a parallel process.
  • FIG. 12 is a diagram for describing an example of a method of allocating an identification number of a tile.
  • FIG. 13 is a diagram for describing an example of syntax of a picture parameter set.
  • FIG. 14 is a continuation from FIG. 13 for describing an example of syntax of a picture parameter set.
  • FIG. 15 is a diagram for describing an example of syntax of a slice header.
  • FIG. 16 is a continuation from FIG. 15 for describing an example of syntax of a slice header.
  • FIG. 17 is a continuation from FIG. 16 for describing an example of syntax of a slice header.
  • FIG. 18 is a block diagram illustrating an example of a main configuration of an image encoding device.
  • FIG. 19 is a block diagram illustrating an example of a main configuration of a base layer image encoding section.
  • FIG. 20 is a block diagram illustrating an example of a main configuration of an enhancement layer image encoding section.
  • FIG. 21 is a block diagram illustrating an example of a main configuration of an area synchronization section.
  • FIG. 22 is a flowchart for describing an example of the flow of an image encoding process.
  • FIG. 23 is a flowchart for describing an example of the flow of a base layer encoding process.
  • FIG. 24 is a flowchart for describing an example of the flow of an enhancement layer encoding process.
  • FIG. 25 is a flowchart for describing an example of the flow of an enhancement layer encoding process, continuing from FIG. 24 .
  • FIG. 26 is a block diagram illustrating an example of a main configuration of an image decoding device.
  • FIG. 27 is a block diagram illustrating an example of a main configuration of a base layer image decoding section.
  • FIG. 28 is a block diagram illustrating an example of a main configuration of an enhancement layer image decoding section.
  • FIG. 29 is a block diagram illustrating an example of a main configuration of an area synchronization section.
  • FIG. 30 is a flowchart for describing an example of the flow of an image decoding process.
  • FIG. 31 is a flowchart for describing an example of the flow of a base layer decoding process.
  • FIG. 32 is a flowchart for describing an example of the flow of an enhancement layer decoding process.
  • FIG. 33 is a flowchart for describing an example of the flow of an enhancement layer decoding process, continuing from FIG. 32 .
  • FIG. 34 is a diagram illustrating an example of a multi-view image encoding scheme.
  • FIG. 35 is a diagram illustrating an example of a main configuration of a multi-view image encoding device to which the present disclosure is applied.
  • FIG. 36 is a diagram illustrating an example of a main configuration of a multi-view image decoding device to which the present disclosure is applied.
  • FIG. 37 is a block diagram illustrating an example of a main configuration of a computer.
  • FIG. 38 is a block diagram illustrating an example of a schematic configuration of a television device.
  • FIG. 39 is a block diagram illustrating an example of a schematic configuration of a mobile phone.
  • FIG. 40 is a block diagram illustrating an example of a schematic configuration of a recording/reproduction device.
  • FIG. 41 is a block diagram illustrating an example of a schematic configuration of an image capturing device.
  • FIG. 42 is a block diagram illustrating an example of using scalable coding.
  • FIG. 43 is a block diagram illustrating another example of using scalable coding.
  • FIG. 44 is a block diagram illustrating another example of using scalable coding.
  • FIG. 45 is a block diagram illustrating an example of a schematic configuration of a video set.
  • FIG. 46 is a block diagram illustrating an example of a schematic configuration of a video processor.
  • FIG. 47 is a block diagram illustrating another example of a schematic configuration of a video processor.
  • FIG. 48 is an explanatory diagram illustrating a configuration of a content reproducing system.
  • FIG. 49 is an explanatory diagram illustrating the flow of data in a content reproducing system.
  • FIG. 50 is an explanatory diagram illustrating a specific example of an MPD.
  • FIG. 51 is a functional block diagram illustrating a configuration of a content server of a content reproducing system.
  • FIG. 52 is a functional block diagram illustrating a configuration of a content reproducing device of a content reproducing system.
  • FIG. 53 is a functional block diagram illustrating a configuration of a content server of a content reproducing system.
  • FIG. 54 is a sequence chart illustrating a communication processing example by respective devices of a wireless communication system.
  • FIG. 55 is a sequence chart illustrating a communication processing example by respective devices of a wireless communication system.
  • FIG. 56 is a diagram schematically illustrating an example of a configuration of a frame format transmitted and received in a communication process by respective devices of a wireless communication system.
  • FIG. 57 is a sequence chart illustrating a communication processing example by respective devices of a wireless communication system.
  • In the AVC scheme, a hierarchical structure based on a macroblock and a sub macroblock is defined.
  • However, a macroblock of 16×16 pixels is not optimal for a large image frame such as an Ultra High Definition (UHD) (4000×2000 pixels) image, which is a target of next generation encoding schemes.
  • In the HEVC scheme, on the other hand, a coding unit (CU) is defined as illustrated in FIG. 1.
  • A CU is also referred to as a coding tree block (CTB), and serves as a partial area of an image of a picture unit undertaking a role similar to that of a macroblock in the AVC scheme.
  • The latter is fixed to a size of 16×16 pixels, whereas the former is not fixed to a certain size but is designated in the image compression information of each sequence.
  • For example, a largest coding unit (LCU) and a smallest coding unit (SCU) of a CU are specified in a sequence parameter set (SPS) included in the encoded data to be output.
  • In the example of FIG. 1, the size of the LCU is 128, and the largest hierarchical depth is 5.
  • A CU of a size of 2N×2N is divided into CUs having a size of N×N, serving as a layer one level lower, when the value of split_flag is 1.
  • Further, a CU is divided into prediction units (PUs) that are areas (partial areas of an image of a picture unit) serving as processing units of intra or inter prediction, and into transform units (TUs) that are areas (partial areas of an image of a picture unit) serving as processing units of orthogonal transform.
  • Currently, in the HEVC scheme, in addition to 4×4 and 8×8, orthogonal transforms of 16×16 and 32×32 can be used.
  • a macroblock can be considered to correspond to an LCU, and a block (sub block) can be considered to correspond to a CU.
  • a motion compensation block can be considered to correspond to a PU.
  • The size of an LCU of a topmost layer is commonly set to be larger than a macroblock in the AVC scheme, for example, 128×128 pixels.
  • Hereinafter, an LCU is assumed to include a macroblock in the AVC scheme, and a CU is assumed to include a block (sub block) in the AVC scheme.
  • a “block” used in the following description indicates an arbitrary partial area in a picture, and, for example, a size, a shape, and characteristics thereof are not limited.
  • a “block” includes an arbitrary area (a processing unit) such as a TU, a PU, an SCU, a CU, an LCU, a sub block, a macroblock, or a slice.
  • a “block” includes other partial areas (processing units) as well. When it is necessary to limit a size, a processing unit, or the like, it will be appropriately described.
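The quadtree division described above (a 2N×2N CU splitting into four N×N CUs when split_flag is 1, between the LCU and SCU sizes) can be sketched as follows. This is an illustrative sketch; the function names and the stand-in split decision are not from the patent, and the sizes follow the FIG. 1 example (LCU 128):

```python
def split_cu(x, y, size, scu_size, want_split):
    """Return the leaf CUs of the quadtree as (x, y, size) tuples.

    want_split(x, y, size) plays the role of split_flag == 1: while the CU
    is larger than the SCU and the flag is set, a 2N x 2N CU is divided
    into four N x N CUs one hierarchical level lower.
    """
    if size > scu_size and want_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += split_cu(x + dx, y + dy, half, scu_size, want_split)
        return leaves
    return [(x, y, size)]  # leaf CU: no further split


# Example: split a 128x128 LCU once, i.e. stop at 64x64.
leaves = split_cu(0, 0, 128, 8, lambda x, y, s: s > 64)
```

In a real encoder the split decision is made per CU by rate-distortion optimization rather than by a fixed predicate as here.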
  • In JM (Joint Model), the reference software for the AVC scheme, it is possible to select between two mode determination methods: a high complexity mode and a low complexity mode. In both methods, a cost function value is calculated for each prediction mode, and the prediction mode that minimizes the cost function value is selected as the optimal mode for the corresponding block or macroblock.
  • A cost function in the high complexity mode is represented as in the following Formula (1):
  • Cost(Mode ∈ Ω) = D + λ × R (1)
  • Here, Ω indicates a universal set of candidate modes for encoding the corresponding block or macroblock, and D indicates the differential energy between the decoded image and the input image when encoding is performed in the corresponding prediction mode. λ indicates a Lagrange undetermined multiplier given as a function of a quantization parameter, and R indicates the total coding amount, including the orthogonal transform coefficients, when encoding is performed in the corresponding mode.
  • A cost function in the low complexity mode is represented by the following Formula (2):
  • Cost(Mode ∈ Ω) = D + QP2Quant(QP) × HeaderBit (2)
  • Here, D differs from that of the high complexity mode and indicates the differential energy between the prediction image and the input image. QP2Quant(QP) is given as a function of the quantization parameter QP, and HeaderBit indicates the coding amount related to information belonging to the header, such as motion vectors or the mode, which does not include the orthogonal transform coefficients.
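Written out directly, the two JM mode-decision cost functions and the minimizing mode selection look as follows. The QP-dependent factors (λ and QP2Quant) are passed in as plain numbers here; computing them from QP is outside this sketch:

```python
def high_complexity_cost(D, R, lam):
    """Formula (1): Cost(Mode) = D + lambda * R, where D is the distortion
    against the decoded image and R the total coding amount."""
    return D + lam * R


def low_complexity_cost(D, header_bit, qp2quant):
    """Formula (2): Cost(Mode) = D + QP2Quant(QP) * HeaderBit, where D is
    the distortion against the prediction image (no full reconstruction)."""
    return D + qp2quant * header_bit


def best_mode(costs):
    """Select the prediction mode minimizing the cost function value."""
    return min(costs, key=costs.get)
```

Note the trade-off the two formulas encode: the high complexity mode needs a temporary encode per candidate mode to obtain D and R, whereas the low complexity mode avoids reconstruction at the price of a coarser estimate.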
  • Scalable coding refers to a scheme of dividing (hierarchizing) an image into a plurality of layers and performing encoding for each layer.
  • FIG. 2 is a diagram illustrating an example of a layered image encoding scheme.
  • a hierarchized image (a layered image) includes a plurality of layers that differs in a value of a certain parameter.
  • the plurality of layers of the layered image is configured with a base layer on which encoding and decoding are performed using only an image of its own layer without using an image of another layer and a non-base layer (which is also referred to as an “enhancement layer”) on which encoding and decoding are performed using an image of another layer.
  • For a non-base layer, an image of the base layer may be used, and an image of another non-base layer may be used.
  • Commonly, the non-base layer is configured with data (differential data) of a differential image between its own image and an image of another layer.
  • For example, when an image is hierarchized into two layers, a base layer and a non-base layer (also referred to as an "enhancement layer"), an image of a lower quality than the original image is obtained using only the data of the base layer, and the original image (that is, a high-quality image) is obtained by combining the data of the base layer and the data of the non-base layer.
  • By hierarchizing an image in this manner, images of various qualities can be obtained according to the situation. For example, for a terminal having a low processing capability, such as a mobile phone, image compression information of only the base layer is transmitted, and a moving image of low spatial and temporal resolutions or of low quality is reproduced, whereas for a terminal having a high processing capability, such as a television or a personal computer, image compression information of the enhancement layer as well as the base layer is transmitted, and a moving image of high spatial and temporal resolutions or of high quality is reproduced.
  • That is, image compression information according to the capability of a terminal or a network can be transmitted from a server without performing a transcoding process.
  • In such scalable coding, the parameter having the scalability function is arbitrary.
  • For example, a spatial resolution as illustrated in FIG. 3 may be used as the parameter (spatial scalability). In the case of the spatial scalability, respective layers have different image resolutions.
  • each picture is hierarchized into two layers, that is, a base layer of a resolution spatially lower than that of an original image and an enhancement layer that is combined with an image of the base layer to obtain an original image (an original spatial resolution) as illustrated in FIG. 3 .
  • the number of layers is an example, and each picture can be hierarchized into an arbitrary number of layers.
  • alternatively, temporal resolution may be applied as the parameter (temporal scalability), as illustrated in FIG. 4.
  • respective layers have different frame rates.
  • each picture is hierarchized into layers having different frame rates, a moving image of a high frame rate can be obtained by combining a layer of a high frame rate with a layer of a low frame rate, and an original moving image (an original frame rate) can be obtained by combining all the layers as illustrated in FIG. 4 .
  • the number of layers is an example, and each picture can be hierarchized into an arbitrary number of layers.
  • as another parameter, there is a signal-to-noise ratio (SNR) (SNR scalability).
  • in the case of SNR scalability, respective layers have different SNRs.
  • each picture is hierarchized into two layers, that is, a base layer of an SNR lower than that of an original image and an enhancement layer that is combined with an image of the base layer to obtain an original SNR as illustrated in FIG. 5 .
  • in the base layer image compression information, information related to an image of a low PSNR is transmitted, and a high-PSNR image can be reconstructed by combining this information with the enhancement layer image compression information.
  • the number of layers is an example, and each picture can be hierarchized into an arbitrary number of layers.
  • a parameter other than the above-described examples may be applied as a parameter having scalability.
  • for example, there is bit-depth scalability, in which the base layer includes an 8-bit image, and a 10-bit image can be obtained by adding the enhancement layer to the base layer.
  • further, there is chroma scalability, in which the base layer includes a component image of a 4:2:0 format, and a component image of a 4:2:2 format can be obtained by adding the enhancement layer to the base layer.
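As a rough illustration of bit-depth scalability, the sketch below assumes the base layer carries 8-bit samples and the enhancement layer carries a residual against an upshifted prediction. The upshift-and-add scheme is one simple possibility for the sake of the example, not the method defined by this document.

```python
# Hypothetical sketch of 8-bit -> 10-bit reconstruction: upshift each
# 8-bit base sample by two bits as the prediction, add the enhancement
# residual, and clip to the 10-bit range.

def reconstruct_10bit(base_8bit, enh_residual):
    out = []
    for b, r in zip(base_8bit, enh_residual):
        sample = (b << 2) + r                   # prediction + residual
        out.append(max(0, min(1023, sample)))   # clip to [0, 1023]
    return out

base = [0, 128, 255]   # 8-bit base layer samples
resid = [1, -2, 3]     # enhancement-layer residuals
print(reconstruct_10bit(base, resid))  # [1, 510, 1023]
```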
  • FIG. 6 is a diagram illustrating an example of a slice defined in HEVC.
  • a slice is a unit in which an encoding process is performed in a raster scan order, and includes a plurality of areas obtained by dividing a picture as illustrated in FIG. 6 .
  • slice division can be performed only in units of LCUs.
  • the entire square indicates a picture, and a small square indicates an LCU.
  • groups of LCUs having different patterns indicate slices. For example, a slice including LCUs of first and second lines from the top which is indicated by a hatched pattern is a first slice (Slice#1) of the picture.
  • a slice including LCUs of third and fourth lines from the top which is indicated by a white background is a second slice (Slice#2) of the picture.
  • a slice including LCUs of fifth and sixth lines from the top which is indicated by a gray background is a third slice (Slice#3) of the picture.
  • a slice including LCUs of seventh and eighth lines from the top which is indicated by a mesh pattern is a fourth slice (Slice#4) of the picture.
  • the number of slices or LCUs formed in the picture and a slice division method are arbitrary and not limited to the example of FIG. 6 .
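The raster-scan slice layout of FIG. 6 can be modeled with simple integer arithmetic, assuming (as in the figure) an 8×8-LCU picture divided into four slices of two LCU rows each.

```python
# Map an LCU address (raster scan order over the picture) to its slice
# number, for the assumed FIG. 6 layout: 8 LCUs per row, 2 rows per slice.

LCUS_PER_ROW = 8
ROWS_PER_SLICE = 2

def slice_of_lcu(lcu_addr):
    row = lcu_addr // LCUS_PER_ROW      # which LCU row the address falls in
    return row // ROWS_PER_SLICE + 1    # Slice#1 .. Slice#4

print(slice_of_lcu(0))    # 1  (first LCU -> Slice#1)
print(slice_of_lcu(16))   # 2  (third LCU row -> Slice#2)
print(slice_of_lcu(63))   # 4  (last LCU -> Slice#4)
```

Because slice division is performed only in units of LCUs, this mapping never splits an LCU between two slices.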
  • FIG. 7 illustrates an example of a tile defined in HEVC.
  • a tile is an area obtained by dividing a picture in units of LCUs, similarly to a slice.
  • however, while a slice is an area obtained by dividing a picture so that LCUs are processed in a raster scan order, a tile is an area obtained by dividing a picture into arbitrary rectangles as illustrated in FIG. 7.
  • in FIG. 7, the entire square indicates a picture, and each small square indicates an LCU.
  • groups of LCUs having different patterns indicate tiles.
  • a tile including 4×4 LCUs on the upper left which is indicated by a hatched pattern is a first tile (Tile#1) of the picture.
  • a tile including 4×4 LCUs on the upper right which is indicated by a white background is a second tile (Tile#2) of the picture.
  • a tile including 4×4 LCUs on the lower left which is indicated by a gray background is a third tile (Tile#3) of the picture.
  • a tile including 4×4 LCUs on the lower right which is indicated by a mesh pattern is a fourth tile (Tile#4) of the picture.
  • the number of tiles or LCUs formed in the picture and a tile division method are arbitrary and not limited to the example of FIG. 7 .
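The FIG. 7 layout can likewise be modeled, assuming a 2×2 grid of 4×4-LCU tiles with tile numbers following raster scan order over the tile grid.

```python
# Map an LCU at grid position (x, y) to its 1-based tile number, for the
# assumed FIG. 7 layout: a 2x2 tile grid, each tile 4x4 LCUs.

TILE_COLS = 2
TILE_W, TILE_H = 4, 4   # tile size in LCUs

def tile_of_lcu(x, y):
    """Tile number of the LCU in column x, row y of the LCU grid."""
    return (y // TILE_H) * TILE_COLS + (x // TILE_W) + 1

print(tile_of_lcu(0, 0))  # 1  (upper left  -> Tile#1)
print(tile_of_lcu(7, 0))  # 2  (upper right -> Tile#2)
print(tile_of_lcu(0, 7))  # 3  (lower left  -> Tile#3)
print(tile_of_lcu(7, 7))  # 4  (lower right -> Tile#4)
```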
  • within each tile, the LCUs are processed in the raster scan order. Since a tile has a shorter boundary length than a slice, a tile has a characteristic in which the decrease in encoding efficiency caused by screen division is small.
  • the slices or tiles divided as described above can be processed independently of one another since there is no dependence relation of prediction, CABAC, or the like in encoding or decoding.
  • data of slices (or tiles) can be processed in parallel using different central processing units (CPUs) (or different cores).
  • encoding-related information of the base layer can be used in encoding of the enhancement layer.
  • Content of the encoding-related information is arbitrary, but includes, for example, texture information such as a decoded image, syntax information such as the motion information or the intra prediction mode information, and the like.
  • after a picture of the base layer is encoded, the picture of the enhancement layer corresponding to that picture is encoded with reference to the encoding-related information of the base layer.
  • the obtained encoding-related information of the base layer is supplied and appropriately used for encoding of the enhancement layer.
  • the decoding is also performed in a similar procedure.
  • to this end, an area serving as the reference target of the encoding-related information of another layer is controlled.
  • for example, an area in which encoding-related information is referred to is limited to some areas of a picture of another layer.
  • FIG. 8 is a diagram illustrating an example of an aspect of limiting the reference target.
  • in the case of FIG. 8, only a tile indicated by a mesh pattern of the base layer is designated as the reference target of the encoding-related information.
  • in this case, the encoding-related information of the other areas is neither treated as a reference target nor read from memory in the encoding and decoding of the enhancement layer. Therefore, an increase in the workload of the encoding and decoding of the enhancement layer is suppressed accordingly.
  • the limiting method is arbitrary, but an area in which reference to encoding-related information of another layer is permitted may be designated. Further, for example, an area in which reference to encoding-related information for another layer is prohibited may be designated. Furthermore, for example, an area in which encoding-related information of another layer is referred to may be designated.
  • an area serving as a processing unit of encoding or decoding such as a tile or a slice is used as a reference target control unit of encoding-related information, it is possible to reduce the dependence relation between the areas, and thus it is possible to more easily perform processes independently in parallel.
  • a picture is divided into tiles, and control is performed such that encoding-related information can be referred to in only a few of the tiles.
  • reference to encoding-related information is permitted for those few tiles.
  • control information designating a tile in which reference to encoding-related information is permitted is generated and supplied for the encoding of the enhancement layer.
  • the encoding of the enhancement layer is executed according to the control information.
  • in other words, in the encoding of the enhancement layer, only encoding-related information of a tile permitted by the control information can be referred to.
  • a setting method of setting an area in which reference to encoding-related information is permitted is arbitrary.
  • an area in which reference to encoding-related information is permitted may be designated by the user, an application, or the like, or an area in which reference to encoding-related information is permitted may be decided in advance.
  • for example, when there is an area in which reference is apparently unnecessary, such as a letter box at a common position of pictures of a moving image, that area may be excluded from “an area in which reference to encoding-related information is permitted”; that is, the other areas may be designated as “an area in which reference to encoding-related information is permitted” in advance, before the pictures of the moving image data are encoded.
  • the user may designate “an area in which reference to encoding-related information is permitted” of each picture, or the user may designate a feature of an image, and an application or the like may designate an area having the designated feature in each picture as “an area in which reference to encoding-related information is permitted.”
  • an application or the like may perform area division (for example, tile division, slice division, or the like) so that an area including a certain feature (or a feature designated by the user) is formed in each picture.
  • an input image is assumed to be an image including a person (A of FIG. 9 ).
  • An application performs a face recognition process on the image, and detects a partial area including a face of a person (B of FIG. 9 ). Then, the application performs tile division on the picture so that the partial area is set as one of tiles (C of FIG. 9 ). Then, the application designates the tile (that is, the detected partial area) including the face of the person as “an area in which reference to encoding-related information is permitted” (a tile of a mesh pattern in D of FIG. 9 ).
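The FIG. 9 flow above can be sketched as follows. The face detector itself is outside the sketch, and the 64-pixel LCU size and the coordinates are assumptions for illustration only: the detected rectangle is snapped outward to the LCU grid so that it can be formed as one tile, which is then designated as the area in which reference is permitted.

```python
# Snap a detected face rectangle (pixel coordinates) outward to LCU-grid
# boundaries so the resulting rectangle can become one tile of the picture.
# LCU size and input coordinates are illustrative assumptions.

LCU = 64  # assumed LCU size in pixels

def face_rect_to_tile(x0, y0, x1, y1):
    """Expand a pixel rectangle to the enclosing LCU-aligned rectangle."""
    tx0 = (x0 // LCU) * LCU        # floor to previous LCU boundary
    ty0 = (y0 // LCU) * LCU
    tx1 = -(-x1 // LCU) * LCU      # ceil to next LCU boundary
    ty1 = -(-y1 // LCU) * LCU
    return tx0, ty0, tx1, ty1

# A face detected at pixels (100, 90)-(250, 260) yields the tile
# rectangle (64, 64)-(256, 320); only this tile is permitted as a
# reference for the enhancement layer.
permitted_tile = face_rect_to_tile(100, 90, 250, 260)
print(permitted_tile)  # (64, 64, 256, 320)
```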
  • in other words, the area division (formation of tiles or slices) may be performed in a state in which it is known that the encoding-related information will be referred to in the encoding of the enhancement layer.
  • “the number of areas in which reference to encoding-related information is permitted” can be reduced.
  • in the encoding of the enhancement layer, since it is possible to further narrow the range of the base layer to be referred to, it is possible to suppress an increase in workload.
  • control of an area in which encoding-related information is referred to may be performed in units larger than the areas (tiles, slices, or the like) described above.
  • the control may be performed in units of pictures.
  • the control may be performed in units of sequences.
  • the control may be performed in units of moving image data.
  • the control information may be prepared in advance.
  • when reference is prohibited, it is desirable to generate control information designating a tile in which reference to encoding-related information is prohibited, and supply the control information for the encoding of the enhancement layer.
  • the encoding of the enhancement layer is executed according to the control information, similarly to the case in which reference is permitted.
  • only encoding-related information of tiles other than the tiles prohibited by the control information can be referred to, regarding the encoding of the enhancement layer.
  • a setting method is arbitrary, similarly to the case in which reference is permitted. Further, the number of areas in which reference to encoding-related information of the base layer is permitted (or prohibited) may be one or several.
  • as described above, regardless of whether reference to encoding-related information is permitted or prohibited, in the encoding of the enhancement layer it is arbitrary whether or not a picture is divided into tiles (or slices). Further, how to perform the division is also arbitrary. Even when the enhancement layer is encoded in units of areas such as tiles or slices, the encoding of each area is performed based on the control information. In other words, only encoding-related information of a tile (or a slice) permitted by the control information (or other than a prohibited tile (or slice)) can be referred to in the encoding of all areas.
  • an area in which reference to encoding-related information is permitted (or prohibited) may be set for each area of the enhancement layer.
  • an area in which reference to encoding-related information of the base layer is permitted (or prohibited) may not be the same in each area of the enhancement layer.
  • control information may be information (for example, a correspondence table) in which the areas of the enhancement layer and the areas of the base layer are associated (synchronized).
  • encoding-related information of the areas of the base layer associated by the correspondence table can be referred to in encoding of the areas of the enhancement layer.
  • for example, each area of the enhancement layer may be permitted to refer to encoding-related information of a different area of the base layer, as illustrated in FIG. 10.
  • the reference destination of the encoding-related information of the base layer in encoding of a tile E0 of the enhancement layer is limited to a tile B0 of the base layer.
  • the reference destination of the encoding-related information of the base layer in encoding of a tile E1 of the enhancement layer is limited to a tile B1 of the base layer.
  • the reference destination of the encoding-related information of the base layer in encoding of a tile E2 of the enhancement layer is limited to a tile B2 of the base layer.
  • the reference destination of the encoding-related information of the base layer in encoding of a tile E3 of the enhancement layer is limited to a tile B3 of the base layer.
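The FIG. 10 correspondence can be held as a simple table mapping each enhancement-layer tile to the base-layer tiles it is permitted to reference. This is an illustrative in-memory structure, not the form in which the control information is transmitted.

```python
# Correspondence table for the FIG. 10 example: each enhancement-layer
# tile En may reference only the associated base-layer tile Bn.

ref_table = {"E0": ["B0"], "E1": ["B1"], "E2": ["B2"], "E3": ["B3"]}

def may_reference(enh_tile, base_tile):
    """True if base_tile is a permitted reference destination of enh_tile."""
    return base_tile in ref_table.get(enh_tile, [])

print(may_reference("E1", "B1"))  # True:  permitted reference destination
print(may_reference("E1", "B2"))  # False: outside the limited area
```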
  • the areas of the enhancement layer are permitted to refer to the encoding-related information of the different areas of the base layer as in the example of FIG. 10 , it is possible to reduce the dependence relation between the areas and perform the parallel process more easily as illustrated in FIG. 11 .
  • a first CPU #0 performs encoding on tiles #0 of respective frames in the order of a tile #0 (B0_0) of the base layer of a frame #0, a tile #0 (E0_0) of the enhancement layer of the frame #0, a tile #0 (B0_1) of the base layer of a frame #1, a tile #0 (E0_1) of the enhancement layer of the frame #1, a tile #0 (B0_2) of the base layer of a frame #2, and a tile #0 (E0_2) of the enhancement layer of the frame #2.
  • a second CPU #1 performs encoding on tiles #1 of respective frames in the order of a tile #1 (B1_0) of the base layer of the frame #0, a tile #1 (E1_0) of the enhancement layer of the frame #0, a tile #1 (B1_1) of the base layer of the frame #1, a tile #1 (E1_1) of the enhancement layer of the frame #1, a tile #1 (B1_2) of the base layer of the frame #2, and a tile #1 (E1_2) of the enhancement layer of the frame #2.
  • a third CPU #2 performs encoding on tiles #2 of respective frames in the order of a tile #2 (B2_0) of the base layer of the frame #0, a tile #2 (E2_0) of the enhancement layer of the frame #0, a tile #2 (B2_1) of the base layer of the frame #1, a tile #2 (E2_1) of the enhancement layer of the frame #1, a tile #2 (B2_2) of the base layer of the frame #2, and a tile #2 (E2_2) of the enhancement layer of the frame #2.
  • a fourth CPU #3 performs encoding on tiles #3 of respective frames in the order of a tile #3 (B3_0) of the base layer of the frame #0, a tile #3 (E3_0) of the enhancement layer of the frame #0, a tile #3 (B3_1) of the base layer of the frame #1, a tile #3 (E3_1) of the enhancement layer of the frame #1, a tile #3 (B3_2) of the base layer of the frame #2, and a tile #3 (E3_2) of the enhancement layer of the frame #2.
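Because each enhancement-layer tile depends only on its corresponding base-layer tile, the schedule above can be sketched with one worker per tile index. Here `encode_tile` is a placeholder for a real per-tile encoder, and threads stand in for the CPUs of FIG. 11.

```python
# One worker per tile index: each worker encodes the base-layer tile and
# then the enhancement-layer tile of successive frames, independently of
# the other workers.

from concurrent.futures import ThreadPoolExecutor

def encode_tile(layer, tile, frame):
    # Placeholder: return a label like "B0_1" instead of real encoding.
    return f"{layer}{tile}_{frame}"

def worker(tile, num_frames):
    log = []
    for frame in range(num_frames):
        log.append(encode_tile("B", tile, frame))  # base layer first
        log.append(encode_tile("E", tile, frame))  # then enhancement layer
    return log

with ThreadPoolExecutor(max_workers=4) as pool:
    logs = list(pool.map(lambda t: worker(t, 3), range(4)))

print(logs[0])  # ['B0_0', 'E0_0', 'B0_1', 'E0_1', 'B0_2', 'E0_2']
```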
  • the designation of an area (a tile, a slice, or the like) of the base layer in the control information may be performed based on a position (for example, an offset value from the head) of data of each area included in encoded data (bitstream) or may be performed based on an identification number allocated to each area of the base layer.
  • an identification number may be allocated to each area in the raster scan order, and an area in which reference to encoding-related information is permitted or prohibited may be designated using the identification number.
  • a method of allocating the identification number is arbitrary, and the raster scan order is an example.
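Raster-scan identification numbers, as in the FIG. 12 example, can be allocated with a simple formula. The 4-column, 2-row tile grid below is an assumed example.

```python
# Allocate identification numbers to tiles in raster scan order over the
# tile grid, then name permitted tiles by those numbers.

def allocate_ids(tile_cols, tile_rows):
    """Map (row, col) -> raster-scan identification number."""
    return {(r, c): r * tile_cols + c
            for r in range(tile_rows) for c in range(tile_cols)}

ids = allocate_ids(4, 2)          # assumed 4x2 tile grid
print(ids[(0, 0)], ids[(1, 3)])   # 0 7

# Control information can then designate permitted tiles by id:
permitted = {ids[(0, 1)], ids[(1, 2)]}
print(sorted(permitted))          # [1, 6]
```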
  • the control information used to control reference to the encoding-related information may be transmitted from an encoding side to a decoding side.
  • the control information can be used in decoding.
  • the control information may be specified in, for example, a picture parameter set (PPS) or a slice header.
  • the control information can be transmitted by an arbitrary method.
  • the control information may be specified in a sequence parameter set, a video parameter set, or the like.
  • the control information may be transmitted as data separate from encoded data of image data.
  • FIGS. 13 and 14 illustrate an example of syntax of the picture parameter set of the enhancement layer when the control information is transmitted through the picture parameter set.
  • tile_setting_from_ref_layer_flag is transmitted as information indicating whether or not area division of a current layer (that is, the enhancement layer) serving as the processing target is similar to area division of another layer (that is, the base layer).
  • when a value thereof is 1, it indicates that a method of the area division (for example, the tile division) in the enhancement layer is similar to that of the base layer.
  • when the area division of the enhancement layer is similar to the area division of the base layer, it is possible to detect the area division of the enhancement layer with reference to the area division information of the base layer in the decoding of the enhancement layer, and thus it is unnecessary to transmit information (for example, num_tile_columns_minus1, num_tile_rows_minus1, uniform_spacing_flag, and the like in FIG. 13) related to the area division of the enhancement layer. Therefore, it is possible to suppress a decrease in encoding efficiency.
  • inter_layer_tile_prediction_restriction_flag is transmitted as information indicating whether or not to control an area in which encoding-related information is referred to.
  • when a value of inter_layer_tile_prediction_restriction_flag is 1, the control information used to control reference to encoding-related information is transmitted (second to ninth lines from the top in FIG. 14).
  • the enhancement layer is encoded in units of areas, and the control information used to control an area of the base layer in which encoding-related information is referred to is transmitted for each area of the enhancement layer.
  • since the information indicating whether or not to control an area in which encoding-related information is referred to is transmitted as described above, transmission of the control information can be omitted when such an area is not controlled (that is, the control information is transmitted only when an area in which encoding-related information is referred to is controlled). Therefore, it is possible to suppress a decrease in encoding efficiency.
  • a current area serving as the processing target of the enhancement layer is designated by a position (i,j) in a horizontal direction and a vertical direction in the area array. Further, the number (num_ref_tiles_minus1) of areas of the base layer serving as the reference destination and the area thereof are designated for each area. Furthermore, the area of the base layer serving as the reference destination is designated by an identification number (ref_tile[k]). The identification number is allocated to each area of the base layer in the raster scan order as in the example of FIG. 12 .
  • the current area of the enhancement layer and the area of the base layer serving as the reference destination can be designated by an arbitrary method other than the above-mentioned methods.
  • the current area of the enhancement layer may be designated using an identification number.
  • the area of the base layer serving as the reference destination may be designated by a position (i,j) in the horizontal direction and the vertical direction in the area array or may be designated by information (for example, an offset value from the top) indicating a position of area data in encoded data.
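The per-area designation described above could be assembled as follows. The structure only mirrors the syntax element names (num_ref_tiles_minus1, ref_tile[k]); the actual bitstream coding of FIG. 14 (descriptors, entropy coding, and so on) is not modeled.

```python
# Build illustrative per-tile control information for the enhancement
# layer: each current area (i, j) carries the count of base-layer
# reference tiles (as num_ref_tiles_minus1) and their identification
# numbers (as ref_tile).

def build_control_info(ref_map):
    """ref_map: {(i, j): [base-layer tile ids permitted as references]}"""
    info = []
    for (i, j), refs in sorted(ref_map.items()):
        info.append({
            "pos": (i, j),                        # current area position
            "num_ref_tiles_minus1": len(refs) - 1,
            "ref_tile": list(refs),               # raster-scan tile ids
        })
    return info

ctrl = build_control_info({(0, 0): [0], (0, 1): [0, 1]})
print(ctrl[1]["num_ref_tiles_minus1"], ctrl[1]["ref_tile"])  # 1 [0, 1]
```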
  • FIGS. 15 to 17 illustrate an example of syntax of the slice header of the enhancement layer when the control information is transmitted through the slice header.
  • the control information is transmitted by a method similar to that in the case of the picture parameter set described with reference to FIGS. 13 and 14 .
  • in FIGS. 13 to 17, an example in which a tile is used as an area has been described, but what has been described above can be similarly applied when a slice is used as an area.
  • the encoding-related information includes texture information such as a decoded image or syntax information such as the motion information or intra prediction mode information, for example.
  • in other words, there is inter-layer texture prediction, in which texture information such as decoded image information of the base layer is used for prediction.
  • further, there is inter-layer syntax prediction, in which syntax information such as the motion information and the intra prediction mode information of the base layer is used for prediction.
  • control of the reference destination of the encoding-related information may be independently performed in each prediction process.
  • in other words, a reference destination area of the texture information and a reference destination area of the syntax information may be independently designated.
  • FIG. 18 is a diagram illustrating an image encoding device as an example of an image processing device to which the present technology is applied.
  • An image encoding device 100 illustrated in FIG. 18 is a device that performs layered image encoding. As illustrated in FIG. 18 , the image encoding device 100 includes a base layer image encoding section 101 , an enhancement layer image encoding section 102 , and a multiplexing unit 103 .
  • the base layer image encoding section 101 encodes a base layer image, and generates a base layer image encoded stream.
  • the enhancement layer image encoding section 102 encodes an enhancement layer image, and generates an enhancement layer image encoded stream.
  • the multiplexing unit 103 multiplexes the base layer image encoded stream generated in the base layer image encoding section 101 and the enhancement layer image encoded stream generated in the enhancement layer image encoding section 102 , and generates a layered image encoded stream.
  • the multiplexing unit 103 transmits the generated layered image encoded stream to the decoding side.
  • the base layer image encoding section 101 performs the area division such as the tile division or the slice division on the current picture, and performs the encoding for each area (a tile, a slice, or the like).
  • the base layer image encoding section 101 supplies the encoding-related information of the base layer obtained in the encoding to the enhancement layer image encoding section 102 .
  • the enhancement layer image encoding section 102 performs the area division such as the tile division or the slice division on the current picture, and performs the encoding for each area (a tile, a slice, or the like). In this event, the enhancement layer image encoding section 102 controls an area serving as the reference destination of the encoding-related information of the base layer. More specifically, the enhancement layer image encoding section 102 associates the areas of the enhancement layer with the areas of the base layer serving as the reference destination of the encoding-related information, and generates the control information indicating the correspondence relation thereof.
  • the enhancement layer image encoding section 102 appropriately refers to the encoding-related information of the base layer according to the control of the control information, and encodes the enhancement layer image.
  • the enhancement layer image encoding section 102 transmits the control information to the decoding side (as the layered image encoded stream) through the multiplexing unit 103 .
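The division of labor in FIG. 18 can be sketched structurally as follows. Method bodies are placeholders, not the real encoders; the class and key names are invented for this illustration.

```python
# Structural sketch: the base layer encoder yields a stream plus
# encoding-related information; the enhancement layer encoder consumes
# that information under the control information; the multiplexer
# combines both streams into one layered stream.

class BaseLayerEncoder:                       # stands in for section 101
    def encode(self, image):
        stream = f"base({image})"
        enc_info = {"decoded_image": image}   # encoding-related information
        return stream, enc_info

class EnhancementLayerEncoder:                # stands in for section 102
    def encode(self, image, enc_info, control_info):
        # Only areas permitted by the control information may be referenced.
        refs = {t: enc_info for t in control_info["permitted_tiles"]}
        return f"enh({image},refs={sorted(refs)})"

def multiplex(base_stream, enh_stream):       # stands in for unit 103
    return base_stream + "|" + enh_stream

base_s, info = BaseLayerEncoder().encode("img")
enh_s = EnhancementLayerEncoder().encode("img_hi", info, {"permitted_tiles": [1]})
print(multiplex(base_s, enh_s))  # base(img)|enh(img_hi,refs=[1])
```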
  • FIG. 19 is a block diagram illustrating an example of a main configuration of the base layer image encoding section 101 of FIG. 18 .
  • the base layer image encoding section 101 has an A/D converting section 111 , a screen reordering buffer 112 , an operation section 113 , an orthogonal transform section 114 , a quantization section 115 , a lossless encoding section 116 , an accumulation buffer 117 , an inverse quantization section 118 , and an inverse orthogonal transform section 119 .
  • further, the base layer image encoding section 101 has an operation section 120 , a loop filter 121 , a frame memory 122 , a selecting section 123 , an intra prediction section 124 , an inter prediction section 125 , a predictive image selecting section 126 , and a rate control section 127 .
  • the base layer image encoding section 101 has a base layer area division setting section.
  • the A/D converting section 111 performs A/D conversion on input image data (the base layer image information), and supplies the converted image data (digital data) to be stored in the screen reordering buffer 112 .
  • the screen reordering buffer 112 reorders images of frames stored in a display order in a frame order for encoding according to a Group Of Pictures (GOP), and supplies the images in which the frame order is reordered to the operation section 113 .
  • the screen reordering buffer 112 also supplies the images in which the frame order is reordered to the intra prediction section 124 and the inter prediction section 125 .
  • the operation section 113 subtracts a predictive image supplied from the intra prediction section 124 or the inter prediction section 125 via the predictive image selecting section 126 from an image read from the screen reordering buffer 112 , and outputs differential information thereof to the orthogonal transform section 114 .
  • the operation section 113 subtracts the predictive image supplied from the intra prediction section 124 from the image read from the screen reordering buffer 112 .
  • the operation section 113 subtracts the predictive image supplied from the inter prediction section 125 from the image read from the screen reordering buffer 112 .
  • the orthogonal transform section 114 performs an orthogonal transform such as a discrete cosine transform or a Karhunen-Loève Transform on the differential information supplied from the operation section 113 .
  • the orthogonal transform section 114 supplies transform coefficients to the quantization section 115 .
  • the quantization section 115 quantizes the transform coefficients supplied from the orthogonal transform section 114 .
  • the quantization section 115 sets a quantization parameter based on information related to a target value of a coding amount supplied from the rate control section 127 , and performs the quantizing.
  • the quantization section 115 supplies the quantized transform coefficients to the lossless encoding section 116 .
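The quantization performed by the quantization section 115 can be illustrated with the well-known step-size relation step = 2^((QP − 4) / 6) used in AVC/HEVC-family codecs. Real codecs use integer approximations and different rounding, so this is only a sketch of the principle that a larger quantization parameter yields coarser coefficients.

```python
# Quantize transform coefficients with a QP-derived step size, and
# dequantize by the inverse operation (the inverse quantization of
# section 118). Rounding here is Python's round(), not a codec's rule.

def quantize(coeffs, qp):
    step = 2 ** ((qp - 4) / 6)
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    step = 2 ** ((qp - 4) / 6)
    return [l * step for l in levels]

print(quantize([100, -37, 8, 0], 22))  # [12, -5, 1, 0]  (step = 8)
print(dequantize([12, -5, 1, 0], 22))  # reconstruction with quantization error
```

Raising the QP from the rate control section shrinks the coded levels and therefore the coding amount, at the cost of larger quantization error.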
  • the lossless encoding section 116 encodes the transform coefficients quantized in the quantization section 115 according to an arbitrary encoding scheme. Since coefficient data is quantized under control of the rate control section 127 , the coding amount becomes a target value (or approaches a target value) set by the rate control section 127 .
  • the lossless encoding section 116 acquires information indicating an intra prediction mode or the like from the intra prediction section 124 , and acquires information indicating an inter prediction mode, differential motion vector information, or the like from the inter prediction section 125 . Further, the lossless encoding section 116 appropriately generates an NAL unit of the base layer including a sequence parameter set (SPS), a picture parameter set (PPS), and the like.
  • the lossless encoding section 116 encodes information (which is also referred to as “base layer area division information”) related to area (for example, a tile, a slice, or the like) division of the base layer set by the base layer area division setting section.
  • the lossless encoding section 116 encodes various kinds of information according to an arbitrary encoding scheme, and sets (multiplexes) the encoded information as part of encoded data (also referred to as an “encoded stream”).
  • the lossless encoding section 116 supplies the encoded data obtained by the encoding to be accumulated in the accumulation buffer 117 .
  • Examples of the encoding scheme of the lossless encoding section 116 include variable length coding and arithmetic coding.
  • as the variable length coding, for example, there is Context-Adaptive Variable Length Coding (CAVLC) defined in the H.264/AVC scheme.
  • as the arithmetic coding, for example, there is Context-Adaptive Binary Arithmetic Coding (CABAC).
  • the accumulation buffer 117 temporarily holds the encoded data (base layer encoded data) supplied from the lossless encoding section 116 .
  • the accumulation buffer 117 outputs the held base layer encoded data to a recording device (recording medium), a transmission path, or the like (not illustrated) at a subsequent stage under certain timing.
  • the accumulation buffer 117 serves as a transmitting section that transmits the encoded data as well.
  • the transform coefficients quantized by the quantization section 115 are also supplied to the inverse quantization section 118 .
  • the inverse quantization section 118 inversely quantizes the quantized transform coefficients according to a method corresponding to the quantization performed by the quantization section 115 .
  • the inverse quantization section 118 supplies the obtained transform coefficients to the inverse orthogonal transform section 119 .
  • the inverse orthogonal transform section 119 performs an inverse orthogonal transform on the transform coefficients supplied from the inverse quantization section 118 according to a method corresponding to the orthogonal transform process performed by the orthogonal transform section 114 .
  • An output (restored differential information) that has been subjected to the inverse orthogonal transform is supplied to the operation section 120 .
  • the operation section 120 obtains a locally decoded image (a decoded image) by adding the predictive image supplied from the intra prediction section 124 or the inter prediction section 125 via the predictive image selecting section 126 to the restored differential information serving as an inverse orthogonal transform result supplied from the inverse orthogonal transform section 119 .
  • the decoded image is supplied to the loop filter 121 or the frame memory 122 .
  • the loop filter 121 includes a deblock filter, an adaptive loop filter, or the like, and appropriately performs a filter process on the reconstructed image supplied from the operation section 120 .
  • the loop filter 121 performs the deblock filter process on the reconstructed image, and removes block distortion of the reconstructed image.
  • the loop filter 121 improves the image quality by performing the loop filter process on the deblock filter process result (the reconstructed image from which the block distortion has been removed) using a Wiener filter.
  • the loop filter 121 supplies the filter process result (hereinafter referred to as a “decoded image”) to the frame memory 122 .
  • the loop filter 121 may further perform any other arbitrary filter process on the reconstructed image.
  • the loop filter 121 may supply information used in the filter process such as a filter coefficient to the lossless encoding section 116 as necessary so that the information can be encoded.
  • the frame memory 122 stores the supplied decoded image, and supplies the stored decoded image to the selecting section 123 as a reference image under certain timing.
  • the frame memory 122 stores the reconstructed image supplied from the operation section 120 and the decoded image supplied from the loop filter 121 .
  • the frame memory 122 supplies the stored reconstructed image to the intra prediction section 124 via the selecting section 123 under certain timing or based on an external request, for example, from the intra prediction section 124 .
  • the frame memory 122 supplies the stored decoded image to the inter prediction section 125 via the selecting section 123 under certain timing or based on an external request, for example, from the inter prediction section 125 .
  • the selecting section 123 selects a supply destination of the reference image supplied from the frame memory 122 .
  • the selecting section 123 supplies the reference image (a pixel value of a current picture) supplied from the frame memory 122 to the intra prediction section 124 .
  • the selecting section 123 supplies the reference image supplied from the frame memory 122 to the inter prediction section 125 .
  • the intra prediction section 124 performs the prediction process on the current picture that is an image of a processing target frame, and generates a prediction image.
  • the intra prediction section 124 performs the prediction process in units of certain blocks (using a block as a processing unit). In other words, the intra prediction section 124 generates a prediction image of a current block serving as the processing target in the current picture.
  • the intra prediction section 124 performs the prediction process (intra-screen prediction (which is also referred to as “intra prediction”)) using a reconstructed image supplied as the reference image from the frame memory 122 via the selecting section 123 .
  • the intra prediction section 124 generates the prediction image using pixel values neighboring the current block which are included in the reconstructed image.
  • the neighboring pixel value used for the intra prediction is a pixel value of a pixel which has been previously processed in the current picture.
  • a plurality of methods (which are also referred to as “intra prediction modes”) is prepared as candidates in advance.
  • the intra prediction section 124 performs the intra prediction in the plurality of intra prediction modes prepared in advance.
  • the intra prediction section 124 generates predictive images in all the intra prediction modes serving as the candidates, evaluates cost function values of the predictive images using the input image supplied from the screen reordering buffer 112 , and selects an optimal mode. When the optimal intra prediction mode is selected, the intra prediction section 124 supplies the predictive image generated in the optimal mode to the predictive image selecting section 126 .
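The candidate-evaluation loop described above can be sketched as follows; the three toy modes (DC, vertical, horizontal) and the plain SAD cost stand in for the full HEVC intra mode set and the rate-distortion cost function the patent refers to:

```python
def dc_pred(top, left, n):
    # DC mode: fill the block with the mean of the neighboring pixels.
    dc = (sum(top) + sum(left)) // (len(top) + len(left))
    return [[dc] * n for _ in range(n)]

def vert_pred(top, left, n):
    # Vertical mode: copy the row of pixels above the block downward.
    return [list(top) for _ in range(n)]

def horiz_pred(top, left, n):
    # Horizontal mode: copy the column of pixels to the left rightward.
    return [[left[r]] * n for r in range(n)]

def sad(a, b):
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def select_intra_mode(block, top, left):
    """Generate a predictive image in every candidate intra mode,
    evaluate a cost (plain SAD here), and keep the cheapest mode."""
    n = len(block)
    modes = {'DC': dc_pred, 'Vertical': vert_pred, 'Horizontal': horiz_pred}
    costs = {name: f(top, left, n) for name, f in modes.items()}
    return min(costs, key=lambda name: sad(block, costs[name]))
```

A block whose rows repeat the pixels above it is predicted perfectly by the vertical mode, so that mode wins the cost comparison.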
  • the intra prediction section 124 appropriately supplies, for example, the intra prediction mode information indicating the employed intra prediction mode to the lossless encoding section 116 so that the information is encoded.
  • the inter prediction section 125 performs the prediction process on the current picture, and generates a prediction image.
  • the inter prediction section 125 performs the prediction process in units of certain blocks (using a block as a processing unit). In other words, the inter prediction section 125 generates a prediction image of a current block serving as the processing target in the current picture. In this event, the inter prediction section 125 performs the prediction process using image data of the input image supplied from the screen reordering buffer 112 and image data of a decoded image supplied as the reference image from the frame memory 122 .
  • the decoded image is an image (another picture that is not the current picture) of a frame which has been processed before the current picture.
  • the inter prediction section 125 performs the prediction process (inter-screen prediction (which is also referred to as “inter prediction”)) of generating the prediction image using an image of another picture.
  • the inter prediction includes motion prediction and motion compensation. More specifically, the inter prediction section 125 performs the motion prediction on the current block using the input image and the reference image, and detects a motion vector. Then, the inter prediction section 125 performs motion compensation process using the reference image according to the detected motion vector, and generates the prediction image (inter prediction image information) of the current block.
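The two stages above can be sketched as a minimal example, assuming a full search over a small window and a plain SAD cost (real encoders use fast search strategies and rate-distortion costs):

```python
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def block_at(frame, x, y, n):
    # Flatten the n x n block whose top-left corner is (x, y).
    return [frame[y + r][x + c] for r in range(n) for c in range(n)]

def motion_search(cur_block, ref_frame, cx, cy, n, search=2):
    """Motion prediction: try every displacement in a small window
    around the co-located position (cx, cy) and keep the motion
    vector with the smallest SAD; then motion compensation builds
    the prediction image from the displaced reference block."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x <= len(ref_frame[0]) - n and 0 <= y <= len(ref_frame) - n:
                cost = sad(cur_block, block_at(ref_frame, x, y, n))
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    mv = best[1]
    prediction = block_at(ref_frame, cx + mv[0], cy + mv[1], n)
    return mv, prediction
```

The detected motion vector (or, per the text, its difference from a prediction motion vector) is what the lossless encoding section transmits alongside the residual.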
  • As the inter prediction, that is, the method of generating the prediction image, a plurality of methods (which are also referred to as “inter prediction modes”) is prepared as candidates in advance.
  • the inter prediction section 125 performs the inter prediction in the plurality of inter prediction modes prepared in advance.
  • the inter prediction section 125 generates predictive images in all the inter prediction modes serving as a candidate.
  • the inter prediction section 125 evaluates cost function values of the predictive images using the input image supplied from the screen reordering buffer 112 , information of the generated differential motion vector, and the like, and selects an optimal mode. When the optimal inter prediction mode is selected, the inter prediction section 125 supplies the predictive image generated in the optimal mode to the predictive image selecting section 126 .
  • the inter prediction section 125 supplies information indicating the employed inter prediction mode, information necessary for performing processing in the inter prediction mode in decoding of the encoded data, and the like to the lossless encoding section 116 so that the information is encoded. For example, as the necessary information, there is information of a generated differential motion vector, and as prediction motion vector information, there is a flag indicating an index of a prediction motion vector.
  • the predictive image selecting section 126 selects a supply source of the prediction image to be supplied to the operation section 113 and the operation section 120 .
  • the predictive image selecting section 126 selects the intra prediction section 124 as the supply source of the predictive image, and supplies the predictive image supplied from the intra prediction section 124 to the operation section 113 and the operation section 120 .
  • the predictive image selecting section 126 selects the inter prediction section 125 as the supply source of the predictive image, and supplies the predictive image supplied from the inter prediction section 125 to the operation section 113 and the operation section 120 .
  • the rate control section 127 controls a rate of a quantization operation of the quantization section 115 based on the coding amount of the encoded data accumulated in the accumulation buffer 117 such that no overflow or underflow occurs.
  • the base layer area division setting section 128 sets the area division (for example, a tile, a slice, or the like) to the picture of the base layer.
  • the base layer area division setting section 128 supplies this setting to the respective sections of the base layer image encoding section 101 as the base layer area division information.
  • the respective sections of the base layer image encoding section 101 execute processing for each area indicated by the base layer area division information. Encoding of each area is independently processed. Therefore, for example, it is possible to process encoding of the areas in parallel using a plurality of CPUs.
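Because no area depends on another area's encoding state, the per-area work can be dispatched to a worker pool. The `encode_tile` stand-in below is hypothetical; only the independence property it illustrates comes from the text:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_tile(tile_id, pixels):
    """Stand-in for an independent per-tile encode: it reads no
    state from any other tile, so tiles may run on separate CPUs."""
    return tile_id, sum(pixels) % 256  # dummy "bitstream" payload

def encode_picture(tiles):
    # Tiles carry no cross-dependencies, so completion order does
    # not matter; results are reassembled by tile_id afterward.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda t: encode_tile(*t), tiles))
    return dict(results)
```

The same structure applies on the enhancement-layer side and, symmetrically, to per-area decoding.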
  • the base layer image encoding section 101 performs encoding without referring to another layer.
  • the intra prediction section 124 and the inter prediction section 125 do not refer to the encoding-related information of the other layers.
  • the frame memory 122 supplies the image data of the decoded image of the base layer stored therein to the enhancement layer image encoding section 102 as the encoding-related information of the base layer.
  • the intra prediction section 124 supplies the intra prediction mode information and the like to the enhancement layer image encoding section 102 as the encoding-related information of the base layer.
  • the inter prediction section 125 supplies the motion information and the like to the enhancement layer image encoding section 102 as the encoding-related information of the base layer.
  • the base layer area division setting section 128 supplies the base layer area division information to the enhancement layer image encoding section 102 as well.
  • FIG. 20 is a block diagram illustrating an example of a main configuration of the enhancement layer image encoding section 102 of FIG. 18 .
  • the enhancement layer image encoding section 102 has basically a configuration similar to that of the base layer image encoding section 101 of FIG. 19 .
  • the enhancement layer image encoding section 102 includes an A/D converting section 131 , a screen reordering buffer 132 , an operation section 133 , an orthogonal transform section 134 , a quantization section 135 , a lossless encoding section 136 , an accumulation buffer 137 , an inverse quantization section 138 , and an inverse orthogonal transform section 139 as illustrated in FIG. 20 .
  • the enhancement layer image encoding section 102 further includes an operation section 140 , a loop filter 141 , a frame memory 142 , a selecting section 143 , an intra prediction section 144 , an inter prediction section 145 , a prediction image selecting section 146 , and a rate control section 147 .
  • the A/D converting section 131 to the rate control section 147 correspond to the A/D converting section 111 to the rate control section 127 of FIG. 19 , and perform processing similar to that performed by the corresponding processing sections.
  • the respective sections of the enhancement layer image encoding section 102 perform the process of encoding the enhancement layer image information rather than the base layer. Therefore, the description of the A/D converting section 111 to the rate control section 127 of FIG. 19 can be applied as a description of processing of the A/D converting section 131 to the rate control section 147, but in this case, it is necessary to set data of the enhancement layer as data to be processed instead of data of the base layer. Further, it is necessary to read the processing sections serving as the data input source and the data output destination as the corresponding processing sections among the A/D converting section 131 to the rate control section 147.
  • the enhancement layer image encoding section 102 does not include the base layer area division setting section 128 but includes an area synchronization section 148 and an up-sampling section 149 .
  • the area synchronization section 148 sets the area division (for example, a tile, a slice, or the like) to the picture of the enhancement layer.
  • the area synchronization section 148 supplies this setting to the respective sections of the enhancement layer image encoding section 102 as the enhancement layer area division information.
  • the area synchronization section 148 controls an area in which the encoding-related information of the base layer is referred to, regarding the encoding of the enhancement layer. For example, the area synchronization section 148 generates the control information used to control an area in which the encoding-related information of the base layer is referred to, and controls the intra prediction section 144 or the inter prediction section 145 according to the control information. In other words, the area synchronization section 148 controls the area of the base layer in which the encoding-related information is referred to when the intra prediction section 144 or the inter prediction section 145 performs the inter-layer prediction.
  • the area synchronization section 148 supplies the control information to the lossless encoding section 136 so that the control information is encoded and transmitted to the decoding side.
  • the enhancement layer image encoding section 102 performs encoding with reference to the encoding-related information of another layer (for example, the base layer).
  • the area synchronization section 148 acquires the base layer area division information supplied from the base layer image encoding section 101 .
  • the area synchronization section 148 generates the control information using the base layer area division information.
  • the up-sampling section 149 acquires the encoding-related information of the base layer supplied from the base layer image encoding section 101 .
  • the up-sampling section 149 acquires the texture information such as the decoded image (which is also referred to as a “decoded base layer image”) of the base layer as the encoding-related information.
  • the up-sampling section 149 also acquires the syntax information such as the motion information and the intra prediction mode information of the base layer as the encoding-related information.
  • the up-sampling section 149 performs the up-sampling process on the acquired encoding-related information of the base layer.
  • with a scalability function, layers differ in the value of a certain parameter (for example, a resolution or the like).
  • the up-sampling section 149 performs the up-sampling process (performs the scalable parameter conversion process) on the encoding-related information of the base layer so that the value of the parameter is converted based on the enhancement layer.
  • as the up-sampling process is performed as described above, the encoding-related information of the base layer can be used in encoding of the enhancement layer.
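A minimal sketch of the scalable parameter conversion for texture information, assuming an integer resolution ratio and nearest-neighbor replication (practical scalable codecs use interpolation filters for the up-sampling):

```python
def upsample_nearest(plane, ratio=2):
    """Convert a base-layer texture plane to the enhancement-layer
    resolution by integer-ratio nearest-neighbor replication, so the
    base-layer information can be referred to by enhancement-layer
    prediction at the matching resolution."""
    out = []
    for row in plane:
        wide = [p for p in row for _ in range(ratio)]
        out.extend([list(wide) for _ in range(ratio)])
    return out
```

Syntax information such as motion vectors is rescaled analogously (for example, scaling vector components by the resolution ratio) before being referred to in the enhancement layer.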
  • the up-sampling section 149 supplies the encoding-related information of the base layer that has undergone the up-sampling process to be stored in the frame memory 142 .
  • the encoding-related information of the base layer is supplied to the intra prediction section 144 or the inter prediction section 145 as the reference image.
  • the syntax information is similarly supplied to the intra prediction section 144 or the inter prediction section 145 .
  • FIG. 21 is a block diagram illustrating an example of a main configuration of the area synchronization section 148 of FIG. 20 .
  • the area synchronization section 148 includes a base layer area division information buffer 171 , an enhancement layer area division setting section 172 , and an area synchronization setting section 173 .
  • the base layer area division information buffer 171 acquires and holds the base layer area division information supplied from the base layer image encoding section 101 .
  • the base layer area division information buffer 171 supplies the base layer area division information being held therein to the area synchronization setting section 173 under certain timing or according to an external request from the area synchronization setting section 173 or the like.
  • the enhancement layer area division setting section 172 sets the area division (for example, a tile, a slice, or the like) of the picture of the enhancement layer.
  • An area division setting method is arbitrary.
  • the area division may be set by the user, the application, or the like or may be decided in advance.
  • the area division of the enhancement layer may be similar to or different from the area division of the base layer.
  • the enhancement layer area division setting section 172 supplies this setting to the respective sections of the enhancement layer image encoding section 102 as the enhancement layer area division information.
  • the respective sections of the enhancement layer image encoding section 102 execute processing for each area indicated by the enhancement layer area division information. Encoding of each area is independently processed. Therefore, for example, it is possible to process encoding of the areas in parallel using a plurality of CPUs.
  • the enhancement layer area division setting section 172 supplies the generated enhancement layer area division information to the area synchronization setting section 173 as well.
  • the enhancement layer area division setting section 172 supplies the generated enhancement layer area division information to the lossless encoding section 136 so that the enhancement layer area division information is encoded and transmitted to the decoding side.
  • since the decoding side can perform decoding with reference to this information, it is possible to reduce the decoding workload.
  • the area synchronization setting section 173 performs area association between layers using the supplied base layer area division information and the enhancement layer area division information. In other words, the area synchronization setting section 173 sets, for each area of the enhancement layer, an area in which the encoding-related information of the base layer is referred to in the event of encoding.
  • the area synchronization setting section 173 generates synchronization area information indicating this setting.
  • Information of any specification can be used as the synchronization area information as long as the information is used to control the area of the base layer serving as the reference destination of the encoding-related information.
  • information used to associate the area of the base layer serving as the reference destination of the encoding-related information with each area of the enhancement layer may be used.
  • information of the syntax described in <1. Main description of present technology> may be used.
  • the setting method is arbitrary.
  • an area that is referred to in the intra prediction section 144 or the inter prediction section 145 is decided by an arbitrary method.
  • the area may be set by the user, the application, or the like or may be decided in advance.
  • the area synchronization setting section 173 specifies the area of the base layer that is used as the reference destination of the encoding-related information in the current area serving as the processing target using the generated synchronization area information, generates synchronization address information indicating a position (address) of data of the area in data of the encoding-related information (for example, the texture information such as the reference image or the syntax information such as the motion information or the intra prediction mode information) that has undergone the up-sampling process and is stored in the frame memory 142 , and supplies the synchronization address information to the intra prediction section 144 or the inter prediction section 145 .
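One way to picture how the synchronization address information restricts frame-memory accesses: map the enhancement-layer area's pixel rectangle down by the layer scaling ratio and collect only the base-layer tiles that overlap it. The tile-grid representation and the function name are illustrative assumptions, not the patent's syntax:

```python
def colocated_base_tiles(enh_rect, scale, base_tile_w, base_tile_h, base_cols):
    """enh_rect = (x, y, w, h) in enhancement-layer pixels.
    Returns indices of base-layer tiles whose up-sampled area overlaps
    the enhancement-layer area, i.e. the only tiles whose
    encoding-related information needs to be read for this area."""
    x, y, w, h = enh_rect
    # Map the rectangle down to base-layer coordinates.
    bx0, by0 = x // scale, y // scale
    bx1, by1 = (x + w - 1) // scale, (y + h - 1) // scale
    tiles = set()
    for ty in range(by0 // base_tile_h, by1 // base_tile_h + 1):
        for tx in range(bx0 // base_tile_w, bx1 // base_tile_w + 1):
            tiles.add(ty * base_cols + tx)
    return sorted(tiles)
```

Only the listed tiles' texture or syntax information is then fetched from the frame memory, which is how the number of memory accesses is kept down in both encoding and decoding.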
  • the intra prediction section 144 or the inter prediction section 145 performs the inter-layer prediction according to the synchronization address information, and thus it is possible to set only some areas of the picture of the base layer as the reference destination, and it is possible to suppress an increase in the number of accesses to the frame memory 142 . In other words, as the area synchronization setting section 173 performs this process, it is possible to suppress an increase in the encoding workload.
  • the area synchronization setting section 173 supplies the generated synchronization area information to the lossless encoding section 136 so that the synchronization area information is encoded and transmitted to the decoding side.
  • the decoding side can perform decoding with reference to the synchronization area information, and thus, regarding decoding, it is similarly possible to suppress an increase in the number of accesses to the memory, and it is possible to reduce the decoding workload.
  • In step S 101, the base layer image encoding section 101 of the image encoding device 100 encodes image data of the base layer.
  • In step S 102, the enhancement layer image encoding section 102 encodes image data of the enhancement layer.
  • In step S 103, the multiplexing unit 103 multiplexes a base layer image encoded stream generated in the process of step S 101 and an enhancement layer image encoded stream generated in the process of step S 102 (that is, the bitstreams of the respective layers), and generates a layered image encoded stream of one system.
  • When the process of step S 103 ends, the image encoding device 100 ends the image encoding process.
  • One picture is processed through the image encoding process. Therefore, the image encoding device 100 repeatedly performs the image encoding process on pictures of hierarchized moving image data.
  • In step S 121, the base layer area division setting section 128 of the base layer image encoding section 101 decides the area division of the base layer by a certain method, and generates the base layer area division information. Further, the base layer area division setting section 128 supplies the base layer area division information to the respective sections of the base layer image encoding section 101.
  • In step S 122, the base layer area division setting section 128 supplies the base layer area division information generated in step S 121 to the lossless encoding section 116 so that the base layer area division information is transmitted.
  • each process is executed using the area or a certain unit smaller than the area as a processing unit.
  • In step S 123, the A/D converting section 111 performs A/D conversion on an image of each frame (picture) of an input moving image.
  • In step S 124, the screen reordering buffer 112 stores the image that has undergone the A/D conversion in step S 123, and performs reordering from a display order to an encoding order on each picture.
  • In step S 125, the intra prediction section 124 performs the intra prediction process of the intra prediction mode.
  • In step S 126, the inter prediction section 125 performs the inter prediction process in which the motion prediction, the motion compensation, and the like are performed in the inter prediction mode.
  • In step S 127, the prediction image selecting section 126 selects a prediction image based on a cost function value or the like. In other words, the prediction image selecting section 126 selects either the prediction image generated by the intra prediction of step S 125 or the prediction image generated by the inter prediction of step S 126.
  • In step S 128, the operation section 113 calculates a difference between the input image in which the frame order is reordered in the process of step S 124 and the prediction image selected in the process of step S 127.
  • the operation section 113 generates image data of a differential image between the input image and the prediction image. An amount of the obtained image data of the differential image is reduced to be smaller than the original image data. Therefore, an amount of data can be compressed to be smaller than when an image is encoded without change.
  • In step S 129, the orthogonal transform section 114 performs the orthogonal transform on the image data of the differential image generated in the process of step S 128.
  • In step S 130, the quantization section 115 quantizes the orthogonal transform coefficient obtained in the process of step S 129 using the quantization parameter calculated by the rate control section 127.
  • In step S 131, the inverse quantization section 118 inversely quantizes the quantized coefficient (which is also referred to as a “quantization coefficient”) generated in the process of step S 130 according to characteristics corresponding to characteristics of the quantization section 115.
  • In step S 132, the inverse orthogonal transform section 119 performs the inverse orthogonal transform on the orthogonal transform coefficient obtained in the process of step S 131.
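Steps S 130 and S 131 form a quantization/inverse-quantization pair whose round-trip error is bounded by half the quantization step. A minimal sketch with uniform scalar quantization (actual HEVC quantization uses QP-dependent step sizes and per-block scaling):

```python
def quantize(coefs, step):
    """Map each transform coefficient to an integer level (step S 130)."""
    return [round(c / step) for c in coefs]

def dequantize(levels, step):
    """Inverse quantization (step S 131): reconstruct approximate
    coefficients from the integer levels."""
    return [q * step for q in levels]

coefs = [100.0, -7.0, 3.0, 0.5]
levels = quantize(coefs, step=4)
approx = dequantize(levels, step=4)
```

Only the integer levels are entropy-coded in step S 136; the approximation error they introduce is the source of the block distortion the loop filter later attenuates.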
  • In step S 133, the operation section 120 generates image data of a reconstructed image by adding the prediction image selected in the process of step S 127 to the differential image restored in the process of step S 132.
  • In step S 134, the loop filter 121 performs the loop filter process on the image data of the reconstructed image generated in the process of step S 133.
  • block distortion of the reconstructed image is removed.
  • In step S 135, the frame memory 122 stores data such as the decoded image obtained in the process of step S 134, the reconstructed image obtained in the process of step S 133, and the like.
  • In step S 136, the lossless encoding section 116 encodes the quantized coefficients obtained in the process of step S 130.
  • lossless coding such as variable length coding or arithmetic coding is performed on data corresponding to the differential image.
  • the lossless encoding section 116 encodes information related to the prediction mode of the predictive image selected in the process of step S 127 , and adds the encoded information to the encoded data obtained by encoding the differential image.
  • the lossless encoding section 116 also encodes, for example, information according to the optimal intra prediction mode information supplied from the intra prediction section 124 or the optimal inter prediction mode supplied from the inter prediction section 125 , and adds the encoded information to the encoded data.
  • the lossless encoding section 116 sets and encodes syntax elements such as various NAL units, and adds the encoded syntax elements to the encoded data.
  • In step S 137, the accumulation buffer 117 accumulates the encoded data obtained in the process of step S 136.
  • the encoded data accumulated in the accumulation buffer 117 is appropriately read and transmitted to the decoding side via a transmission path or a recording medium.
  • In step S 138, the rate control section 127 controls the quantization operation of the quantization section 115 based on the coding amount (the generated coding amount) of the encoded data accumulated in the accumulation buffer 117 in the process of step S 137 so that no overflow or underflow occurs. Further, the rate control section 127 supplies information related to the quantization parameter to the quantization section 115.
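The rate control of step S 138 can be pictured as a feedback loop on the accumulation-buffer occupancy: raise the quantization parameter when the buffer nears overflow, lower it when it nears underflow. The thresholds and step sizes below are illustrative assumptions, not values from the patent:

```python
def update_qp(qp, buffer_fullness, target=0.5, deadzone=0.1, qp_min=0, qp_max=51):
    """One step of a toy buffer-based rate controller.
    buffer_fullness is the accumulation-buffer occupancy in [0, 1]."""
    if buffer_fullness > target + deadzone:      # nearing overflow
        qp += 1                                  # quantize coarser, fewer bits
    elif buffer_fullness < target - deadzone:    # nearing underflow
        qp -= 1                                  # quantize finer, more bits
    return max(qp_min, min(qp_max, qp))
```

The 0-51 clamp matches the HEVC quantization-parameter range; everything else here is a deliberately simplified stand-in for a real rate-control model.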
  • In step S 139, the frame memory 122, the intra prediction section 124, the inter prediction section 125, and the base layer area division setting section 128 supply the encoding-related information of the base layer obtained in the above base layer encoding process for use in the encoding process of the enhancement layer.
  • When the process of step S 139 ends, the base layer encoding process ends, and the process returns to FIG. 22.
  • In step S 151, the base layer area division information buffer 171 of the enhancement layer image encoding section 102 acquires the base layer area division information that is generated in the base layer encoding process and supplied.
  • In step S 152, the up-sampling section 149 acquires the decoded base layer image (that is, the texture information) that is generated in the base layer encoding process and supplied as the encoding-related information.
  • the up-sampling section 149 also acquires the syntax information that is generated in the base layer encoding process and supplied as the encoding-related information.
  • In step S 153, the up-sampling section 149 performs the up-sampling process on the encoding-related information (for example, the decoded base layer image) of the base layer acquired in step S 152.
  • In step S 154, the frame memory 142 stores the encoding-related information (for example, the decoded base layer image) of the base layer that has undergone the up-sampling process through the process of step S 153.
  • In step S 155, the enhancement layer area division setting section 172 decides the area division of the enhancement layer by a certain method, and generates the enhancement layer area division information. Further, the enhancement layer area division setting section 172 supplies the enhancement layer area division information to the respective sections of the enhancement layer image encoding section 102.
  • In step S 156, the area synchronization setting section 173 generates the synchronization area information by a certain method using the base layer area division information acquired in step S 151 and the enhancement layer area division information generated in step S 155.
  • the area synchronization setting section 173 sets the area of the base layer serving as the reference destination of the encoding-related information to each area of the enhancement layer.
  • In step S 157, the area synchronization setting section 173 generates the synchronization address information indicating data of the area of the base layer serving as the reference destination of the encoding-related information using the synchronization area information generated in the process of step S 156.
  • In step S 158, the area synchronization setting section 173 supplies the synchronization area information generated in the process of step S 156 to the lossless encoding section 136 so that the synchronization area information is transmitted. Further, the enhancement layer area division setting section 172 supplies the enhancement layer area division information generated in the process of step S 155 to the lossless encoding section 136 so that the enhancement layer area division information is transmitted.
  • When the process of step S 158 ends, the process proceeds to step S 161 of FIG. 25.
  • each process is executed using the area or a certain unit smaller than the area as a processing unit.
  • The process of step S 161 to step S 176 of FIG. 25 corresponds to and is executed similarly to the process of step S 123 to step S 138 of FIG. 23.
  • When the process of step S 176 ends, the enhancement layer encoding process ends, and the process returns to FIG. 22.
  • the image encoding device 100 can reduce the number of memory accesses for referring to the encoding-related information of another layer in the inter-layer prediction and thus suppress an increase in the encoding and decoding workload.
  • FIG. 26 is a block diagram illustrating an example of a main configuration of an image decoding device that corresponds to the image encoding device 100 of FIG. 18 as an example of an image processing device to which the present technology is applied.
  • An image decoding device 200 illustrated in FIG. 26 decodes the encoded data generated by the image encoding device 100 by a decoding method corresponding to an encoding method thereof (that is, performs scalable decoding on the encoded data that has undergone the scalable coding).
  • the image decoding device 200 includes a demultiplexing unit 201 , a base layer image decoding section 202 , and an enhancement layer image decoding section 203 .
  • the demultiplexing unit 201 receives the layered image encoded stream, in which the base layer image encoded stream and the enhancement layer image encoded stream are multiplexed, transmitted from the encoding side, demultiplexes the layered image encoded stream, and extracts the base layer image encoded stream and the enhancement layer image encoded stream.
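A toy model of the multiplexing (encoder side) and the demultiplexing performed by the demultiplexing unit 201; the tagged-packet framing is an illustrative assumption, since real systems use container formats or NAL-unit headers to mark the layer:

```python
def multiplex(base_packets, enh_packets):
    """Combine per-layer packets into one stream, tagging each
    packet with its layer id (0 = base, 1 = enhancement)."""
    return [(0, p) for p in base_packets] + [(1, p) for p in enh_packets]

def demultiplex(stream):
    """Split the layered stream back into per-layer streams, each of
    which is then handed to the corresponding layer decoder."""
    base = [p for layer, p in stream if layer == 0]
    enh = [p for layer, p in stream if layer == 1]
    return base, enh
```

Demultiplexing inverts multiplexing exactly, which is what lets the base layer image decoding section 202 and the enhancement layer image decoding section 203 each receive only their own stream.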
  • the base layer image decoding section 202 decodes the base layer image encoded stream extracted by the demultiplexing unit 201 , and obtains the base layer image. In this event, the base layer image decoding section 202 performs the decoding for each area (a tile, a slice, or the like) set in the encoding side based on the base layer area division information supplied from the encoding side.
  • the enhancement layer image decoding section 203 decodes the enhancement layer image encoded stream extracted by the demultiplexing unit 201 , and obtains the enhancement layer image. In this event, the enhancement layer image decoding section 203 performs the decoding for each area (a tile, a slice, or the like) set in the encoding side based on the enhancement layer area division information supplied from the encoding side.
  • the enhancement layer image decoding section 203 performs the inter-layer prediction using the synchronization area information serving as the control information that is supplied from the encoding side and used to control the area of the base layer serving as the reference destination of the encoding-related information of each area of the enhancement layer.
  • the enhancement layer image decoding section 203 refers to the encoding-related information of the area of the base layer designated by the synchronization area information.
  • FIG. 27 is a block diagram illustrating an example of a main configuration of the base layer image decoding section 202 of FIG. 26 .
  • the base layer image decoding section 202 includes an accumulation buffer 211 , a lossless decoding section 212 , an inverse quantization section 213 , an inverse orthogonal transform section 214 , an operation section 215 , a loop filter 216 , a screen reordering buffer 217 , and a D/A conversion section 218 .
  • the base layer image decoding section 202 further includes a frame memory 219 , a selecting section 220 , an intra prediction section 221 , an inter prediction section 222 , and a prediction image selecting section 223 .
  • the accumulation buffer 211 is a reception section that receives the transmitted encoded data.
  • the accumulation buffer 211 receives and accumulates the transmitted encoded data, and supplies the encoded data to the lossless decoding section 212 under certain timing.
  • Information necessary for decoding such as the prediction mode information is added to the encoded data.
  • the lossless decoding section 212 decodes the information that is supplied from the accumulation buffer 211 and encoded by the lossless encoding section 116 according to the decoding scheme corresponding to the encoding scheme.
  • the lossless decoding section 212 supplies quantized coefficient data of a differential image obtained by the decoding to the inverse quantization section 213 .
  • the lossless decoding section 212 determines whether the intra prediction mode or the inter prediction mode is selected as the optimum prediction mode, and supplies the information related to the optimum prediction mode to the section corresponding to the selected mode, that is, to the intra prediction section 221 or the inter prediction section 222 .
  • for example, when the intra prediction mode is selected as the optimum prediction mode at the encoding side, the information related to the optimum prediction mode is supplied to the intra prediction section 221 .
  • when the inter prediction mode is selected as the optimum prediction mode at the encoding side, the information related to the optimum prediction mode is supplied to the inter prediction section 222 .
  • the lossless decoding section 212 supplies information necessary for inverse quantization such as a quantization matrix or a quantization parameter to the inverse quantization section 213 .
  • the lossless decoding section 212 supplies the base layer area division information supplied from the encoding side to the respective processing sections of the base layer image decoding section 202 .
  • the respective sections of the base layer image decoding section 202 perform processing for each area indicated by the base layer area division information. The decoding of each area is processed independently. Therefore, for example, it is possible to perform the decoding of the respective areas in parallel using a plurality of CPUs.
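  • the per-area parallelism described above can be sketched as follows; `decode_area`, the area description format, and the use of a thread pool are illustrative assumptions, not the actual structure of the base layer image decoding section 202 :

```python
from concurrent.futures import ThreadPoolExecutor

def decode_area(area):
    # Hypothetical per-area decode: in a real decoder this would run
    # entropy decoding, inverse quantization, inverse transform, and
    # prediction for the tile or slice described by `area`.
    return {"area_id": area["area_id"],
            "pixels": [[0] * area["w"] for _ in range(area["h"])]}

def decode_picture(area_division_info):
    # Because each area indicated by the area division information is
    # decoded independently, the areas can be decoded in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(decode_area, area_division_info))
    # Reassemble the picture in area order.
    return sorted(results, key=lambda r: r["area_id"])

areas = [{"area_id": i, "w": 4, "h": 2} for i in range(4)]
decoded = decode_picture(areas)
```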
  • the inverse quantization section 213 inversely quantizes the quantized coefficient data obtained through the decoding performed by the lossless decoding section 212 according to a scheme corresponding to the quantization scheme of the quantization section 115 .
  • the inverse quantization section 213 is a processing section similar to the inverse quantization section 118 . In other words, the description of the inverse quantization section 213 can be applied to the inverse quantization section 118 as well. However, the data input source, the data output destination, and the like need to be read as the corresponding processing sections of the base layer image decoding section 202 .
  • the inverse quantization section 213 supplies the obtained coefficient data to the inverse orthogonal transform section 214 .
  • the inverse orthogonal transform section 214 performs the inverse orthogonal transform on the orthogonal transform coefficient supplied from the inverse quantization section 213 according to a scheme corresponding to the orthogonal transform scheme of the orthogonal transform section 114 .
  • the inverse orthogonal transform section 214 is a processing section similar to the inverse orthogonal transform section 119 . In other words, the description of the inverse orthogonal transform section 214 can be applied to the inverse orthogonal transform section 119 as well. However, the data input source, the data output destination, and the like need to be read as the corresponding processing sections of the base layer image decoding section 202 .
  • the image data of the differential image is restored through the inverse orthogonal transform process.
  • the restored image data of the differential image corresponds to the image data of the differential image before the orthogonal transform is performed in the image encoding device.
  • the restored image data of the differential image obtained by the inverse orthogonal transform process of the inverse orthogonal transform section 214 is referred to as “decoded residual data.”
  • the inverse orthogonal transform section 214 supplies the decoded residual data to the operation section 215 . Further, the operation section 215 is supplied with the image data of the prediction image from the intra prediction section 221 or the inter prediction section 222 via the prediction image selecting section 223 .
  • the operation section 215 adds the decoded residual data and the image data of the prediction image, and thereby obtains the image data of the reconstructed image in which the differential image and the prediction image are added.
  • the reconstructed image corresponds to the input image before the prediction image is subtracted by the operation section 113 .
  • the operation section 215 supplies the reconstructed image to the loop filter 216 .
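  • the addition performed by the operation section 215 can be sketched as follows; the clipping to the valid sample range and the `bit_depth` parameter are assumptions added for illustration:

```python
def reconstruct(decoded_residual, prediction, bit_depth=8):
    # Add the prediction image to the decoded residual data and clip to
    # the valid sample range, yielding the reconstructed image (the
    # inverse of the encoder-side subtraction by the operation section).
    max_val = (1 << bit_depth) - 1
    return [
        [min(max(r + p, 0), max_val) for r, p in zip(res_row, pred_row)]
        for res_row, pred_row in zip(decoded_residual, prediction)
    ]

residual = [[-3, 10], [300, -5]]
prediction = [[5, 120], [10, 4]]
rec = reconstruct(residual, prediction)
# rec == [[2, 130], [255, 0]]
```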
  • the loop filter 216 generates a decoded image by appropriately performing a loop filter process including a deblock filter process, an adaptive loop filter process, or the like on the supplied reconstructed image. For example, the loop filter 216 removes block distortion by performing the deblock filter process on the reconstructed image. Further, for example, the loop filter 216 improves the image quality by performing the loop filter process on the deblock filter process result (the reconstructed image from which the block distortion has been removed) using a Wiener Filter.
  • a type of the filter process performed by the loop filter 216 is arbitrary, and a process other than the above-described filter process may be performed. Further, the loop filter 216 may perform the filter process using the filter coefficient supplied from the image encoding device. Furthermore, the loop filter 216 may omit the filter process and may output input data without performing the filter process.
  • the loop filter 216 supplies the decoded image (or the reconstructed image) serving as the filter process result to the screen reordering buffer 217 and the frame memory 219 .
  • the screen reordering buffer 217 performs reordering of the frame order on the decoded image.
  • the screen reordering buffer 217 reorders an image of respective frames reordered in the encoding order by the screen reordering buffer 112 in an original display order.
  • the screen reordering buffer 217 stores the image data of the decoded image of the respective frames supplied in the encoding order in that order, reads the image data of the decoded image of the respective frames stored in the encoding order in the display order, and supplies it to the D/A conversion section 218 .
  • the D/A conversion section 218 performs the D/A conversion on the decoded image (digital data) of the respective frames supplied from the screen reordering buffer 217 , and outputs analog data to be displayed on a display (not illustrated).
  • the frame memory 219 stores the supplied decoded image, and supplies the stored decoded image to the intra prediction section 221 or the inter prediction section 222 as the reference image via the selecting section 220 under certain timing or based on an external request from the intra prediction section 221 , the inter prediction section 222 , or the like.
  • the intra prediction mode information and the like are appropriately supplied from the lossless decoding section 212 to the intra prediction section 221 .
  • the intra prediction section 221 performs the intra prediction in the intra prediction mode (the optimum intra prediction mode) used in the intra prediction section 124 , and generates the prediction image.
  • the intra prediction section 221 performs the intra prediction using the image data of the reconstructed image supplied from the frame memory 219 via the selecting section 220 .
  • the intra prediction section 221 uses the reconstructed image as the reference image (a neighboring pixel).
  • the intra prediction section 221 supplies the generated prediction image to the prediction image selecting section 223 .
  • the optimum prediction mode information, the motion information, and the like are appropriately supplied from the lossless decoding section 212 to the inter prediction section 222 .
  • the inter prediction section 222 performs the inter prediction using the decoded image (the reference image) acquired from the frame memory 219 in the inter prediction mode (the optimum inter prediction mode) indicated by the optimum prediction mode information acquired from the lossless decoding section 212 , and generates the prediction image.
  • the prediction image selecting section 223 supplies the prediction image supplied from the intra prediction section 221 or the prediction image supplied from the inter prediction section 222 to the operation section 215 . Then, the operation section 215 obtains the reconstructed image in which the prediction image is added to the decoded residual data (the differential image information) from the inverse orthogonal transform section 214 .
  • the base layer image decoding section 202 performs the decoding without referring to another layer.
  • the intra prediction section 221 and the inter prediction section 222 do not refer to the encoding-related information of another layer.
  • the frame memory 219 supplies the stored image data of the decoded image of the base layer to the enhancement layer image decoding section 203 as the encoding-related information of the base layer.
  • the intra prediction section 221 supplies the intra prediction mode information and the like to the enhancement layer image decoding section 203 as the encoding-related information of the base layer.
  • the inter prediction section 222 supplies the motion information and the like to the enhancement layer image decoding section 203 as the encoding-related information of the base layer.
  • the intra prediction section 221 or the inter prediction section 222 (an arbitrary processing section of the base layer image decoding section 202 such as the lossless decoding section 212 ) supplies the base layer area division information to the enhancement layer image decoding section 203 .
  • FIG. 28 is a block diagram illustrating an example of a main configuration of the enhancement layer image decoding section 203 of FIG. 26 .
  • the enhancement layer image decoding section 203 has basically a configuration similar to that of the base layer image decoding section 202 of FIG. 27 .
  • the enhancement layer image decoding section 203 includes an accumulation buffer 231 , a lossless decoding section 232 , an inverse quantization section 233 , an inverse orthogonal transform section 234 , an operation section 235 , a loop filter 236 , a screen reordering buffer 237 , and a D/A conversion section 238 as illustrated in FIG. 28 .
  • the enhancement layer image decoding section 203 further includes a frame memory 239 , a selecting section 240 , an intra prediction section 241 , an inter prediction section 242 , and a prediction image selecting section 243 .
  • the accumulation buffer 231 to the prediction image selecting section 243 correspond to the accumulation buffer 211 to the prediction image selecting section 223 of FIG. 27 , and perform processes similar to those performed by the corresponding processing sections.
  • the respective sections of the enhancement layer image decoding section 203 perform processing of decoding the enhancement layer image information rather than that of the base layer. Therefore, the description of the accumulation buffer 211 to the prediction image selecting section 223 of FIG. 27 can be applied as a description of processes of the accumulation buffer 231 to the prediction image selecting section 243 , but, in this case, data to be processed needs to be data of the enhancement layer rather than data of the base layer. Further, the processing sections of the data input source and the data output destination need to be read as the corresponding processing sections of the enhancement layer image decoding section 203 .
  • the enhancement layer image decoding section 203 further includes an area synchronization section 244 and an up-sampling section 245 .
  • the area synchronization section 244 acquires the enhancement layer area division information and the synchronization area information supplied from the lossless decoding section 232 .
  • the information is generated at the encoding side and transmitted from the encoding side. Further, the area synchronization section 244 acquires the base layer area division information supplied from the base layer image decoding section 202 .
  • the area synchronization section 244 controls an area in which the encoding-related information of the base layer is referred to in the decoding of the enhancement layer using the information.
  • the area synchronization section 244 controls an area of the base layer in which the encoding-related information is referred to when the intra prediction section 241 or the inter prediction section 242 performs the inter-layer prediction using the information.
  • the area synchronization section 244 can control an area in which the encoding-related information of the base layer is referred to in the decoding of the enhancement layer. Therefore, the area synchronization section 244 can reduce the number of memory accesses and suppress an increase in the decoding workload.
  • the enhancement layer image decoding section 203 performs decoding with reference to the encoding-related information of another layer (for example, the base layer).
  • the up-sampling section 245 acquires the encoding-related information of the base layer supplied from the base layer image decoding section 202 .
  • the up-sampling section 245 acquires the texture information such as the decoded image (also referred to as a “decoded base layer image”) of the base layer as the encoding-related information.
  • the up-sampling section 245 acquires the syntax information such as the motion information and the intra prediction mode information of the base layer as the encoding-related information as well.
  • the up-sampling section 245 performs the up-sampling process on the acquired encoding-related information of the base layer.
  • different layers differ in the value of a certain parameter having a scalability function (for example, a resolution).
  • the up-sampling section 245 performs the up-sampling process (performs the scalable parameter conversion process) on the encoding-related information of the base layer so that the value of the parameter is converted on the basis of the enhancement layer.
  • when the up-sampling process is performed as described above, the encoding-related information of the base layer can be used in the decoding of the enhancement layer.
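  • the parameter conversion performed by the up-sampling section 245 can be sketched as follows; nearest-neighbour scaling and the integer `scale` factor are simplifying assumptions (an actual codec would use interpolation filters and may use fractional ratios):

```python
def upsample_texture(plane, scale):
    # Nearest-neighbour up-sampling of a decoded base layer plane so
    # that its resolution matches the enhancement layer.
    return [
        [plane[y // scale][x // scale]
         for x in range(len(plane[0]) * scale)]
        for y in range(len(plane) * scale)
    ]

def upsample_motion_vector(mv, scale):
    # Syntax information such as a motion vector is converted by the
    # same scale factor so it is expressed on the enhancement layer grid.
    return (mv[0] * scale, mv[1] * scale)

base = [[1, 2], [3, 4]]
up = upsample_texture(base, 2)
# up == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```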
  • the up-sampling section 245 supplies the encoding-related information of the base layer that has undergone the up-sampling process to be stored in the frame memory 239 .
  • the encoding-related information of the base layer is supplied to the intra prediction section 241 or the inter prediction section 242 as the reference image.
  • the syntax information is supplied to the intra prediction section 241 or the inter prediction section 242 as well.
  • FIG. 29 is a block diagram illustrating an example of a main configuration of the area synchronization section 244 of FIG. 28 .
  • the area synchronization section 244 includes a base layer area division information buffer 271 , an enhancement layer area division information buffer 272 , and a synchronization area information decoding section 273 as illustrated in FIG. 29 .
  • the base layer area division information buffer 271 acquires the base layer area division information supplied from the base layer image decoding section 202 , that is, the base layer area division information supplied from the encoding side, and holds the acquired base layer area division information.
  • the base layer area division information buffer 271 supplies the held base layer area division information to the synchronization area information decoding section 273 under certain timing or according to an external request from the synchronization area information decoding section 273 or the like.
  • the enhancement layer area division information buffer 272 acquires the enhancement layer area division information supplied from the lossless decoding section 232 , that is, the enhancement layer area division information supplied from the encoding side, and holds the acquired enhancement layer area division information.
  • the enhancement layer area division information buffer 272 supplies the held enhancement layer area division information to the synchronization area information decoding section 273 under certain timing or according to an external request from the synchronization area information decoding section 273 or the like.
  • the synchronization area information decoding section 273 acquires the base layer area division information from the base layer area division information buffer 271 , and acquires the enhancement layer area division information from the enhancement layer area division information buffer 272 . Further, the synchronization area information decoding section 273 acquires the synchronization area information supplied from the lossless decoding section 232 , that is, acquires the synchronization area information supplied from the encoding side, and holds the acquired synchronization area information.
  • the synchronization area information is information used to control the area of the base layer serving as the reference destination of the encoding-related information of each area of the enhancement layer.
  • the synchronization area information decoding section 273 decodes the synchronization area information using the base layer area division information and the enhancement layer area division information. In other words, the synchronization area information decoding section 273 detects a positional relation between the areas of the layers using the base layer area division information and the enhancement layer area division information, and analyzes the correspondence relation between the areas of the layers indicated by the synchronization area information according to the positional relation.
  • the synchronization area information decoding section 273 specifies a position of data of the area of the base layer serving as the reference destination of the encoding-related information for the current area serving as the processing target of the enhancement layer in data of the encoding-related information such as the reference image supplied from the frame memory 239 .
  • the synchronization area information decoding section 273 generates the synchronization address information serving as information indicated by the position of the data, and supplies the synchronization address information to the intra prediction section 241 or the inter prediction section 242 .
  • the synchronization area information decoding section 273 can generate synchronization address information similar to that generated by the area synchronization setting section 173 . In other words, the synchronization area information decoding section 273 can perform control similar to that performed by the area synchronization setting section 173 .
  • when the intra prediction section 241 or the inter prediction section 242 performs the inter-layer prediction according to the synchronization address information, only some areas of the picture of the base layer can be set as the reference destination, and an increase in the number of accesses to the frame memory 239 can be suppressed.
  • the synchronization area information decoding section 273 can reduce the number of memory accesses and suppress an increase in the decoding workload by performing the above-described process.
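  • the control performed by the synchronization area information decoding section 273 can be sketched as follows, under the assumption of a raster-scan tile grid and a simple mapping table standing in for the synchronization area information (both are hypothetical, not the actual syntax):

```python
def tile_rect(tile_id, division):
    # Raster-scan tile grid described by area division information
    # (assumed form: {"cols": n, "rows": m, "tile_w": w, "tile_h": h}).
    col = tile_id % division["cols"]
    row = tile_id // division["cols"]
    return (col * division["tile_w"], row * division["tile_h"],
            division["tile_w"], division["tile_h"])

def synchronization_address(current_area_id, sync_area_info, base_division):
    # sync_area_info maps each enhancement layer area to the base layer
    # areas whose encoding-related information it may reference.  The
    # returned (x, y, w, h) windows are the only base layer data the
    # prediction sections are allowed to fetch from the frame memory,
    # so only a subset of the picture is accessed.
    return [tile_rect(t, base_division)
            for t in sync_area_info[current_area_id]]

base_division = {"cols": 2, "rows": 2, "tile_w": 8, "tile_h": 8}
sync_area_info = {0: [0], 1: [0, 1]}   # enhancement area -> base areas
addrs = synchronization_address(1, sync_area_info, base_division)
# addrs == [(0, 0, 8, 8), (8, 0, 8, 8)]
```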
  • in step S 201 , the demultiplexing unit 201 of the image decoding device 200 demultiplexes the layered image encoded stream transmitted from the encoding side for each layer.
  • in step S 202 , the base layer image decoding section 202 decodes the base layer image encoded stream extracted in the process of step S 201 .
  • the base layer image decoding section 202 outputs data of the base layer image generated by the decoding.
  • in step S 203 , the enhancement layer image decoding section 203 decodes the enhancement layer image encoded stream extracted in the process of step S 201 .
  • the enhancement layer image decoding section 203 outputs data of the enhancement layer image generated by the decoding.
  • when the process of step S 203 ends, the image decoding device 200 ends the image decoding process.
  • One picture is processed in this image decoding process. Therefore, the image decoding device 200 repeatedly performs the image decoding process on each picture of the hierarchized moving image data.
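  • the per-picture flow of steps S 201 to S 203 can be sketched as follows, with `decode_base` and `decode_enh` standing in for the base layer image decoding section 202 and the enhancement layer image decoding section 203 (a simple two-element tuple stands in for the multiplexed stream):

```python
def decode_layered_stream(layered_stream, decode_base, decode_enh):
    # S201: demultiplex the layered image encoded stream for each layer.
    base_stream, enh_stream = layered_stream
    # S202: decode the base layer image encoded stream.
    base_image = decode_base(base_stream)
    # S203: decode the enhancement layer, which may reference the base
    # layer decode result through inter-layer prediction.
    enh_image = decode_enh(enh_stream, base_image)
    return base_image, enh_image

# One picture is processed per call, so the process repeats per picture:
pictures = [(b"base0", b"enh0"), (b"base1", b"enh1")]
out = [decode_layered_stream(p,
                             lambda b: ("B", b),
                             lambda e, ref: ("E", e, ref))
       for p in pictures]
```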
  • in step S 221 , the lossless decoding section 212 of the base layer image decoding section 202 decodes the encoded data acquired through the accumulation buffer 211 , and acquires the base layer area division information supplied from the encoding side. Further, the lossless decoding section 212 supplies the base layer area division information to the respective sections of the base layer image decoding section 202 .
  • each process is executed using the area or a certain unit smaller than the area as a processing unit.
  • in step S 222 , the accumulation buffer 211 accumulates the transmitted bitstream (encoded data).
  • in step S 223 , the lossless decoding section 212 decodes the bitstream (encoded data) supplied from the accumulation buffer 211 .
  • image data such as an I picture, a P picture, and a B picture encoded by the lossless encoding section 116 is decoded.
  • various kinds of information are decoded in addition to the image data included in the bitstream such as the header information.
  • in step S 224 , the inverse quantization section 213 inversely quantizes the quantized coefficients obtained in the process of step S 223 .
  • in step S 225 , the inverse orthogonal transform section 214 performs the inverse orthogonal transform on the coefficients inversely quantized in step S 224 .
  • in step S 226 , the intra prediction section 221 or the inter prediction section 222 performs the prediction process, and generates the predictive image.
  • the prediction process is performed in the prediction mode that is determined to have been applied in the event of encoding in the lossless decoding section 212 .
  • the intra prediction section 221 generates the predictive image in the intra prediction mode recognized to be optimal in the event of encoding.
  • the inter prediction section 222 generates the predictive image in the inter prediction mode recognized to be optimal in the event of encoding.
  • in step S 227 , the operation section 215 adds the differential image obtained by performing the inverse orthogonal transform in step S 225 to the prediction image generated in step S 226 . As a result, the image data of the reconstructed image is obtained.
  • in step S 228 , the loop filter 216 appropriately performs the loop filter process including the deblock filter process, the adaptive loop filter process, or the like on the image data of the reconstructed image obtained in the process of step S 227 .
  • in step S 229 , the screen reordering buffer 217 reorders the respective frames of the reconstructed image that has undergone the filter process in step S 228 .
  • the order of the frames reordered in the event of encoding is changed to the original display order.
  • in step S 230 , the D/A conversion section 218 performs the D/A conversion on the image in which the order of the frames is reordered in step S 229 .
  • the image is output to a display (not illustrated), and the image is displayed.
  • in step S 231 , the frame memory 219 stores data such as the decoded image obtained in the process of step S 228 , the reconstructed image obtained in the process of step S 227 , and the like.
  • in step S 232 , the frame memory 219 , the intra prediction section 221 , and the inter prediction section 222 supply the encoding-related information of the base layer supplied from the encoding side to the enhancement layer image decoding section 203 for the decoding process of the enhancement layer.
  • when the process of step S 232 ends, the base layer decoding process ends, and the process returns to FIG. 30 .
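  • steps S 224 and S 225 above invert the encoder-side quantization and orthogonal transform. The following is a toy sketch using a flat quantization step and a textbook 1-D inverse DCT; actual codecs use position-dependent scaling and 2-D fixed-point integer transforms:

```python
import math

def inverse_quantize(levels, qstep):
    # S224: scale the decoded quantization levels back to transform
    # coefficients (here a single flat quantization step).
    return [lv * qstep for lv in levels]

def inverse_dct_1d(coeffs):
    # S225: textbook 1-D inverse DCT-II.
    n = len(coeffs)
    out = []
    for x in range(n):
        s = coeffs[0] / math.sqrt(n)
        for u in range(1, n):
            s += math.sqrt(2.0 / n) * coeffs[u] * math.cos(
                math.pi * (2 * x + 1) * u / (2 * n))
        out.append(s)
    return out

# A DC-only block dequantizes and inverts to a constant residual row.
coeffs = inverse_quantize([4, 0, 0, 0], qstep=2)
residual = inverse_dct_1d(coeffs)
# residual ≈ [4.0, 4.0, 4.0, 4.0]
```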
  • in step S 251 , the base layer area division information buffer 271 of the enhancement layer image decoding section 203 acquires the base layer area division information supplied from the base layer image decoding section 202 in the base layer decoding process.
  • the base layer area division information is information supplied from the encoding side.
  • in step S 252 , the up-sampling section 245 acquires the decoded base layer image (that is, texture information) supplied from the base layer image decoding section 202 in the base layer decoding process as the encoding-related information. Further, when the inter-layer syntax prediction is performed, the up-sampling section 245 acquires the syntax information supplied from the base layer image decoding section 202 in the base layer decoding process as the encoding-related information as well.
  • the encoding-related information is information supplied from the encoding side or information restored based on information supplied from the encoding side.
  • in step S 253 , the up-sampling section 245 performs the up-sampling process on the encoding-related information of the base layer (for example, the decoded base layer image) acquired in step S 252 .
  • the frame memory 239 stores the encoding-related information of the base layer (for example, the decoded base layer image) that has undergone the up-sampling process through the process of step S 253 .
  • in step S 254 , the enhancement layer area division information buffer 272 acquires the enhancement layer area division information supplied from the lossless decoding section 232 .
  • the enhancement layer area division information is information supplied from the encoding side.
  • in step S 255 , the synchronization area information decoding section 273 acquires the synchronization area information supplied from the lossless decoding section 232 .
  • the synchronization area information is information supplied from the encoding side.
  • in step S 256 , the synchronization area information decoding section 273 analyzes the synchronization area information acquired in step S 255 using the base layer area division information acquired in step S 251 and the enhancement layer area division information acquired in step S 254 , sets a position (a synchronization address) of data of the area of the base layer serving as the reference destination, and generates the synchronization address information indicating the synchronization address.
  • the synchronization area information decoding section 273 supplies the generated synchronization address information to the intra prediction section 241 or the inter prediction section 242 .
  • the intra prediction section 241 or the inter prediction section 242 to which the synchronization address information has been supplied performs the inter-layer prediction using the synchronization address information.
  • when the process of step S 256 ends, the process proceeds to step S 261 of FIG. 33 .
  • the subsequent processes are executed for each of the areas indicated by the enhancement layer area division information.
  • each process is executed using the area or a certain unit smaller than the area as a processing unit.
  • the processes of step S 261 to step S 270 of FIG. 33 correspond to and are performed similarly to the processes of step S 222 to step S 231 of FIG. 31 .
  • in step S 265 , the intra prediction section 241 or the inter prediction section 242 performs the prediction process according to the synchronization address information generated in step S 256 of FIG. 32 .
  • the intra prediction section 241 or the inter prediction section 242 performs the inter-layer prediction with reference to only the encoding-related information of the areas of the base layer designated by the synchronization address information.
  • when the process of step S 270 ends, the enhancement layer decoding process ends, and the process returns to FIG. 30 .
  • the image decoding device 200 can decrease the number of memory accesses for referring to the encoding-related information of another layer in the inter-layer prediction and suppress an increase in the decoding workload.
  • the image data is hierarchized and divided into a plurality of layers through the scalable coding, but the number of layers is arbitrary.
  • the enhancement layer is processed with reference to the base layer, but the present disclosure is not limited to this example, and the enhancement layer may be processed with reference to another enhancement layer that has been processed.
  • the frame memory 142 , the intra prediction section 144 , and the inter prediction section 145 ( FIG. 20 ) of the enhancement layer image encoding section 102 of the enhancement layer in which the encoding-related information is referred to may supply the encoding-related information of the enhancement layer to the enhancement layer image encoding section 102 of another enhancement layer in which the encoding-related information is referred to, similarly to the frame memory 122 , the intra prediction section 124 , and the inter prediction section 125 ( FIG. 19 ).
  • the frame memory 239 , the intra prediction section 241 , and the inter prediction section 242 ( FIG. 28 ) of the enhancement layer image decoding section 203 of the enhancement layer in which the encoding-related information is referred to may supply the encoding-related information of the enhancement layer to the enhancement layer image decoding section 203 of another enhancement layer in which the encoding-related information of the enhancement layer is referred to, similarly to the frame memory 219 , the intra prediction section 221 , and the inter prediction section 222 ( FIG. 27 ).
  • the present technology can be applied to a so-called image encoding device and an image decoding device based on a scalable coding/decoding scheme.
  • the present technology can be applied to an image encoding device and an image decoding device used when image information (bitstream) compressed by an orthogonal transform such as a discrete cosine transform and motion compensation as in MPEG and H.26x is received via a network medium such as satellite broadcasting, cable television, the Internet, or a mobile telephone.
  • the present technology can be applied to an image encoding device and an image decoding device used when processing is performed on a storage medium such as an optical disc, a magnetic disk, or a flash memory.
  • FIG. 34 illustrates an exemplary multi-view image coding scheme.
  • a multi-view image includes images of a plurality of views.
  • a plurality of views of the multi-view image includes a base view in which encoding and decoding are performed using only an image of its own view without using information of another view and a non-base view in which encoding and decoding are performed using information of another view.
  • Encoding and decoding of the non-base view may be performed using information of the base view or using information of another non-base view.
  • a reference relation between views in the multi-view image coding and decoding is similar to the reference relation between layers in the scalable image encoding and decoding. Therefore, the above-described method may be applied to the encoding and decoding of a multi-view image illustrated in FIG. 34 .
  • an area of the base view (or another non-base view) in which the encoding-related information is referred to may be controlled. As a result, even in the case of the multi-view image, similarly, it is possible to suppress an increase in the encoding or decoding workload.
  • FIG. 35 is a diagram illustrating a multi-view image coding device that performs the multi-view image encoding.
  • a multi-view image encoding device 600 includes an encoding section 601 , an encoding section 602 , and a multiplexing section 603 .
  • the encoding section 601 encodes a base view image and generates a base view image encoded stream.
  • the encoding section 602 encodes a non-base view image and generates a non-base view image encoded stream.
  • the multiplexing section 603 multiplexes the base view image encoded stream generated in the encoding section 601 and the non-base view image encoded stream generated in the encoding section 602 , and generates a multi-view image encoded stream.
  • the base layer image encoding section 101 ( FIG. 19 ) may be applied as the encoding section 601 of the multi-view image encoding device 600
  • the enhancement layer image encoding section 102 ( FIG. 20 ) may be applied as the encoding section 602 .
  • an area of the base view (or another non-base view) in which the encoding-related information is referred to may be controlled.
  • it is possible to suppress an increase in the decoding workload by transmitting the control information used to control the area in which encoding-related information is referred to to the decoding side.
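The encoding-side flow above can be mimicked with a toy sketch. The byte tags and the length-prefix multiplex format are invented for illustration; encoding sections 601 and 602 and multiplexing section 603 of the multi-view image encoding device 600 are only stand-ins, not real video coders.

```python
# Toy stand-ins for the multi-view image encoding device 600.

def encode_base_view(frame: bytes) -> bytes:
    """Stand-in for encoding section 601: tag the base view image stream."""
    return b"BV" + frame

def encode_non_base_view(frame: bytes) -> bytes:
    """Stand-in for encoding section 602: tag the non-base view image stream."""
    return b"NV" + frame

def multiplex(base_stream: bytes, non_base_stream: bytes) -> bytes:
    """Stand-in for multiplexing section 603: length-prefix both streams."""
    out = b""
    for stream in (base_stream, non_base_stream):
        out += len(stream).to_bytes(4, "big") + stream
    return out

mux = multiplex(encode_base_view(b"frame0"), encode_non_base_view(b"frame0"))
```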
  • FIG. 36 is a diagram illustrating a multi-view image decoding device that performs the multi-view image decoding.
  • a multi-view image decoding device 610 includes an inverse multiplexing section 611 , a decoding section 612 , and a decoding section 613 .
  • the inverse multiplexing section 611 inversely multiplexes a multi-view image encoded stream in which a base view image encoded stream and a non-base view image encoded stream are multiplexed, and extracts the base view image encoded stream and the non-base view image encoded stream.
  • the decoding section 612 decodes the base view image encoded stream extracted by the inverse multiplexing section 611 and obtains a base view image.
  • the decoding section 613 decodes the non-base view image encoded stream extracted by the inverse multiplexing section 611 and obtains a non-base view image.
  • the base layer image decoding section ( FIG. 27 ) may be applied as the decoding section 612 of the multi-view image decoding device 610
  • the enhancement layer image decoding section 203 ( FIG. 28 ) may be applied as the decoding section 613 .
  • an area of the base view (or another non-base view) in which the encoding-related information is referred to may be controlled.
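The decoding-side counterpart can be sketched the same way: inverse multiplexing section 611 splits the combined stream back into the base view and non-base view streams, which then go to decoding sections 612 and 613. The length-prefix format here is the invented one from the encoding sketch, not a real multi-view container format.

```python
# Toy stand-in for inverse multiplexing section 611 of the
# multi-view image decoding device 610.

def demultiplex(stream: bytes):
    """Split a length-prefixed stream into its component streams."""
    parts = []
    pos = 0
    while pos < len(stream):
        n = int.from_bytes(stream[pos:pos + 4], "big")
        parts.append(stream[pos + 4:pos + 4 + n])
        pos += 4 + n
    return parts  # [base view stream, non-base view stream]

sample = (len(b"BVframe0").to_bytes(4, "big") + b"BVframe0"
          + len(b"NVframe0").to_bytes(4, "big") + b"NVframe0")
base, non_base = demultiplex(sample)
```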
  • here, the term "computer" includes a computer incorporated in dedicated hardware and a general-purpose personal computer (PC) capable of executing various functions when various programs are installed therein, for example.
  • FIG. 37 is a block diagram illustrating a configuration example of hardware of a computer for executing the above-described series of processes through a program.
  • a central processing unit (CPU) 801 , a read only memory (ROM) 802 , and a random access memory (RAM) 803 are connected to one another by a bus 804 .
  • An input and output interface 810 is further connected to the bus 804 .
  • An input section 811 , an output section 812 , a storage section 813 , a communication section 814 , and a drive 815 are connected to the input and output interface 810 .
  • the input section 811 is formed with a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like.
  • the output section 812 is formed with a display, a speaker, an output terminal, and the like.
  • the storage section 813 is formed with a hard disk, a RAM disk, a nonvolatile memory, or the like.
  • the communication section 814 is formed with a network interface or the like.
  • the drive 815 drives a removable medium 821 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • the CPU 801 loads the programs stored in the storage section 813 into the RAM 803 via the input and output interface 810 and the bus 804 , and executes the programs, so that the above described series of processes are performed.
  • the RAM 803 also stores data necessary for the CPU 801 to execute the various processes.
  • the program executed by the computer may be provided by being recorded on the removable medium 821 as a packaged medium or the like.
  • the program can be installed into the storage section 813 via the input and output interface 810 .
  • the program may be provided through a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.
  • the program can also be installed in advance into the ROM 802 or the storage section 813 .
  • program executed by a computer may be a program that is processed in time sequence according to the described sequence or a program that is processed in parallel or under necessary timing such as upon calling.
  • steps of describing the program to be recorded on the recording medium may include processing performed in time sequence according to the description order and processing not processed in time sequence but performed in parallel or individually.
  • a system means a set of a plurality of constituent elements (devices, modules (parts), or the like) regardless of whether or not all constituent elements are arranged in the same housing.
  • a constituent element described as a single device (or processing unit) above may be divided and configured as a plurality of devices (or processing units).
  • constituent elements described as a plurality of devices (or processing units) above may be configured collectively as a single device (or processing unit).
  • a constituent element other than those described above may be added to each device (or processing unit).
  • a part of a constituent element of a given device (or processing unit) may be included in a constituent element of another device (or another processing unit) as long as the configuration or operation of the system as a whole is substantially the same.
  • the present disclosure can adopt a configuration of cloud computing which processes by allocating and connecting one function by a plurality of apparatuses through a network.
  • each step described by the above mentioned flow charts can be executed by one apparatus or by allocating a plurality of apparatuses.
  • the plurality of processes included in this one step can be executed by one apparatus or by allocating a plurality of apparatuses.
  • the image encoding device and the image decoding device according to the embodiment may be applied to various electronic devices such as transmitters and receivers for satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, distribution to terminals via cellular communication and the like, recording devices that record images in a medium such as optical discs, magnetic disks and flash memory, and reproduction devices that reproduce images from such storage medium.
  • FIG. 38 illustrates an example of a schematic configuration of a television device to which the embodiment is applied.
  • a television device 900 includes an antenna 901 , a tuner 902 , a demultiplexer 903 , a decoder 904 , a video signal processing section 905 , a display section 906 , an audio signal processing section 907 , a speaker 908 , an external interface (I/F) section 909 , a control section 910 , a user interface (I/F) 911 , and a bus 912 .
  • the tuner 902 extracts a signal of a desired channel from broadcast signals received via the antenna 901 , and demodulates the extracted signal.
  • the tuner 902 then outputs an encoded bitstream obtained through the demodulation to the demultiplexer 903 . That is, the tuner 902 serves as a transmission unit of the television device 900 for receiving an encoded stream in which an image is encoded.
  • the demultiplexer 903 demultiplexes the encoded bitstream to obtain a video stream and an audio stream of a program to be viewed, and outputs each stream obtained through the demultiplexing to the decoder 904 .
  • the demultiplexer 903 also extracts auxiliary data such as electronic program guides (EPGs) from the encoded bitstream, and supplies the extracted data to the control section 910 . Additionally, the demultiplexer 903 may perform descrambling when the encoded bitstream is scrambled.
  • the decoder 904 decodes the video stream and the audio stream input from the demultiplexer 903 .
  • the decoder 904 then outputs video data generated in the decoding process to the video signal processing section 905 .
  • the decoder 904 also outputs the audio data generated in the decoding process to the audio signal processing section 907 .
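The hand-off between the demultiplexer 903 and the decoder 904 can be illustrated with a toy routing sketch (not MPEG-TS): packets of the encoded bitstream are routed by a one-byte tag into a video stream and an audio stream. The tags and packet layout are invented for this sketch.

```python
# Toy illustration of the demultiplexer 903: route tagged packets to the
# video path and the audio path of the decoder 904.

def demux(packets):
    """Split (tag, payload) packets into video and audio streams."""
    video, audio = [], []
    for tag, payload in packets:
        if tag == "V":
            video.append(payload)   # video stream, to the video decoding path
        elif tag == "A":
            audio.append(payload)   # audio stream, to the audio decoding path
    return video, audio

video, audio = demux([("V", b"v0"), ("A", b"a0"), ("V", b"v1")])
```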
  • the video signal processing section 905 reproduces the video data input from the decoder 904 , and causes the display section 906 to display the video.
  • the video signal processing section 905 may also cause the display section 906 to display an application screen supplied via a network. Further, the video signal processing section 905 may perform an additional process such as noise removal, for example, on the video data in accordance with the setting.
  • the video signal processing section 905 may generate an image of a graphical user interface (GUI) such as a menu, a button and a cursor, and superimpose the generated image on an output image.
  • the display section 906 is driven by a drive signal supplied from the video signal processing section 905 , and displays video or an image on a video screen of a display device (e.g. liquid crystal display, plasma display, organic electroluminescence display (OLED), etc.).
  • the audio signal processing section 907 performs a reproduction process such as D/A conversion and amplification on the audio data input from the decoder 904 , and outputs sound from the speaker 908 .
  • the audio signal processing section 907 may also perform an additional process such as noise removal on the audio data.
  • the external interface section 909 is an interface for connecting the television device 900 to an external device or a network.
  • a video stream or an audio stream received via the external interface section 909 may be decoded by the decoder 904 . That is, the external interface section 909 also serves as a transmission unit of the television device 900 for receiving an encoded stream in which an image is encoded.
  • the control section 910 includes a processor such as a central processing unit (CPU), and a memory such as random access memory (RAM) and read only memory (ROM).
  • the memory stores a program to be executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read out and executed by the CPU at the time of activation of the television device 900 , for example.
  • the CPU controls the operation of the television device 900 , for example, in accordance with an operation signal input from the user interface section 911 by executing the program.
  • the user interface section 911 is connected to the control section 910 .
  • the user interface section 911 includes, for example, a button and a switch used for a user to operate the television device 900 , and a receiving section for a remote control signal.
  • the user interface section 911 detects an operation of a user via these constituent elements, generates an operation signal, and outputs the generated operation signal to the control section 910 .
  • the bus 912 connects the tuner 902 , the demultiplexer 903 , the decoder 904 , the video signal processing section 905 , the audio signal processing section 907 , the external interface section 909 , and the control section 910 to each other.
  • the decoder 904 has a function of the image decoding device 200 according to the embodiment in the television device 900 configured in this manner. Accordingly, it is possible to suppress an increase in the decoding workload when an image is decoded in the television device 900 .
  • FIG. 39 illustrates an example of a schematic configuration of a mobile phone to which the embodiment is applied.
  • a mobile phone 920 includes an antenna 921 , a communication section 922 , an audio codec 923 , a speaker 924 , a microphone 925 , a camera section 926 , an image processing section 927 , a demultiplexing section 928 , a recording/reproduction section 929 , a display section 930 , a control section 931 , an operation section 932 , and a bus 933 .
  • the antenna 921 is connected to the communication section 922 .
  • the speaker 924 and the microphone 925 are connected to the audio codec 923 .
  • the operation section 932 is connected to the control section 931 .
  • the bus 933 connects the communication section 922 , the audio codec 923 , the camera section 926 , the image processing section 927 , the demultiplexing section 928 , the recording/reproduction section 929 , the display section 930 , and the control section 931 to each other.
  • the mobile phone 920 performs an operation such as transmission and reception of an audio signal, transmission and reception of email or image data, image capturing, and recording of data in various operation modes including an audio call mode, a data communication mode, an image capturing mode, and a videophone mode.
  • An analogue audio signal generated by the microphone 925 is supplied to the audio codec 923 in the audio call mode.
  • the audio codec 923 performs A/D conversion on the analogue audio signal to obtain audio data, and compresses the audio data.
  • the audio codec 923 then outputs the compressed audio data to the communication section 922 .
  • the communication section 922 encodes and modulates the audio data, and generates a transmission signal.
  • the communication section 922 then transmits the generated transmission signal to a base station (not illustrated) via the antenna 921 .
  • the communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal.
  • the communication section 922 then demodulates and decodes the received signal, generates audio data, and outputs the generated audio data to the audio codec 923 .
  • the audio codec 923 decompresses the audio data, performs D/A conversion on the audio data, and generates an analogue audio signal.
  • the audio codec 923 then supplies the generated audio signal to the speaker 924 to output sound.
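The audio codec 923 path in the audio call mode can be sketched minimally: A/D conversion maps analogue samples to 8-bit PCM, compression stands in for speech coding, and the reverse path performs decompression and D/A conversion. zlib is used here only for illustration; a real codec would use a speech coding scheme, and the 8-bit mapping is an invented simplification.

```python
import zlib

def a_d_convert(samples):
    """A/D conversion: map floats in [-1.0, 1.0] to unsigned 8-bit PCM."""
    return bytes(min(255, max(0, int((s + 1.0) * 127.5))) for s in samples)

def d_a_convert(pcm):
    """D/A conversion: map unsigned 8-bit PCM back to floats in [-1.0, 1.0]."""
    return [b / 127.5 - 1.0 for b in pcm]

# Transmit side: A/D conversion then compression (stand-in for the codec).
compressed = zlib.compress(a_d_convert([0.0, 0.5, -0.5]))
# Receive side: decompression then D/A conversion.
restored = d_a_convert(zlib.decompress(compressed))
```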
  • the control section 931 also generates text data constituting email in accordance with an operation made by a user via the operation section 932 , for example. Moreover, the control section 931 causes the display section 930 to display the text. Furthermore, the control section 931 generates email data in accordance with a transmission instruction from a user via the operation section 932 , and outputs the generated email data to the communication section 922 .
  • the communication section 922 encodes and modulates the email data, and generates a transmission signal. The communication section 922 then transmits the generated transmission signal to a base station (not illustrated) via the antenna 921 .
  • the communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal.
  • the communication section 922 then demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control section 931 .
  • the control section 931 causes the display section 930 to display the content of the email, and also causes the storage medium of the recording/reproduction section 929 to store the email data.
  • the recording/reproduction section 929 includes a readable and writable storage medium.
  • the storage medium may be a built-in storage medium such as RAM and flash memory, or an externally mounted storage medium such as hard disks, magnetic disks, magneto-optical disks, optical discs, universal serial bus (USB) memory, and memory cards.
  • the camera section 926 captures an image of a subject to generate image data, and outputs the generated image data to the image processing section 927 in the image capturing mode.
  • the image processing section 927 encodes the image data input from the camera section 926 , and causes the storage medium of the recording/reproduction section 929 to store the encoded stream.
  • the demultiplexing section 928 , for example, multiplexes a video stream encoded by the image processing section 927 and an audio stream input from the audio codec 923 , and outputs the multiplexed stream to the communication section 922 in the videophone mode.
  • the communication section 922 encodes and modulates the stream, and generates a transmission signal.
  • the communication section 922 then transmits the generated transmission signal to a base station (not illustrated) via the antenna 921 .
  • the communication section 922 also amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal to acquire a received signal.
  • the transmission signal and the received signal may include an encoded bitstream.
  • the communication section 922 then demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing section 928 .
  • the demultiplexing section 928 demultiplexes the input stream to obtain a video stream and an audio stream, and outputs the video stream to the image processing section 927 and the audio stream to the audio codec 923 .
  • the image processing section 927 decodes the video stream, and generates video data.
  • the video data is supplied to the display section 930 , and a series of images is displayed by the display section 930 .
  • the audio codec 923 decompresses the audio stream, performs D/A conversion on the audio stream, and generates an analogue audio signal.
  • the audio codec 923 then supplies the generated audio signal to the speaker 924 , and causes sound to be output.
  • the image processing section 927 has the functions of the image encoding device 100 ( FIG. 18 ) and the image decoding device 200 ( FIG. 26 ) according to the above embodiment.
  • accordingly, when the mobile phone 920 encodes and decodes an image, it is possible to suppress an increase in workload.
  • FIG. 40 illustrates an example of a schematic configuration of a recording/reproduction device to which the embodiment is applied.
  • a recording/reproduction device 940 encodes audio data and video data of a received broadcast program and records the encoded audio data and the encoded video data in a recording medium.
  • the recording/reproduction device 940 may also encode audio data and video data acquired from another device and record the encoded audio data and the encoded video data in a recording medium.
  • the recording/reproduction device 940 uses a monitor or a speaker to reproduce the data recorded in the recording medium in accordance with an instruction of a user. At this time, the recording/reproduction device 940 decodes the audio data and the video data.
  • the recording/reproduction device 940 includes a tuner 941 , an external interface (I/F) section 942 , an encoder 943 , a hard disk drive (HDD) 944 , a disc drive 945 , a selector 946 , a decoder 947 , an on-screen display (OSD) 948 , a control section 949 , and a user interface (I/F) section 950 .
  • the tuner 941 extracts a signal of a desired channel from broadcast signals received via an antenna (not shown), and demodulates the extracted signal.
  • the tuner 941 then outputs an encoded bitstream obtained through the demodulation to the selector 946 . That is, the tuner 941 serves as a transmission unit of the recording/reproduction device 940 .
  • the external interface section 942 is an interface for connecting the recording/reproduction device 940 to an external device or a network.
  • the external interface section 942 may be an IEEE 1394 interface, a network interface, a USB interface, a flash memory interface, or the like.
  • video data and audio data received via the external interface section 942 are input to the encoder 943 . That is, the external interface section 942 serves as a transmission unit of the recording/reproduction device 940 .
  • the encoder 943 encodes the video data and the audio data.
  • the encoder 943 then outputs an encoded bitstream to the selector 946 .
  • the HDD 944 records, in an internal hard disk, the encoded bitstream in which content data of video and sound is compressed, various programs, and other data.
  • the HDD 944 also reads out the data from the hard disk at the time of reproducing video or sound.
  • the disc drive 945 records and reads out data in a recording medium that is mounted.
  • the recording medium that is mounted on the disc drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.), a Blu-ray (registered trademark) disc, or the like.
  • the selector 946 selects, at the time of recording video or sound, an encoded bitstream input from the tuner 941 or the encoder 943 , and outputs the selected encoded bitstream to the HDD 944 or the disc drive 945 .
  • the selector 946 also outputs, at the time of reproducing video or sound, an encoded bitstream input from the HDD 944 or the disc drive 945 to the decoder 947 .
  • the decoder 947 decodes the encoded bitstream, and generates video data and audio data. The decoder 947 then outputs the generated video data to the OSD 948 . The decoder 947 also outputs the generated audio data to an external speaker.
  • the OSD 948 reproduces the video data input from the decoder 947 , and displays video.
  • the OSD 948 may also superimpose an image of a GUI such as a menu, a button, and a cursor on a displayed video.
  • the control section 949 includes a processor such as a CPU, and a memory such as RAM and ROM.
  • the memory stores a program to be executed by the CPU, program data, and the like. For example, a program stored in the memory is read out and executed by the CPU at the time of activation of the recording/reproduction device 940 .
  • the CPU controls the operation of the recording/reproduction device 940 , for example, in accordance with an operation signal input from the user interface section 950 by executing the program.
  • the user interface section 950 is connected to the control section 949 .
  • the user interface section 950 includes, for example, a button and a switch used for a user to operate the recording/reproduction device 940 , and a receiving section for a remote control signal.
  • the user interface section 950 detects an operation made by a user via these constituent elements, generates an operation signal, and outputs the generated operation signal to the control section 949 .
  • the encoder 943 has the function of the image encoding device 100 ( FIG. 18 ) according to the above embodiment.
  • the decoder 947 has the function of the image decoding device 200 ( FIG. 26 ) according to the above embodiment.
  • FIG. 41 illustrates an example of a schematic configuration of an image capturing device to which the embodiment is applied.
  • An image capturing device 960 captures an image of a subject to generate an image, encodes the image data, and records the image data in a recording medium.
  • the image capturing device 960 includes an optical block 961 , an image capturing section 962 , a signal processing section 963 , an image processing section 964 , a display section 965 , an external interface (I/F) section 966 , a memory 967 , a media drive 968 , an OSD 969 , a control section 970 , a user interface (I/F) section 971 , and a bus 972 .
  • the optical block 961 is connected to the image capturing section 962 .
  • the image capturing section 962 is connected to the signal processing section 963 .
  • the display section 965 is connected to the image processing section 964 .
  • the user interface section 971 is connected to the control section 970 .
  • the bus 972 connects the image processing section 964 , the external interface section 966 , the memory 967 , the media drive 968 , the OSD 969 , and the control section 970 to each other.
  • the optical block 961 includes a focus lens, an aperture stop mechanism, and the like.
  • the optical block 961 forms an optical image of a subject on an image capturing surface of the image capturing section 962 .
  • the image capturing section 962 includes an image sensor such as a charge coupled device (CCD) and a complementary metal oxide semiconductor (CMOS), and converts the optical image formed on the image capturing surface into an image signal which is an electrical signal through photoelectric conversion.
  • the image capturing section 962 then outputs the image signal to the signal processing section 963 .
  • the signal processing section 963 performs various camera signal processes such as knee correction, gamma correction, and color correction on the image signal input from the image capturing section 962 .
  • the signal processing section 963 outputs the image data subjected to the camera signal process to the image processing section 964 .
  • the image processing section 964 encodes the image data input from the signal processing section 963 , and generates encoded data.
  • the image processing section 964 then outputs the generated encoded data to the external interface section 966 or the media drive 968 .
  • the image processing section 964 also decodes encoded data input from the external interface section 966 or the media drive 968 , and generates image data.
  • the image processing section 964 then outputs the generated image data to the display section 965 .
  • the image processing section 964 may also output the image data input from the signal processing section 963 to the display section 965 , and cause the image to be displayed. Furthermore, the image processing section 964 may superimpose data for display acquired from the OSD 969 on an image to be output to the display section 965 .
  • the OSD 969 generates an image of a GUI such as a menu, a button, and a cursor, and outputs the generated image to the image processing section 964 .
  • the external interface section 966 is configured, for example, as a USB input and output terminal.
  • the external interface section 966 connects the image capturing device 960 and a printer, for example, at the time of printing an image.
  • a drive is further connected to the external interface section 966 as needed.
  • a removable medium such as magnetic disks and optical discs is mounted on the drive, and a program read out from the removable medium may be installed in the image capturing device 960 .
  • the external interface section 966 may be configured as a network interface to be connected to a network such as a LAN and the Internet. That is, the external interface section 966 serves as a transmission unit of the image capturing device 960 .
  • a recording medium to be mounted on the media drive 968 may be a readable and writable removable medium such as magnetic disks, magneto-optical disks, optical discs, and semiconductor memory.
  • the recording medium may also be fixedly mounted on the media drive 968 to configure a non-transportable storage section such as a built-in hard disk drive or a solid state drive (SSD).
  • the control section 970 includes a processor such as a CPU, and a memory such as a RAM and a ROM.
  • the memory stores a program to be executed by the CPU, program data, and the like.
  • a program stored in the memory is read out and executed by the CPU, for example, at the time of activation of the image capturing device 960 .
  • the CPU controls the operation of the image capturing device 960 , for example, in accordance with an operation signal input from the user interface section 971 by executing the program.
  • the user interface section 971 is connected to the control section 970 .
  • the user interface section 971 includes, for example, a button, a switch, and the like used for a user to operate the image capturing device 960 .
  • the user interface section 971 detects an operation made by a user via these constituent elements, generates an operation signal, and outputs the generated operation signal to the control section 970 .
  • the image processing section 964 has the functions of the image encoding device 100 ( FIG. 18 ) and the image decoding device 200 ( FIG. 26 ) according to the above embodiment.
  • accordingly, when the image capturing device 960 encodes and decodes an image, it is possible to suppress an increase in workload.
  • the scalable coding is used, for example, for selection of data to be transmitted, as in the example illustrated in FIG. 42 .
  • a distribution server 1002 reads scalable encoded data stored in a scalable encoded data storage section 1001 , and distributes the scalable encoded data to a terminal device such as a personal computer 1004 , an AV device 1005 , a tablet device 1006 , or a mobile phone 1007 via a network 1003 .
  • the distribution server 1002 selects and transmits encoded data having proper quality according to capability of the terminal device, communication environment, or the like. Even when the distribution server 1002 transmits unnecessarily high-quality data, a high-quality image is not necessarily obtainable in the terminal device and it may be a cause of occurrence of delay or overflow. In addition, a communication band may be unnecessarily occupied or workload of the terminal device may unnecessarily increase. In contrast, even when the distribution server 1002 transmits unnecessarily low quality data, an image with a sufficient quality may not be obtained. Thus, the distribution server 1002 appropriately reads and transmits the scalable encoded data stored in the scalable encoded data storage section 1001 as the encoded data having a proper quality according to the capability of the terminal device, the communication environment, or the like.
  • the scalable encoded data storage section 1001 is configured to store scalable encoded data (BL+EL) 1011 in which the scalable coding is performed.
  • the scalable encoded data (BL+EL) 1011 is encoded data including both a base layer and an enhancement layer, and is data from which a base layer image and an enhancement layer image can be obtained by performing decoding.
  • the distribution server 1002 selects an appropriate layer according to the capability of the terminal device for transmitting data, the communication environment, or the like, and reads the data of the selected layer. For example, with respect to the personal computer 1004 or the tablet device 1006 having high processing capability, the distribution server 1002 reads the scalable encoded data (BL+EL) 1011 from the scalable encoded data storage section 1001 , and transmits the scalable encoded data (BL+EL) 1011 without change.
  • on the other hand, with respect to a terminal device having lower processing capability, such as the AV device 1005 or the mobile phone 1007 , the distribution server 1002 extracts the data of the base layer from the scalable encoded data (BL+EL) 1011 , and transmits the extracted data as scalable encoded data (BL) 1012 of the base layer, which is data having the same content as the scalable encoded data (BL+EL) 1011 but lower quality.
  • because the amount of data can easily be adjusted in this way by employing the scalable encoded data, the occurrence of delay or overflow can be suppressed, and an unnecessary increase in the workload of the terminal device or the communication medium can be suppressed.
  • because redundancy between the layers is reduced in the scalable encoded data (BL+EL) 1011 , the amount of data can be made smaller than when the encoded data of each layer is treated as individual data. Therefore, the storage region of the scalable encoded data storage section 1001 can be used more efficiently.
  • the hardware performance of the terminal devices differs according to the device.
  • the software performance thereof also varies.
  • because any communication network, whether wired, wireless, or both, such as the Internet or a local area network (LAN), is applicable as the network 1003 serving as a communication medium, the data transmission performance varies. Further, the data transmission performance may also vary with other concurrent communications or the like.
  • the distribution server 1002 may perform communication with the terminal device that is the data transmission destination before starting the data transmission, and obtain information related to the capability of the terminal device, such as the hardware performance of the terminal device or the performance of the application (software) executed by the terminal device, as well as information related to the communication environment, such as the available bandwidth of the network 1003 . Then, the distribution server 1002 may select an appropriate layer based on the obtained information.
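As an illustrative sketch of the layer selection by the distribution server 1002 (the specification does not prescribe a concrete algorithm; the function name, inputs, and the 2000 kbps threshold below are assumptions):

```python
def select_layer(processing_capability: str, available_bandwidth_kbps: int) -> str:
    """Choose which layers of the scalable encoded data to transmit.

    processing_capability: "high" (e.g. the personal computer 1004 or the
        tablet device 1006) or "low" (other terminal devices).
    available_bandwidth_kbps: available bandwidth of the network 1003,
        obtained from the terminal before transmission starts.
    The 2000 kbps threshold is an illustrative assumption.
    """
    if processing_capability == "high" and available_bandwidth_kbps >= 2000:
        # read the scalable encoded data (BL+EL) 1011 and send it unchanged
        return "BL+EL"
    # extract the base layer and send it as scalable encoded data (BL) 1012
    return "BL"
```

Under this sketch, a capable terminal on a fast link receives both layers, while every other combination of capability and bandwidth receives only the base layer.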
  • the extraction of the layer may be performed in the terminal device.
  • the personal computer 1004 may decode the transmitted scalable encoded data (BL+EL) 1011 and display the image of the base layer or display the image of the enhancement layer.
  • for example, the personal computer 1004 may extract the scalable encoded data (BL) 1012 of the base layer from the transmitted scalable encoded data (BL+EL) 1011 , store the extracted scalable encoded data (BL) 1012 of the base layer, transmit it to another device, or decode it and display the image of the base layer.
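The base-layer extraction described above can be sketched by modeling the stream as (layer_id, payload) pairs; in a real bitstream the layer id would be read from each NAL unit header, so this model is an assumption made for illustration:

```python
def extract_base_layer(scalable_stream):
    """Return only the base-layer units of a BL+EL stream.

    scalable_stream: iterable of (layer_id, payload) pairs, where layer 0
    is the base layer and layers >= 1 are enhancement layers.
    """
    return [(layer_id, payload)
            for layer_id, payload in scalable_stream
            if layer_id == 0]
```

The same filter works whether it runs in the distribution server 1002 or, as in this example, in the terminal device.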
  • the numbers of scalable encoded data storage sections 1001 , distribution servers 1002 , networks 1003 , and terminal devices are arbitrary.
  • although the example of the distribution server 1002 transmitting data to the terminal device has been described above, the example of use is not limited thereto.
  • the data transmission system 1000 is applicable to any system which selects and transmits an appropriate layer according to the capability of the terminal device, the communication environment, or the like when the scalable encoded data is transmitted to the terminal device.
  • the scalable coding, for example, is used for transmission via a plurality of communication media, as in the example illustrated in FIG. 43 .
  • a broadcasting station 1101 transmits scalable encoded data (BL) 1121 of the base layer by terrestrial broadcasting 1111 .
  • the broadcasting station 1101 transmits scalable encoded data (EL) 1122 of the enhancement layer via an arbitrary network 1112 that is a wired and/or wireless communication network (for example, the data is packetized and transmitted).
  • a terminal device 1102 has a function of receiving the terrestrial broadcasting 1111 that is broadcast by the broadcasting station 1101 and receives the scalable encoded data (BL) 1121 of the base layer transmitted via the terrestrial broadcasting 1111 .
  • the terminal device 1102 further has a communication function by which the communication is performed via the network 1112 , and receives the scalable encoded data (EL) 1122 of the enhancement layer transmitted via the network 1112 .
  • the terminal device 1102 decodes the scalable encoded data (BL) 1121 of the base layer acquired via the terrestrial broadcasting 1111 , thereby obtaining or storing the image of the base layer or transmitting the image of the base layer to other devices.
  • the terminal device 1102 combines the scalable encoded data (BL) 1121 of the base layer acquired via the terrestrial broadcasting 1111 and the scalable encoded data (EL) 1122 of the enhancement layer acquired via the network 1112 to obtain the scalable encoded data (BL+EL), decodes the scalable encoded data (BL+EL) to obtain or store the image of the enhancement layer, or transmits the image of the enhancement layer to other devices.
  • in this way, the scalable encoded data can be transmitted via a different communication medium for each layer. Therefore, it is possible to distribute the workload and suppress the occurrence of delay or overflow.
  • the communication medium used for transmission may be selected for each layer according to the situation.
  • for example, the scalable encoded data (BL) 1121 of the base layer, in which the amount of data is comparatively large, may be transmitted via a communication medium having a wide bandwidth, and the scalable encoded data (EL) 1122 of the enhancement layer, in which the amount of data is comparatively small, may be transmitted via a communication medium having a narrow bandwidth.
  • whether the communication medium that transmits the scalable encoded data (EL) 1122 of the enhancement layer is the network 1112 or the terrestrial broadcasting 1111 may be switched according to the available bandwidth of the network 1112 .
  • what has been described above can be similarly applied to data of an arbitrary layer.
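The bandwidth-dependent switching of the medium for the enhancement layer could be sketched as follows (the function name and the threshold logic are assumptions; the text only states that the media may be switched according to the available bandwidth of the network 1112):

```python
def medium_for_enhancement_layer(available_kbps: int, el_bitrate_kbps: int) -> str:
    """Pick the communication medium for the enhancement-layer data (EL) 1122.

    The network 1112 is used when its available bandwidth can carry the
    enhancement layer; otherwise transmission falls back to the terrestrial
    broadcasting 1111.
    """
    if available_kbps >= el_bitrate_kbps:
        return "network 1112"
    return "terrestrial broadcasting 1111"
```

The same decision could of course be applied per layer when more than two layers are in use.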
  • the number of layers is arbitrary, and the number of communication media used in the transmission is also arbitrary.
  • the number of terminal devices 1102 which are the destination of the data distribution is also arbitrary.
  • the data transmission system 1100 can be applied to any system which divides the scalable encoded data using a layer as a unit and transmits the scalable encoded data via a plurality of links.
  • the scalable coding, for example, is used for storage of encoded data, as in the example illustrated in FIG. 44 .
  • an image capturing device 1201 performs scalable coding on image data obtained by capturing an image of a subject 1211 , and supplies a scalable coding result as the scalable encoded data (BL+EL) 1221 to a scalable encoded data storage device 1202 .
  • the scalable encoded data storage device 1202 stores the scalable encoded data (BL+EL) 1221 supplied from the image capturing device 1201 with quality according to the situation. For example, in the case of normal circumstances, the scalable encoded data storage device 1202 extracts data of the base layer from the scalable encoded data (BL+EL) 1221 , and stores the extracted data as scalable encoded data (BL) 1222 of the base layer having a small amount of data at low quality. On the other hand, for example, in the case of notable circumstances, the scalable encoded data storage device 1202 stores the scalable encoded data (BL+EL) 1221 having a large amount of data at high quality without change.
  • because the scalable encoded data storage device 1202 can save the image at high quality only when necessary, it is possible to suppress a decrease in the value of the image due to deterioration of image quality while suppressing an increase in the amount of data, and thus to improve the use efficiency of the storage region.
  • for example, the image capturing device 1201 is assumed to be a monitoring camera. Because the content of the captured image is unlikely to be important when a monitoring subject (for example, an intruder) is not shown in the captured image (in the case of the normal circumstances), priority is given to the reduction of the amount of data, and the image data (scalable encoded data) is stored at low quality. On the other hand, because the content of the captured image is likely to be important when the monitoring subject is shown as the subject 1211 in the captured image (in the case of the notable circumstances), priority is given to image quality, and the image data (scalable encoded data) is stored at high quality.
  • whether the case is the case of the normal circumstances or the notable circumstances may be determined by the scalable encoded data storage device 1202 by analyzing the image.
  • the image capturing device 1201 may be configured to make a determination and transmit the determination result to the scalable encoded data storage device 1202 .
  • a determination criterion of whether the case is the case of the normal circumstances or the notable circumstances is arbitrary and the content of the image which is the determination criterion is arbitrary.
  • a condition other than the content of the image can be designated as the determination criterion.
  • for example, switching may be performed according to the magnitude or waveform of recorded sound, at predetermined time intervals, or by an external instruction such as a user's instruction.
  • the number of states is arbitrary, and for example, switching may be configured to be performed among three or more states such as normal circumstances, slightly notable circumstances, notable circumstances, and highly notable circumstances.
  • the upper limit number of states to be switched depends upon the number of layers of the scalable encoded data.
  • the image capturing device 1201 may determine the number of layers of the scalable coding according to the state. For example, in the case of the normal circumstances, the image capturing device 1201 may generate the scalable encoded data (BL) 1222 of the base layer having a small amount of data at low quality and supply the data to the scalable encoded data storage device 1202 . In addition, for example, in the case of the notable circumstances, the image capturing device 1201 may generate the scalable encoded data (BL+EL) 1221 of the base layer and the enhancement layer having a large amount of data at high quality and supply the data to the scalable encoded data storage device 1202 .
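A sketch of the state-dependent layer choice follows; the mapping is hypothetical, and with only two layers several states necessarily collapse onto the same layer set, which matches the note that the upper limit of the number of states depends on the number of layers:

```python
def layers_for_state(state: str):
    """Map a monitoring state to the layers the image capturing device 1201
    generates. With a two-layer codec only "BL" and "BL+EL" results are
    possible, so four states collapse into two layer sets.
    """
    mapping = {
        "normal": ["BL"],                  # small amount of data, low quality
        "slightly notable": ["BL"],
        "notable": ["BL", "EL"],           # large amount of data, high quality
        "highly notable": ["BL", "EL"],
    }
    return mapping[state]
```

With a codec offering more enhancement layers, each state could instead map to a progressively larger layer set.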
  • the usage of the image capturing system 1200 is arbitrary and is not limited to the monitoring camera.
  • the present technology is not limited to the above examples and may be implemented as any constituent element mounted in the device or a device configuring the system, for example, a processor serving as a system LSI (Large Scale Integration) or the like, a module using a plurality of processors or the like, a unit using a plurality of modules or the like, a set (that is, some constituent elements of the device) in which any other function is further added to a unit, or the like.
  • FIG. 45 illustrates an example of a schematic configuration of a video set to which the present technology is applied.
  • a video set 1300 illustrated in FIG. 45 is a multi-functionalized configuration in which a device having a function related to image encoding and/or image decoding is combined with a device having any other function related to the function.
  • the video set 1300 includes a module group such as a video module 1311 , an external memory 1312 , a power management module 1313 , and a front end module 1314 and a device having relevant functions such as connectivity 1321 , a camera 1322 , and a sensor 1323 as illustrated in FIG. 45 .
  • a module is a part having a set of functions into which several relevant part functions are mutually integrated.
  • a concrete physical configuration is arbitrary; for example, it may be configured such that a plurality of processors having respective functions, electronic circuit elements such as resistors and capacitors, and other devices are arranged and integrated on a wiring substrate. Further, a new module may be obtained by combining a module with another module, a processor, or the like.
  • the video module 1311 is a combination of configurations having functions related to image processing, and includes an application processor 1331 , a video processor 1332 , a broadband modem 1333 , and a radio frequency (RF) module 1334 .
  • a processor is one in which a configuration having a certain function is integrated into a semiconductor chip through System On a Chip (SoC); some are referred to as, for example, a system LSI or the like.
  • the configuration having the certain function may be a logic circuit (hardware configuration), may be a CPU, a ROM, a RAM, and a program (software configuration) executed using the CPU, the ROM, and the RAM, and may be a combination of a hardware configuration and a software configuration.
  • a processor may include a logic circuit, a CPU, a ROM, a RAM, and the like, some functions may be implemented through the logic circuit (hardware configuration), and other functions may be implemented through a program (software configuration) executed by the CPU.
  • the application processor 1331 of FIG. 45 is a processor that executes an application related to image processing.
  • An application executed by the application processor 1331 can not only perform a calculation process but can also control constituent elements inside and outside the video module 1311 such as the video processor 1332 as necessary in order to implement a certain function.
  • the video processor 1332 is a processor having a function related to image encoding and/or image decoding.
  • the broadband modem 1333 is a processor (or a module) that performs processing related to wired and/or wireless broadband communication that is performed via a broadband line such as the Internet or a public telephone line network.
  • the broadband modem 1333 performs digital modulation on data (a digital signal) to be transmitted and converts the data into an analog signal, or performs demodulation on a received analog signal and converts the analog signal into data (a digital signal).
  • the broadband modem 1333 can perform digital modulation and demodulation on arbitrary information such as image data processed by the video processor 1332 , a stream including encoded image data, an application program, or setting data.
  • the RF module 1334 is a module that performs a frequency transform process, a modulation/demodulation process, an amplification process, a filtering process, and the like on an RF signal transmitted and received through an antenna.
  • the RF module 1334 performs, for example, a frequency transform on a baseband signal generated by the broadband modem 1333 , and generates an RF signal.
  • the RF module 1334 performs, for example, a frequency transform on an RF signal received through the front end module 1314 , and generates a baseband signal.
  • the application processor 1331 and the video processor 1332 may be integrated into a single processor.
  • the external memory 1312 is a module that is installed outside the video module 1311 and has a storage device used by the video module 1311 .
  • the storage device of the external memory 1312 can be implemented by any physical configuration, but is commonly used to store large-capacity data such as image data in frame units; thus it is desirable to implement the storage device of the external memory 1312 using a relatively inexpensive large-capacity semiconductor memory such as a dynamic random access memory (DRAM).
  • the power management module 1313 manages and controls power supply to the video module 1311 (the respective constituent elements in the video module 1311 ).
  • the front end module 1314 is a module that provides a front end function (a circuit of a transmitting and receiving end at an antenna side) to the RF module 1334 .
  • the front end module 1314 includes, for example, an antenna section 1351 , a filter 1352 , and an amplification section 1353 as illustrated in FIG. 45 .
  • the antenna section 1351 includes an antenna that transmits and receives a radio signal and a peripheral configuration.
  • the antenna section 1351 transmits a signal provided from the amplification section 1353 as a radio signal, and provides a received radio signal to the filter 1352 as an electrical signal (RF signal).
  • the filter 1352 performs, for example, a filtering process on an RF signal received through the antenna section 1351 , and provides a processed RF signal to the RF module 1334 .
  • the amplification section 1353 amplifies the RF signal provided from the RF module 1334 , and provides the amplified RF signal to the antenna section 1351 .
  • the connectivity 1321 is a module having a function related to connection with the outside.
  • a physical configuration of the connectivity 1321 is arbitrary.
  • the connectivity 1321 includes a configuration having a communication function other than that of a communication standard supported by the broadband modem 1333 , an external I/O terminal, or the like.
  • the connectivity 1321 may include a module having a communication function based on a wireless communication standard such as Bluetooth (a registered trademark), IEEE 802.11 (for example, Wireless Fidelity (Wi-Fi) (a registered trademark)), Near Field Communication (NFC), InfraRed Data Association (IrDA), an antenna that transmits and receives a signal satisfying the standard, or the like.
  • the connectivity 1321 may include a module having a communication function based on a wired communication standard such as Universal Serial Bus (USB), or High-Definition Multimedia Interface (HDMI) (a registered trademark) or a terminal that satisfies the standard.
  • the connectivity 1321 may include any other data (signal) transmission function or the like such as an analog I/O terminal.
  • the connectivity 1321 may include a device of a transmission destination of data (signal).
  • the connectivity 1321 may include a drive (including a hard disk, a solid state drive (SSD), a Network Attached Storage (NAS), or the like as well as a drive of a removable medium) that reads/writes data from/in a recording medium such as a magnetic disk, an optical disc, a magneto optical disc, or a semiconductor memory.
  • the connectivity 1321 may include an output device (a monitor, a speaker, or the like) that outputs images or sound.
  • the camera 1322 is a module having a function of photographing a subject and obtaining image data of the subject. For example, image data obtained by image capture from the camera 1322 is provided to and encoded by the video processor 1332 .
  • the sensor 1323 is a module having an arbitrary sensor function such as a sound sensor, an ultrasonic sensor, an optical sensor, an illuminance sensor, an infrared sensor, an image sensor, a rotation sensor, an angle sensor, an angular velocity sensor, a velocity sensor, an acceleration sensor, an inclination sensor, a magnetic identification sensor, a shock sensor, or a temperature sensor.
  • data detected by the sensor 1323 is provided to the application processor 1331 and used by an application or the like.
  • a configuration described above as a module may be implemented as a processor, and a configuration described as a processor may be implemented as a module.
  • the present technology can be applied to the video processor 1332 as will be described later.
  • the video set 1300 can be implemented as a set to which the present technology is applied.
  • FIG. 46 illustrates an example of a schematic configuration of the video processor 1332 ( FIG. 45 ) to which the present technology is applied.
  • the video processor 1332 has a function of receiving an input of a video signal and an audio signal and encoding the video signal and the audio signal according to a certain scheme and a function of decoding encoded video data and audio data, and reproducing and outputting a video signal and an audio signal.
  • the video processor 1332 includes a video input processing section 1401 , a first image enlarging/reducing section 1402 , a second image enlarging/reducing section 1403 , a video output processing section 1404 , a frame memory 1405 , and a memory control section 1406 as illustrated in FIG. 46 .
  • the video processor 1332 further includes an encoding/decoding engine 1407 , video elementary stream (ES) buffers 1408 A and 1408 B, and audio ES buffers 1409 A and 1409 B.
  • the video processor 1332 further includes an audio encoder 1410 , an audio decoder 1411 , a multiplexer (MUX) 1412 , a demultiplexer (DMUX) 1413 , and a stream buffer 1414 .
  • the video input processing section 1401 acquires a video signal input from the connectivity 1321 ( FIG. 45 ) or the like, and converts the video signal into digital image data.
  • the first image enlarging/reducing section 1402 performs, for example, a format conversion process and an image enlargement/reduction process on the image data.
  • the second image enlarging/reducing section 1403 performs an image enlargement/reduction process on the image data according to a format of a destination to which the image data is output through the video output processing section 1404 or performs the format conversion process and the image enlargement/reduction process which are similar to those of the first image enlarging/reducing section 1402 on the image data.
  • the video output processing section 1404 performs format conversion and conversion into an analog signal on the image data, and outputs a reproduced video signal, for example, to the connectivity 1321 ( FIG. 45 ) or the like.
  • the frame memory 1405 is an image data memory that is shared by the video input processing section 1401 , the first image enlarging/reducing section 1402 , the second image enlarging/reducing section 1403 , the video output processing section 1404 , and the encoding/decoding engine 1407 .
  • the frame memory 1405 is implemented as, for example, a semiconductor memory such as a DRAM.
  • the memory control section 1406 receives a synchronous signal from the encoding/decoding engine 1407 , and controls writing/reading access to the frame memory 1405 according to an access schedule for the frame memory 1405 written in an access management table 1406 A.
  • the access management table 1406 A is updated through the memory control section 1406 according to processing executed by the encoding/decoding engine 1407 , the first image enlarging/reducing section 1402 , the second image enlarging/reducing section 1403 , or the like.
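A minimal model of this arbitration is shown below; the class and method names are assumptions, and the real access management table 1406 A holds a timing schedule rather than the simple permission set used here:

```python
class MemoryControlSection:
    """Simplified model of the memory control section 1406: access to the
    frame memory 1405 is granted only as recorded in the access management
    table 1406A, which is updated according to the processing executed by
    each section."""

    def __init__(self):
        self.access_table = {}  # section name -> set of allowed operations

    def update_table(self, section: str, operations):
        """Update the table according to processing executed by a section."""
        self.access_table[section] = set(operations)

    def grant(self, section: str, operation: str) -> bool:
        """Return True if the section may perform the operation now."""
        return operation in self.access_table.get(section, set())
```

In the real design the synchronous signal from the encoding/decoding engine 1407 would drive when the table is consulted and updated.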
  • the encoding/decoding engine 1407 performs an encoding process of encoding image data and a decoding process of decoding a video stream that is data obtained by encoding image data. For example, the encoding/decoding engine 1407 encodes image data read from the frame memory 1405 , and sequentially writes the encoded image data in the video ES buffer 1408 A as a video stream. Further, for example, the encoding/decoding engine 1407 sequentially reads the video stream from the video ES buffer 1408 B, sequentially decodes the video stream, and sequentially writes the decoded image data in the frame memory 1405 . In the encoding and the decoding, the encoding/decoding engine 1407 uses the frame memory 1405 as a working area. Further, the encoding/decoding engine 1407 outputs the synchronous signal to the memory control section 1406 , for example, at the timing at which processing of each macroblock starts.
  • the video ES buffer 1408 A buffers the video stream generated by the encoding/decoding engine 1407 , and then provides the video stream to the multiplexer (MUX) 1412 .
  • the video ES buffer 1408 B buffers the video stream provided from the demultiplexer (DMUX) 1413 , and then provides the video stream to the encoding/decoding engine 1407 .
  • the audio ES buffer 1409 A buffers an audio stream generated by the audio encoder 1410 , and then provides the audio stream to the multiplexer (MUX) 1412 .
  • the audio ES buffer 1409 B buffers an audio stream provided from the demultiplexer (DMUX) 1413 , and then provides the audio stream to the audio decoder 1411 .
  • the audio encoder 1410 converts an audio signal input from, for example, the connectivity 1321 ( FIG. 45 ) or the like into a digital signal, and encodes the digital signal according to a certain scheme such as an MPEG audio scheme or an AudioCode number 3 (AC3) scheme.
  • the audio encoder 1410 sequentially writes the audio stream that is data obtained by encoding the audio signal in the audio ES buffer 1409 A.
  • the audio decoder 1411 decodes the audio stream provided from the audio ES buffer 1409 B, performs, for example, conversion into an analog signal, and provides a reproduced audio signal to, for example, the connectivity 1321 ( FIG. 45 ) or the like.
  • the multiplexer (MUX) 1412 performs multiplexing of the video stream and the audio stream.
  • a multiplexing method (that is, a format of a bitstream generated by multiplexing) is arbitrary. Further, in the event of multiplexing, the multiplexer (MUX) 1412 may add certain header information or the like to the bitstream. In other words, the multiplexer (MUX) 1412 may convert a stream format by multiplexing. For example, the multiplexer (MUX) 1412 multiplexes the video stream and the audio stream to be converted into a transport stream that is a bitstream of a transfer format. Further, for example, the multiplexer (MUX) 1412 multiplexes the video stream and the audio stream to be converted into data (file data) of a recording file format.
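A toy version of this format conversion and its inverse follows; the packet tags, one-byte payloads, and the "TS" header are invented purely for illustration, as real transport streams and recording file formats are far richer:

```python
from itertools import zip_longest

def multiplex(video_packets, audio_packets, header=b"TS"):
    """Interleave video and audio elementary-stream packets into one
    bitstream with added header information, in the spirit of the
    multiplexer (MUX) 1412 converting two streams into a transfer format."""
    out = [header]
    for vp, ap in zip_longest(video_packets, audio_packets):
        if vp is not None:
            out.append(b"V" + vp)
        if ap is not None:
            out.append(b"A" + ap)
    return b"".join(out)

def demultiplex(bitstream, header=b"TS"):
    """Inverse conversion, in the spirit of the demultiplexer (DMUX) 1413.
    Assumes one-byte payloads purely to keep the toy parser short."""
    body = bitstream[len(header):]
    video, audio = [], []
    for i in range(0, len(body), 2):
        tag, payload = body[i:i + 1], body[i + 1:i + 2]
        (video if tag == b"V" else audio).append(payload)
    return video, audio
```

The pairing of the two functions mirrors the requirement that demultiplexing use a method corresponding to the multiplexing that produced the bitstream.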
  • the demultiplexer (DMUX) 1413 demultiplexes the bitstream obtained by multiplexing the video stream and the audio stream by a method corresponding to the multiplexing performed by the multiplexer (MUX) 1412 .
  • the demultiplexer (DMUX) 1413 extracts the video stream and the audio stream (separates the video stream and the audio stream) from the bitstream read from the stream buffer 1414 .
  • the demultiplexer (DMUX) 1413 can perform conversion (inverse conversion of conversion performed by the multiplexer (MUX) 1412 ) of a format of a stream through the demultiplexing.
  • the demultiplexer (DMUX) 1413 can acquire the transport stream provided from, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ) through the stream buffer 1414 and convert the transport stream into a video stream and an audio stream through the demultiplexing. Further, for example, the demultiplexer (DMUX) 1413 can acquire file data read from various kinds of recording media by, for example, the connectivity 1321 ( FIG. 45 ) through the stream buffer 1414 and convert the file data into a video stream and an audio stream by the demultiplexing.
  • the stream buffer 1414 buffers the bitstream.
  • the stream buffer 1414 buffers the transport stream provided from the multiplexer (MUX) 1412 , and provides the transport stream to, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ) under certain timing or based on an external request or the like.
  • the stream buffer 1414 buffers file data provided from the multiplexer (MUX) 1412 , provides the file data to, for example, the connectivity 1321 ( FIG. 45 ) or the like under certain timing or based on an external request or the like, and causes the file data to be recorded in various kinds of recording media.
  • the stream buffer 1414 buffers the transport stream acquired through, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ), and provides the transport stream to the demultiplexer (DMUX) 1413 under certain timing or based on an external request or the like.
  • the stream buffer 1414 buffers file data read from various kinds of recording media in, for example, the connectivity 1321 ( FIG. 45 ) or the like, and provides the file data to the demultiplexer (DMUX) 1413 under certain timing or based on an external request or the like.
  • the video signal input to the video processor 1332 , for example, from the connectivity 1321 ( FIG. 45 ) or the like is converted into digital image data according to a certain scheme such as a 4:2:2Y/Cb/Cr scheme in the video input processing section 1401 and sequentially written in the frame memory 1405 .
  • the digital image data is read out to the first image enlarging/reducing section 1402 or the second image enlarging/reducing section 1403 , subjected to a format conversion process of performing a format conversion into a certain scheme such as a 4:2:0Y/Cb/Cr scheme and an enlargement/reduction process, and written in the frame memory 1405 again.
  • the image data is encoded by the encoding/decoding engine 1407 , and written in the video ES buffer 1408 A as a video stream.
  • an audio signal input to the video processor 1332 from the connectivity 1321 ( FIG. 45 ) or the like is encoded by the audio encoder 1410 , and written in the audio ES buffer 1409 A as an audio stream.
  • the video stream of the video ES buffer 1408 A and the audio stream of the audio ES buffer 1409 A are read out to and multiplexed by the multiplexer (MUX) 1412 , and converted into a transport stream, file data, or the like.
  • the transport stream generated by the multiplexer (MUX) 1412 is buffered in the stream buffer 1414 , and then output to an external network through, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ).
  • the file data generated by the multiplexer (MUX) 1412 is buffered in the stream buffer 1414 , then output to, for example, the connectivity 1321 ( FIG. 45 ) or the like, and recorded in various kinds of recording media.
  • the transport stream input to the video processor 1332 from an external network through, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ) is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexer (DMUX) 1413 .
  • the file data that is read from various kinds of recording media in, for example, the connectivity 1321 ( FIG. 45 ) or the like and then input to the video processor 1332 is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexer (DMUX) 1413 .
  • the transport stream or the file data input to the video processor 1332 is demultiplexed into the video stream and the audio stream through the demultiplexer (DMUX) 1413 .
  • the audio stream is provided to the audio decoder 1411 through the audio ES buffer 1409 B and decoded, and an audio signal is reproduced. Further, the video stream is written in the video ES buffer 1408 B, sequentially read out to and decoded by the encoding/decoding engine 1407 , and written in the frame memory 1405 .
  • the decoded image data is subjected to the enlargement/reduction process performed by the second image enlarging/reducing section 1403 , and written in the frame memory 1405 .
  • the decoded image data is read out to the video output processing section 1404 , subjected to the format conversion process of performing format conversion to a certain scheme such as a 4:2:2Y/Cb/Cr scheme, and converted into an analog signal, and a video signal is reproduced.
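The encode-path data flow described above (input processing, format conversion and scaling via the frame memory, encoding into a video elementary stream, and multiplexing with audio into a transport stream) can be sketched schematically. All class and helper names below are illustrative assumptions, not the patent's implementation:

```python
# Schematic sketch of the video processor's encode path: input processing,
# 4:2:2 -> 4:2:0 conversion with enlargement/reduction, encoding, and
# multiplexing of the video and audio elementary streams.
from collections import deque

class FrameMemory:
    """Stands in for the frame memory 1405: a simple frame store."""
    def __init__(self):
        self.frames = deque()
    def write(self, frame):
        self.frames.append(frame)
    def read(self):
        return self.frames.popleft()

def input_processing(video_signal):
    # Convert the input signal to digital image data (e.g. 4:2:2 Y/Cb/Cr).
    return {"pixels": video_signal, "format": "4:2:2"}

def convert_and_scale(frame, scale=1.0):
    # Format conversion (e.g. to 4:2:0) plus enlargement/reduction.
    return dict(frame, format="4:2:0", scale=scale)

def encode(frame):
    # Stand-in for the encoding/decoding engine: produce a video ES unit.
    return ("video_es", frame["format"], frame["scale"])

def multiplex(video_es, audio_es):
    # Combine the video and audio elementary streams into one transport unit.
    return {"type": "transport_stream", "video": video_es, "audio": audio_es}

# Walk one frame through the pipeline, round-tripping via the frame memory.
memory = FrameMemory()
memory.write(input_processing("raw-signal"))
memory.write(convert_and_scale(memory.read(), scale=0.5))
video_es = encode(memory.read())
packet = multiplex(video_es, ("audio_es", "aac"))
```

The decode path runs the same stages in reverse order, which is why a shared frame memory between the stages is convenient.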
  • the encoding/decoding engine 1407 preferably has the functions of the image encoding device 100 ( FIG. 18 ) and the image decoding device 200 ( FIG. 26 ) according to the above embodiments. Accordingly, the video processor 1332 can obtain advantageous benefits similar to the advantageous benefits described above with reference to FIGS. 1 to 33 .
  • the present technology (that is, the functions of the image encoding devices or the image decoding devices according to the above embodiment) may be implemented by either or both of hardware such as a logic circuit and software such as an embedded program.
  • FIG. 47 illustrates another example of a schematic configuration of the video processor 1332 ( FIG. 45 ) to which the present technology is applied.
  • the video processor 1332 has a function of encoding and decoding video data according to a certain scheme.
  • the video processor 1332 includes a control section 1511 , a display interface 1512 , a display engine 1513 , an image processing engine 1514 , and an internal memory 1515 as illustrated in FIG. 47 .
  • the video processor 1332 further includes a codec engine 1516 , a memory interface 1517 , a multiplexer/demultiplexer (MUX/DMUX) 1518 , a network interface 1519 , and a video interface 1520 .
  • the control section 1511 controls an operation of each processing section in the video processor 1332 such as the display interface 1512 , the display engine 1513 , the image processing engine 1514 , and the codec engine 1516 .
  • the control section 1511 includes, for example, a main CPU 1531 , a sub CPU 1532 , and a system controller 1533 as illustrated in FIG. 47 .
  • the main CPU 1531 executes, for example, a program for controlling an operation of each processing section in the video processor 1332 .
  • the main CPU 1531 generates a control signal, for example, according to the program, and provides the control signal to each processing section (that is, controls an operation of each processing section).
  • the sub CPU 1532 plays a supplementary role of the main CPU 1531 .
  • the sub CPU 1532 executes a child process or a subroutine of a program executed by the main CPU 1531 .
  • the system controller 1533 controls operations of the main CPU 1531 and the sub CPU 1532 , for example, designating a program to be executed by the main CPU 1531 and the sub CPU 1532 .
  • the display interface 1512 outputs image data to, for example, the connectivity 1321 ( FIG. 45 ) or the like under control of the control section 1511 .
  • the display interface 1512 converts digital image data into an analog signal and outputs it to, for example, the monitor device of the connectivity 1321 ( FIG. 45 ) as a reproduced video signal, or outputs the digital image data as is to, for example, the monitor device of the connectivity 1321 ( FIG. 45 ).
  • the display engine 1513 performs various kinds of conversion processes such as a format conversion process, a size conversion process, and a color gamut conversion process on the image data under control of the control section 1511 in compliance with, for example, a hardware specification of the monitor device that displays the image.
  • the image processing engine 1514 performs certain image processing such as a filtering process for improving an image quality on the image data under control of the control section 1511 .
  • the internal memory 1515 is a memory that is installed in the video processor 1332 and shared by the display engine 1513 , the image processing engine 1514 , and the codec engine 1516 .
  • the internal memory 1515 is used for data transfer performed among, for example, the display engine 1513 , the image processing engine 1514 , and the codec engine 1516 .
  • the internal memory 1515 stores data provided from the display engine 1513 , the image processing engine 1514 , or the codec engine 1516 , and provides the data to the display engine 1513 , the image processing engine 1514 , or the codec engine 1516 as necessary (for example, according to a request).
  • the internal memory 1515 can be implemented by any storage device, but since the internal memory 1515 is mostly used for storage of small-capacity data such as image data of block units or parameters, it is desirable to implement the internal memory 1515 using a semiconductor memory that is relatively small in capacity (for example, compared to the external memory 1312 ) and fast in response speed such as a static random access memory (SRAM).
  • the codec engine 1516 performs processing related to encoding and decoding of image data.
  • An encoding/decoding scheme supported by the codec engine 1516 is arbitrary, and one or more schemes may be supported by the codec engine 1516 .
  • the codec engine 1516 may have a codec function of supporting a plurality of encoding/decoding schemes and perform encoding of image data or decoding of encoded data using a scheme selected from among the schemes.
  • the codec engine 1516 includes, for example, an MPEG-2 Video 1541 , an AVC/H.264 1542, an HEVC/H.265 1543, an HEVC/H.265 (Scalable) 1544 , an HEVC/H.265 (Multi-view) 1545 , and an MPEG-DASH 1551 as functional blocks of processing related to a codec.
  • the MPEG-2 Video 1541 is a functional block for encoding or decoding image data according to an MPEG-2 scheme.
  • the AVC/H.264 1542 is a functional block for encoding or decoding image data according to an AVC scheme.
  • the HEVC/H.265 1543 is a functional block for encoding or decoding image data according to an HEVC scheme.
  • the HEVC/H.265 (Scalable) 1544 is a functional block for performing scalable coding or scalable decoding on image data according to the HEVC scheme.
  • the HEVC/H.265 (Multi-view) 1545 is a functional block for performing multi-view encoding or multi-view decoding on image data according to the HEVC scheme.
  • the MPEG-DASH 1551 is a functional block for transmitting and receiving image data according to MPEG-Dynamic Adaptive Streaming over HTTP (MPEG-DASH).
  • MPEG-DASH is a technique of streaming video using HyperText Transfer Protocol (HTTP), and has a feature of selecting, in units of segments, an appropriate one from among a plurality of pieces of previously prepared encoded data that differ in resolution or the like, and transmitting the selected data.
  • the MPEG-DASH 1551 performs generation of a stream complying with a standard, transmission control of the stream, and the like, and uses the MPEG-2 Video 1541 to the HEVC/H.265 (Multi-view) 1545 for encoding and decoding of image data.
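The way the codec engine dispatches among several supported encoding/decoding schemes, as the functional blocks above suggest, can be sketched as a simple table lookup. The scheme names follow the text; the encode functions themselves are placeholder assumptions:

```python
# Illustrative sketch of a codec engine supporting several schemes and
# encoding image data with a scheme selected from among them.
def encode_mpeg2(data):
    return ("MPEG-2", data)

def encode_avc(data):
    return ("AVC/H.264", data)

def encode_hevc(data):
    return ("HEVC/H.265", data)

# One functional block per supported encoding/decoding scheme.
CODECS = {
    "MPEG-2 Video": encode_mpeg2,
    "AVC/H.264": encode_avc,
    "HEVC/H.265": encode_hevc,
}

def codec_engine_encode(scheme, image_data):
    # Select the functional block for the requested scheme.
    try:
        return CODECS[scheme](image_data)
    except KeyError:
        raise ValueError("unsupported scheme: " + scheme)

stream = codec_engine_encode("HEVC/H.265", b"frame")
```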
  • the memory interface 1517 is an interface for the external memory 1312 .
  • Data provided from the image processing engine 1514 or the codec engine 1516 is provided to the external memory 1312 through the memory interface 1517 . Further, data read from the external memory 1312 is provided to the video processor 1332 (the image processing engine 1514 or the codec engine 1516 ) through the memory interface 1517 .
  • the multiplexer/demultiplexer (MUX/DMUX) 1518 performs multiplexing and demultiplexing of various kinds of data related to an image such as a bitstream of encoded data, image data, and a video signal.
  • the multiplexing/demultiplexing method is arbitrary.
  • the multiplexer/demultiplexer (MUX/DMUX) 1518 can not only combine a plurality of pieces of data into one but can also add certain header information or the like to the data.
  • the multiplexer/demultiplexer (MUX/DMUX) 1518 can not only divide one piece of data into a plurality of pieces of data but can also add certain header information or the like to each divided piece of data.
  • the multiplexer/demultiplexer (MUX/DMUX) 1518 can convert a data format through multiplexing and demultiplexing.
  • the multiplexer/demultiplexer (MUX/DMUX) 1518 can multiplex a bitstream and thereby convert it into a transport stream (a bitstream in a transfer format) or into data in a recording file format (file data).
  • inverse conversion can also be performed through demultiplexing.
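The multiplex/demultiplex behavior described above, combining several pieces of data into one stream with added header information and splitting them back by demultiplexing, can be sketched minimally. The header layout here is an assumption for illustration, not the patent's format:

```python
# Toy mux/demux: combine pieces into one stream with a small header,
# then recover the pieces from the header's length table.
import json

def mux(pieces, container="transport_stream"):
    # Combine pieces into one bitstream, prepending header information.
    header = {"container": container, "lengths": [len(p) for p in pieces]}
    return json.dumps(header).encode() + b"\n" + b"".join(pieces)

def demux(stream):
    # Split one stream back into its pieces using the header.
    header_line, body = stream.split(b"\n", 1)
    header = json.loads(header_line)
    pieces, offset = [], 0
    for n in header["lengths"]:
        pieces.append(body[offset:offset + n])
        offset += n
    return header["container"], pieces

stream = mux([b"video-es", b"audio-es"])
container, pieces = demux(stream)
```

Demultiplexing is the exact inverse of multiplexing here, mirroring the "inverse conversion" remark above.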
  • the network interface 1519 is an interface for, for example, the broadband modem 1333 or the connectivity 1321 (both FIG. 45 ).
  • the video interface 1520 is an interface for, for example, the connectivity 1321 or the camera 1322 (both FIG. 45 ).
  • the transport stream is received from the external network through, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 )
  • the transport stream is provided to the multiplexer/demultiplexer (MUX/DMUX) 1518 through the network interface 1519 , demultiplexed, and then decoded by the codec engine 1516 .
  • Image data obtained by the decoding of the codec engine 1516 is subjected to certain image processing performed, for example, by the image processing engine 1514 , subjected to certain conversion performed by the display engine 1513 , and provided to, for example, the connectivity 1321 ( FIG. 45 ) or the like through the display interface 1512 , and the image is displayed on the monitor.
  • image data obtained by the decoding of the codec engine 1516 is encoded by the codec engine 1516 again, multiplexed by the multiplexer/demultiplexer (MUX/DMUX) 1518 to be converted into file data, output to, for example, the connectivity 1321 ( FIG. 45 ) or the like through the video interface 1520 , and then recorded in various kinds of recording media.
  • file data of encoded data obtained by encoding image data, read from a recording medium (not illustrated) through the connectivity 1321 ( FIG. 45 ) or the like, is provided to the multiplexer/demultiplexer (MUX/DMUX) 1518 through the video interface 1520 , demultiplexed, and decoded by the codec engine 1516 .
  • Image data obtained by the decoding of the codec engine 1516 is subjected to certain image processing performed by the image processing engine 1514 , subjected to certain conversion performed by the display engine 1513 , and provided to, for example, the connectivity 1321 ( FIG. 45 ) or the like through the display interface 1512 , and the image is displayed on the monitor.
  • image data obtained by the decoding of the codec engine 1516 is encoded by the codec engine 1516 again, multiplexed by the multiplexer/demultiplexer (MUX/DMUX) 1518 to be converted into a transport stream, provided to, for example, the connectivity 1321 or the broadband modem 1333 (both FIG. 45 ) through the network interface 1519 , and transmitted to another device (not illustrated).
  • transfer of image data or other data between the processing sections in the video processor 1332 is performed, for example, using the internal memory 1515 or the external memory 1312 .
  • the power management module 1313 controls, for example, power supply to the control section 1511 .
  • When the present technology is applied to the video processor 1332 having the above configuration, it is desirable to apply the above embodiments of the present technology to the codec engine 1516 .
  • it is preferable that the codec engine 1516 have a functional block for implementing the image encoding device 100 ( FIG. 18 ) and the image decoding device 200 ( FIG. 26 ) according to the above embodiments.
  • the video processor 1332 can have advantageous benefits similar to the advantageous benefits described above with reference to FIGS. 1 to 43 .
  • the present technology (that is, the functions of the image encoding devices or the image decoding devices according to the above embodiment) may be implemented by either or both of hardware such as a logic circuit and software such as an embedded program.
  • the configuration of the video processor 1332 is arbitrary and may be any configuration other than the above two exemplary configurations.
  • the video processor 1332 may be configured with a single semiconductor chip or may be configured with a plurality of semiconductor chips.
  • the video processor 1332 may be configured with a three-dimensionally stacked LSI in which a plurality of semiconductors is stacked.
  • the video processor 1332 may be implemented by a plurality of LSIs.
  • the video set 1300 may be incorporated into various kinds of devices that process image data.
  • the video set 1300 may be incorporated into the television device 900 ( FIG. 38 ), the mobile telephone 920 ( FIG. 39 ), the recording/reproducing device 940 ( FIG. 40 ), the imaging device 960 ( FIG. 41 ), or the like.
  • the devices can have advantageous benefits similar to the advantageous benefits described above with reference to FIGS. 1 to 33 .
  • the video set 1300 may also be incorporated into a terminal device such as the personal computer 1004 , the AV device 1005 , the tablet device 1006 , or the mobile telephone 1007 in the data transmission system 1000 of FIG. 42 , the broadcasting station 1101 or the terminal device 1102 in the data transmission system 1100 of FIG. 43 , or the imaging device 1201 or the scalable encoded data storage device 1202 in the imaging system 1200 of FIG. 44 .
  • the devices can have advantageous benefits similar to the advantageous benefits described above with reference to FIGS. 1 to 33 .
  • the video set 1300 may be incorporated into the content reproducing system of FIG. 48 or the wireless communication system of FIG. 54 .
  • each constituent element of the video set 1300 described above can be implemented as a configuration to which the present technology is applied.
  • the video processor 1332 alone can be implemented as a video processor to which the present technology is applied.
  • the processors indicated by the dotted line 1341 as described above, the video module 1311 , or the like can be implemented as, for example, a processor or a module to which the present technology is applied.
  • a combination of the video module 1311 , the external memory 1312 , the power management module 1313 , and the front end module 1314 can be implemented as a video unit 1361 to which the present technology is applied.
  • a configuration including the video processor 1332 can be incorporated into various kinds of devices that process image data, similarly to the case of the video set 1300 .
  • the video processor 1332 , the processors indicated by the dotted line 1341 , the video module 1311 , or the video unit 1361 can be incorporated into the television device 900 ( FIG. 38 ), the mobile telephone 920 ( FIG. 39 ), the recording/reproducing device 940 ( FIG. 40 ), the imaging device 960 ( FIG. 41 ), the terminal device such as the personal computer 1004 , the AV device 1005 , the tablet device 1006 , or the mobile telephone 1007 in the data transmission system 1000 of FIG. 42 , the broadcasting station 1101 or the terminal device 1102 in the data transmission system 1100 of FIG. 43 , the imaging device 1201 or the scalable encoded data storage device 1202 in the imaging system 1200 of FIG. 44 , or the like.
  • the configuration including the video processor 1332 may be incorporated into the content reproducing system of FIG. 48 or the wireless communication system of FIG. 54 .
  • the devices can have advantageous benefits similar to the advantageous benefits described above with reference to FIGS. 1 to 33 , similarly to the video set 1300 .
  • the present technology can also be applied to a system that selects appropriate data, in units of segments, from among a plurality of pieces of previously prepared encoded data having different resolutions and uses the selected data, for example, a content reproducing system of HTTP streaming such as MPEG-DASH or a wireless communication system of the Wi-Fi standard, which will be described later.
  • FIG. 48 is an explanatory diagram of a configuration of a content reproducing system.
  • the content reproducing system includes content servers 1610 and 1611 , a network 1612 , and a content reproducing device 1620 (a client device) as illustrated in FIG. 48 .
  • the content servers 1610 and 1611 are connected with the content reproducing device 1620 via the network 1612 .
  • the network 1612 is a wired or wireless transmission path of information transmitted from a device connected to the network 1612 .
  • the network 1612 may include a public line network such as the Internet, a telephone line network, or a satellite communication network, various kinds of LANs such as Ethernet (a registered trademark), a wide area network (WAN), or the like. Further, the network 1612 may include a dedicated line network such as an Internet protocol-virtual private network (IP-VPN).
  • the content server 1610 encodes content data, and generates and stores a data file including meta information of encoded data and encoded data.
  • encoded data corresponds to “mdat,” and meta information corresponds to “moov.”
  • content data may be music data such as music, a lecture, or a radio program, video data such as a movie, a television program, a video program, a photograph, a document, a painting, or a graph, a game, software, or the like.
  • the content server 1610 generates a plurality of data files for the same content at different bit rates. Further, in response to a content reproduction request received from the content reproducing device 1620 , the content server 1611 adds, to the URL information of the content server 1610 , information of a parameter to be added to the corresponding URL by the content reproducing device 1620 , and transmits the resultant information to the content reproducing device 1620 . Details will be described below with reference to FIG. 49 .
  • FIG. 49 is an explanatory diagram of a data flow in the content reproducing system of FIG. 48 .
  • the content server 1610 encodes the same content data at different bit rates, and generates, for example, file A of 2 Mbps, file B of 1.5 Mbps, and file C of 1 Mbps as illustrated in FIG. 49 .
  • file A has a high bit rate, file B has a standard bit rate, and file C has a low bit rate.
  • encoded data of each file is divided into a plurality of segments as illustrated in FIG. 49 .
  • encoded data of file A is divided into segments such as “A1,” “A2,” “A3,” . . . , and “An”
  • encoded data of file B is divided into segments such as “B1,” “B2,” “B3,” . . . , and “Bn”
  • encoded data of file C is divided into segments such as “C1,” “C2,” “C3,” . . . , and “Cn.”
  • each segment may be configured with one or more pieces of encoded video data and encoded audio data that start from a sync sample of MP4 (for example, an IDR picture in video coding of AVC/H.264) and are independently reproducible.
  • each segment may be encoded video and audio data of 2 seconds corresponding to 4 GOPs or may be encoded video and audio data of 10 seconds corresponding to 20 GOPs.
  • segments that are the same in an arrangement order in each file have the same reproduction ranges (ranges of a time position from the head of content).
  • the reproduction ranges of the segment “A2,” the segment “B2,” and the segment “C2” are the same, and when each segment is encoded data of 2 seconds, the reproduction ranges of the segment “A2,” the segment “B2,” and the segment “C2” are 2 to 4 seconds of content.
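The reproduction-range rule above can be checked with simple arithmetic: with fixed-duration segments, the i-th segment of every file covers the same time range regardless of bit rate. A minimal helper (an illustration, not from the patent):

```python
# Compute the reproduction range of a 1-based segment index, assuming
# every segment in every file has the same fixed duration.
def reproduction_range(segment_index, segment_seconds=2):
    """Time range (start, end) in seconds covered by the segment."""
    start = (segment_index - 1) * segment_seconds
    return (start, start + segment_seconds)

# Segments "A2", "B2", and "C2" all occupy the same range: 2 to 4 seconds.
assert reproduction_range(2) == (2, 4)
```

This shared timing is what lets the client switch files between segments without disturbing playback.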
  • the content server 1610 stores file A to file C. Further, as illustrated in FIG. 49 , the content server 1610 sequentially transmits segments configuring different files to the content reproducing device 1620 , and the content reproducing device 1620 performs streaming reproduction on the received segments.
  • the content server 1610 transmits a play list file (hereinafter, a “media presentation description (MPD)”) including bit rate information and access information of each piece of encoded data to the content reproducing device 1620 , and the content reproducing device 1620 selects any of a plurality of bit rates based on the MPD, and requests the content server 1610 to transmit a segment corresponding to the selected bit rate.
  • FIG. 48 illustrates only one content server 1610 , but the present disclosure is not limited to this example.
  • FIG. 50 is an explanatory diagram illustrating a specific example of the MPD.
  • the MPD includes access information of a plurality of pieces of encoded data having different bit rates (bandwidths) as illustrated in FIG. 50 .
  • the MPD illustrated in FIG. 50 indicates that there are encoded data of 256 Kbps, encoded data of 1.024 Mbps, encoded data of 1.384 Mbps, encoded data of 1.536 Mbps, and encoded data of 2.048 Mbps, and includes access information related to each piece of encoded data.
  • the content reproducing device 1620 can dynamically change a bit rate of encoded data that is subjected to streaming reproduction based on the MPD.
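How a client might read the bit-rate (bandwidth) entries and access information out of an MPD can be sketched as follows. The XML here is a heavily simplified stand-in for a real MPEG-DASH MPD, shown only to illustrate the structure of the access information; element names, attributes, and URLs are illustrative assumptions:

```python
# Parse a simplified MPD-like document and collect (bandwidth, url) pairs,
# sorted ascending by bandwidth, so a client can pick among them.
import xml.etree.ElementTree as ET

MPD_XML = """
<MPD>
  <Representation bandwidth="256000" url="content_256k.mp4"/>
  <Representation bandwidth="1024000" url="content_1024k.mp4"/>
  <Representation bandwidth="2048000" url="content_2048k.mp4"/>
</MPD>
"""

def parse_representations(mpd_text):
    root = ET.fromstring(mpd_text)
    return sorted((int(r.get("bandwidth")), r.get("url"))
                  for r in root.iter("Representation"))

reps = parse_representations(MPD_XML)
```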
  • FIG. 48 illustrates a mobile terminal as an example of the content reproducing device 1620 , but the content reproducing device 1620 is not limited to this example.
  • the content reproducing device 1620 may be an information processing device such as a personal computer (PC), a home video processing device (a DVD recorder, a video cassette recorder (VCR)), a personal digital assistant (PDA), a home-use game machine, or a household electric appliance.
  • the content reproducing device 1620 may be an information processing device such as a mobile telephone, a personal handyphone system (PHS), a portable music player, a portable video processing device, or a portable game machine.
  • FIG. 51 is a functional block diagram illustrating a configuration of the content server 1610 .
  • the content server 1610 includes a file generation section 1631 , a storage section 1632 , and a communication section 1633 as illustrated in FIG. 51 .
  • the file generation section 1631 includes an encoder 1641 that encodes content data, and generates a plurality of pieces of encoded data having different bit rates for the same content and the MPD. For example, when encoded data of 256 Kbps, encoded data of 1.024 Mbps, encoded data of 1.384 Mbps, encoded data of 1.536 Mbps, and encoded data of 2.048 Mbps are generated, the file generation section 1631 generates the MPD illustrated in FIG. 50 .
  • the storage section 1632 stores the plurality of pieces of encoded data having different bit rates and the MPD generated by the file generation section 1631 .
  • the storage section 1632 may be a storage medium such as a non-volatile memory, a magnetic disk, an optical disc, or a magneto optical (MO) disc.
  • Examples of the non-volatile memory include an electrically erasable programmable read-only memory (EEPROM) and an erasable programmable ROM (EPROM).
  • Examples of the magnetic disk include a hard disk and a disc-shaped magnetic disk.
  • Examples of the optical disc include a compact disc (CD), a digital versatile disc recordable (DVD-R), a Blu-ray Disc (BD) (a registered trademark), and the like.
  • the communication section 1633 is an interface with the content reproducing device 1620 , and communicates with the content reproducing device 1620 via the network 1612 .
  • the communication section 1633 has a function as an HTTP server communicating with the content reproducing device 1620 according to HTTP.
  • the communication section 1633 transmits the MPD to the content reproducing device 1620 , extracts from the storage section 1632 the encoded data requested according to HTTP by the content reproducing device 1620 based on the MPD, and transmits the encoded data to the content reproducing device 1620 as an HTTP response.
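The HTTP-server role of the communication section, returning either the MPD or a requested segment from storage in response to a GET request, can be sketched as a request handler. The path layout and the storage contents are assumptions for illustration:

```python
# Toy request handler standing in for an HTTP server: map a GET path to
# the MPD or a stored segment, returning (status, body) like a response.
STORAGE = {
    "manifest.mpd": b"<MPD/>",
    "A1.mp4": b"segment-A1",
    "B2.mp4": b"segment-B2",
}

def handle_http_get(path):
    """Return (status, body) for a GET request on the given path."""
    name = path.lstrip("/")
    if name in STORAGE:
        return (200, STORAGE[name])
    return (404, b"not found")

status, body = handle_http_get("/A1.mp4")
```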
  • the configuration of the content server 1610 according to the present embodiment has been described above. Next, a configuration of the content reproducing device 1620 will be described with reference to FIG. 52 .
  • FIG. 52 is a functional block diagram of a configuration of the content reproducing device 1620 .
  • the content reproducing device 1620 includes a communication section 1651 , a storage section 1652 , a reproduction section 1653 , a selecting section 1654 , and a present location acquisition section 1656 as illustrated in FIG. 52 .
  • the communication section 1651 is an interface with the content server 1610 , requests the content server 1610 to transmit data, and acquires data from the content server 1610 .
  • the communication section 1651 has a function as an HTTP client communicating with the content server 1610 according to HTTP.
  • the communication section 1651 can selectively acquire the MPD and the segments of the encoded data from the content server 1610 using an HTTP range.
  • the storage section 1652 stores various kinds of information related to reproduction of content. For example, the segments acquired from the content server 1610 by the communication section 1651 are sequentially buffered. The segments of the encoded data buffered in the storage section 1652 are sequentially supplied to the reproduction section 1653 in a first in first out (FIFO) manner.
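The first-in first-out buffering described above can be sketched with a deque: segments received from the server are appended on one side, and the reproduction side consumes them in arrival order. The class name is an illustrative assumption:

```python
# FIFO segment buffer: the storage side enqueues received segments and
# the reproduction side dequeues them in the order they arrived.
from collections import deque

class SegmentBuffer:
    def __init__(self):
        self._queue = deque()

    def buffer(self, segment):
        self._queue.append(segment)       # storage side: enqueue

    def next_segment(self):
        return self._queue.popleft()      # reproduction side: dequeue (FIFO)

buf = SegmentBuffer()
for seg in ["A1", "B2", "A3"]:
    buf.buffer(seg)
first = buf.next_segment()
```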
  • the storage section 1652 adds a parameter to a URL through the communication section 1651 based on an instruction to add a parameter to a URL of content that is described in the MPD and requested from the content server 1611 which will be described later, and stores a definition for accessing the URL.
  • the reproduction section 1653 sequentially reproduces the segments supplied from the storage section 1652 . Specifically, the reproduction section 1653 performs segment decoding, DA conversion, rendering, and the like.
  • the selecting section 1654 sequentially selects, from among the bit rates included in the MPD, the bit rate to which a segment of encoded data to be acquired corresponds within the same content. For example, when the selecting section 1654 sequentially selects the segments “A1,” “B2,” and “A3” according to the bandwidth of the network 1612 , the communication section 1651 sequentially acquires the segments “A1,” “B2,” and “A3” from the content server 1610 as illustrated in FIG. 49 .
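One common selection policy for this step is to pick, for each segment, the highest bit rate listed in the MPD that fits the currently available network bandwidth. The bit-rate values follow FIG. 50; the policy itself is an illustrative assumption, not necessarily the one used by the selecting section 1654:

```python
# Pick the highest listed bit rate not exceeding the available bandwidth,
# falling back to the lowest listed bit rate when none fits.
BITRATES = [256_000, 1_024_000, 1_384_000, 1_536_000, 2_048_000]  # bps

def select_bitrate(available_bps, bitrates=BITRATES):
    """Highest listed bit rate that fits the available bandwidth."""
    candidates = [b for b in bitrates if b <= available_bps]
    return max(candidates) if candidates else min(bitrates)

# With roughly 1.5 Mbps available, the 1.384 Mbps stream is chosen.
assert select_bitrate(1_500_000) == 1_384_000
```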
  • the present location acquisition section 1656 may be configured with a module that acquires a current position of the content reproducing device 1620 , for example, a Global Positioning System (GPS) receiver or the like. Further, the present location acquisition section 1656 may acquire a current position of the content reproducing device 1620 using a wireless network.
  • FIG. 53 is a diagram for describing an exemplary configuration of the content server 1611 .
  • the content server 1611 includes a storage section 1671 and a communication section 1672 as illustrated in FIG. 53 .
  • the storage section 1671 stores the URL information of the MPD.
  • the URL information of the MPD is transmitted from the content server 1611 to the content reproducing device 1620 according to the request received from the content reproducing device 1620 that requests reproduction of content. Further, when the URL information of the MPD is provided to the content reproducing device 1620 , the storage section 1671 stores definition information used when the content reproducing device 1620 adds the parameter to the URL described in the MPD.
  • the communication section 1672 is an interface with the content reproducing device 1620 , and communicates with the content reproducing device 1620 via the network 1612 .
  • the communication section 1672 receives the request for requesting the URL information of the MPD from the content reproducing device 1620 that requests reproduction of content, and transmits the URL information of the MPD to the content reproducing device 1620 .
  • the URL of the MPD transmitted from the communication section 1672 includes information to which the parameter is added through the content reproducing device 1620 .
  • Various settings can be performed on the parameter to be added to the URL of the MPD through the content reproducing device 1620 based on the definition information shared by the content server 1611 and the content reproducing device 1620 .
  • information such as a current position of the content reproducing device 1620 , a user ID of the user using the content reproducing device 1620 , a memory size of the content reproducing device 1620 , and the capacity of a storage of the content reproducing device 1620 may be added to the URL of the MPD through the content reproducing device 1620 .
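The parameter addition described above, where the reproducing device appends, for example, its current position or user ID to the MPD URL as query parameters before issuing the request, can be sketched with the standard URL utilities. The parameter names are illustrative assumptions standing in for the definition information shared by server and client:

```python
# Append client-side parameters to the MPD URL as query parameters.
from urllib.parse import urlencode, urlparse, parse_qs

def add_parameters(mpd_url, params):
    separator = "&" if "?" in mpd_url else "?"
    return mpd_url + separator + urlencode(params)

url = add_parameters("http://example.com/manifest.mpd",
                     {"pos": "35.68,139.69", "user": "u123"})
```

The server can then vary its response (for example, the content or bit rates offered) based on the received parameters.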
  • the encoder 1641 of the content server 1610 has the function of the image encoding device 100 ( FIG. 18 ) according to the above embodiment.
  • the reproduction section 1653 of the content reproducing device 1620 has the function of the image decoding device 200 ( FIG. 26 ) according to the above embodiment.
  • wireless packets are transmitted and received until a peer to peer (P2P) connection is established, and a specific application is operated.
  • wireless packets are transmitted and received until a specific application to be used is designated, a P2P connection is then established, and the specific application is operated. Thereafter, after a connection is established through the second layer, wireless packets for activating the specific application are transmitted and received.
  • FIGS. 54 and 55 are sequence charts illustrating an exemplary communication process by devices serving as the basis of wireless communication as an example of transmission and reception of wireless packets until a P2P connection is established, and a specific application is operated.
  • a connection establishment process in the Wi-Fi Direct standard (also referred to as “Wi-Fi P2P”) standardized by the Wi-Fi Alliance is illustrated.
  • In Wi-Fi Direct, a plurality of wireless communication devices detects the presence of the wireless communication device of the other party (device discovery and service discovery). Further, when connection device selection is performed, device authentication is performed between the selected devices through Wi-Fi protected setup (WPS), and then a direct connection is established.
  • a plurality of wireless communication devices decides whether to be a master device (a group owner) or a slave device (a client), and forms a communication group.
  • In FIGS. 54 and 55 , an exemplary communication process between a first wireless communication device 1701 and a second wireless communication device 1702 is illustrated, but what has been described above can be similarly applied to a communication process between other wireless communication devices.
  • the device discovery is performed between the first wireless communication device 1701 and the second wireless communication device 1702 ( 1711 ).
  • the first wireless communication device 1701 transmits a probe request (a response request signal), and receives a probe response (a response signal) to the probe request from the second wireless communication device 1702 .
  • the first wireless communication device 1701 and the second wireless communication device 1702 can discover the presence of the other party.
  • Through the device discovery, it is possible to acquire a device name or a type (a TV, a PC, a smart phone, or the like) of the other party.
  • the service discovery is performed between the first wireless communication device 1701 and the second wireless communication device 1702 ( 1712 ).
  • the first wireless communication device 1701 transmits a service discovery query of querying a service supported by the second wireless communication device 1702 discovered through the device discovery.
  • the first wireless communication device 1701 can acquire a service supported by the second wireless communication device 1702 by receiving a service discovery response from the second wireless communication device 1702 .
  • Through the service discovery, it is possible to acquire, for example, a service executable by the other party.
  • An example of the service executable by the other party is a service or a protocol (Digital Living Network Alliance (DLNA), Digital Media Renderer (DMR), or the like).
  • a connection partner selection operation of selecting a connection partner is performed ( 1713 ).
  • the connection partner selection operation may be performed in only one of the first wireless communication device 1701 and the second wireless communication device 1702 .
  • a connection partner selection screen is displayed on a display section of the first wireless communication device 1701 , and the second wireless communication device 1702 is selected on the connection partner selection screen as a connection partner according to the user's operation.
  • a group owner negotiation is performed between the first wireless communication device 1701 and the second wireless communication device 1702 ( 1714 ).
  • the first wireless communication device 1701 becomes a group owner 1715
  • the second wireless communication device 1702 becomes a client 1716 .
  • processes ( 1717 to 1720 ) are performed between the first wireless communication device 1701 and the second wireless communication device 1702 , and thus a direct connection is established.
  • association (L2 (second layer) link establishment) ( 1717 ) and secure link establishment ( 1718 ) are sequentially performed.
  • IP address assignment ( 1719 ) and L4 setup ( 1720 ) on L3 by the simple service discovery protocol (SSDP) are sequentially performed.
  • L2 (layer 2) indicates a second layer (a data link layer)
  • L3 (layer 3) indicates a third layer (a network layer)
  • L4 (layer 4) indicates a fourth layer (a transport layer).
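As a rough illustration, the sequence from device discovery ( 1711 ) through L4 setup ( 1720 ) can be modeled as an ordered list of steps. The step names and the simple driver function are assumptions for illustration, not part of the Wi-Fi Direct specification.

```python
# A rough model of the Wi-Fi Direct connection sequence described above.
# The reference numerals in comments refer to FIGS. 54 and 55.
CONNECTION_STEPS = [
    "device_discovery",              # probe request/response ( 1711 )
    "service_discovery",             # query/response ( 1712 )
    "connection_partner_selection",  # ( 1713 )
    "group_owner_negotiation",       # group owner vs. client ( 1714 )
    "association",                   # L2 link establishment ( 1717 )
    "secure_link_establishment",     # e.g. through WPS ( 1718 )
    "ip_address_assignment",         # on L3 ( 1719 )
    "l4_setup",                      # e.g. by SSDP ( 1720 )
]

def establish_direct_connection(log):
    """Run through the steps in order, recording each in `log`;
    returns True when L4 setup completes."""
    for step in CONNECTION_STEPS:
        log.append(step)
    return log[-1] == "l4_setup"

log = []
connected = establish_direct_connection(log)
```

Only after the last step can a specific application exchange data over the direct connection.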
  • the application designation/activation operation may be performed in only one of the first wireless communication device 1701 and the second wireless communication device 1702 .
  • an application designation/activation operation screen is displayed on a display section of the first wireless communication device 1701 , and a specific application is selected on the application designation/activation operation screen according to the user's operation.
  • a connection is considered to be performed between access point stations (AP-STAs) within a range of a specification (a specification standardized in IEEE802.11) older than the Wi-Fi Direct standard.
  • it is difficult to detect a device to be connected in advance before a connection is established through the second layer (in the terminology of IEEE802.11, before “association” is performed).
  • when a connection partner candidate is searched for through the device discovery or the service discovery (optional), it is possible to acquire information of the connection partner.
  • Examples of the information of the connection partner include a type of a basic device and a supported specific application. Further, it is possible to allow the user to select the connection partner based on the acquired information of the connection partner.
  • An example of a sequence of establishing a connection in this case is illustrated in FIG. 57 . Further, an exemplary configuration of a frame format transmitted and received in the communication process is illustrated in FIG. 56 .
  • FIG. 56 is a diagram schematically illustrating an exemplary configuration of a frame format transmitted and received in a communication process performed by devices serving as the basis of the present technology.
  • FIG. 56 illustrates an exemplary configuration of an MAC frame used to establish a connection through the second layer.
  • an example of a frame format of an association request/response ( 1787 ) for implementing the sequence illustrated in FIG. 57 is illustrated.
  • the MAC frame illustrated in FIG. 56 is basically an association request/response frame format described in sections 7.2.3.4 and 7.2.3.5 of the IEEE802.11-2007 specification.
  • However, it differs in that independently extended information elements (IEs) are included.
  • the decimal number 127 is set to an IE type (information element ID ( 1761 )).
  • a length field ( 1762 ) and an OUI field ( 1763 ) are subsequent, and vendor specific content ( 1764 ) is subsequently arranged.
  • a field (IE type ( 1765 )) indicating a type of a vendor specific IE is first set. Subsequently, a configuration capable of storing a plurality of sub elements ( 1766 ) can be considered.
  • a name ( 1767 ) of a specific application to be used and a device role ( 1768 ) when the specific application operates can be included.
  • information for L4 setup ( 1769 ) of a specific application, such as a port number used for control thereof, and information (capability information) related to the capability in a specific application can be included.
  • the capability information is information for specifying whether or not audio transmission/reproduction is supported, whether or not video transmission/reproduction is supported, or the like.
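A minimal sketch of packing such a vendor specific IE is shown below. The outer layout (information element ID 127, a length field, a 3-byte OUI, then vendor specific content beginning with an IE type octet followed by sub elements) follows the description above; the OUI value, sub element IDs, and the (id, length, value) sub element layout are assumptions for illustration.

```python
import struct

def build_vendor_ie(oui, ie_type, subelements):
    """Pack a vendor specific IE: element ID 127 ( 1761 ), a length
    field ( 1762 ), a 3-byte OUI ( 1763 ), then vendor specific
    content ( 1764 ) starting with an IE type octet ( 1765 ) and a
    sequence of sub elements ( 1766 ), each packed here as a
    hypothetical (id, length, value) triple."""
    body = bytes([ie_type])
    for sub_id, value in subelements:
        body += struct.pack("BB", sub_id, len(value)) + value
    content = oui + body
    return struct.pack("BB", 127, len(content)) + content

# hypothetical OUI and sub elements, e.g. an application name ( 1767 )
# and a device role ( 1768 )
ie = build_vendor_ie(b"\x00\x11\x22", 0x01,
                     [(0x02, b"DLNA"), (0x03, b"\x00")])
```

A receiver would parse the same layout in reverse to recover the designated application and role before the connection is used.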
  • the example in which various kinds of information are multiplexed into an encoded stream and transmitted from the encoding side to the decoding side has been described.
  • a technique of transmitting the information is not limited to this example.
  • the information may be transmitted or recorded as individual data associated with an encoded bitstream without being multiplexed in the encoded stream.
  • the term “associate” means that an image included in the bitstream (which may be part of an image, such as a slice or a block) and information corresponding to the image are configured to be linked at the time of decoding. That is, the information may be transmitted on a transmission path separate from that of the image (or bitstream).
  • the information may be recorded on a separate recording medium (or a separate recording area of the same recording medium) from the image (or bitstream).
  • the information and the image (or the bitstream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a portion within the frame.
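One way to picture such an association without multiplexing is a lookup table that links units of the image (here, hypothetical inclusive frame ranges) to their separately transmitted information; the keys and values below are illustrative only.

```python
# Sketch: information carried on a separate path, linked to the image
# at decoding time by an association key (a frame range in this toy).
side_info = {
    (0, 29): {"note": "applies to frames 0-29"},
    (30, 59): {"note": "applies to frames 30-59"},
}

def lookup(frame):
    """Return the side information associated with a frame, if any."""
    for (start, end), info in side_info.items():
        if start <= frame <= end:
            return info
    return None
```

The same mapping could equally key on a single frame or a portion within a frame, matching the arbitrary association units described above.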
  • present technology may also be configured as below.
  • An image encoding device including:
  • a generation section configured to generate control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture, is referred to regarding a current layer of image data including a plurality of layers;
  • an encoding section configured to encode the current layer of the image data with reference to the encoding-related information of some areas of the other layer according to control of the control information generated by the generation section;
  • a transmission section configured to transmit encoded data of the image data generated by the encoding section and the control information generated by the generation section.
  • control information is information limiting an area in which the encoding-related information is referred to by designating an area in which reference to the encoding-related information of the other layer is permitted, designating an area in which reference to the encoding-related information is prohibited, or designating an area in which the encoding-related information is referred to.
  • control information designates the area using an identification number allocated in a raster scan order, information indicating positions of the area in vertical and horizontal directions in a picture, or information indicating a data position of the area in the encoded data.
  • the transmission section further transmits information indicating whether or not to control an area in which the encoding-related information is referred to.
  • the encoding-related information is information used for generation of a prediction image used in encoding of the image data.
  • the information used for the generation of the prediction image includes information used for texture prediction of the image data and information used for syntax prediction of the image data
  • control information is information used to independently control an area in which the information used for the texture prediction is referred to and an area in which the information used for the syntax prediction is referred to.
  • the generation section generates the control information for each of the plurality of certain areas obtained by dividing the picture of the current layer of the image data
  • the encoding section encodes the current layer of the image data with reference to the encoding-related information of some areas of the other layer for each of the areas according to control of the control information of each area generated by the generation section.
  • the transmission section further transmits information indicating whether or not an area division of the current layer is similar to an area division of the other layer.
  • the area is a slice or a tile of the image data.
  • An image encoding method including:
  • An image decoding device including:
  • a reception section configured to receive encoded data of a current layer of image data including a plurality of layers and control information used to control a certain area in which encoding-related information, of another layer encoded for each of a plurality of certain areas obtained by dividing a picture of the image data, is referred to;
  • a decoding section configured to decode the encoded data with reference to the encoding-related information of some areas of the other layer according to control of the control information received by the reception section.
  • control information is information limiting an area in which the encoding-related information is referred to by designating an area in which reference to the encoding-related information of the other layer is permitted, designating an area in which reference to the encoding-related information is prohibited, or designating an area in which the encoding-related information is referred to.
  • control information designates the area using an identification number allocated in a raster scan order, information indicating positions of the area in vertical and horizontal directions in a picture, or information indicating a data position of the area in the encoded data.
  • the reception section further receives information indicating whether or not to control an area in which the encoding-related information is referred to.
  • the encoding-related information is information used for generation of a prediction image used in decoding of the encoded data.
  • the information used for the generation of the prediction image includes information used for texture prediction of the image data and information used for syntax prediction of the image data
  • control information is information used to independently control an area in which the information used for the texture prediction is referred to and an area in which the information used for the syntax prediction is referred to.
  • the reception section receives the encoded data encoded for each of the plurality of certain areas obtained by dividing the picture of the current layer of the image data and the control information of each of the areas, and
  • the decoding section decodes the encoded data received by the reception section with reference to the encoding-related information of some areas of the other layer for each of the areas according to control of the control information of each area.
  • the reception section further receives information indicating whether or not an area division of the current layer is similar to an area division of the other layer.
  • the area is a slice or a tile of the image data.
  • An image decoding method including:
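As an illustration of the control information described in the configurations above, a sketch that limits the areas of the other (e.g. base) layer whose encoding-related information may be referred to might look as follows; the class and method names are hypothetical, and areas are identified by numbers allocated in raster scan order, one of the designation options named above.

```python
# A sketch of control information that designates the areas in which
# reference to the encoding-related information of the other layer is
# permitted; any area not listed is implicitly prohibited.

class ReferenceControl:
    def __init__(self, permitted_areas):
        # identification numbers (raster scan order) of permitted areas
        self.permitted = set(permitted_areas)

    def may_reference(self, area_id):
        return area_id in self.permitted

ctrl = ReferenceControl(permitted_areas=[0, 1, 4, 5])
```

A decoder consulting such control information can begin decoding a current-layer area as soon as the permitted areas of the other layer are available, without waiting for the prohibited ones.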

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
US14/773,834 2013-03-21 2014-03-11 Image encoding device and method and image decoding device and method Abandoned US20160014413A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2013-058679 2013-03-21
JP2013058679 2013-03-21
PCT/JP2014/056311 WO2014148310A1 (ja) 2013-03-21 2014-03-11 画像符号化装置および方法、並びに、画像復号装置および方法

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/056311 A-371-Of-International WO2014148310A1 (ja) 2013-03-21 2014-03-11 画像符号化装置および方法、並びに、画像復号装置および方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/237,661 Continuation US20210243439A1 (en) 2013-03-21 2021-04-22 Image encoding device and method and image decoding device and method

Publications (1)

Publication Number Publication Date
US20160014413A1 true US20160014413A1 (en) 2016-01-14

Family

ID=51579997

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/773,834 Abandoned US20160014413A1 (en) 2013-03-21 2014-03-11 Image encoding device and method and image decoding device and method
US17/237,661 Abandoned US20210243439A1 (en) 2013-03-21 2021-04-22 Image encoding device and method and image decoding device and method
US17/887,884 Abandoned US20220394253A1 (en) 2013-03-21 2022-08-15 Image encoding device and method and image decoding device and method
US18/204,010 Pending US20230308646A1 (en) 2013-03-21 2023-05-31 Image encoding device and method and image decoding device and method

Family Applications After (3)

Application Number Title Priority Date Filing Date
US17/237,661 Abandoned US20210243439A1 (en) 2013-03-21 2021-04-22 Image encoding device and method and image decoding device and method
US17/887,884 Abandoned US20220394253A1 (en) 2013-03-21 2022-08-15 Image encoding device and method and image decoding device and method
US18/204,010 Pending US20230308646A1 (en) 2013-03-21 2023-05-31 Image encoding device and method and image decoding device and method

Country Status (10)

Country Link
US (4) US20160014413A1 (ru)
EP (2) EP2978220B1 (ru)
JP (3) JP6331103B2 (ru)
KR (2) KR102309086B1 (ru)
CN (4) CN105230017B (ru)
BR (1) BR112015023318A2 (ru)
DK (1) DK2978220T3 (ru)
HU (1) HUE045215T2 (ru)
RU (2) RU2018128647A (ru)
WO (1) WO2014148310A1 (ru)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180007355A1 (en) * 2014-12-31 2018-01-04 Thomson Licensing High frame rate-low frame rate transmission technique
US10187648B2 (en) * 2014-06-30 2019-01-22 Sony Corporation Information processing device and method
US20190058927A1 (en) * 2017-08-15 2019-02-21 Chiun Mai Communication Systems, Inc. Electronic device and method for sharing streaming video
US10869046B2 (en) * 2013-07-12 2020-12-15 Canon Kabushiki Kaisha Image encoding apparatus, image encoding method, recording medium and program, image decoding apparatus, image decoding method, and recording medium and program
CN113663328A (zh) * 2021-08-25 2021-11-19 腾讯科技(深圳)有限公司 画面录制方法、装置、计算机设备及存储介质
US20220078462A1 (en) * 2019-05-21 2022-03-10 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021034230A2 (en) * 2019-12-23 2021-02-25 Huawei Technologies Co., Ltd. Method and apparatus of lossless video coding based on refinement of lossy reconstructed signal
JP2021132305A (ja) * 2020-02-20 2021-09-09 シャープ株式会社 画像符号化装置および画像符号化方法
RU202224U1 (ru) * 2020-12-02 2021-02-08 Акционерное общество Научно-производственный центр «Электронные вычислительно-информационные системы» (АО НПЦ «ЭЛВИС») Реконфигурируемый кодер полярных кодов 5g сетей

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173013B1 (en) * 1996-11-08 2001-01-09 Sony Corporation Method and apparatus for encoding enhancement and base layer image signals using a predicted image signal
US20070086516A1 (en) * 2005-10-19 2007-04-19 Samsung Electronics Co., Ltd. Method of encoding flags in layer using inter-layer correlation, method and apparatus for decoding coded flags
US20070133675A1 (en) * 2003-11-04 2007-06-14 Matsushita Electric Industrial Co., Ltd. Video transmitting apparatus and video receiving apparatus
US20090010331A1 (en) * 2006-11-17 2009-01-08 Byeong Moon Jeon Method and Apparatus for Decoding/Encoding a Video Signal
US20090016621A1 (en) * 2007-07-13 2009-01-15 Fujitsu Limited Moving-picture coding device and moving-picture coding method
US20090147848A1 (en) * 2006-01-09 2009-06-11 Lg Electronics Inc. Inter-Layer Prediction Method for Video Signal
US20090168872A1 (en) * 2005-01-21 2009-07-02 Lg Electronics Inc. Method and Apparatus for Encoding/Decoding Video Signal Using Block Prediction Information
US20090252220A1 (en) * 2006-01-16 2009-10-08 Hae-Chul Choi Method and apparatus for selective inter-layer prediction on macroblock basis
US20090310680A1 (en) * 2006-11-09 2009-12-17 Lg Electronic Inc. Method and Apparatus for Decoding/Encoding a Video Signal
US20100172412A1 (en) * 2007-09-18 2010-07-08 Fujitsu Limited Video encoder and video decoder
US20140003504A1 (en) * 2012-07-02 2014-01-02 Nokia Corporation Apparatus, a Method and a Computer Program for Video Coding and Decoding
US20140254669A1 (en) * 2013-03-05 2014-09-11 Qualcomm Incorporated Parallel processing for video coding
US20150103896A1 (en) * 2012-03-29 2015-04-16 Lg Electronics Inc. Inter-layer prediction method and encoding device and decoding device using same
US20150237376A1 (en) * 2012-09-28 2015-08-20 Samsung Electronics Co., Ltd. Method for sao compensation for encoding inter-layer prediction error and apparatus therefor
US20150304667A1 (en) * 2013-01-04 2015-10-22 GE Video Compression, LLC. Efficient scalable coding concept

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3098939B2 (ja) * 1995-07-14 2000-10-16 シャープ株式会社 動画像符号化装置及び動画像復号装置
JP4480119B2 (ja) * 2000-03-30 2010-06-16 キヤノン株式会社 画像処理装置及び画像処理方法
JP2002044671A (ja) * 2001-06-11 2002-02-08 Sharp Corp 動画像復号装置
JP2006014086A (ja) * 2004-06-28 2006-01-12 Canon Inc 動画像符号化装置及び動画像符号化方法
US20080089413A1 (en) * 2004-06-28 2008-04-17 Canon Kabushiki Kaisha Moving Image Encoding Apparatus And Moving Image Encoding Method
KR100878812B1 (ko) * 2005-05-26 2009-01-14 엘지전자 주식회사 영상신호의 레이어간 예측에 대한 정보를 제공하고 그정보를 이용하는 방법
KR100878811B1 (ko) * 2005-05-26 2009-01-14 엘지전자 주식회사 비디오 신호의 디코딩 방법 및 이의 장치
JP2009508454A (ja) * 2005-09-07 2009-02-26 ヴィドヨ,インコーポレーテッド スケーラブルなビデオ符号化を用いたスケーラブルで低遅延のテレビ会議用システムおよび方法
MX2008004760A (es) * 2005-10-12 2008-11-13 Thomson Licensing Region de codificacion de video h .264 escalable de interes.
WO2007081189A1 (en) * 2006-01-16 2007-07-19 Electronics And Telecommunications Research Institute Method and apparatus for selective inter-layer prediction on macroblock basis
JP2007235314A (ja) * 2006-02-28 2007-09-13 Sanyo Electric Co Ltd 符号化方法
EP2060122A4 (en) * 2006-09-07 2016-04-27 Lg Electronics Inc METHOD AND DEVICE FOR CODING AND DECODING A VIDEO SIGNAL
KR101365596B1 (ko) * 2007-09-14 2014-03-12 삼성전자주식회사 영상 부호화장치 및 방법과 그 영상 복호화장치 및 방법
US8897359B2 (en) * 2008-06-03 2014-11-25 Microsoft Corporation Adaptive quantization for enhancement layer video coding
JP2010035137A (ja) * 2008-07-01 2010-02-12 Sony Corp 画像処理装置および方法、並びにプログラム
CN102224734B (zh) * 2008-10-02 2013-11-13 索尼公司 图像处理设备和方法
CN101981936B (zh) * 2009-04-28 2013-03-06 松下电器产业株式会社 图像解码方法及图像解码装置
JP5604825B2 (ja) * 2009-08-19 2014-10-15 ソニー株式会社 画像処理装置および方法
JP2011050001A (ja) * 2009-08-28 2011-03-10 Sony Corp 画像処理装置および方法
JP2011223303A (ja) * 2010-04-09 2011-11-04 Sony Corp 画像符号化装置と画像符号化方法および画像復号化装置と画像復号化方法
US20120075436A1 (en) * 2010-09-24 2012-03-29 Qualcomm Incorporated Coding stereo video data
JP2012160991A (ja) * 2011-02-02 2012-08-23 Sony Corp 画像処理装置および方法、並びに、プログラム
WO2012173440A2 (ko) * 2011-06-15 2012-12-20 한국전자통신연구원 스케일러블 비디오 코딩 및 디코딩 방법과 이를 이용한 장치
CN103385004B (zh) * 2011-06-30 2016-12-28 三菱电机株式会社 图像编码装置、图像解码装置、图像编码方法以及图像解码方法
US20140092985A1 (en) * 2012-09-28 2014-04-03 Sharp Laboratories Of America, Inc. Content initialization for enhancement layer coding
US10021414B2 (en) * 2013-01-04 2018-07-10 Qualcomm Incorporated Bitstream constraints and motion vector restriction for inter-view or inter-layer reference pictures

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173013B1 (en) * 1996-11-08 2001-01-09 Sony Corporation Method and apparatus for encoding enhancement and base layer image signals using a predicted image signal
US20070133675A1 (en) * 2003-11-04 2007-06-14 Matsushita Electric Industrial Co., Ltd. Video transmitting apparatus and video receiving apparatus
US20090168872A1 (en) * 2005-01-21 2009-07-02 Lg Electronics Inc. Method and Apparatus for Encoding/Decoding Video Signal Using Block Prediction Information
US20070086516A1 (en) * 2005-10-19 2007-04-19 Samsung Electronics Co., Ltd. Method of encoding flags in layer using inter-layer correlation, method and apparatus for decoding coded flags
US20090147848A1 (en) * 2006-01-09 2009-06-11 Lg Electronics Inc. Inter-Layer Prediction Method for Video Signal
US20090252220A1 (en) * 2006-01-16 2009-10-08 Hae-Chul Choi Method and apparatus for selective inter-layer prediction on macroblock basis
US20090310680A1 (en) * 2006-11-09 2009-12-17 Lg Electronic Inc. Method and Apparatus for Decoding/Encoding a Video Signal
US20090010331A1 (en) * 2006-11-17 2009-01-08 Byeong Moon Jeon Method and Apparatus for Decoding/Encoding a Video Signal
US20090016621A1 (en) * 2007-07-13 2009-01-15 Fujitsu Limited Moving-picture coding device and moving-picture coding method
US20100172412A1 (en) * 2007-09-18 2010-07-08 Fujitsu Limited Video encoder and video decoder
US20150103896A1 (en) * 2012-03-29 2015-04-16 Lg Electronics Inc. Inter-layer prediction method and encoding device and decoding device using same
US20140003504A1 (en) * 2012-07-02 2014-01-02 Nokia Corporation Apparatus, a Method and a Computer Program for Video Coding and Decoding
US20150237376A1 (en) * 2012-09-28 2015-08-20 Samsung Electronics Co., Ltd. Method for sao compensation for encoding inter-layer prediction error and apparatus therefor
US20150304667A1 (en) * 2013-01-04 2015-10-22 GE Video Compression, LLC. Efficient scalable coding concept
US20140254669A1 (en) * 2013-03-05 2014-09-11 Qualcomm Incorporated Parallel processing for video coding
US9473779B2 (en) * 2013-03-05 2016-10-18 Qualcomm Incorporated Parallel processing for video coding

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10869046B2 (en) * 2013-07-12 2020-12-15 Canon Kabushiki Kaisha Image encoding apparatus, image encoding method, recording medium and program, image decoding apparatus, image decoding method, and recording medium and program
US10187648B2 (en) * 2014-06-30 2019-01-22 Sony Corporation Information processing device and method
US10623754B2 (en) 2014-06-30 2020-04-14 Sony Corporation Information processing device and method
US20180007355A1 (en) * 2014-12-31 2018-01-04 Thomson Licensing High frame rate-low frame rate transmission technique
US20190058927A1 (en) * 2017-08-15 2019-02-21 Chiun Mai Communication Systems, Inc. Electronic device and method for sharing streaming video
CN109413437A (zh) * 2017-08-15 2019-03-01 深圳富泰宏精密工业有限公司 电子设备及传送视频流的方法
US10791382B2 (en) * 2017-08-15 2020-09-29 Chiun Mai Communication Systems, Inc. Electronic device and method for sharing streaming video
US20220078462A1 (en) * 2019-05-21 2022-03-10 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11936893B2 (en) * 2019-05-21 2024-03-19 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
CN113663328A (zh) * 2021-08-25 2021-11-19 腾讯科技(深圳)有限公司 画面录制方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
BR112015023318A2 (pt) 2017-07-18
KR20210059001A (ko) 2021-05-24
CN105230017A (zh) 2016-01-06
DK2978220T3 (da) 2019-09-30
EP2978220A4 (en) 2016-11-02
US20210243439A1 (en) 2021-08-05
JP2020017998A (ja) 2020-01-30
EP2978220B1 (en) 2019-08-21
CN110234007A (zh) 2019-09-13
RU2015139086A (ru) 2017-03-20
RU2018128647A (ru) 2018-10-05
CN110177273A (zh) 2019-08-27
JP6331103B2 (ja) 2018-05-30
WO2014148310A1 (ja) 2014-09-25
CN110234007B (zh) 2023-05-23
JP6607414B2 (ja) 2019-11-20
KR102255012B1 (ko) 2021-05-24
JP2018137809A (ja) 2018-08-30
CN105230017B (zh) 2019-08-06
US20230308646A1 (en) 2023-09-28
RU2018128647A3 (ru) 2021-11-16
CN105915905B (zh) 2019-07-16
EP3550839B1 (en) 2020-09-09
CN110177273B (zh) 2023-06-02
KR20150132140A (ko) 2015-11-25
EP3550839A1 (en) 2019-10-09
EP2978220A1 (en) 2016-01-27
RU2665284C2 (ru) 2018-08-28
CN105915905A (zh) 2016-08-31
US20220394253A1 (en) 2022-12-08
HUE045215T2 (hu) 2019-12-30
KR102309086B1 (ko) 2021-10-06
JPWO2014148310A1 (ja) 2017-02-16

Similar Documents

Publication Publication Date Title
US11706448B2 (en) Image processing apparatus and image processing method
US20230308646A1 (en) Image encoding device and method and image decoding device and method
US10075719B2 (en) Image coding apparatus and method
AU2017251760B2 (en) Image processing device and method
US10616598B2 (en) Image processing device and method
US10834426B2 (en) Image processing device and method
US20150222913A1 (en) Decoding device, decoding method, coding device, and coding method
US20160286218A1 (en) Image encoding device and method, and image decoding device and method
WO2014203505A1 (en) Image decoding apparatus, image encoding apparatus, and image processing system

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SATO, KAZUSHI;REEL/FRAME:036574/0247

Effective date: 20150809

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION