GB2561491A - Image data encapsulation with tile support
- Publication number
- GB2561491A (application GB1810396.0A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- information
- sub
- image
- images
- item
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N 21/8153 — Monomedia components comprising still images, e.g. texture, background image
- H04N 19/46 — Embedding additional information in the video signal during the compression process
- H04N 21/23439 — Reformatting operations of video signals for generating different versions
- H04N 21/8451 — Structuring of content using Advanced Video Coding [AVC]
- H04N 21/8455 — Structuring of content involving pointers to the content, e.g. to the I-frames of the video stream
- H04N 21/85406 — Content authoring involving a specific file format, e.g. MP4 format
- H04N 19/117 — Filters, e.g. for pre-processing or post-processing
- H04N 19/167 — Position within a video image, e.g. region of interest [ROI]
- H04N 19/172 — Adaptive coding where the coding unit is a picture, frame or field
- H04N 19/174 — Adaptive coding where the coding unit is a slice, e.g. a line of blocks or a group of blocks
- H04N 19/70 — Syntax aspects related to video coding, e.g. related to compression standards
- H04N 19/82 — Filtering within a prediction loop, for video compression
Abstract
A method for processing a media file based on an encoded bit-stream representing one or more still images comprises obtaining a media file containing: identification information for each of one or more sub-images of the one or more still images, the identification information identifying a different portion of the encoded bit-stream for each of the one or more sub-images; sub-image description information comprising display parameters, which may not be spatial parameters, for each of the one or more sub-images; and reference information for associating said identification information with said sub-image description information. The method further comprises processing, for displaying at least one of the one or more still images, said encoded bit-stream according to said obtained information in the media file, wherein the display parameters are used after decoding of the sub-image.
Description
(56) Documents Cited:
    WO 2014/006863 A1    US 2009/0226088 A1
    JP 2009-253446 A     US 2009/0103817 A1
(71) Applicant(s): Canon Kabushiki Kaisha, 30-2 Shimomaruko 3-Chome, Ohta-ku, 146-8501 Tokyo, Japan
(58) Field of Search: INT CL G06T, H04N; Other: Online: WPI, EPODOC
(72) Inventor(s): Franck Denoual, Frederic Maze, Cyril Concolato, Jean Le Feuvre
(74) Agent and/or Address for Service: Santarelli, 49, Avenue des Champs-Elysees, Paris 75008, France (including Overseas Departments and Territories)
(54) Title of the Invention: Image data encapsulation with tile support
(57) Abstract Title: Processing a media file — abstract as reproduced above
[Representative drawing: Fig. 8. Drawings on sheets 1/9 to 9/9: Fig. 1 (prior art) and Figs. 2 to 9.]
TITLE OF THE INVENTION
Image data encapsulation with tile support
FIELD OF THE INVENTION
The present invention relates to the storage of image data, such as still images, bursts of still images or video data in a media container with descriptive metadata. Such metadata generally provides easy access to the image data and portions of the image data.
BACKGROUND OF THE INVENTION
Some of the approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, the approaches described in this section are not necessarily prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
The HEVC standard defines a profile for the encoding of still images and describes specific tools for compressing single still images or bursts of still images. An extension of the ISO Base Media File Format (ISOBMFF) used for such kind of image data has been proposed for inclusion into the ISO/IEC 23008 standard, in Part 12, under the name “Image File Format”. The standard covers two forms of storage corresponding to different use cases:
- the storage of image sequences, with timing that is optionally used at the decoder, and in which the images may be dependent on other images, and
- the storage of single images, and collections of independently coded images.
In the first case, the encapsulation is close to the encapsulation of the video tracks in the ISO Base Media File Format (see document “Information technology — Coding of audio-visual objects — Part 12: ISO base media file format”, ISO/IEC 14496-12:2008, Third edition, October 2008), and the same tools and concepts are used, such as the ‘trak’ boxes and the sample grouping for description. The ‘trak’ box is a file format box that contains sub-boxes for describing a track, that is to say, a timed sequence of related samples.
In the second case, a set of ISOBMFF boxes, the ‘meta’ boxes, are used. These boxes and their hierarchy offer fewer description tools than the ‘track’ boxes and relate to “information items” or “items” instead of related samples.
The image file format can be used for locally displaying multimedia files or for streaming multimedia presentations. HEVC Still Images have many applications which raise many issues.
Image bursts are one application. Image bursts are sequences of still pictures captured by a camera and stored as a single representation (many picture items referencing a block of data). Users may want to perform several types of actions on these pictures: select one as a thumbnail or cover, apply effects to these pictures, or the like.
There is thus a need for descriptive metadata for identifying the list of pictures with their corresponding bytes in the block of data.
Computational photography is another application. In computational photography, users have access to different versions of the same picture (different exposures, different focuses, etc.). These different versions have to be described as metadata so that one can be selected and the corresponding piece of data located and extracted for processing (rendering, editing, transmitting or the like).
With the increase of picture resolution in terms of size, there is thus a need for providing enough description so that only some spatial parts of these large pictures can be easily identified and extracted.
Another kind of application is the access to specific pictures from a video sequence, for instance for video summarization, proof images in video surveillance data or the like.
For such kinds of applications, there is a need for image metadata enabling easy access to the key images, in addition to the compressed video data and the video track metadata.
In addition, professional cameras have reached high spatial resolutions. Videos or images with 4K2K resolution are now common. Even 8K4K videos or images are becoming common. In parallel, videos are more and more played on mobile and connected devices with video streaming capabilities. Thus, splitting the videos into tiles becomes important if the user of a mobile device wants to display or to focus on sub-parts of the video while keeping or even improving the quality. By using tiles, the user can therefore interactively request spatial sub-parts of the video.
There is thus a need for describing these spatial sub-parts of the video in a compact fashion in the file format so that they are accessible without additional processing other than simply parsing metadata boxes. For the images corresponding to the so-described videos, it is also of interest for the user to access spatial sub-parts.
The ISO/IEC 23008 standard covers two ways for encapsulating still images into the file format that have been recently discussed.
One way is based on ‘track’ boxes and the notion of timed sequence of related samples with associated description tools, and another is based on ‘meta’ boxes, based on information items instead of samples, providing fewer description tools, especially for region-of-interest description and tiling support.
There is thus a need for providing tiling support in the new Image File Format.
The use of tiles is commonly known in the prior art, especially at compression time. Concerning their indexation in the ISO Base Media File format, tiling descriptors exist in drafts for amendment of Part 15 of the ISO/IEC 14496 standard “Carriage of NAL unit structured video in the ISO Base Media File Format”.
However, these descriptors rely on ‘track’ boxes and sample grouping tools and cannot be used in the Still Image File Format when using the ‘meta’ based approach. Without such descriptors, it becomes complicated to select and extract tiles from a coded picture stored in this file format.
Figure 1 illustrates the description of a still image encoded with tiles in the ‘meta’ box (100) of ISO Base Media File Format, as disclosed in MPEG contribution m32254.
An information item is defined for the full picture 101 in addition to respective information items for each tile picture (102, 103, 104 and 105). The box (106), called ‘ItemReferenceBox’, from the ISO BMFF standard is used for indicating that a ‘tile’ relationship (107) exists between the information item of the full picture and the four information items corresponding to the tile pictures (108). Identifiers of each information item are used so that a box (109), called ‘ItemLocationBox’, provides the byte range(s) in the encoded data (110) that represent each information item. Another box “ItemReferenceBox” (112) is used for associating EXIF metadata (111) with the information item for the full picture (101) and a corresponding data block (111) is created in the media data box (110). Also, an additional information item (113) is created for identifying the EXIF metadata.
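For illustration (item identifiers are hypothetical, only the box numbering follows Figure 1), the resulting ‘meta’ structure can be summarized as:

meta (100)
    ItemInfoBox:             item_ID=1 (full picture 101), item_ID=2..5 (tile pictures 102-105), item_ID=6 (EXIF item 113)
    ItemReferenceBox (106):  type='tile', linking the tile picture items 2..5 to item 1   // relationship 107
    ItemReferenceBox (112):  associating the EXIF item 6 with the full picture item 1
    ItemLocationBox (109):   byte range(s) in the coded data for each item
mdat (110)
    HEVC coded data, EXIF data block (111)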
Even if the full picture and its tiles are introduced as information items, no tiling information is provided here. Moreover, when associating additional metadata with an information item (like EXIF), no data block referenced using an additional ‘ItemReferenceBox’ is created.
Reusing information on tiling from EXIF and reusing the mechanism defined in the Still Image File Format draft would not make it possible to describe a non-regular grid with existing EXIF tags.
Thus, there is still a need for improvements in the file format for still images, notably HEVC still images. In particular, there is a need for methods for extracting a region of interest in still images stored with this file format.
The invention lies within the above context.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method of processing a media file based on an encoded bit-stream representing one or more still images, the method comprising:
- obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the identification information identifying a different portion of the encoded bit-stream for each of the one or more sub-images;
sub-image description information comprising display parameters for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing, for displaying at least one of the one or more still images, said encoded bit-stream according to said obtained information in the media file, wherein the display parameters are used after decoding of the sub-image.
A method according to the first aspect makes it possible to easily identify, select and extract tiles from, for example, ultra-high resolution images (4K2K, 8K4K...), by parsing syntax elements and without complex computation.
The description tools of the metadata boxes of the ISO Base Media File Format can be extended. In particular, this makes it possible to associate tile descriptions with information items.
Parts of the ‘meta’ boxes hierarchy can be extended so as to provide additional description tools and especially to support tile-based access within still images.
A method according to the first aspect makes it possible to easily extract, from an encoded HEVC Still Picture, a region of interest based on HEVC tiles.
Embodiments of the invention provide tile description support and tile access for still images encoded according to the HEVC standard.
This makes it possible to preserve, for still images, the region of interest feature available for video tracks. In general, parts of a still picture corresponding to a user-defined region of interest can be identified and easily extracted for rendering or transmission to media players.
For example, the media file may contain information identifying a timed portion of said bit-stream corresponding to a video sequence.
For example, the sub-image description information may be embedded in the bit-stream.
For example, the reference information may include a reference type and additional descriptive metadata including said sub-image description information.
For example, the media file may further contain a metadata item for referencing said sub-image description information in the encoded bit-stream.
For example, items of identification information may be grouped and the reference information may be provided for associating a group of items of identification information with said sub-image description information.
For example, all references linking metadata items to another item may be included in a single reference box in the media file.
For example, all the relationships from one item, of any type, may be stored in a single item information descriptor.
For example, said media file may be obtained from a server module for adaptive streaming.
According to a second aspect of the invention there is provided a method of processing a media file based on encoded image data corresponding to one or more still images, the method comprising:
- obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the one or more sub-images identified by the identification information corresponding to a different portion of the encoded image data;
sub-image description information comprising one or more display parameters, other than a spatial parameter, for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing, for displaying at least one of the one or more still images, said encoded image data according to said obtained information in the media file.
For example, said sub-image description information may include colour display information.
For example, said sub-image description information may include pixel aspect ratio information.
For example, said sub-image description information may include clean aperture information.
For example, said sub-image description information may further include spatial description information.
For example, said spatial description information may include one or more of display offset, display width and display height.
For example, said sub-image description information may include layer information.
For example, said media file may correspond to a standardized file format.
For example, the one or more display parameters may comprise clean aperture information. For example, the clean aperture information may be the ‘clap’ box as defined in ISOBMFF.
For example, the one or more display parameters may comprise colour information. For example, the colour information may be the ‘colr’ box as defined in ISOBMFF.
For example, the one or more display parameters may comprise pixel aspect ratio information. For example, the pixel aspect ratio information may be the ‘pasp’ box as defined in ISOBMFF.
According to a third aspect of the invention there is provided a device for processing a media file based on an encoded bit-stream representing one or more still images, the device comprising:
- obtaining means for obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the identification information identifying a different portion of the encoded bit-stream for each of the one or more sub-images;
sub-image description information comprising display parameters for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing means for processing, to display at least one of the one or more still images, said encoded bit-stream according to said obtained information in the media file, wherein the display parameters are used after decoding of the sub-image.
According to a fourth aspect of the invention, there is provided a device for processing a media file based on encoded image data corresponding to one or more still images, the device comprising:
- obtaining means for obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the one or more sub-images identified by the identification information corresponding to a different portion of the encoded image data;
sub-image description information comprising one or more display parameters, other than a spatial parameter, for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing means for processing, to display at least one of the one or more still images, said encoded image data according to said obtained information in the media file.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which, in addition to Figure 1:
- Figure 2 illustrates an example of a tiled video;
- Figure 3 illustrates various tile/slice configurations in HEVC;
- Figure 4 illustrates the tile encapsulation according to the ISO Base Media File format with ‘track’ boxes;
- Figure 5 illustrates the standard metadata for describing information items in ‘meta’ boxes of the ISOBMFF;
- Figure 6 illustrates an exemplary extension to the information item description;
- Figure 7 illustrates the referencing mechanisms between information items;
- Figure 8 illustrates a context of implementation of embodiments of the invention;
- Figure 9 is a schematic block diagram of a computing device for implementation of one or more embodiments of the invention.
DETAILED DESCRIPTION OF THE INVENTION
In what follows, embodiments of the invention are described.
In order to better understand the technical context, video tiling is explained with reference to Figure 2 which shows a video (200) having consecutive temporal frames. Each frame (201) is divided into 8 portions (here rectangular portions) referred to as “tiles” T1 to T8. The number and the shape of the tiles can be different. In what follows, it is considered that the tiling is the same whatever the index of the video frame.
The result of this tiling is 8 independent sub-videos (202). These sub-videos represent a partition of the whole global video. Each independent sub-video can be encoded as an independent bitstream, according to the AVC or HEVC standards for example. The sub-video can also be part of one single video bitstream, like for example tiles of the HEVC standard or slices of the AVC standard.
The HEVC standard defines different spatial subdivisions of pictures: tiles, slices and slice segments. These different subdivisions (or partitions) have been introduced for different purposes: the slices are related to streaming issues while the tiles and the slice segments have been defined for parallel processing.
A tile defines a rectangular region of a picture that contains an integer number of Coding Tree Units (CTU). Figure 3 shows the tiling of an image (300) defined by row and column boundaries (301, 302). This makes the tiles good candidates for regions of interest description in terms of position and size. However, the HEVC standard bitstream organization in terms of syntax and its encapsulation into Network Abstract Layer (NAL) units is rather based on slices (as in AVC standard).
According to the HEVC standard, a slice is a set of slice segments, with at least the first slice segment being an independent slice segment, the others, if any, being dependent slice segments. A slice segment contains an integer number of consecutive CTUs (in the raster scan order). It does not necessarily have a rectangular shape (and is thus less appropriate than tiles for region of interest representation). A slice segment is encoded in the HEVC bitstream as a header called “slice_segment_header” followed by data called “slice_segment_data”. Independent slice segments and dependent slice segments differ by their header: dependent slice segments have a shorter header because they reuse information from the independent slice segment’s header. Both independent and dependent slice segments contain a list of entry points in the bitstream: either to tiles or to entropy decoding synchronization points.
Figure 3 shows different configurations of images 310 and 320 of slice, slice segments and tiles. These configurations differ from the configuration of image 300 in which one tile has one slice (containing only one independent slice segment). Image 310 is partitioned into two vertical tiles (311, 312) and one slice (with 5 slice segments). Image 320 is split into two tiles (321, 322), the left tile 321 having two slices (each with two slice segments), the right tile 322 having one slice (with two slice segments). The HEVC standard defines organization rules between tiles and slice segments that can be summarized as follows (one or both conditions have to be met):
- All CTUs in a slice segment belong to the same tile, and
- All CTUs in a tile belong to the same slice segment
In order to have matching region of interest support and transport, the configuration 300, wherein one tile contains one slice with one independent segment, is preferred. However, the encapsulation solution would work with the other configurations 310 or 320.
While the tile is the appropriate support for regions of interest, the slice segment is the entity that will be actually put into NAL units for transport on the network and aggregated to form an access unit (coded picture or sample at file format level). According to the HEVC standard, the type of NAL unit is specified in a NAL unit header. For NAL units of type “coded slice segment”, the slice_segment_header indicates via the “slice_segment_address” syntax element the address of the first coding tree block in the slice segment. The tiling information is provided in a PPS (Picture Parameter Set) NAL unit. The relation between a slice segment and a tile can then be deduced from these parameters.
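By way of illustration, that deduction can be sketched as follows (a simplified sketch, not HEVC syntax: the function name and the CTB-unit arrays are hypothetical, assuming the PPS tile column widths and row heights have already been decoded; uniform-spacing handling is omitted):

/* Locate the tile containing a slice segment, from its first-CTB address
   (raster scan order) and the tiling information carried in the PPS. */
int tile_index(int slice_segment_address,
               int pic_width_in_ctbs,
               int num_tile_columns, const int *col_width_in_ctbs,
               int num_tile_rows,    const int *row_height_in_ctbs)
{
    int ctb_x = slice_segment_address % pic_width_in_ctbs;
    int ctb_y = slice_segment_address / pic_width_in_ctbs;
    int col = 0, x_edge = col_width_in_ctbs[0];
    int row = 0, y_edge = row_height_in_ctbs[0];
    /* walk the cumulative column/row boundaries derived from the PPS */
    while (col + 1 < num_tile_columns && ctb_x >= x_edge)
        x_edge += col_width_in_ctbs[++col];
    while (row + 1 < num_tile_rows && ctb_y >= y_edge)
        y_edge += row_height_in_ctbs[++row];
    return row * num_tile_columns + col;  /* tile index in raster order */
}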
By definition, on tile borders, the spatial predictions are reset. However, nothing prevents a tile from using temporal predictors from a different tile in the reference frame(s). In order to build independent tiles, at encoding time, the motion vectors for the prediction units inside a tile are constrained to remain in the co-located tile in the reference frame(s). In addition, the in-loop filters (deblocking and SAO) have to be deactivated on the tile borders so that no error drift is introduced when decoding only one tile. This control of the in-loop filters is already available in the HEVC standard and is set in slice segment headers with the flag called “loop_filter_across_tiles_enabled_flag”. By explicitly setting this flag to 0, the pixels at the tile borders do not depend on the pixels that fall on the border of the neighboring tiles. When the two conditions on motion vectors and on in-loop filters are met, the tiles are said to be “independently decodable” or “independent”.
When a video sequence is encoded as a set of independent tiles, it may be decoded using a tile-based decoding from one frame to another without risking missing reference data or propagation of reconstruction errors. This configuration makes it possible to reconstruct only a spatial part of the original video that corresponds, for example, to a region of interest.
In what follows, independent tiles are considered.
With reference to Figure 4, encapsulation of tiles into the ISOBMFF file format is described. One way to encapsulate tiled video according to the ISOBMFF standard is to split each tile into a dedicated track, to encapsulate the setup and initialization information common to all tiles in a specific track, called for example the “tile base track”, and to encapsulate the full video as a composition of all these tracks: the tile base track plus the set of tile tracks. The encapsulation is thus referred to as “multi-track tile encapsulation”. An example of multi-track tile encapsulation is provided in Figure 4.
Box 401 represents the main ISOBMFF box ‘moov’ and contains the full list of tracks with their identifiers. For example, boxes 411 to 414 represent tile tracks (four tiles in the present example) and box 420 represents the tile base track. Additional tracks such as audio or text tracks may be used and encapsulated in the same file. However, for the sake of conciseness such additional tracks are not discussed here.
As represented in Figure 4, the tile data is split into independent and addressable tracks so that any combination of tile track(s) can easily be reconstructed from the tile base track referencing the tile tracks for decoding and display. The tile base track may also be referred to as the “composite track” or “reference track” since it is designed to allow combination of any tiles: one, many or all tiles. The tile base track 420 contains information common to all the tile tracks and a list of samples 450 (only the first one is represented in Figure 4) in a “mdat” box. Each sample 450 of the tile base track 420 is built by reference to each tile track through the use of extractors (451 to 454, each one representing one extractor to each tile). Each tile track 411 to 414 represents a spatial part of the whole, or full-frame, video. The tile description (position, size, bandwidth etc.) is stored in the track header boxes (not represented) of each tile track 411 to 414. The tile base track and each tile track are cross-referenced (405) using a box “TrackReferenceBox” in each track. Each tile track 411 to 414 refers to the tile base track 420 as the ‘tbas’ track (‘tbas’ is a specific code indicating a coding dependency from each tile track to the tile base track, in particular where to find the parameter “HEVCDecoderConfigurationRecord” that makes it possible to set up the video decoder that will process the elementary stream resulting from the file format parsing). Conversely, in order to enable full-video reconstruction, the tile base track 420 indicates a dependency of type ‘scal’ to each tile track (405). This is to indicate the coding dependency and to reflect the sample 450 definition of the tile base track as extractors to the tile tracks data. These extractors are specific extractors that, at parsing time, can support the absence of data. In Figure 4, in order to provide a streamable version of the file, each track is decomposed into media segments (431 to 434 for the tile tracks and 460 for the tile base track). Each media segment comprises one or more movie fragments, indicated by the ‘moof’ box, plus data. For tile tracks, the data part corresponds to a spatial sub-part of the video while for the tile base track, it contains the parameter sets, SEI messages when present and the list of extractors. The “moov” box 401 in the case of a streaming application would fit in an initialization segment. Figure 4 illustrates only one segment but the tracks can be decomposed into any number of segments, the constraint being that segments for tile tracks and for the tile base track follow the same temporal decomposition (i.e. they are temporally aligned), in order to make switching possible from full-video to a tile or a set of tiles. The granularity of this temporal decomposition is not described here, for the sake of conciseness.
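As an illustration (the track identifiers are hypothetical), the cross-referencing just described could be summarized as:

moov (401)
    trak (track_ID=5, tile base track 420)
        tref: reference_type='scal' -> track_IDs 1,2,3,4   // samples are extractors to the tile data
    trak (track_ID=1..4, tile tracks 411 to 414)
        tref: reference_type='tbas' -> track_ID 5          // where to find the HEVCDecoderConfigurationRecord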
The file format has descriptive metadata (such as “VisualSampleGroupEntries” for instance, or track reference types in ‘tref’ boxes) that describe the relationships between the tracks so that the data corresponding to one tile, a combination of tiles or all the tiles can easily be identified by parsing descriptive metadata.
In what follows, still images are described at the same level. Thus, upon user selection of any tiles, combination of tiles or all tiles of a picture, identification and extraction is facilitated. In case the pictures are mixed with video data, the description comes in parallel to the descriptive metadata for the video. Thus, for the same data set, an additional indexation layer is provided for the pictures (in addition to the indexation layers for the video and for the audio).
In still image file formats using ‘meta’ boxes, the pictures with the related information are described as information items. As illustrated in Figure 5, the information items are listed in a dedicated sub-box “ItemInfoBox” 500 of the ‘meta’ box. This sub-box provides the number of information items present in the file. The sub-box also provides, for each item, descriptive metadata represented as “ItemInfoEntry” 501. Several versions 502 (0, 1, 2) of this box exist according to the ISO BMFF standard evolution.
“Meta” items may not be stored contiguously in a file. Also, there is no particular restriction concerning the interleaving of the item data. Thus, two items in a same file may share one or several blocks of data. This is particularly useful for HEVC tiles (tiles can be stored contiguously or not), since it can make it straightforward to have one item per independently decodable tile. Such an item indicates the data offset in the main HEVC picture and the length of the slice(s) used for the tile through an ItemLocationBox.
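For instance (item identifiers, offsets and lengths are hypothetical), an “ItemLocationBox” could map one item per independently decodable tile onto byte ranges of the shared HEVC data:

ItemLocationBox ('iloc'):
    item_ID=1 (full picture): offset=0,     length=120000
    item_ID=2 (tile 1):       offset=1024,  length=29000   // slice(s) of tile 1
    item_ID=3 (tile 2):       offset=30024, length=30500   // slice(s) of tile 2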
According to embodiments, a new item type for describing a tile picture may be added, named for example “hvct” or ‘tile’, or reused from ISO/IEC 14496-15: ‘hvt1’. Each item representing the tile picture (whatever the four-character code chosen) may have a reference of type ‘tbas’ to the ‘hvc1’ item from which it is extracted. Each item has an identifier “item_ID” 503 and is further described in a box “ItemLocationBox” in terms of byte position and size in the media data box containing the compressed data for the pictures.
Such syntax makes it possible for a file format reader (or “parser”), to determine, via the list of information items, how many information items are available with information concerning their type 504, for example ‘tile’ to indicate an information item is a tile picture of a full picture.
Thus, it is made possible to select a subset of information items in the file, a combination thereof, or the full set of information items in order to download only one tile of the image and the associated decoder configuration, while skipping the other tiles.
For cases where an HEVC tile depends on another HEVC tile for decoding, the dependency shall be indicated by an item reference of type ‘dpnd’ (or any specific four character code that indicates coding dependencies) as described in document w14123, WD of ISO/IEC 14496-15:2013 AMD 1, “Enhanced carriage of HEVC and support of MVC with depth information”, MPEG 107 San Jose January 2014.
This document defines tools for associating HEVC tile NALUs with sample group descriptions indicating the spatial position of the tile (using the “TileRegionGroupEntry” descriptor). However, there is no direct equivalent of sample grouping for metadata information items which could allow reuse of these descriptors.
Therefore, according to embodiments, a tile description item is defined per tile and the tile is linked to its description using a modified version of the “ItemReferenceBox” box as explained below.
According to other embodiments, only one tiling description is provided, preferably in a generic way. Thus, the item list does not get too long.
The design may be as follows:
- allow some items to describe a set of metadata, similar to sample groups but specific to each item type,
- for any item, add the ability to describe one parameter for a given type of item reference. The parameter would then be interpreted depending on the type of the referred item (similar to grouping type).
An upgrade of the descriptive metadata for an information item may be needed as explained in what follows with reference to Figure 6.
According to the ISOBMFF standard, the sample grouping mechanism is based on two main boxes having a “grouping_type” parameter as follows:
- the box “SampleGroupDescriptionBox” (‘sgpd’) defines a list of properties (a list of “SampleGroupEntry”),
- the box “SampleToGroupBox” (‘sbgp’) defines a list of sample groups with their mapping to a property.
The “grouping_type” parameter links a list of sample groups to a list of properties, the mapping of a sample group to one property in the list being specified in the box “SampleToGroupBox”.
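As a sketch of that mechanism for video samples (values are illustrative; ‘trif’ is assumed here as the grouping type of the “TileRegionGroupEntry” descriptor mentioned above):

SampleGroupDescriptionBox ('sgpd'), grouping_type='trif':
    entry 1: TileRegionGroupEntry { horizontal_offset=0,   vertical_offset=0, ... }
    entry 2: TileRegionGroupEntry { horizontal_offset=960, vertical_offset=0, ... }
SampleToGroupBox ('sbgp'), grouping_type='trif':
    sample_count=N, group_description_index=1   // these N samples map to property entry 1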
In order to provide the same functionality for the information items, a list of information items groups and a list of properties have to be described. Also, it should be made possible to map each group of information items to a property.
In what follows, it is described how such descriptive metadata can be embedded in the Still Image File Format; in other words, how to link a descriptor to an image item. Even if the use cases are described for the HEVC Still Image File Format, the following features may be used in other standards such as ISO/IEC 14496-12 for associating any kind of information item with additional descriptive metadata.
According to embodiments, the existing “ItemInformationEntry” box 601 with parameter ‘infe’ is extended with a new version number (602 and 603) in order to link each item to a property via a new parameter called “iref_type” 604, as shown in Figure 6. This makes it possible to avoid the creation of new boxes and improves the description while keeping it short.
The original definition of the “ItemInformationEntry” box is given by:

if (version == 2) {
    unsigned int(16) item_ID;
    unsigned int(16) item_protection_index;
    unsigned int(32) item_type;
    string           item_name;
    if (item_type == 'mime') {
        string content_type;
        string content_encoding; // optional
    } else if (item_type == 'uri ') {
        string item_uri_type;
    }
}
A new version linking a tile picture to its description may be as follows:

if ((version == 2) || (version == 3)) {
    unsigned int(16) item_ID;
    unsigned int(16) item_protection_index;
    unsigned int(32) item_type;
    string           item_name;
    if (version == 2) {
        if (item_type == 'mime') {
            string content_type;
            string content_encoding; // optional
        } else if (item_type == 'uri ') {
            string item_uri_type;
        }
    }
    if (version == 3) {
        unsigned int(32) item_iref_parameter_count;
        for (i = 0; i < item_iref_parameter_count; i++) {
            unsigned int(32) iref_type;
            unsigned int(32) iref_parameter;
        }
    }
}
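For illustration (identifier values are hypothetical), a version-3 entry linking a tile picture item to a tiling descriptor through a ‘tile’ reference could then read:

ItemInfoEntry (version=3):
    item_ID                   = 4
    item_type                 = 'hvt1'   // tile picture item
    item_iref_parameter_count = 1
    iref_type                 = 'tile'   // interpreted according to the referred item
    iref_parameter            = 0        // e.g. cell index in a tiling grid (see below)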
According to other embodiments, closer to the box “SampleToGroupBox”, the definition of the box “ItemInformationBox” with four-character code ‘iinf’ is changed as follows, for example by introducing a new version of this box.

The current version:

aligned(8) class ItemInfoBox extends FullBox('iinf', version = 0, 0) {
    unsigned int(16) entry_count;
    ItemInfoEntry[ entry_count ] item_infos;
}

is changed into:

aligned(8) class ItemInfoBox extends FullBox('iinf', version = 1, 0) {
    unsigned int(16) group_entry_count;
    for (int g = 0; g < group_entry_count; g++) {
        unsigned int(16) item_run;
        unsigned int(16) grouping_type;
        unsigned int(16) property_index;
        unsigned int(16) entry_count;
        ItemInfoEntry[ entry_count ] item_infos;
    }
    unsigned int(16) remaining_entry_count;
    ItemInfoEntry[ remaining_entry_count ] item_infos;
}
Alternatively, in order to signal whether grouping is in use or not, the current version is changed into:

aligned(8) class ItemInfoBox extends FullBox('iinf', version = 1, 0) {
    unsigned int(1) group_is_used;
    if (group_is_used == 0) { // standard iinf box but with 1 additional byte overhead
        unsigned int(7)  reserved; // for byte alignment
        unsigned int(16) entry_count;
        ItemInfoEntry[ entry_count ] item_infos;
    } else {
        unsigned int(15) group_entry_count;
        for (int g = 0; g < group_entry_count; g++) {
            unsigned int(16) item_run;
            unsigned int(16) grouping_type;
            unsigned int(16) property_index;
            unsigned int(16) entry_count;
            ItemInfoEntry[ entry_count ] item_infos;
        }
        unsigned int(16) remaining_entry_count;
        ItemInfoEntry[ remaining_entry_count ] item_infos;
    }
}
The “group_entry_count” parameter defines the number of information item groups in the media file. For each group of information items, a number of information items is indicated, starting from item_ID=0. Since information items have no time constraints and relationships, contrary to the samples, the encapsulation module can assign the information item identifiers in any order. By assigning increasing identifier numbers following the item groups, the list of information groups can be more efficiently represented using a parameter item_run identifying the runs of consecutive information item identifiers in a group.
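As a sketch (values hypothetical, and assuming item_run gives the first identifier of a run of consecutive items):

ItemInfoBox (version=1):
    group_entry_count = 1
        item_run       = 0       // run of consecutive item identifiers starting at 0
        grouping_type  = 'tile'
        property_index = 1       // maps this group to one property
        entry_count    = 4       // items 0..3 are the tile pictures
    remaining_entry_count = 2    // non-grouped items: full picture, EXIF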
The related information items have an index called for example “property_index”. This “property_index” parameter associated with the “grouping_type” parameter enables a file format parser (or “reader”) to identify either a reference to descriptive metadata or the descriptive metadata itself. Figure 7 illustrates two exemplary embodiments.
The group feature in box “SingleItemTypeReferenceBox” 701 may be used with a group identification “group_ID” instead of the information item identification (item_ID) that is usually used for the value of the from_item_ID parameter. By design, the box “SingleItemTypeReferenceBox” makes it easier to find all the references of a specific kind or from a specific item. Using it with a “group_ID” instead of “item_ID” makes it possible, for a group of items, to easily identify all the references of a specific type. Advantageously, since there is at most one box “ItemInformationBox” per encapsulated file, there is no need to define group identifications. The encapsulation module (during encoding) and the parsing module (during decoding) can run a respective counter (as the “g” variable in the box “ItemInformationBox”) on the list of information item groups as they are created or read. Alternatively, the parser may be informed, using the flag “group_used_flag”, whether to maintain the group identification counter or not.
Back to the example with one group of information items corresponding to the tile pictures, one group may contain four entries and the reference 700 “SingleItemTypeReference” may indicate, for a particular reference type 703, the list of information items 704 on which the four tile picture information items depend.
According to other exemplary embodiments, the information item is used in a new kind of box “ItemReferenceBox”, as described hereinafter, that makes it possible, from one item 722, to list multiple reference types 723 to various other information items 724.
For the latter case, the specific box “ItemReferenceBox” 721 may be implemented as follows:
aligned(8) class MultipleItemTypeReferenceBox(void) extends Box(void) {
    unsigned int(16) from_item_ID;
    unsigned int(16) reference_count;
    for (j = 0; j < reference_count; j++) {
        unsigned int(32) reference_type; // new parameter to allow multiple types
        unsigned int(16) to_item_ID;
    }
}
As for the standard box “ItemInformationBox”, the list of item entries is described, but this time with a different order depending on the grouping. In the tile example, this may lead to a first group of four information items corresponding to the tile pictures, gathered in a group with a parameter that may be named ‘tile’, followed by non-grouped information items for the configuration information, for the full picture information item and optionally for the EXIF metadata.
Thus, one box is modified and one box is created that is a specific kind of ItemReferenceBox. In what follows, this new kind of ItemReferenceBox is described.
The box “ItemReferenceBox” may also be extended by distinguishing between the various kinds of ItemReferenceBox by using the flag parameters in the box “FullBox” which is part of the ItemReferenceBox as follows:
aligned(8) class ItemReferenceBox extends FullBox('iref', 0, flags) {
    switch (flags) {
        case 0:
            SingleItemTypeReferenceBox references[];
            break;
        case 1:
            MultipleItemTypeReferenceBox references[];
            break;
        case 2:
            SharedItemTypeReferenceBox references[];
            break;
    }
}
Using the box “MultipleItemTypeReferenceBox” 721, one picture with four tiles may be described as follows:

Item Reference Box (version=1 or flags=1):
    fromID=2, ref_count=1, type='cdsc', toID=1;
    fromID=1, ref_count=1, type='init', toID=3;
    fromID=4, ref_count=2, type='tbas', toID=1, type='tile', toID=8;
    fromID=5, ref_count=2, type='tbas', toID=1, type='tile', toID=8;
    fromID=6, ref_count=2, type='tbas', toID=1, type='tile', toID=8;
    fromID=7, ref_count=2, type='tbas', toID=1, type='tile', toID=8;
This design makes it considerably easier to find all the references of any kind from a specific item.
Description support 711 for a list of items 712 referencing a same item 714 with a given type 713 may be as follows:
aligned(8) class SharedItemTypeReferenceBox(referenceType) extends Box(referenceType) {
    unsigned int(16) reference_count;
    for (j = 0; j < reference_count; j++) {
        unsigned int(16) from_item_ID;
    }
    unsigned int(16) to_item_ID;
}
In the example of a picture with four tiles, then we may have:
type='cdsc', ref_count=1, fromID=2, toID=1;
type='init', ref_count=1, fromID=1, toID=3;
type='tbas', ref_count=4, fromID=4, fromID=5, fromID=6, fromID=7, toID=1;
type='tile', ref_count=4, fromID=4, fromID=5, fromID=6, fromID=7, toID=8;
The design of the box “SharedItemTypeReferenceBox” makes it easier to find all the references of a specific type pointing to a specific item. This is in contrast with the box “SingleItemTypeReferenceBox”. But since most of the “reference_type” values defined for track references are not bi-directional, the box “SingleItemTypeReferenceBox” may not be usable with some unidirectional reference types to signal all nodes having this reference type to other items. Alternatively, a flag may be provided in the “SingleItemTypeReference” for indicating whether it is a direct reference or a reverse reference, thereby alleviating the need for the new SharedItemTypeReferenceBox.
In view of the above, an information item can be associated with tiling information. A description of this tiling information has now to be provided.
For example, each tile may be described using a tile descriptor, such as the “iref_parameter” 605 of the extended “ItemInfoEntry” 601. A specific descriptor may be as follows:
aligned(8) class TileInfoDataBlock() {
    unsigned int(8)  version;
    unsigned int(32) reference_width;   // full image sizes
    unsigned int(32) reference_height;
    unsigned int(32) horizontal_offset; // tile positions
    unsigned int(32) vertical_offset;
    unsigned int(32) region_width;      // tile sizes
    unsigned int(32) region_height;
}
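For illustration (values hypothetical), the top-right tile of a 4096x2160 picture split into a 2x2 grid would be described as:

TileInfoDataBlock:
    version           = 0
    reference_width   = 4096   // full image sizes
    reference_height  = 2160
    horizontal_offset = 2048   // tile position: top-right quadrant
    vertical_offset   = 0
    region_width      = 2048   // tile sizes
    region_height     = 1080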
According to embodiments, a descriptor may be used for the grid of tiles to apply to the one or more pictures to be stored. Such a descriptor may be as follows:

aligned(8) class TileInfoDataItem() {
    unsigned int(8)  version;
    unsigned int(1)  regular_spacing;   // regular grid or not
    unsigned int(7)  reserved = 0;
    unsigned int(32) reference_width;   // full-frame sizes
    unsigned int(32) reference_height;
    unsigned int(32) nb_cell_horiz;
    unsigned int(32) nb_cell_vert;
    if (!regular_spacing) {
        for (i = 0; i < nb_cell_horiz; i++)
            unsigned int(16) cell_width;
        for (i = 0; i < nb_cell_vert; i++)
            unsigned int(16) cell_height;
    }
}
This descriptor “TileInfoDataItem” allows describing a tiling grid (regular or irregular). The grid is described row by row starting from the top-left (a worked example is given after the parameter list below).
The descriptor shall be stored as an item of type ‘tile’. When another item refers to this item, it shall use a reference of type ‘tile’ to this description and it shall have a parameter “iref_parameter” specified, whose value is the 0-based index of the cell in the grid defined by the descriptor, where 0 is the top-left item, 1 is the cell immediately to the right of cell 0, and so on.
In the descriptor:
- "version" indicates the version of the syntax for the TileInfoDataItem. Only value 0 is defined.
- “regular_spacing” indicates if all tiles in the grid have the same width and the same height.
- "reference_width, reference_height" indicate the units in which the grid is described. These units may or may not match the pixel resolution of the image which refers to this item. If the grid is regular, the "reference_width" (resp. "reference_height") shall be a multiple of "nb_cell_horiz" (resp. "nb_cell_vert").
- “cell_width” gives the horizontal division of the grid in non-regular tiles, starting from the left.
- “cell_height” gives the vertical division of the grid in non-regular tiles, starting from the top.
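As an illustration only (a minimal sketch; the dictionary layout and key names are assumptions mirroring the descriptor fields above), the position and size of the cell designated by an "iref_parameter" index could be recovered as follows:

# Hypothetical sketch: derive the geometry of grid cell 'index' (0 = top-left,
# numbered row by row) from a parsed TileInfoDataItem.

def cell_geometry(grid, index):
    row, col = divmod(index, grid["nb_cell_horiz"])
    if grid["regular_spacing"]:
        # Regular grid: reference sizes are multiples of the cell counts.
        w = grid["reference_width"] // grid["nb_cell_horiz"]
        h = grid["reference_height"] // grid["nb_cell_vert"]
        x, y = col * w, row * h
    else:
        # Irregular grid: explicit cell_width / cell_height lists.
        widths, heights = grid["cell_widths"], grid["cell_heights"]
        w, h = widths[col], heights[row]
        x, y = sum(widths[:col]), sum(heights[:row])
    return x, y, w, h  # expressed in reference_width/reference_height units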
The above approach makes it possible to share the tiling information for all tiles.
Moreover, in case there are multiple pictures sharing the same tiling, even more description may be shared by simply referencing a cell in the grid of tiles.
The tiling configuration can be put in the media data box or in a dedicated box shared (by reference) among the tile information items.
The above descriptors are pure spatial descriptors in the sense that they only provide spatial locations and sizes for sub-image(s) in a greater image. In some use cases, for example with image collections or image composition, a spatial location is not enough to describe the image, typically when images overlap. This is one limitation of the TileInfoDataBlock descriptor above. In order to allow image composition, whatever the image (a tile or an independent/complete image), it may be useful to define a descriptor that contains on the one hand the positions and sizes of the image (spatial relations) and on the other hand display information (colour, cropping...) for that picture.

For example, colour information can be provided to transform a sub-image from one colour space to another for display. This kind of information can be conveyed in the ColourInformationBox 'colr' of the ISOBMFF. It can be useful, for compactness, to have the same data prepared for different kinds of display just by providing the transformation parameters to apply rather than conveying two different so-transformed pictures. Likewise, the pixel aspect ratio, as in the PixelAspectRatioBox 'pasp' defined in ISOBMFF Part-12, can be put in this descriptor to redefine a width and height that can differ from the encoded width and height of each picture. This would indicate the scale ratio to apply by the display after the decoding of an image. We would then have the coded sizes stored in the video sample entries (the 'stsd' box for example) and the display sizes deduced from the 'pasp' box. Another possible piece of display information could be the clean aperture information box 'clap', also defined in ISOBMFF. According to standard SMPTE 274M, the clean aperture defines an area within which picture information is subjectively uncontaminated by all edge transient distortions (possible ringing effects at the borders of images after analog-to-digital conversion).

This list of parameters useful for display is not exhaustive, and any other descriptive metadata box could be put as an optional component in the sub-image descriptor. These ones can be explicitly mentioned because they are already part of the standard and they provide generic tools to indicate image cropping, sample aspect ratio modification and colour adjustments. Unfortunately their use was only possible for media tracks, not for image file formats relying on the 'meta' box. We then suggest a new descriptor, called for example "SimpleImageMetaData", to support the spatial description of image items, along with other properties such as clean aperture or sample aspect ratio. This applies to any sub-image (tile or independent image) intended to be composed in a bigger image or, conversely, extracted from a bigger image:
aligned(8) class SimpleImageMetaData {
    CleanApertureBox        clap;     // optional
    PixelAspectRatioBox     pasp;     // optional
    ColourInformationBox    colour;   // optional
    ImageSpatialRelationBox location; // optional
}
Or a variation of it when considering extension parameters to help the display process (through, for example, extra_boxes):
aligned(8) class SimpleImageMetaData {
    CleanApertureBox        clap;     // optional
    PixelAspectRatioBox     pasp;     // optional
    ColourInformationBox    colour;   // optional
    ImageSpatialRelationBox location; // optional
    extra_boxes             boxes;    // optional
}
Where the ImageSpatialRelationBox is an extension of the TileInfoDataBlock as described in the following. Another useful parameter to consider is the possibility to compose images as layers. We then suggest inserting a parameter to indicate the level associated with an image in this layered composition. This is typically useful when images overlap. It can be called 'layer', for example, to carry layer information. An example syntax for such a descriptor is provided:
Definition:
Box Type: ‘isre’
Container: Simple image meta-data item ('simd')
Mandatory: No
Quantity: Zero or one per item
Syntax:
aligned(8) class ImageSpatialRelationBox extends FullBox('isre', version = 0, 0) {
    unsigned int(32) horizontal_display_offset;
    unsigned int(32) vertical_display_offset;
    unsigned int(32) display_width;
    unsigned int(32) display_height;
    int(16) layer;
}

with the associated semantics:
horizontal_display_offset specifies the horizontal offset of the image.
vertical_display_offset specifies the vertical offset of the image.
display_width specifies the width of the image.
display_height specifies the height of the image.
layer specifies the front-to-back ordering of the image; images with lower numbers are closer to the viewer. 0 is the normal value, -1 would be in front of layer 0, and so on.
This new ‘isre’ box type gives the ability to describe the relative position of an image with other images in an image collection. It provides a subset of the functionalities of the transformation matrix usually found in the movie or track header box of a media file. Coordinates in the ImageSpatialRelationBox are expressed on a square grid giving the author’s intended display size of the collection; these units may or may not match the coded size of the image. The intended display size is defined by:
- Horizontally: the maximum value of (horizontal_display_offset + display_width) for all ‘isre’ boxes
- Vertically: the maximum value of (vertical_display_offset + display_height) for all ‘isre’ boxes
When some images do not have any 'isre' associated while other images in the file have 'isre' associated, the images without any 'isre' shall be treated as if their horizontal and vertical offsets are 0, their display size is the intended display size and their layer is 0.
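A minimal sketch (assuming 'isre' boxes parsed into dictionaries with the field names above; not normative) of how a reader could compute the intended display size and apply the default rule:

# Hypothetical sketch: intended display size of a collection from its 'isre'
# boxes, per the horizontal and vertical rules above.

def intended_display_size(isre_boxes):
    width = max(b["horizontal_display_offset"] + b["display_width"]
                for b in isre_boxes)
    height = max(b["vertical_display_offset"] + b["display_height"]
                 for b in isre_boxes)
    return width, height

def effective_isre(isre_box, intended_size):
    # An image without 'isre' defaults to offset (0, 0), the intended
    # display size and layer 0, as stated above.
    if isre_box is None:
        w, h = intended_size
        return {"horizontal_display_offset": 0, "vertical_display_offset": 0,
                "display_width": w, "display_height": h, "layer": 0}
    return isre_box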
The ImageSpatialRelationBox indicates the relative spatial position of images after any cropping or sample aspect ratio has been applied to the images. This means that when 'isre' is combined with 'pasp', etc. in a SimpleImageMetaData, the image is decoded, the 'pasp', 'clap' and 'colr' are applied if present, and then the image is moved and scaled to the offset and size declared in the 'isre' box.
This new descriptor can be used as the description of an image (tile or single image) by defining an association between the item information representing the image and the item information representing the descriptor (let's give the type 'simd' for SimpleImageMetaData Definition; any reserved four-character code would be acceptable for an mp4 parser to easily identify the kind of metadata it is currently processing). This association is done with an ItemReferenceBox and with a new reference type, 'simr', to indicate "spatial image relation". The example description below illustrates the case of a composition of 4 images where the composition itself has no associated item. Each image item is associated with a SimpleImageMetaData item through an item reference of type 'simr' and shares the DecoderConfigurationRecord information in a dedicated 'hvcC' item.
ftyp box: major-brand = 'hevc', compatible-brands = 'hevc'
meta box: (container)
  handler box: hdlr = 'hvc1'  // no primary item provided
Item Information Entries:
item type = 'hvc1', itemID=1, item protection index = 0
item type = 'hvc1', itemID=2, item protection index = 0
item type = 'hvc1', itemID=3, item protection index = 0
item type = 'hvc1', itemID=4, item protection index = 0
item type = 'simd', itemID=5 (sub-image descriptor)
item type = 'simd', itemID=6 (sub-image descriptor)
item type = 'simd', itemID=7 (sub-image descriptor)
item type = 'simd', itemID=8 (sub-image descriptor)
item type = 'hvcC', itemID=9, item protection index = 0
Item Reference:
type='simr', fromID=1, toID=5
type='simr', fromID=2, toID=6
type='simr', fromID=3, toID=7
type='simr', fromID=4, toID=8
type='init', fromID=1, toID=9;
type='init', fromID=2, toID=9;
type='init', fromID=3, toID=9;
type='init', fromID=4, toID=9;
Item Location:
itemID = 1, extent count = 1, extent offset = P1, extent length = L1;
itemID = 2, extent count = 1, extent offset = P2, extent length = L2;
itemID = 3, extent count = 1, extent offset = P3, extent length = L3;
itemID = 4, extent count = 1, extent offset = P4, extent length = L4;
itemID = 5, extent count = 1, extent offset = P5, extent length = L5;
itemID = 6, extent count = 1, extent offset = P6, extent length = L6;
itemID = 7, extent count = 1, extent offset = P7, extent length = L7;
itemID = 8, extent count = 1, extent offset = P8, extent length = L8;
itemID = 9, extent count = 1, extent offset = P0, extent length = L0;
Media data box:
HEVC Decoder Configuration Record ('hvcC' at offset P0)
4 HEVC Images (at file offsets P1, P2, P3, P4)
simple image metadata (at file offsets P5, P6, P7, P8)
The above organization of data is provided as an example: image and metadata could be interlaced in the media data box, for example to have an image plus its metadata addressable as a single byte range. When receiving this description, a parser is informed, by parsing the information in the 'simd' items, whether a sub-image is cropped from a full picture or, conversely, whether a full picture is a composition of sub-images. In case of crop, the full picture item and the cropped image would share the same data range, as in the example below, and the same decoder configuration information. The sub-image would then be associated with a 'simd' item having only 'clap' information and no positioning, hence no 'isre'.
In case of composition, the full picture item is associated with a 'simd' item that only contains 'isre' information, and the sub-image would be associated with a 'simd' item reflecting its position in the full image.
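The crop-versus-composition decision described above can be sketched as follows (a hypothetical illustration; the 'simd' item is assumed parsed into a dictionary keyed by the four-character codes of the boxes it contains):

# Hypothetical sketch: classify a 'simd' item per the two cases above.

def classify_simd(simd):
    if "isre" in simd:
        return "composition"  # positions an image inside a larger image
    if "clap" in simd:
        return "crop"         # sub-image cropped out of a full picture
    return "plain"            # only colour / aspect-ratio style metadata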
The example below illustrates the case where 4 images are composed into a larger one. All images, including the composed one, are exposed as playable items using the proposed descriptor.
ftyp box: major-brand = 'hevc', compatible-brands = 'hevc'
meta box: (container)
  handler box: hdlr = 'hvc1'
  primary item: itemID = 1;

Item Information Entries:
item type = 'hvc1', itemID=1, item protection index = 0 // full-image
item type = 'hvc1', itemID=2, item protection index = 0 // sub-image
item type = 'hvc1', itemID=3, item protection index = 0 // sub-image
item type = 'hvc1', itemID=4, item protection index = 0 // sub-image
item type = 'hvc1', itemID=5, item protection index = 0 // sub-image
item type = 'simd', itemID=6 (sub-image descriptor)
item type = 'simd', itemID=7 (sub-image descriptor)
item type = 'simd', itemID=8 (sub-image descriptor)
item type = 'simd', itemID=9 (sub-image descriptor)
item type = 'hvcC', itemID=10 (decoder config record)
item type = 'simd', itemID=11 (sub-image descriptor)
Item Reference Entries:
type='simr', fromID=1, toID=11
type='simr', fromID=2, toID=6
type='simr', fromID=3, toID=7
type='simr', fromID=4, toID=8
type='simr', fromID=5, toID=9
type='init', fromID=1, toID=10
type='init', fromID=2, toID=10
type='init', fromID=3, toID=10
type='init', fromID=4, toID=10
type='init', fromID=5, toID=10
Item Location:
itemID = 1, extent count = 4, // full image is composed of 4 sub-images
    extent offset = P2, extent length = L2;
    extent offset = P3, extent length = L3;
    extent offset = P4, extent length = L4;
    extent offset = P5, extent length = L5;
itemID = 2, extent count = 1, extent offset = P2, extent length = L2;
itemID = 3, extent count = 1, extent offset = P3, extent length = L3;
itemID = 4, extent count = 1, extent offset = P4, extent length = L4;
itemID = 5, extent count = 1, extent offset = P5, extent length = L5;
itemID = 6, extent count = 1, extent offset = P6, extent length = L6;
itemID = 7, extent count = 1, extent offset = P7, extent length = L7;
itemID = 8, extent count = 1, extent offset = P8, extent length = L8;
itemID = 9, extent count = 1, extent offset = P9, extent length = L9;
itemID = 10, extent count = 1, extent offset = P0, extent length = L0;
itemID = 11, extent count = 1, extent offset = P10, extent length = L10;
Media data box:
HEVC Decoder Configuration Record ('hvcC' at offset P0)
HEVC (sub-)images (at file offsets P2, P3, P4, P5)
simple image metadata (at file offsets P6, P7, P8, P9, P10)
This other example illustrates the case where the full picture is actually a tiled HEVC picture (4 tiles):
ftyp box: major-brand = 'hevc', compatible-brands = 'hevc'
meta box: (container)
  handler box: hdlr = 'hvc1'
  primary item: itemID = 1;

Item Information Entries:
item type = 'hvc1', itemID=1, item protection index = 0 // full-image
item type = 'hvt1', itemID=2, item protection index = 0 // sub-image
item type = 'hvt1', itemID=3, item protection index = 0 // sub-image
item type = 'hvt1', itemID=4, item protection index = 0 // sub-image
item type = 'hvt1', itemID=5, item protection index = 0 // sub-image
item type = 'simd', itemID=6 (sub-image descriptor)
item type = 'simd', itemID=7 (sub-image descriptor)
item type = 'simd', itemID=8 (sub-image descriptor)
item type = 'simd', itemID=9 (sub-image descriptor)
item type = 'hvcC', itemID=10 (decoder config record)
Item Reference Entries:
type='init', fromID=1, toID=10
// declare sub-images as tiles of the full image
type='tbas', fromID=2, toID=1
type='tbas', fromID=3, toID=1
type='tbas', fromID=4, toID=1
type='tbas', fromID=5, toID=1
// providing positions and sizes
type='simr', fromID=2, toID=6
type='simr', fromID=3, toID=7
type='simr', fromID=4, toID=8
type='simr', fromID=5, toID=9
Item Location:
itemID = 1, extent count = 4, // full image is composed of 4 tiles
    extent offset = P2, extent length = L2; // data for tile 1
    extent offset = P3, extent length = L3; // data for tile 2
    extent offset = P4, extent length = L4; // data for tile 3
    extent offset = P5, extent length = L5; // data for tile 4
itemID = 2, extent count = 1, extent offset = P2, extent length = L2;
itemID = 3, extent count = 1, extent offset = P3, extent length = L3;
itemID = 4, extent count = 1, extent offset = P4, extent length = L4;
itemID = 5, extent count = 1, extent offset = P5, extent length = L5;
itemID = 6, extent count = 1, extent offset = P6, extent length = L6;
itemID = 7, extent count = 1, extent offset = P7, extent length = L7;
itemID = 8, extent count = 1, extent offset = P8, extent length = L8;
itemID = 9, extent count = 1, extent offset = P9, extent length = L9;
itemID = 10, extent count = 1, extent offset = P0, extent length = L0

Media data box:
HEVC Decoder Configuration Record ('hvcC' at offset P0)
HEVC Image (with 4 tiles, at file offsets P2, P3, P4, P5)
4 simple image metadata (at file offsets P6, P7, P8, P9)
Depending on use cases, it would be possible to have several image items sharing the same metadata, for example when the same cropping is to be applied to all images. It is also possible for an image item to have multiple 'simr' references to different SimpleImageMetaData, for example when cropping is shared among images but not spatial information.
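A possible resolution of multiple 'simr' references could be sketched as below; note that the merge rule (later references overriding earlier ones on conflict) is an assumption for illustration, not something mandated by the description above:

# Hypothetical sketch: merge the properties of all SimpleImageMetaData items
# referenced by one image through 'simr' references.

def resolve_simr(image_item_id, simr_refs, simd_items):
    # simr_refs: dict from_id -> [to_ids]; simd_items: item_id -> dict of boxes.
    merged = {}
    for to_id in simr_refs.get(image_item_id, []):
        merged.update(simd_items[to_id])  # assumed rule: later references win
    return merged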
An alternative embodiment to the new version of the ItemInfoEntry (as illustrated in Figure 6) is to define more than one parameter (605) per information item entry and reference. In the embodiment of Figure 6, the iref_parameter is a four-byte code that is useful in the case of a tile index to refer to a cell in a tiling grid. But in order to have a richer description and to be able to embed linked description inside the item info entry itself rather than with the data (in the mdat box), the following extension can be useful:
if (version == 3) {
    unsigned int(32) item_iref_parameter_count;
    for (i=0; i<item_iref_parameter_count; i++) {
        unsigned int(32) iref_type;
        ItemReferenceParameterEntry parameter;
    }
}

aligned(8) abstract class ItemReferenceParameterEntry (unsigned int(32) format) extends Box(format) {
}

// Example to reference a tile index
aligned(8) class TileIndexItemReferenceParameterEntry extends ItemReferenceParameterEntry('tile') {
    unsigned int(32) tile_index;
}

// Example to inline the tile description
aligned(8) class TileDescriptionItemReferenceParameterEntry extends ItemReferenceParameterEntry('tile') {
    TileInfoDataBlock tile_description;
}
In the above extension:
item_iref_parameter_count gives the number of reference types for which a parameter is given. This is unchanged compared to item 605 in Figure 6.
iref_type gives the reference type, as indicated in the 'iref' box, for which the parameter applies for this item. This is unchanged compared to item 605 in Figure 6.
parameter here differs from iref_parameter (item 605 in Figure 6) because it provides an extension means via the new box ItemReferenceParameterEntry. By specializing this new box (as done above with TileIndexItemReferenceParameterEntry for a tile index in a tiling configuration), any kind of additional metadata can be associated with an information item entry, provided that the encapsulation and the parsing modules are aware of the structure of this specialized box. This can be done through standard types of ItemReferenceParameterEntry, or by providing, by construction or in a negotiation step, the structure of the parameter entry. The semantics of the parameter is given by the semantics of the item with type iref_type.
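For illustration (a hedged sketch, not normative: the byte layout assumes each parameter is carried as a standard size/type-prefixed box, and only the 'tile' specialization is interpreted), parsing the version 3 parameter list could look like:

# Hypothetical sketch: parse the item_iref_parameter_count loop above from a
# big-endian byte buffer.

import struct

def parse_iref_parameters(buf, offset):
    count, = struct.unpack_from(">I", buf, offset)
    offset += 4
    params = []
    for _ in range(count):
        iref_type = buf[offset:offset + 4].decode("ascii")
        offset += 4
        # ItemReferenceParameterEntry carried as a box: size, format, body.
        size, = struct.unpack_from(">I", buf, offset)
        fmt = buf[offset + 4:offset + 8].decode("ascii")
        body = buf[offset + 8:offset + size]
        if fmt == "tile":  # TileIndexItemReferenceParameterEntry
            value = struct.unpack(">I", body[:4])[0]
        else:
            value = body   # unknown specialization kept opaque
        params.append((iref_type, fmt, value))
        offset += size
    return params, offset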
In what follows, there are provided exemplary descriptive metadata for information items describing a picture with four tiles and the EXIF metadata of the full picture.
In the prior art, the tile pictures were listed as information items without any corresponding description provided, as shown herein below, and the setup information denoted by the 'hvcC' type was not described as an item. Describing it as an item makes it possible to factorize the common data related to HEVC parameter sets and SEI messages that apply to all tile pictures and to the full picture.
ftyp box: major-brand = 'hevc', compatible-brands = 'hevc'
meta box: (container)
  handler box: hdlr = 'hvc1'
  primary item: itemID = 1;
Item information:
item type = 'hvc1', itemID=1, item protection index = 0 (unused) => Full pict.
item type = 'Exif', itemID=2, item protection index = 0 (unused)
item type = 'hvcC', itemID=3, item protection index = 0 (unused)
item type = 'hvct', itemID=4, item protection index = 0 (unused) => Tile pict.
item type = 'hvct', itemID=5, item protection index = 0 (unused) => Tile pict.
item type = 'hvct', itemID=6, item protection index = 0 (unused) => Tile pict.
item type = 'hvct', itemID=7, item protection index = 0 (unused) => Tile pict.
Item Location:
itemID = 1, extent count = 1, extent offset = X, extent length = Y;
itemID = 2, extent count = 1, extent offset = P, extent length = Q;
itemID = 3, extent count = 1, extent offset = R, extent length = S;
itemID = 4, extent count = 1, extent offset = X, extent length = ET1;
itemID = 5, extent count = 1, extent offset = X+ET1, extent length = ET2;
itemID = 6, extent count = 1, extent offset = X+ET2, extent length = ET3;
itemID = 7, extent count = 1, extent offset = X+ET3, extent length = ET4;
Item Reference:
type='cdsc', fromID=2, toID=1;
type='init', fromID=1, toID=3;
type='tbas', fromID=4, toID=1;
type='tbas', fromID=5, toID=1;
type='tbas', fromID=6, toID=1;
type='tbas', fromID=7, toID=1;

Media data box:
HEVC Image (at file offset X, with length Y)
Exif data block (at file offset P, with length Q)
HEVC Config Record (at file offset R, with length S)
// No Tile description
According to embodiments, using the extension with version 3 (see Figure 6, 602, 603) of the ItemInfoEntry box (601): tile picture information is listed with associated references to parts of the tiling configuration that is also described as an information item (ID=8).
ftyp box: major-brand = 'hevc', compatible-brands = 'hevc'
meta box: (container)
  handler box: hdlr = 'hvc1'
  primary item: itemID = 1;

Item information:
item type = 'hvc1', itemID=1, item protection index = 0 (unused)
item type = 'Exif', itemID=2, item protection index = 0 (unused)
item type = 'hvcC', itemID=3, item protection index = 0 (unused)
item type = 'hvct', itemID=4, parameter for ireftype==tile: tile_index=0
item type = 'hvct', itemID=5, parameter for ireftype==tile: tile_index=1
item type = 'hvct', itemID=6, parameter for ireftype==tile: tile_index=2
item type = 'hvct', itemID=7, parameter for ireftype==tile: tile_index=3
item type = 'tile', itemID=8 (tiling configuration)
Item Location:
itemID = 1, extent count = 1, extent offset = X, extent length = Y;
itemID = 2, extent count = 1, extent offset = P, extent length = Q;
itemID = 3, extent count = 1, extent offset = R, extent length = S;
itemID = 4, extent count = 1, extent offset = X, extent length = ET1;
itemID = 5, extent count = 1, extent offset = X+ET1, extent length = ET2;
itemID = 6, extent count = 1, extent offset = X+ET2, extent length = ET3;
itemID = 7, extent count = 1, extent offset = X+ET3, extent length = ET4;
itemID = 8, extent count = 1, extent offset = i, extent length = I;
Item Reference:
type='cdsc', fromID=2, toID=1;
type='init', fromID=1, toID=3;
type='tbas', fromID=4, toID=1;
type='tbas', fromID=5, toID=1;
type='tbas', fromID=6, toID=1;
type='tbas', fromID=7, toID=1;
type='tile', fromID=4, toID=8; // link each tile pict.
type='tile', fromID=5, toID=8; // to the tiling config item
type='tile', fromID=6, toID=8;
type='tile', fromID=7, toID=8;

Media data box:
Media data box:
HEVC Image (at file offset X, with length Y)
Exif data block (at file offset P, with length Q)
HEVC Config Record (at file offset R, with length S)
Tile description data block (at file offset i, with length I)
Figure 8 illustrates a context of implementation of embodiments of the invention. First, different media are recorded: for example audio during step 800a, video during step 800b and one or more pictures during step 800c. Each medium is compressed during respective steps 801a, 801b and 801c. During these compression steps, elementary streams 802a, 802b and 802c are generated. Next, at application level (user selection from a graphical user interface, configuration of the multimedia generation system, etc.), an encapsulation mode is selected in order to determine whether or not all these elementary streams should be merged. When the "merge" mode is activated (test 803, "yes"), data for audio, video and still images are encapsulated in the same file during step 806c as described hereinabove. If the "merge" mode is not activated (test 803, "no"), then two encapsulated files are generated during steps 806a and 806b, consecutively or in parallel, thereby respectively leading to the creation of one file for synchronized timed media data during step 807a and an additional file with only the still images during step 807b. During step 806a, audio and video elementary streams are encapsulated according to the ISOBMFF standard, and the still pictures are encapsulated during step 806b as described hereinabove in order to provide tile description and region-of-interest features. Finally, a media presentation 807 is obtained and can be provided to a DASH generator to prepare it for streaming (step 820a), or stored into a memory (step 820b), or rendered on a display unit (step 820c), or transmitted (step 820d) to a remote entity, either entirely or after some parts (such as tiles) have been extracted by parsing the descriptive metadata.
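The mode selection of test 803 amounts to a simple dispatch; a minimal sketch follows (the two helper functions are hypothetical stand-ins for the encapsulation steps 806a-806c, not part of the described system):

# Hypothetical sketch of the Figure 8 encapsulation-mode decision.

def make_isobmff_file(streams):
    # Stand-in for steps 806a/806c: returns a placeholder "file".
    return {"kind": "isobmff", "streams": streams}

def make_still_image_file(streams):
    # Stand-in for step 806b: image-only file with tile descriptions.
    return {"kind": "still-image", "streams": streams}

def encapsulate(audio_es, video_es, image_es, merge):
    if merge:  # test 803, "yes": one file for all media (step 806c)
        return [make_isobmff_file([audio_es, video_es, image_es])]
    # test 803, "no": timed-media file (807a) plus image-only file (807b)
    return [make_isobmff_file([audio_es, video_es]),
            make_still_image_file([image_es])]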
Figure 9 is a schematic block diagram of a computing device 900 for implementation of one or more embodiments of the invention. The computing device 900 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 900 comprises a communication bus connected to:
-a central processing unit 901, such as a microprocessor, denoted
CPU;
-a random access memory 902, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method for reading and writing the manifests and/or for encoding the video and/or for reading or generating the data under a given file format; the memory capacity thereof can be expanded by an optional RAM connected to an expansion port, for example;
-a read only memory 903, denoted ROM, for storing computer programs for implementing embodiments of the invention;
-a network interface 904 is typically connected to a communication network over which digital data to be processed are transmitted or received. The network interface 904 can be a single network interface, or composed of a set of different network interfaces (for instance wired and wireless interfaces, or different kinds of wired or wireless interfaces). Data are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 901;
- a user interface 905 for receiving inputs from a user or displaying information to a user;
- a hard disk 906, denoted HD;
- an I/O module 907 for receiving/sending data from/to external devices such as a video source or display.
The executable code may be stored either in read only memory 903, on the hard disk 906 or on a removable digital medium such as for example a disk. According to a variant, the executable code of the programs can be received by means of a communication network, via the network interface 904, in order to be stored in one of the storage means of the communication device 900, such as the hard disk 906, before being executed.
The central processing unit 901 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 901 is capable of executing instructions from main RAM memory 902 relating to a software application after those instructions have been loaded from the program ROM 903 or the hard-disc (HD) 906 for example. Such a software application, when executed by the CPU 901, causes the steps of a method according to embodiments to be performed.
Alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
The present invention may be embedded in a device like a camera, a smartphone or a tablet that acts as a remote controller for a TV, for example to zoom in on a particular region of interest. It can also be used from the same devices to have a personalized browsing experience of a TV program by selecting specific areas of interest. Another usage of these devices by a user is to share selected sub-parts of his preferred videos with other connected devices. It can also be used in a smartphone or tablet to monitor what happens in a specific area of a building put under surveillance, provided that the surveillance camera supports the generation part of this invention.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.
Claims (31)
1. A method of processing a media file based on an encoded bitstream representing one or more still images, the method comprising
- obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the identification information identifying a different portion of the encoded bit-stream for each of the one or more sub-images;
sub-image description information comprising display parameters for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing, for displaying at least one of the one or more still images, said encoded bit-stream according to said obtained information in the media file, wherein the display parameters are used after decoding of the sub-image.
2. The method according to claim 1, wherein said media file also contains information identifying a timed portion of said bit-stream corresponding to a video sequence.
3. The method according to any one of the preceding claims, wherein sub-image description information is embedded in the bit-stream.
4. The method according to any one of the preceding claims, wherein the reference information includes a reference type and additional descriptive metadata including said sub-image description information.
5. The method according to claim 3, the media file further containing a metadata item for referencing said sub-image description information in the encoded bit-stream.
6. The method according to any one of the preceding claims, wherein items of identification information are grouped and wherein the reference information is contained for associating a group of items of identification information with said sub-image description information.
7. The method according to any one of the preceding claims, wherein all references linking metadata items to another item are included in a single reference box in the media file.
8. The method according to any one of the preceding claims, wherein all the relationships from one item, of any type, are stored in a single item information descriptor.
9. The method according to any one of the preceding claims, wherein said media file is obtained from a server module for adaptive streaming.
10. A method of processing a media file based on encoded image data corresponding to one or more still images, the method comprising:
- obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the one or more sub-images identified by the identification information corresponding to a different portion of the encoded image data;
sub-image description information comprising one or more display parameters, other than a spatial parameter, for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing, for displaying at least one of the one or more still images, said encoded image data according to said obtained information in the media file.
11. The method according to any one of the preceding claims, wherein said sub-image description information includes one or more colour display information.
12. The method according to any one of the preceding claims, wherein said sub-image description information includes pixel aspect ratio information.
13. The method according to any one of the preceding claims, wherein said sub-image description information includes clean aperture information.
14. The method according to any one of the preceding claims, wherein said sub-image description information further includes spatial description information.
15. The method according to claim 14, wherein said spatial description information includes one or more of display offset, display width and display height.
16. The method according to any one of the preceding claims, wherein said sub-image description information includes layer information.
17. The method according to any one of the preceding claims, wherein said media file corresponds to a standardized file format.
18. The method according to any one of the preceding claims, wherein the one or more display parameters comprise clean aperture information.
19. The method according to claim 18, wherein the clean aperture information is the ‘clap’ box as defined in ISOBMFF.
20. The method according to any one of the preceding claims, wherein the one or more display parameters comprise colour information.
21. The method according to claim 20, wherein the colour information is the ‘colr’ box as defined in ISOBMFF.
22. The method according to any one of the preceding claims, wherein the one or more display parameters comprise pixel aspect ratio information.
23. The method according to claim 22, wherein the pixel aspect ratio information is the ‘pasp’ box as defined in ISOBMFF.
24. A device for processing a media file based on an encoded bitstream representing one or more still images, the device comprising:
- obtaining means for obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the identification information identifying a different portion of the encoded bit-stream for each of the one or more sub-images;
sub-image description information comprising display parameters for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing means for processing, to display at least one of the one or more still images, said encoded bit-stream according to said obtained information in the media file, wherein the display parameters are used after decoding of the sub-image.
25. A device for processing a media file based on encoded image data corresponding to one or more still images, the device comprising:
- obtaining means for obtaining a media file containing:
identification information for each of one or more sub-images of the one or more still images, the one or more sub-images identified by the identification information corresponding to a different portion of the encoded image data;
sub-image description information comprising one or more display parameters, other than a spatial parameter, for each of the one or more sub-images, and reference information for associating said identification information with said sub-image description information, and
- processing means for processing, to display at least one of the
one or more still images, said encoded image data according to said obtained information in the media file.
Amendments to the claims have been filed as follows
1. A method of encapsulating an encoded bit-stream representing one or more still images, the method comprising:
generating sub-image information comprising both of (i) one or more item_IDs for identifying one or more sub-images of each of the one or more still images and (ii) location information used for identifying a portion, in the encoded bit-stream, of each of the one or more sub-images identified by the one or more item_IDs;
generating sub-image description information comprising display parameters relating to the one or more sub-images, the display parameters being used after decoding of the sub-image, and
generating reference information for associating said sub-image information with said sub-image description information, and
outputting the bit-stream together with the generated information as an encapsulated data file.
2. The method according to claim 1, wherein sub-image description information is embedded in the bit-stream.
3. The method according to claim 2, further comprising generating a metadata item for referencing said sub-image description information in the encoded bit-stream.
4. The method according to any one of the preceding claims, wherein the one or more sub-images are grouped into one or more groups and wherein the reference information is generated for associating a group of sub-images with a display parameter in the sub-image description information.
5. The method according to any one of the preceding claims, wherein all references linking metadata items to another item are included in a single reference box in the encapsulated data file.
6. The method according to claim 1, wherein the outputting is performed by a server module for adaptive streaming, is performed for storage into a memory, is performed to a display module, or is performed by a communication module for transmission.
7. The method according to any one of the preceding claims, wherein all the relationships from one item, of any type, are stored in a single item information descriptor.
8. The method according to any one of the preceding claims, wherein said media file is obtained from a server module for adaptive streaming.
9. A processing method comprising:
- obtaining an encapsulated data file including:
an encoded bitstream representing one or more still images;
sub-image information comprising both of (i) one or more item_IDs for identifying one or more sub-images of each of the one or more still images, and (ii) location information used for identifying a portion, in the bitstream, of each of the one or more sub-images identified by the one or more item_IDs;
sub-image description information comprising display parameters relating to the one or more sub-images, and reference information for associating the one or more item_IDs with a display parameter in the sub-image description information, and
- reproducing at least one of the sub-images of the one or more still images represented by the encoded bitstream, by using the sub-image information, sub-image description information, and reference information.
10. The method according to any one of the preceding claims, wherein sub-image description information includes colour display information, pixel aspect ratio information, clean aperture information, layer information, or spatial description information.
11. The method according to any one of claims 1 to 9, wherein said sub-image description information includes pixel aspect ratio information.
12. The method according to any one of claims 1 to 9 or 11, wherein said sub-image description information includes clean aperture information.
13. The method according to any one of claims 1 to 9, 11 or 12, wherein sub-image description information includes spatial description information and wherein spatial description information includes one or more of display offset, display width and display height of each of the one or more sub-images.
14. The method according to claim 13, wherein said spatial description information includes one or more of display offset, display width and display height.
15. The method according to any one of claims 1 to 9, or 11 to 14, wherein said sub-image description information includes layer information.
16. The method according to any one of the preceding claims, wherein the encapsulated data file corresponds to a standardized file format.
17. The method according to any one of claims 1 to 9 or 11 to 16, wherein the one or more display parameters comprise clean aperture information.
18. The method according to any one of claims 1 to 9 or 11 to 17, wherein the one or more display parameters comprise colour information.
19. The method according to claim 18, wherein the colour information is the ‘colr’ box as defined in ISOBMFF.
20. The method according to any one of claims 1 to 9 or 11 to 19, wherein the one or more display parameters comprise pixel aspect ratio information.
21. The method according to claim 20, wherein the pixel aspect ratio information is the ‘pasp’ box as defined in ISOBMFF.
22. The method according to any one of the preceding claims, wherein the sub-image information is defined in an “ItemLocationBox”.
23. The method according to any one of the preceding claims, wherein the reference information is defined in an “ItemReferenceBox”.
24. The method according to any one of the preceding claims, further comprising generating information for associating the one or more sub-images with a single still image among the one or more still images.
25. The method of claim 24, wherein the information for associating the one or more sub-images with the single still image is defined in an “ItemReferenceBox”.
26. A device for encapsulating an encoded bit-stream representing one or more still images, the device comprising a processing unit adapted to:
generate sub-image information comprising both of (i) one or more item_IDs for identifying one or more sub-images of each of the one or more still images and (ii) location information used for identifying a portion, in the encoded bit-stream, of each of the one or more sub-images identified by the one or more item_IDs;
generate sub-image description information comprising display parameters relating to the one or more sub-images, the display parameters being used after decoding of the sub-image, and
generate reference information for associating said sub-image information with said sub-image description information,
- the device further comprising a communication unit adapted to output the bitstream together with the generated information as an encapsulated data file.
27. The device of claim 26, wherein the sub-image information is defined in an “ItemLocationBox” and the reference information is defined in an “ItemReferenceBox”.
28. The device of claim 26, wherein the processing unit is further adapted to generate information for associating the one or more sub-images with a single still image among the one or more still images.
29. The device of claim 28, wherein the information for associating the one or more sub-images with the single still image is defined in an “ItemReferenceBox”.
30. A computer-readable storage medium storing instructions of a computer program for implementing a method according to any one of claims 1 to 25.
31. A processing device comprising:
- obtaining means for obtaining an encapsulated data file including:
an encoded bitstream representing one or more still images;
sub-image information comprising both of (i) one or more item_IDs for identifying one or more sub-images of each of the one or more still images, and (ii) location information used for identifying a portion, in the bitstream, of each of the one or more sub-images identified by the one or more item_IDs;
sub-image description information comprising display parameters relating to the one or more sub-images, and
reference information for associating the one or more item_IDs with a display parameter in the sub-image description information, and
- reproducing means for reproducing at least one of the sub-images of the one or more still images, represented by the encoded bitstream, by using the sub-image information, sub-image description information, and reference information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1810396.0A GB2561491B (en) | 2014-07-01 | 2014-07-01 | Image data encapsulation with tile support |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1411740.2A GB2524599B (en) | 2014-03-25 | 2014-07-01 | Image data encapsulation with tile support |
GB1810396.0A GB2561491B (en) | 2014-07-01 | 2014-07-01 | Image data encapsulation with tile support |
Publications (3)
Publication Number | Publication Date |
---|---|
GB201810396D0 GB201810396D0 (en) | 2018-08-08 |
GB2561491A true GB2561491A (en) | 2018-10-17 |
GB2561491B GB2561491B (en) | 2019-05-22 |
Family
ID=63042741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1810396.0A Active GB2561491B (en) | 2014-07-01 | 2014-07-01 | Image data encapsulation with tile support |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2561491B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2617913A (en) * | 2020-12-17 | 2023-10-25 | Canon Kk | Method and apparatus for encapsulating image data in a file for progressive rendering |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114444439B (en) * | 2022-04-08 | 2022-08-26 | 深圳市壹箭教育科技有限公司 | Test question set file generation method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090103817A1 (en) * | 2007-10-23 | 2009-04-23 | Samsung Techwin Co., Ltd. | Digital image processing apparatus, a method of controlling the same, and a digital image compression method |
US20090226088A1 (en) * | 2008-03-04 | 2009-09-10 | Okazawa Atsuro | Multi-image file editing apparatus and multi-image file editing method |
JP2009253446A (en) * | 2008-04-02 | 2009-10-29 | Olympus Imaging Corp | Multi-image file generating device, program, and method |
WO2014006863A1 (en) * | 2012-07-02 | 2014-01-09 | Canon Kabushiki Kaisha | Method of generating media file and storage medium storing media file generation program |
-
2014
- 2014-07-01 GB GB1810396.0A patent/GB2561491B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090103817A1 (en) * | 2007-10-23 | 2009-04-23 | Samsung Techwin Co., Ltd. | Digital image processing apparatus, a method of controlling the same, and a digital image compression method |
US20090226088A1 (en) * | 2008-03-04 | 2009-09-10 | Okazawa Atsuro | Multi-image file editing apparatus and multi-image file editing method |
JP2009253446A (en) * | 2008-04-02 | 2009-10-29 | Olympus Imaging Corp | Multi-image file generating device, program, and method |
WO2014006863A1 (en) * | 2012-07-02 | 2014-01-09 | Canon Kabushiki Kaisha | Method of generating media file and storage medium storing media file generation program |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2617913A (en) * | 2020-12-17 | 2023-10-25 | Canon Kk | Method and apparatus for encapsulating image data in a file for progressive rendering |
GB2617913B (en) * | 2020-12-17 | 2024-02-14 | Canon Kk | Method and apparatus for encapsulating image data in a file for progressive rendering |
Also Published As
Publication number | Publication date |
---|---|
GB2561491B (en) | 2019-05-22 |
GB201810396D0 (en) | 2018-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11962809B2 (en) | Image data encapsulation with referenced description information | |
US11876994B2 (en) | Description of image composition with HEVC still image file format | |
US11985302B2 (en) | Image data encapsulation | |
US10595062B2 (en) | Image data encapsulation | |
GB2561491A (en) | Image data encapsulation with tile support | |
GB2573939A (en) | Image data encapsulation | |
EP4135336B1 (en) | Image data encapsulation | |
GB2560649A (en) | Image data encapsulation with tile support |