GB2593893A - Method and apparatus for encapsulating region related annotation in an image file
- Publication number: GB2593893A (application GB2005063.9)
- Authority: GB (United Kingdom)
- Prior art keywords: item, data structure, image, region, annotation
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06F16/51: Information retrieval of still image data; indexing; data structures therefor; storage structures
- G06F16/71: Information retrieval of video data; indexing; data structures therefor; storage structures
Abstract
The present invention concerns a method of encapsulating image items in a file, the method comprising for an image item: generating an item portion annotation data structure 330 related to a portion of the image item, the item portion annotation data structure being associated with the image item; generating a geometry data structure 331 describing the geometry of the portion of the image item associated with the item portion annotation data structure; generating at least one annotation data structure 332 associated with the item portion annotation data structure; and embedding the image item, the item portion annotation data structure, the geometry data structure and the at least one annotation data structure in the file.
Description
METHOD AND APPARATUS FOR ENCAPSULATING REGION RELATED ANNOTATION IN AN IMAGE FILE
FIELD OF THE INVENTION
The present disclosure concerns a method and a device for encapsulation of information related to a region in an image file.
BACKGROUND OF INVENTION
Modern cameras and image analysis services make it possible to generate localized metadata for images. Localized metadata are metadata related to a region, i.e., a portion, of a media content rather than to the whole media content. The media content is typically an image, but it may also be video content or a set of images. For example, a camera may record the focusing region of a photo or detect faces while taking a picture. As another example, a deep-learning system may identify objects inside an image. These localized metadata may be seen as region annotations.
Images captured by a camera or processed by an image analysis service are stored on a storage device like a memory card, for example. The images are typically encoded to reduce the size of data on the storage device. Many encoding standards may be used, like JPEG, AV1, or the more recent HEVC standard.
The HEVC standard defines a profile for the encoding of still images and describes specific tools for compressing single still images or bursts of still images. An extension of the ISO Base Media File Format (ISOBMFF) used for such kind of image data has been proposed for inclusion into the ISO/IEC 23008 standard, in Part 12, under the name "HEIF" or "High Efficiency Image File Format".
HEIF (High Efficiency Image File Format) is a standard developed by the Moving Picture Experts Group (MPEG) for the storage and sharing of images and image sequences. MIAF (Multi-Image Application Format) is a standard developed by MPEG as Part 22 of the ISO/IEC 23000 standard. The MIAF specification specifies a multimedia application format, the Multi-Image Application Format (MIAF), which defines precise interoperability points for the creation, reading, parsing and decoding of images embedded in the High Efficiency Image File format (HEIF). The MIAF specification fully conforms to the HEIF format and only defines additional constraints to ensure higher interoperability.
While providing the ability to store documents containing metadata such as EXIF or XMP documents, the HEIF and MIAF file formats do not provide a mechanism for linking annotations to a region of an image.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method of encapsulating image items in a file, the method comprising for an image item: generating an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item; generating a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure; generating at least one annotation data structure associated with the item portion annotation data structure; embedding the image item, the item portion annotation data structure, the geometry data structure and the at least one annotation data structure in the file.
In an embodiment: the item portion annotation data structure is an item property; the item portion annotation data structure comprises the geometry data structure; the item portion annotation data structure comprises reference information on the at least one annotation data structure generated as item properties.
In an embodiment: the item portion annotation data structure is an item property; the item portion annotation data structure comprises the geometry data structure; the item portion annotation data structure being associated with the image item by an association information in an association container, the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by another association information in the association container.
In an embodiment: - the item portion annotation data structure is an item property; the item portion annotation data structure comprises the geometry data structure; - the item portion annotation data structure being associated with the image item by an association information in an association container, the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by the association information in the association container.
In an embodiment: the item portion annotation data structure is an item property; - the item portion annotation data structure comprises the geometry data structure; the item portion annotation data structure comprises the at least one annotation data structure.
In an embodiment: the item portion annotation data structure is an item; the item portion annotation data structure comprises the geometry data structure; the item portion annotation data structure is associated with the image item by a reference information; -the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container.
In an embodiment: -the item portion annotation data structure is an item; the item portion annotation data structure comprises the geometry data structure; - at least one item portion annotation data structure associated with the image item are grouped, the group is associated with the image item by a reference information; the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container.
In an embodiment: - the item portion annotation data structure is an item; the item portion annotation data structure is associated with the image item by a reference information; - the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container; - the item portion annotation data structure is associated with the geometry data structure generated as an item property by the association information in the association container.
According to another aspect of the invention, there is provided a method of reading a file comprising image items, the method comprising for an image item: reading an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item; reading a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure; reading at least one annotation data structure associated with the item portion annotation data structure; reading the associated image item.
According to another aspect of the invention, there is provided a computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to the invention, when loaded into and executed by the programmable apparatus.
According to another aspect of the invention, there is provided a computer-readable storage medium storing instructions of a computer program for implementing a method according to the invention.
According to another aspect of the invention, there is provided a computer program which upon execution causes the method of the invention to be performed.
According to another aspect of the invention, there is provided a device for encapsulating image items in a file, the device comprising a processor configured for, for an image item: generating an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item; generating a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure; generating at least one annotation data structure associated with the item portion annotation data structure; embedding the image item, the item portion annotation data structure, the geometry data structure and the at least one annotation data structure in the file.
According to another aspect of the invention, there is provided a device for reading a file comprising image items, the device comprising a processor configured for, for an image item: reading an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item; reading a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure; reading at least one annotation data structure associated with the item portion annotation data structure; reading the associated image item.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system".
Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer-readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible, non-transitory carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid-state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g., a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 illustrates an example of a HEIF file 101 that contains media data like one or more still images and possibly video or sequences of images;
Figure 2a illustrates a high-level view of the invention for associating annotations with a region of an image;
Figure 2b illustrates an example of region annotations;
Figure 3 illustrates a first embodiment of the invention;
Figure 4 illustrates a second embodiment of the invention;
Figure 5 illustrates a third embodiment of the invention;
Figure 6 illustrates the main steps of a process for adding a new region annotation associated with an image item stored in a HEIF file when the region annotation is described by an item property according to embodiments of the invention;
Figure 7 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item property according to embodiments of the invention;
Figure 8 illustrates a fifth embodiment of the invention;
Figure 9 illustrates a sixth embodiment of the invention;
Figure 10 illustrates a seventh embodiment of the invention;
Figure 11 illustrates the main steps of a process for adding a new region annotation to an image item stored in a HEIF file when the region annotation is described by an item according to embodiments of the invention;
Figure 12 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item according to embodiments of the invention;
Figure 13 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item according to the sixth embodiment;
Figure 14 illustrates a process for processing a HEIF file containing an image and one or more region annotation items associated with this image according to embodiments of the invention;
Figure 15 is a schematic block diagram of a computing device for implementation of one or more embodiments of the invention.
DETAILED DESCRIPTION OF THE INVENTION
The HEVC standard defines a profile for the encoding of still images and describes specific tools for compressing single still images or bursts of still images. An extension of the ISO Base Media File Format (ISOBMFF) used for such kind of image data has been proposed for inclusion into the ISO/IEC 23008 standard, in Part 12, under the name: "HEIF" or "High Efficiency Image File Format".
The HEIF and MIAF standards cover two forms of storage corresponding to different use cases: the storage of image sequences, which can be indicated to be displayed as a timed sequence or by other means, and in which the images may be dependent on other images, and the storage of a single coded image or a collection of independently coded images, possibly with derived images.
In the first case, the encapsulation is close to the encapsulation of video tracks in the ISO Base Media File Format (see document "Information technology - Coding of audio-visual objects - Part 12: ISO base media file format", w18855, ISO/IEC 14496-12, Sixth edition, October 2019), and similar tools and concepts are used, such as the 'trak' boxes and the sample grouping for description of groups of samples. The 'trak' box is a file format box that contains sub-boxes for describing a track, that is to say, a timed sequence of related samples.
Boxes, also called containers, are data structures provided to describe the data in the files. Boxes are object-oriented building blocks defined by a unique type identifier (typically a four-character code, also noted FourCC or 4CC) and a length. All data in a file (media data and metadata describing the media data) is contained in boxes. There is no other data within the file. File-level boxes are boxes that are not contained in other boxes.
In the second case, a set of ISOBMFF boxes, the 'meta' boxes are used. These boxes and their hierarchy offer fewer description tools than the 'track-related' boxes ('trak' box hierarchy) and relate to "information items" or "items" instead of related samples. It is to be noted that the wording 'box' and the wording 'container' may be both used with the same meaning to refer to data structures that contain metadata describing the organization or/and properties of the image data in the file.
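As an illustration of the box structure described above, the following Python sketch reads a single box header (a 32-bit size followed by a four-character type). It is not part of the specification; the helper name read_box_header is purely illustrative.

import struct

def read_box_header(buf, offset):
    """Read one ISOBMFF box header: 32-bit size followed by a 4-character type.

    Returns (box_type, payload_start, payload_end). A size of 1 means the real
    size follows as a 64-bit value; a size of 0 means the box extends to the
    end of the buffer. Illustrative helper only.
    """
    size, = struct.unpack_from('>I', buf, offset)
    box_type = buf[offset + 4:offset + 8].decode('ascii')
    header_len = 8
    if size == 1:  # 64-bit largesize follows the type
        size, = struct.unpack_from('>Q', buf, offset + 8)
        header_len = 16
    elif size == 0:  # box runs to the end of the buffer
        size = len(buf) - offset
    return box_type, offset + header_len, offset + size

For example, calling read_box_header on the start of a HEIF file would typically return 'ftyp' as the first box type.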
Figure 1 illustrates an example of a HEIF file 101 that contains media data like one or more still images and possibly one or more videos and/or one or more sequences of images. This file contains a first 'ftyp' box (FileTypeBox) 111 that contains an identifier of the type of file (typically a set of four-character codes). This file contains a second box called 'meta' (MetaBox) 102 that is used to contain general untimed metadata including metadata structures describing the one or more still images. This 'meta' box 102 contains an 'iinf' box (ItemInfoBox) 121 that describes several single images. Each single image is described by a metadata structure ItemInfoEntry, also denoted item, 1211 and 1212.
Each item has a unique 16-bit or 32-bit identifier item_ID. The media data corresponding to these items is stored in the container for media data, the 'mdat' box 104. An 'iloc' box (ItemLocationBox) 122 provides for each item the offset and length of its associated media data in the 'mdat' box 104. An 'iref' box (ItemReferenceBox) 123 may also be defined to describe the association of one item with other items via typed references.
Optionally, for describing the storage of image sequences or video, the HEIF file 101 may contain a third box called 'moov' (MovieBox) 103 that describes one or more image sequences or video tracks 131 and 132. Typically, the track 131 may be an image sequence ('pict') track designed to describe a set of images for which the temporal information is not necessarily meaningful and 132 may be a video ('vide') track designed to describe video content. Both tracks describe a series of image samples, an image sample being a set of pixels captured at the same time, for example a frame of a video sequence. The main difference between the two tracks is that in 'pict' tracks the timing information is not necessarily meaningful whereas for 'vide' tracks the timing information is intended to constrain the timing of the display of the samples. The data corresponding to these samples is stored in the container for media data, the 'mdat' box 104.
The 'mdat' container 104 stores the untimed encoded images corresponding to items as represented by the data portions 141 and 142 and the timed encoded images corresponding to samples as represented by the data portion 143.
An HEIF file 101 offers different alternatives to store multiple images. For instance, it may store the multiple images either as items or as a track of samples that can be a 'pict' track or a 'vide' track. The actual choice is typically made by the application or device generating the file according to the type of images and the contemplated usage of the file.
The ISO Base Media File Format specifies several alternatives to group samples or items depending on the container that holds the samples or items to group. These alternatives can be considered as grouping data structures or grouping mechanism, i.e., boxes or data structures providing metadata describing a grouping criterion and/or group properties and/or group entities.
A first grouping mechanism represented by an EntityToGroupBox is adapted for the grouping of items or tracks. In this mechanism, the wording 'entity' is used to refer to items or tracks or other EntityToGroupBoxes. This mechanism specifies the grouping of entities. An EntityToGroupBox is defined according to the following syntax:

aligned(8) class EntityToGroupBox(grouping_type, version, flags)
extends FullBox(grouping_type, version, flags) {
    unsigned int(32) group_id;
    unsigned int(32) num_entities_in_group;
    for(i=0; i<num_entities_in_group; i++)
        unsigned int(32) entity_id;
    // the remaining data may be specified for a particular grouping_type
}

The grouping_type is used to specify the type of the group. Several values for the grouping_type are specified in HEIF. The group_id provides an identifier for the group of entities. The entity_id represents the identifier of the entities that compose the group, i.e., either a track_ID for a track, an item_ID for an item or another group_id for an entity group. In Figure 1, the groups of entities inheriting from the EntityToGroupBox, 1241 and 1242, are comprised in the container 124 identified by the four-character code 'grpl' for GroupsListBox.
Entity grouping consists in associating a grouping type, which identifies the reason for the grouping, with a set of items, tracks or other entity groups. In this document, Grouping Information refers to information in one of the EntityToGroup boxes which conveys information to group a set of images.
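By way of illustration only, the following Python sketch serializes a minimal EntityToGroupBox following the syntax above; the function name and the absence of grouping_type-specific trailing data are assumptions of this example.

import struct

def build_entity_to_group_box(grouping_type, group_id, entity_ids, version=0, flags=0):
    """Serialize a minimal EntityToGroupBox (a FullBox) for the given entities.

    grouping_type is a four-character code; entity_ids are item IDs, track IDs
    or other group IDs. Sketch only; grouping_type-specific data is omitted.
    """
    # FullBox header: 8-bit version followed by 24-bit flags.
    payload = struct.pack('>BBH', version, (flags >> 16) & 0xFF, flags & 0xFFFF)
    payload += struct.pack('>II', group_id, len(entity_ids))
    for entity_id in entity_ids:
        payload += struct.pack('>I', entity_id)
    # Box header: 32-bit size (header included) and the 4CC type.
    return struct.pack('>I4s', 8 + len(payload), grouping_type.encode('ascii')) + payload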
ISOBMFF provides a mechanism to describe and associate properties with items. These properties are called item properties. The ItemPropertiesBox 'iprp' 125 enables the association of any item with an ordered set of item properties. The ItemPropertiesBox consists of two parts: an item property container box 'ipco' 1251 that contains an implicitly indexed list of item properties 1253, and an item property association box 'ipma' 1252 that contains one or more entries. Each entry in the item property association box associates an item with its item properties. The HEIF standard extends this mechanism to enable the association of item properties with items and/or entity groups. Note that in the description, for genericity, we generally use item properties to designate both properties of an item and properties of an entity group. An item property associated with an entity group applies to the entity group as a whole and not individually to each entity within the group.
The associated syntax is as follows:

aligned(8) class ItemProperty(property_type) extends Box(property_type) {
}

aligned(8) class ItemFullProperty(property_type, version, flags)
extends FullBox(property_type, version, flags) {
}

aligned(8) class ItemPropertyContainerBox extends Box('ipco') {
    properties ItemProperty() []; // boxes derived from
        // ItemProperty or ItemFullProperty, to fill box
}

aligned(8) class ItemPropertyAssociation extends FullBox('ipma', version, flags) {
    unsigned int(32) entry_count;
    for(i = 0; i < entry_count; i++) {
        if (version < 1)
            unsigned int(16) item_ID;
        else
            unsigned int(32) item_ID;
        unsigned int(8) association_count;
        for (i=0; i<association_count; i++) {
            bit(1) essential;
            if (flags & 1)
                unsigned int(15) property_index;
            else
                unsigned int(7) property_index;
        }
    }
}

aligned(8) class ItemPropertiesBox extends Box('iprp') {
    ItemPropertyContainerBox property_container;
    ItemPropertyAssociation association[];
}

The ItemProperty and ItemFullProperty boxes are designed for the description of an item property. ItemFullProperty allows defining several versions of the syntax of the box and may contain one or more parameters whose presence is conditioned by either the version or the flags parameter.
The ItemPropertyContainerBox is designed for describing a set of item properties as an array of ItemProperty boxes or ItemFullProperty boxes.
The ItemPropertyAssociation box is designed to describe the association between items and/or entity groups and their item properties. It provides the description of a list of item identifiers and/or entity group identifiers, each identifier (item_ID) being associated with a list of item property indices referring to item properties in the ItemPropertyContainerBox.
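For illustration, a reader resolving the item properties of a given item from the 'ipma' entries and the 'ipco' list may proceed as in the following Python sketch; the data structures used here (a dictionary of entries and a list of properties) are assumptions of this example, not mandated by the format.

def properties_for_item(item_id, ipma_entries, ipco_properties):
    """Resolve the ordered list of item properties associated with an item.

    ipma_entries maps an item_ID (or entity group id) to a list of
    (essential, property_index) pairs; property_index is 1-based into the
    'ipco' property list, 0 meaning that no property is associated.
    """
    resolved = []
    for essential, index in ipma_entries.get(item_id, []):
        if index == 0:
            continue  # index 0 carries no association
        resolved.append((essential, ipco_properties[index - 1]))
    return resolved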
None of these mechanisms allows describing properties associated with a portion, or region, of an image. The invention aims at solving this problem by providing ways of describing region annotations and of associating these region annotations with actual regions of an image.
Figure 2a illustrates a high-level view of the invention for associating annotations with a region of an image. A region annotation 210 is associated with an entity 200. This entity is for example an image item (e.g., a coded image item or a derived image item) that describes an image. It can also be any type of item or an entity group. This association means that the region annotation 210 comprises information related to a region, meaning a portion, of the media content, image, or entity group, described by the entity 200.
When the entity is an image item, the reconstructed image is the image resulting from decoding a coded image item or from applying the operation of a derived image item to a set of input images (that are themselves coded image items or derived image items). The output image is the image resulting from applying the potential transformative item properties of the image item to the reconstructed image. For example, the output image can be the image obtained after decoding the data associated with the item and applying transformative item properties on the decoded image. Transformative item properties include for example the 'irot' transformative item property that rotates the image, the 'clap' transformative item property that crops the image, or the 'imir' transformative item property that mirrors the image. The reconstructed image of entity 200 is the annotated image onto which the region annotation 210 applies, meaning that the region is a portion of the reconstructed image.
This region annotation 210 may be defined by its geometry 220. The geometry may be defined as a location, a location and a shape, a set of points or as a mask. For example, the geometry of a region may be a rectangle defined by its top-left corner and its width and height. For example, a set of points may represent a polygon defined by the coordinates of all its vertices. The geometry may be defined by other means. The geometry may be split into a location and optionally a shape. Several kinds of shapes can be defined: for example a point (in such case the geometry is only a location), a rectangle, or a circle. The location is the position on the annotated image of a reference point for the shape. For example, the reference point may be the top-left corner of a rectangle, the center of a circle... The geometry can also be defined as a mask, selecting individual pixels in the image corresponding to the entity 200. Other geometries can also be used, such as a 3D box projected onto the image plane. The geometry may be described with one generic identifier plus a parameter providing its type (point, rectangle, circle...) or as a specific identifier per type. For an image encapsulated within a HEIF file, this identifier may be a four-character code.
The region annotation 210 may be linked to one or more annotations 230, 231, and 232. These annotations are used to store different annotations corresponding to the region 210 of the entity 200. An annotation can be for example the focus location of the picture, a face detected in the image, an object detected in the image, a GPS location corresponding to an object in the image, a description for a part of the image, the text contained in a region of the image, a user annotation for a part of the image, or any kind of information associated with the region.
Figure 2b illustrates an example of region annotations. The image 250 is for example captured by a camera. The camera adds, at capture time, information about two human faces that were detected. The faces correspond to the regions 260 and 261 in the image 250. Both regions are rectangular regions. This captured image may be stored in a HEIF file using the invention.
Later on, for example while transferring the image 250 to an online photo storage service, further processing is applied to the image, enhancing the information describing the image. First, a recognition algorithm finds the name of the person whose face is in region 261. Second, an object detection algorithm detects and identifies a building in the region 270 of the image 250. This region is a circular region. These processing steps result in an annotated image that may be stored in a HEIF file using the invention. For example, the HEIF file containing the image 250 is edited so as to store the description of the new region annotation 270 and/or the name of the detected person in the region 261. Last, a user edits the image 250, for example to keep only the part containing the known person in region 261 and the building in region 270. She also corrects the perspective distortion. As a result, only the part 280 of the image 250 is kept. Due to the perspective correction, this part 280 is not rectangular. These editing steps result in an edited image that may be stored in a HEIF file using the invention. The so-edited HEIF file may then contain either the original image 250 plus the instructions to obtain the cropped image 280, or it may contain only the image description and data for the region 280.
In the different embodiments, the geometry 220 of a region annotation may be represented using the following structure:

aligned(8) class Geometry() {
    unsigned int(1) flag;
    unsigned int(7) type;
    unsigned int field_size = (flag + 1) * 16;
    if (type == 0) { // point
        unsigned int(field_size) x;
        unsigned int(field_size) y;
    }
    else if (type == 1) { // rectangle
        unsigned int(field_size) x;
        unsigned int(field_size) y;
        unsigned int(field_size) width;
        unsigned int(field_size) height;
    }
    else if (type == 2) { // circle
        unsigned int(field_size) x;
        unsigned int(field_size) y;
        unsigned int(field_size) radius;
    }
    else if (type == 3) { // polygon
        unsigned int(8) point_count;
        int i;
        for (i=0; i < point_count; i++) {
            unsigned int(field_size) x;
            unsigned int(field_size) y;
        }
    }
    else if (type == 4) { // 3d box
        unsigned int(field_size) x_bottom_0;
        unsigned int(field_size) y_bottom_0;
        unsigned int(field_size) x_bottom_1;
        unsigned int(field_size) y_bottom_1;
        unsigned int(field_size) x_bottom_2;
        unsigned int(field_size) y_bottom_2;
        unsigned int(field_size) x_bottom_3;
        unsigned int(field_size) y_bottom_3;
        unsigned int(field_size) x_top_0;
        unsigned int(field_size) y_top_0;
        unsigned int(field_size) x_top_1;
        unsigned int(field_size) y_top_1;
        unsigned int(field_size) x_top_2;
        unsigned int(field_size) y_top_2;
        unsigned int(field_size) x_top_3;
        unsigned int(field_size) y_top_3;
    }
    else if (type == 5) { // bit-mask
        unsigned int(field_size) ox;
        unsigned int(field_size) oy;
        unsigned int(field_size) mask_item_ID;
    }
    else if (type == 6) { // colored mask
        unsigned int(field_size) ox;
        unsigned int(field_size) oy;
        unsigned int(32) mask_color;
        unsigned int(field_size) mask_item_ID;
    }
}
The semantics of this structure may be:
- type is an integer value specifying the type of the geometry.
- field_size is an integer value specifying the size in bits (for example 16 or 32 bits) of the fields used for describing the geometry.
- If the geometry is a point, x and y are the coordinates of this point in the image.
- If the geometry is a rectangle, x and y are the coordinates of the top-left corner of the rectangle and width and height are the width and height of the rectangle.
- If the geometry is a circle, x and y are the coordinates of the center of the circle and radius is the radius of the circle.
- If the geometry is a polygon, point_count is the number of points in the polygon and for each point, x and y are the coordinates of this point in the image. The polygon is a closed polygon.
- If the geometry is a 3d box, x_bottom_0, y_bottom_0 to x_bottom_3, y_bottom_3 are the coordinates of the corners of the bottom rectangle of the 3d box, and x_top_0, y_top_0 to x_top_3, y_top_3 are the coordinates of the corners of the top rectangle of the 3d box.
- If the geometry is a bit-mask or a colored mask, ox and oy define the location of the top-left pixel of the mask image inside the annotated image, and mask_item_ID is the identifier of an image item describing the mask image defining the region. The output image of this image item is the mask image for the region. Preferably, the size of the mask image is the same as or smaller than the size of the annotated image.
- If the geometry is a bit-mask, its mask image is preferably a black and white image (i.e., an image with a 1-bit color component). The region is defined as all the pixels with a black value in the mask image and its top-left corner is located at the position defined by ox and oy.
- If the geometry is a colored mask, mask_color defines the color used to define the region in the mask image. The region is defined as all the pixels in the mask image whose color value is the value of mask_color. The top-left corner of the region is located at the position defined by ox and oy.
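As a worked example, the following Python sketch serializes a rectangle Geometry (type 1) according to the structure above, packing the 1-bit flag and the 7-bit type into the first byte; the helper name is illustrative only.

import struct

def pack_rectangle_geometry(x, y, width, height, use_32bit=False):
    """Serialize a Geometry structure of type 1 (rectangle).

    The first byte holds the 1-bit flag (selecting 16- or 32-bit fields)
    followed by the 7-bit type. Coordinates are unsigned integers.
    Illustrative sketch only.
    """
    flag = 1 if use_32bit else 0
    first_byte = (flag << 7) | 1           # type == 1: rectangle
    fmt = '>I' if use_32bit else '>H'      # field_size = (flag + 1) * 16 bits
    out = bytes([first_byte])
    for value in (x, y, width, height):
        out += struct.pack(fmt, value)
    return out

For example, pack_rectangle_geometry(100, 50, 640, 480) produces a 9-byte payload using 16-bit fields.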
In a different example, the geometry of the region may also be split into two structures corresponding to its location and to its shape:

aligned(8) class Location() {
    unsigned int(1) flags;
    unsigned int(7) reserved;
    unsigned int field_size = (flags + 1) * 16;
    unsigned int(field_size) x;
    unsigned int(field_size) y;
}

aligned(8) class Shape() {
    unsigned int(1) flag;
    unsigned int(7) type;
    unsigned int field_size = (flag + 1) * 16;
    if (type == 0) { // point
    }
    else if (type == 1) { // rectangle
        unsigned int(field_size) width;
        unsigned int(field_size) height;
    }
    else if (type == 2) { // circle
        unsigned int(field_size) radius;
    }
    else if (type == 3) { // polygon
        unsigned int(8) point_count;
        int i;
        for (i=0; i < point_count; i++) {
            int(field_size) dx;
            int(field_size) dy;
        }
    }
    else if (type == 4) { // 3d box
        int(field_size) dx_bottom_1;
        int(field_size) dy_bottom_1;
        int(field_size) dx_bottom_2;
        int(field_size) dy_bottom_2;
        int(field_size) dx_bottom_3;
        int(field_size) dy_bottom_3;
        int(field_size) dx_top_0;
        int(field_size) dy_top_0;
        int(field_size) dx_top_1;
        int(field_size) dy_top_1;
        int(field_size) dx_top_2;
        int(field_size) dy_top_2;
        int(field_size) dx_top_3;
        int(field_size) dy_top_3;
    }
    else if (type == 5) { // bit-mask
        unsigned int(field_size) mask_item_ID;
    }
    else if (type == 6) { // colored mask
        unsigned int(32) mask_color;
        unsigned int(field_size) mask_item_ID;
    }
}
The semantics of the Location structure in this example may be:
- field_size is an integer value specifying the size in bits (for example 16 or 32 bits) of the fields used for describing the Location.
- x and y are the coordinates of the region in the image.
The semantics of the Shape structure are similar to those of the Geometry structure with the following changes:
- For all the shapes, the position on the annotated image of the reference point of the shape is specified by the location.
- If the shape is a point, the reference point is the point itself.
- If the shape is a rectangle, the reference point is its top-left corner.
- If the shape is a circle, the reference point is its center.
- If the shape is a polygon, the first point is the reference point. For the other points, dx and dy are the relative coordinates of the point in reference to the reference point.
- If the shape is a 3d box, the first point of the bottom rectangle is the reference point. All the other points are specified using relative coordinates in reference to the reference point.
- If the shape is a bit-mask or a colored mask, the location specifies the position of the top-left pixel of the mask image inside the annotated image.
In some embodiments, the points for the polygon shape and/or the 3d box shape are not specified using relative coordinates but using absolute coordinates.
The units used for defining the coordinates, width, height, and radius in the geometry may be pixels or luma samples, expressed as integer values or as fixed-point 16.16 values. A different unit may also be used. Possibly, the units used may be specified in the geometry, the location, and/or the shape structure.
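When fixed-point 16.16 values are used, the conversion between a fractional coordinate and its encoded form may, for example, be performed as in the following Python sketch (illustrative helpers, not part of the format):

def to_fixed_16_16(value):
    """Encode a coordinate as a 16.16 fixed-point unsigned integer."""
    return int(round(value * 65536)) & 0xFFFFFFFF

def from_fixed_16_16(raw):
    """Decode a 16.16 fixed-point unsigned integer back to a float."""
    return raw / 65536.0

For example, to_fixed_16_16(1.5) yields 0x00018000, and from_fixed_16_16(0x00018000) yields 1.5.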
Figures 3 to 7 illustrate some embodiments of the invention for the HEIF or MIAF standards, where the region annotation 210 is represented using an item property 330.
Figure 3 illustrates a first embodiment of the invention. The 'meta' box 300 comprises an 'item' 310, corresponding for example to an image item. The 'meta' box 300 also comprises an 'iprp' box 320 that itself comprises two boxes. First, an 'ipco' box 323 comprises the item properties of the different entities present in the 'meta' box. Second, an 'ipma' box 321 comprises the associations between item properties and entities (e.g., image items or entity groups).
The 'ipco' box 323 comprises a first item property, the region annotation 330. This region annotation comprises two parts: first the description of its geometry 331, and second the links 332 to the annotations for the region. These links associate the item properties 340 and 341 with the region.
The 'ipma' box 321 associates the region annotation 330 with the item 310 through its entry 322.
In this embodiment, the region annotation is an item property that may be defined as follows:
- Box type: 'rgan'
- Property type: Descriptive item property
- Container: ItemPropertyContainerBox
- Mandatory (per item): No
- Quantity (per item): Zero or more
The syntax of the region annotation item property may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    Geometry geometry;
    unsigned int(1) flag;
    unsigned int(15) entry_count;
    for (i=0; i < entry_count; i++) {
        if (flag)
            unsigned int(16) property_index;
        else
            unsigned int(8) property_index;
    }
}

The semantics of the region annotation item property may be:
- geometry is the geometry structure of the region. A variant may describe the geometry 331 as a Location structure and optionally a Shape structure.
- entry_count is the number of item properties (340, 341...) associated with the region.
- property_index is either 0 indicating that no property is associated, or is the 1-based index of the associated property box in the ItemPropertyContainerBox.
In this embodiment, all the item properties associated by their property_index to the region inside the region annotation item property 330 apply to the region of the annotated image defined by the geometry 331.
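For illustration, the payload of such a 'rgan' property (after the ItemFullProperty header) may be parsed as in the following Python sketch; parse_geometry is an assumed helper returning the decoded geometry and the new offset.

import struct

def parse_rgan_payload(payload, parse_geometry):
    """Parse the first-embodiment 'rgan' body following its FullBox header.

    Returns the geometry and the list of 1-based indices of the associated
    item properties. Illustrative sketch under the syntax given above.
    """
    geometry, offset = parse_geometry(payload, 0)
    word, = struct.unpack_from('>H', payload, offset)
    offset += 2
    flag = word >> 15                  # unsigned int(1) flag
    entry_count = word & 0x7FFF        # unsigned int(15) entry_count
    indices = []
    for _ in range(entry_count):
        if flag:
            index, = struct.unpack_from('>H', payload, offset)
            offset += 2
        else:
            index = payload[offset]
            offset += 1
        indices.append(index)
    return geometry, indices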
In this embodiment, the annotated image 250 from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
    HandlerBox 'hdlr': 'pict'
    PrimaryItemBox 'pitm': item_ID = 1;
    ItemInfoBox 'iinf': entry_count = 1
        'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    ItemLocationBox 'iloc': item_count = 1
        item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
    ItemPropertiesBox 'iprp':
        ItemPropertyContainerBox 'ipco':
            1) 'hvcC' // Config for annotated image
            2) 'ispe' // Size of annotated image
            3) 'rgan' // (ref 260)
                geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
                entry_count = 1
                    property_index = 6;
            4) 'rgan' // (ref 261)
                geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
                entry_count = 2
                    property_index = 6;
                    property_index = 7;
            5) 'rgan' // (ref 270)
                geometry: type = 2, x = x2, y = y2, radius = r2;
                entry_count = 1
                    property_index = 8;
            6) 'udes', lang = en, name = 'face', description = '', tags = face
            7) 'udes', lang = fr, name = 'Jean', description = '', tags = person
            8) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
        ItemPropertyAssociation 'ipma': entry_count = 1
            1) item_ID = 1, association_count = 5
                essential = 1, property_index = 1;
                essential = 0, property_index = 2;
                essential = 0, property_index = 3;
                essential = 0, property_index = 4;
                essential = 0, property_index = 5;
MediaDataBox 'mdat' or 'idat': HEVC Image (at file offset X, with length Y)

It is to be noted that in this description of the structure of this HEIF file, as well as in the descriptions of the structures of other HEIF files hereafter, parts not necessary to the understanding of the invention are omitted. For example, the version fields of the 'meta' box and of the 'iinf' box are omitted, and the item_name field of the 'infe' box is also omitted.
In this HEIF file, the image item 250 is represented by the item with an item_ID value of 1. The region 260 is represented by the item property at index 3 and is associated with the item property at index 6 corresponding to a face annotation. The region 261 is represented by the item property at index 4 and is associated with the item properties at index 6 and 7, corresponding respectively to a face annotation and to an annotation describing a person named "Jean". The region 270 is represented by the item property at index 5 and is associated with the item property at index 8 corresponding to an annotation describing "Notre-Dame de Paris" as a building.
In this HEIF file, the association between a region and its item properties is described inside the 'rgan' item property representing the region.
In a first variant of this first embodiment, the syntax of the region annotation item property 330 may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    unsigned int(8) region_count;
    for (r=0; r < region_count; r++) {
        Geometry geometry;
        unsigned int(1) flag;
        unsigned int(15) entry_count;
        for (i=0; i < entry_count; i++) {
            if (flag)
                unsigned int(16) property_index;
            else
                unsigned int(8) property_index;
        }
    }
}

This first variant makes it possible to define several regions, each with its own associated item properties, in a single region annotation item property 330.
In a second variant of this embodiment, the syntax of the region annotation item property 330 may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    unsigned int(8) region_count;
    for (r=0; r < region_count; r++) {
        Geometry geometry;
    }
    unsigned int(1) flag;
    unsigned int(15) entry_count;
    for (i=0; i < entry_count; i++) {
        if (flag)
            unsigned int(16) property_index;
        else
            unsigned int(8) property_index;
    }
}

This second variant makes it possible to define several regions sharing a set of associated item properties in a single region annotation item property 330.
The first and second variant may be combined to enable to define multiple sets of regions sharing a set of associated item properties in a single region annotation item property.
Figure 4 illustrates a second embodiment of the invention. This embodiment is similar to the first embodiment except for the association between the region annotation item property 330 and its item properties (340, 341). In this embodiment, the association is realized through the 'ipma' box 321. This 'ipma' box contains a new entry 423 associating the region annotation 330 with the item properties 340 and 341.
In this embodiment, the syntax of the region annotation item property 330 may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    Geometry geometry; // (ref 331)
}

The syntax of the region annotation may also be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    unsigned int(8) region_count;
    for (r=0; r < region_count; r++) {
        Geometry geometry; // (ref 331)
    }
}

As in the first embodiment, the geometry may be described as a Location structure and optionally a Shape structure.
To enable the association of item properties with a region annotation item property, the syntax of the 'ipma' box may be:

aligned(8) class ItemPropertyAssociation extends FullBox('ipma', version, flags) {
    // Entity association loop
    unsigned int(32) entry_count;
    for (i = 0; i < entry_count; i++) {
        if ((version == 0) || (version == 2))
            unsigned int(16) item_ID;
        else
            unsigned int(32) item_ID;
        unsigned int(8) association_count;
        for (j = 0; j < association_count; j++) {
            bit(1) essential;
            if (flags & 1)
                unsigned int(15) property_index;
            else
                unsigned int(7) property_index;
        }
    }
    if (version >= 2) {
        // Region annotation association loop
        unsigned int(32) region_count;
        for (i = 0; i < region_count; i++) {
            if (flags & 2)
                unsigned int(16) from_property_index;
            else
                unsigned int(8) from_property_index;
            unsigned int(8) association_count;
            for (j = 0; j < association_count; j++) {
                if (flags & 2)
                    unsigned int(16) to_property_index;
                else
                    unsigned int(8) to_property_index;
            }
        }
    }
}

The semantics of the region annotation association loop of the 'ipma' box may be:
- region_count is the number of region annotation item properties for which associations are defined in the 'ipma' box. Possibly this number may be encoded on 8 or 16 bits. Possibly a limit to the number of region annotation item properties present in a HEIF file may be specified.
- from_property_index is the 1-based index of a region annotation item property box with which item properties are associated.
- association_count is the number of item properties associated with the region annotation item property box.
- to_property_index is the 1-based index of an item property box associated with the region annotation item property box. The value of the to_property_index field may be constrained to be different from the value of the from_property_index field.
To keep backward compatibility with existing HEIF files and implementations, two new versions of the 'ipma' box may be defined. These new versions, numbered 2 and 3, both contain associations between regions and their item properties. These versions differ in the size of the item_ID field.
The index of a region annotation item property box should appear only once as the value of the from_property_index field in the 'ipma' box. An index in the 'ipco' corresponding to a region annotation item property box should not appear as the value of the to_property_index field in the 'ipma' box.
The second bit of the flags field of the box is used to define the size of the from_property_index and the to_property_index fields in the region annotation association loop. Possibly the first bit of the flags field could be used instead to have the same size for the from_property_index, the to_property_index, and the property_index fields. Possibly different bits could be used to enable different sizes for the from_property_index and for the to_property_index fields.
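For illustration, the region annotation association loop described above may be parsed as in the following Python sketch; the buffer layout and helper name are assumptions of this example.

import struct

def parse_region_association_loop(buf, offset, flags):
    """Parse the region annotation association loop of the extended 'ipma' box
    (versions 2 and 3 in this embodiment).

    Returns a mapping from the 1-based index of a 'rgan' property to the list
    of indices of its associated item properties, plus the new offset.
    Sketch only, following the syntax given above.
    """
    index_fmt, index_len = ('>H', 2) if (flags & 2) else ('>B', 1)
    region_count, = struct.unpack_from('>I', buf, offset)
    offset += 4
    associations = {}
    for _ in range(region_count):
        from_index, = struct.unpack_from(index_fmt, buf, offset)
        offset += index_len
        association_count = buf[offset]
        offset += 1
        to_indices = []
        for _ in range(association_count):
            to_index, = struct.unpack_from(index_fmt, buf, offset)
            offset += index_len
            to_indices.append(to_index)
        associations[from_index] = to_indices
    return associations, offset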
In this second embodiment, the annotated image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
    HandlerBox 'hdlr': 'pict'
    PrimaryItemBox 'pitm': item_ID = 1;
    ItemInfoBox 'iinf': entry_count = 1
        'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    ItemLocationBox 'iloc': item_count = 1
        item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
    ItemReferenceBox 'iref':
    ItemPropertiesBox 'iprp':
        ItemPropertyContainerBox 'ipco':
            1) 'hvcC' // Config for annotated image
            2) 'ispe' // Size of annotated image
            3) 'rgan' // (ref 260)
                geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
            4) 'rgan' // (ref 261)
                geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
            5) 'rgan' // (ref 270)
                geometry: type = 2, x = x2, y = y2, radius = r2;
            6) 'udes', lang = en, name = 'face', description = '', tags = face
            7) 'udes', lang = fr, name = 'Jean', description = '', tags = person
            8) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
        ItemPropertyAssociation 'ipma': version = 2, entry_count = 1
            // Item - Property associations
            1) item_ID = 1, association_count = 5
                essential = 1, property_index = 1;
                essential = 0, property_index = 2;
                essential = 0, property_index = 3;
                essential = 0, property_index = 4;
                essential = 0, property_index = 5;
            // Region - Property associations
            region_count = 3
            1) from_property_index = 3, association_count = 1
                to_property_index = 6;
            2) from_property_index = 4, association_count = 2
                to_property_index = 6;
                to_property_index = 7;
            3) from_property_index = 5, association_count = 1
                to_property_index = 8;
MediaDataBox 'mdat' or 'idat': HEVC Image (at file offset X, with length Y)

This HEIF file is similar to the one corresponding to the first embodiment, except for the association between a region annotation item property and its item properties. In this HEIF file, the association between a region annotation item property 330 and its item properties 340, 341 is described inside a new version of the 'ipma' box 321.
In a variant of the second embodiment, the entity association loop and the region annotation association loop are merged together. In this variant, the item property index is used as the identifier for a region annotation item property and is used as the value of the item_ID field. Consequently, identifiers for an item, an entity group and/or a track must not correspond to the index of an item property. Possibly, an identifier for an item, an entity group and/or a track takes priority over the index of an item property. Possibly, a new version of the 'ipma' box is defined to signal the use of this variant.
In another variant of the second embodiment, the entity association loop and the region annotation association loop are merged together. In this other variant, the first bit of the item_ID field is used to signal whether the item_ID field corresponds to an item, an entity group or a track, or whether it corresponds to an item property index. Possibly, a new version of the 'ipma' box is defined to signal the use of this variant of the second embodiment.
Figure 5 illustrates a third embodiment of the invention. This embodiment is similar to the first and second embodiments except for the association between the region annotation and its properties. In this embodiment, the association is realized through the 'ipma' box 321, as part of the association between the item 310 and its properties. This association is specified inside the entry 522 as a list of properties. The properties 340 and 341 corresponding to the region annotation 330 are associated directly with the item 310 inside the entry 522.
In this third embodiment, the ordering of the properties inside the entry 522 indicates to which part of the item 310 each item property applies. The descriptive properties associated with the whole item 310 come first, before any region annotation item property. Then, each region annotation item property is listed, followed by the descriptive properties associated with it. Last, the transformative properties associated with the whole item 310 are listed.
In this third embodiment, a descriptive item property is either associated with the previous region annotation item property in the list, if there is one, or with the whole image if there is no previous region annotation item property.
In this third embodiment, the specific handling of region annotation properties may be indicated by defining a new item property type for them: "region item property".
In this third embodiment, the region annotation properties may be indicated as essential to inform a reader that it should not ignore these properties.
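For illustration, a reader applying this ordering rule to the list of property indices of an 'ipma' entry may proceed as in the following Python sketch; is_rgan and is_transformative are assumed predicates telling whether a 1-based index designates a region annotation item property or a transformative item property.

def split_associations(property_list, is_rgan, is_transformative):
    """Interpret the third-embodiment ordering of an 'ipma' entry.

    property_list is the ordered list of property indices associated with the
    image item. Returns the properties of the whole image, a mapping from each
    'rgan' index to the properties of its region, and the transformative
    properties. Illustrative sketch only.
    """
    whole_image, per_region, transforms = [], {}, []
    current_region = None
    for index in property_list:
        if is_transformative(index):
            transforms.append(index)          # transformative properties come last
        elif is_rgan(index):
            current_region = index            # a new region starts here
            per_region[current_region] = []
        elif current_region is None:
            whole_image.append(index)         # descriptive property of the whole image
        else:
            per_region[current_region].append(index)  # descriptive property of the region
    return whole_image, per_region, transforms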
In this third embodiment, the annotated image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
    HandlerBox 'hdlr': 'pict'
    PrimaryItemBox 'pitm': item_ID = 1;
    ItemInfoBox 'iinf': entry_count = 1
        'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    ItemLocationBox 'iloc': item_count = 1
        item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
    ItemReferenceBox 'iref':
    ItemPropertiesBox 'iprp':
        ItemPropertyContainerBox 'ipco':
            1) 'hvcC' // Config for annotated image
            2) 'ispe' // Size of annotated image
            3) 'rgan' // (ref 260)
                geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
            4) 'rgan' // (ref 261)
                geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
            5) 'rgan' // (ref 270)
                geometry: type = 2, x = x2, y = y2, radius = r2;
            6) 'udes', lang = en, name = 'face', description = '', tags = face
            7) 'udes', lang = fr, name = 'Jean', description = '', tags = person
            8) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
        ItemPropertyAssociation 'ipma': entry_count = 1
            1) item_ID = 1, association_count = 9
                essential = 1, property_index = 1;
                essential = 0, property_index = 2;
                essential = 0, property_index = 3; // first region 260
                essential = 0, property_index = 6;
                essential = 0, property_index = 4; // second region 261
                essential = 0, property_index = 6;
                essential = 0, property_index = 7;
                essential = 0, property_index = 5; // third region 270
                essential = 0, property_index = 8;
MediaDataBox 'mdat' or 'idat': HEVC Image (at file offset X, with length Y)

This HEIF file is similar to those corresponding to the first and second embodiments, except for the association between a region and its properties. In this HEIF file, the associations between a region and its properties are described inside the 'ipma' box, inside the list of properties associated with the image item to which the region annotation applies.
The image item with an item_ID of value 1, corresponding to the image item 250, has 9 properties associated with it. First, the properties of indexes 1 and 2 apply to the whole image. They describe the encoding parameters used for the image and the size of the image. Then, the item property of index 3 describes a first region corresponding to the region 260. The item property of index 6, describing a face annotation, follows and is associated with this region. Next, the item property of index 4 describes a second region corresponding to the region 261. Two properties of indexes 6 and 7 follow and are associated with this region. These properties describe a face annotation, and the annotation of a person named "Jean". Last, the item property of index 5 describes a third region corresponding to the region 270. This item property is followed by the item property of index 8, corresponding to the annotation of the "Notre-Dame de Paris" building.
A fourth embodiment of the invention, not illustrated in the drawings, is now described. In this embodiment, the geometry of the region and the annotations associated with it are described inside a single (aggregated) item property. Preferably, a new item property is defined for each type of annotation that can be used for a region of an entity.
A syntax of the region annotation item property may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    Geometry geometry;
    utf8string lang;
    utf8string name;
    utf8string description;
    utf8string tags;
}

This syntax combines a geometry with the fields from a user description item property. Their semantics are not modified.
A variant may be:

aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rgan', version = 0, flags = 0) {
    Location anchor_point;
    Shape shape;
    utf8string lang;
    utf8string name;
    utf8string description;
    utf8string tags;
}

with the same semantics.
Figure 6 illustrates the main steps of a process for adding a new region annotation associated with an image item stored in a HEIF file according to embodiments of the invention. It corresponds to the embodiments where the region annotation is described by an item property. These steps can be applied to a HEIF file stored on a disk, stored in memory, or stored with an adapted representation in memory. The new region annotation comprises the geometry of the region and the annotation itself. Possibly, these steps may be modified to add simultaneously several annotations to a region of an image item. Possibly, these steps may also be modified to add simultaneously an annotation to several regions of an image item.
This process can be used when creating a new HEIF file, or when modifying an existing HEIF file.
In a first step 600, it is determined whether the region already exists in the HEIF file. This determination is realized by comparing the geometry of the region with the geometry of existing regions.
If the region already exists, the next step is step 610, otherwise, the next step is step 620.
In step 610, the RegionAnnotationProperty structure corresponding to the existing region is selected. The next step is step 640.
At step 620, a new RegionAnnotationProperty structure for representing the region is created. The geometry of the region is stored inside this new RegionAnnotationProperty structure.
Then, at step 630, the new RegionAnnotationProperty structure is associated with the image item by storing the association inside the 'ipma' box. The next step is step 640.
At step 640, it is determined whether an item property corresponding to the annotation already exists in the HEIF file. This may be done by parsing the item properties in the 'ipco' container box. If the item property already exists, the next step is step 650, otherwise, the next step is step 660.
In step 650, the existing item property is selected.
In step 660, a new item property is created to represent the annotation within the 'ipco' container box. The type of the item property may depend on the content of the annotation. The information contained in the annotation is stored inside the item property.
After either step 650 or step 660, at step 670, the item property is associated with the region.
In the context of the first embodiment, the index of the item property is added to the RegionAnnotationProperty structure.
In the case of the second embodiment, it is determined whether an entry exists in the 'ipma' box for the RegionAnnotationProperty structure. If this is not the case, an entry is created in the 'ipma' box for the RegionAnnotationProperty structure. The index of the item property is added to this entry.
In the case of the third embodiment, it is determined whether there is an entry in the 'ipma' box corresponding to the image item. If this is not the case, an entry is created in the 'ipma' box for the image item. Then, it is determined whether the index of the RegionAnnotationProperty structure is present in this 'ipma' box. If this is not the case, this index is inserted in the entry after the indexes of the descriptive properties associated with the image item and before the indexes of the transformative properties associated with the image item. Last, the index of the item property is inserted after the index of the RegionAnnotationProperty structure and before the index of any other RegionAnnotationProperty structure or any transformative item property. Preferably, the index is inserted just before the index of the next RegionAnnotationProperty structure, or before the index of the first transformative item property, or at the end of the entry. In the case of the fourth embodiment, the RegionAnnotationProperty structure contains both the geometry and the annotations and no association is needed. In this case, only steps 620, 630, and 660 are realized.
Possibly, if a RegionAnnotationProperty structure enables to define several regions with their respective associated properties, at step 620, it is determined whether such a structure is already associated with the image item in the HEIF file. If this is the case, then a new entry is added to the RegionAnnotationProperty structure, and the geometry of the region is stored inside this entry. Otherwise, a new RegionAnnotationProperty structure is created and the geometry of the region is stored inside this structure.
Possibly, if a RegionAnnotationProperty structure enables to associate a set of properties with several regions, several steps may be modified. At step 600, it is determined whether the region exists in a RegionAnnotationProperty structure not comprising any other region. At step 620, it is determined whether one or more regions are already associated inside a RegionAnnotationProperty structure with a set of properties corresponding to the annotation of the new region annotation. If this is the case, the region is created inside this RegionAnnotationProperty structure and steps 640 to 670 are not executed. If this is not the case, a new RegionAnnotationProperty structure is created.
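For illustration only, the process of Figure 6 may be sketched as follows for the first embodiment, with the 'ipco' and 'ipma' boxes mocked as plain Python structures; all function and field names in this sketch are assumptions and are not part of the HEIF format.

# Illustrative sketch of the process of Figure 6, first embodiment.
# 'ipco' is a list of property dicts (1-based indexes in the file) and
# 'ipma' maps an item_ID to its ordered list of property indexes.
def add_region_annotation(ipco, ipma, item_id, geometry, annotation):
    """Add `annotation` to the region `geometry` of the image item `item_id`."""
    # Steps 600/610/620: reuse the region annotation property if it exists.
    region = next((p for p in ipco
                   if p.get('type') == 'rgan' and p['geometry'] == geometry),
                  None)
    if region is None:
        region = {'type': 'rgan', 'geometry': geometry, 'property_indexes': []}
        ipco.append(region)
        # Step 630: associate the new property with the image item in 'ipma'.
        ipma.setdefault(item_id, []).append(len(ipco))
    # Steps 640/650/660: reuse or create the annotation property in 'ipco'.
    if annotation in ipco:
        index = ipco.index(annotation) + 1
    else:
        ipco.append(annotation)
        index = len(ipco)
    # Step 670, first embodiment: record the index inside the region property.
    if index not in region['property_indexes']:
        region['property_indexes'].append(index)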
Figure 7 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item property according to embodiments of the invention.
In a first step 700, an image item is extracted from the HEIF file. Possibly, only a portion of the metadata describing the image item is extracted.
Next, in step 710, the first item property associated with the image item is extracted. The index of this item property is the first non-zero index in the entry of the 'ipma' box corresponding to the image item. This index is used to extract the item property from the 'ipco' box. If there is no entry corresponding to the image item in the 'ipma' box, or if the entry contains no item property index or contains only property indexes with the value 0, the process ends directly at step 790.
In step 720, it is determined whether the extracted item property is a region annotation item property or not. If it is a region annotation item property, the next step is step 730, otherwise, the next step is step 770.
In step 730, the description of the region is extracted from the region annotation item property. The description of the region may include the type of the geometry and/or the parameters defining this geometry. This region may be associated with the image item extracted at step 700.
Next, in step 740, the index of the first item property associated with the region is extracted. If there is no item property associated with the region, the process continues directly at step 770.
In the first embodiment of the invention, the index of the first item property associated with the region is the first one present in the region annotation item property structure.
In the second embodiment of the invention, the index of the first item property associated with the region is the first one in the entry corresponding to the region annotation item property inside the 'ipma' box.
In the third embodiment of the invention, the index of the first item property associated with the region is the first one in the entry for the image item in the 'ipma' box that follows the index of the region annotation item property.
The index of the item property is used to extract the item property from the 'ipco' box.
Then, at step 750, the item property may be associated with the region. The information contained in the item property may be extracted and associated with the geometry of the region extracted at step 730.
At step 760, it is determined whether there are other item properties associated with the region.
In the first embodiment of the invention, it is determined whether there are any non-processed indexes in the region annotation item property structure.
In the second embodiment of the invention, it is determined whether there are any non-processed indexes in the entry corresponding to the region annotation item property inside the 'ipma' box.
In the third embodiment of the invention, it is determined whether the next index in the entry for the image item in the 'ipma' box corresponds to a descriptive item property that is not a region annotation item property.
If there are other item properties associated with the region, the next step is step 765. Otherwise, the next step is step 770.
In step 765, the index of another item property associated with the region is retrieved. The first, second and third embodiments of the invention are handled in a similar way as in step 740. The next step is step 750.
In the fourth embodiment of the invention, there is no item property associated with the region and steps 740, 760 and 765 are skipped. The annotations for the region are extracted directly from the region annotation item property and may be associated with the region at step 750.
In step 770, it is determined whether there is another item property associated with the image item. This determination is realized by checking whether there are any non-processed indexes in the entry corresponding to the image item inside the 'ipma' box. If there are other item properties associated with the image item, the next step is step 780. Otherwise, the next step is step 790.
At step 780, another item property associated with the image item is extracted. This step is similar to step 710. The next step is step 720.
At step 790, the process ends.
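For illustration only, the reading process of Figure 7 may be sketched as follows for the first embodiment, reusing the same mocked representation of the 'ipco' and 'ipma' boxes as above; all names in this sketch are assumptions.

# Illustrative sketch of the reading process of Figure 7, first embodiment.
def read_region_annotations(ipco, ipma, item_id):
    """Return (geometry, [annotation properties]) pairs for an image item."""
    regions = []
    for index in ipma.get(item_id, []):                   # steps 710/770/780
        if index == 0:
            continue
        prop = ipco[index - 1]
        if prop.get('type') == 'rgan':                    # step 720
            annotations = [ipco[i - 1]                    # steps 740 to 765
                           for i in prop['property_indexes']]
            regions.append((prop['geometry'], annotations))  # steps 730/750
    return regions                                        # step 790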
Figures 8 to 13 illustrate some embodiments of the invention for the HEIF or MIAF standards, where the region annotation 210 is represented using an item.
Figure 8 illustrates a fifth embodiment of the invention. The 'meta' box 800 contains an item 810, corresponding for example to an image item. The 'meta' box 800 also contains two other items 820 and 825 that correspond to region annotations. The geometry of these two region annotation items is described in their respective contents, indicated as geometry 821 and geometry 826. These contents are identified in the 'iloc' box. The content of a region annotation item may preferably be stored in an 'idat' box. It may also be stored in either an 'mdat' or an 'imda' box.
The 'iref' box 830 contains two entries, 831 and 832, associating the region annotation items, respectively 820 and 825, with the image item 810.
The 'iprp' box 840 contains the 'ipco' box 860 and the 'ipma' box 850. The 'ipco' box 860 contains the item properties 861 and 862 corresponding to the annotations of the two region annotation items 820 and 825. The 'ipma' box 850 associates the region annotation items 820 and 825 with their respective item properties 861 and 862 through two entries, respectively 851 and 852.
In this embodiment, a region annotation item may be defined with an item_type of 'rgan'. The region annotation item may be associated with its image item through an item reference box of type 'cdsc'.
The syntax of the content of the region annotation item may be:

aligned(8) class RegionAnnotationItem {
    Geometry geometry;
}

The semantics of the region annotation item may be:
- geometry is the geometry of the region.
Possibly a version field may be added to the region annotation item for future evolution. Extensions for the region annotation item may also be realized by defining new geometry types.
In this embodiment, all the item properties associated through an entry of the 'ipma' box to the region annotation item apply to the region of the annotated image defined by the geometry of the region annotation item.
In this embodiment, the annotated image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
  HandlerBox 'hdlr': 'pict'
  PrimaryItemBox 'pitm': item_ID = 1;
  ItemInfoBox 'iinf': entry_count = 4
    'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    'infe': item_ID = 2, item_type = 'rgan', hidden = true; // (ref 260)
    'infe': item_ID = 3, item_type = 'rgan', hidden = true; // (ref 261)
    'infe': item_ID = 4, item_type = 'rgan', hidden = true; // (ref 270)
  ItemLocationBox 'iloc': item_count = 4
    item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
    item_ID = 2, extent_count = 1, extent_offset = O1, extent_length = L1;
    item_ID = 3, extent_count = 1, extent_offset = O2, extent_length = L2;
    item_ID = 4, extent_count = 1, extent_offset = O3, extent_length = L3;
  ItemReferenceBox 'iref':
    referenceType = 'cdsc', from_item_ID = 2, reference_count = 1, to_item_ID = 1;
    referenceType = 'cdsc', from_item_ID = 3, reference_count = 1, to_item_ID = 1;
    referenceType = 'cdsc', from_item_ID = 4, reference_count = 1, to_item_ID = 1;
  ItemPropertiesBox 'iprp':
    ItemPropertyContainerBox 'ipco':
      1) 'hvcC' // Config for annotated image
      2) 'ispe' // Size of annotated image
      3) 'udes', lang = en, name = 'face', description = '', tags = face
      4) 'udes', lang = fr, name = 'Jean', description = '', tags = person
      5) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
    ItemPropertyAssociation 'ipma': entry_count = 4
      1) item_ID = 1, association_count = 2
         essential = 1, property_index = 1;
         essential = 0, property_index = 2;
      2) item_ID = 2, association_count = 1
         essential = 0, property_index = 3;
      3) item_ID = 3, association_count = 2
         essential = 0, property_index = 3;
         essential = 0, property_index = 4;
      4) item_ID = 4, association_count = 1
         essential = 0, property_index = 5;
MediaDataBox 'mdat' or 'idat':
  HEVC Image (at file offset X, with length Y)
  Region Annotation (at file offset O1, with length L1)
    geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
  Region Annotation (at file offset O2, with length L2)
    geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
  Region Annotation (at file offset O3, with length L3)
    geometry: type = 2, x = x2, y = y2, radius = r2;

In this HEIF file, the image item 250 is represented by the item with an item_ID value of 1. The region 260 is represented by the item with an item_ID value of 2. It is associated with the item property at index 3 through the second entry of the 'ipma' box. This property corresponds to a face annotation. The geometry of the region annotation item is described in the first region annotation part of the MediaDataBox.

The region 261 is represented by the item with an item_ID value of 3. It is associated with the item properties at indexes 3 and 4 through the third entry of the 'ipma' box. These properties correspond respectively to a face annotation and to an annotation describing a person named "Jean". The geometry of the region annotation item is described in the second region annotation part of the MediaDataBox.

The region 270 is represented by the item with an item_ID value of 4. It is associated with the item property at index 5 through the fourth entry of the 'ipma' box. This property corresponds to an annotation describing "Notre-Dame de Paris" as a building. The geometry of the region annotation item is described in the third region annotation part of the MediaDataBox.
Each region annotation item is associated with the annotated image through an item reference of type 'cdsc'.
All the region annotation items may have a hidden property set to true indicating to a reader that this item is not intended to be displayed.
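For illustration only, the way a reader may resolve the regions of an image item in this fifth embodiment may be sketched as follows; the flat Python structures standing in for the 'iref', 'ipco' and 'ipma' boxes, and all names used, are assumptions.

# Illustrative sketch: region annotation items are found through 'cdsc'
# references to the image item and their annotations through 'ipma' entries.
def regions_for_image(items, irefs, ipco, ipma, image_item_id):
    """items: item_ID -> (item_type, content); irefs: (type, from_id, to_id)."""
    regions = []
    for ref_type, from_id, to_id in irefs:
        if ref_type != 'cdsc' or to_id != image_item_id:
            continue
        item_type, content = items[from_id]
        if item_type != 'rgan':
            continue
        geometry = content                       # geometry stored as item content
        annotations = [ipco[i - 1] for i in ipma.get(from_id, [])]
        regions.append((geometry, annotations))
    return regions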
In a first variant of this fifth embodiment, the syntax of the region annotation item may be:

aligned(8) class RegionAnnotationItem {
    unsigned int(8) region_count;
    for (r=0; r < region_count; r++) {
        Geometry geometry;
    }
}

This first variant of the fifth embodiment enables to define several regions sharing the same set of associated item properties in a single region annotation item.
Figure 9 illustrates a sixth embodiment of the invention. This embodiment is similar to the fifth embodiment except for the association between the region annotation items and the image item.
In this embodiment, the region annotation items 820 and 825 are grouped together by the region group 970. The region group 970 may be an EntityToGroupBox with a grouping_type of 'rgan' or of 'rcog'. This region group 970 is associated with the image item 810 through the entry 933 of the 'iref' box 830.
In this embodiment, the annotated image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
  HandlerBox 'hdlr': 'pict'
  PrimaryItemBox 'pitm': item_ID = 1;
  ItemInfoBox 'iinf': entry_count = 4
    'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    'infe': item_ID = 2, item_type = 'rgan', hidden = true; // (ref 260)
    'infe': item_ID = 3, item_type = 'rgan', hidden = true; // (ref 261)
    'infe': item_ID = 4, item_type = 'rgan', hidden = true; // (ref 270)
  ItemLocationBox 'iloc': item_count = 4
    item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
    item_ID = 2, extent_count = 1, extent_offset = O1, extent_length = L1;
    item_ID = 3, extent_count = 1, extent_offset = O2, extent_length = L2;
    item_ID = 4, extent_count = 1, extent_offset = O3, extent_length = L3;
  GroupsListBox 'grpl':
    grouping_type = 'rgan', group_id = 100, num_entities_in_group = 3, entity_id = 2, 3, 4;
  ItemReferenceBox 'iref':
    referenceType = 'cdsc', from_item_ID = 100, reference_count = 1, to_item_ID = 1;
  ItemPropertiesBox 'iprp':
    ItemPropertyContainerBox 'ipco':
      1) 'hvcC' // Config for annotated image
      2) 'ispe' // Size of annotated image
      3) 'udes', lang = en, name = 'face', description = '', tags = face
      4) 'udes', lang = fr, name = 'Jean', description = '', tags = person
      5) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
    ItemPropertyAssociation 'ipma': entry_count = 4
      1) item_ID = 1, association_count = 2
         essential = 1, property_index = 1;
         essential = 0, property_index = 2;
      2) item_ID = 2, association_count = 1
         essential = 0, property_index = 3;
      3) item_ID = 3, association_count = 2
         essential = 0, property_index = 3;
         essential = 0, property_index = 4;
      4) item_ID = 4, association_count = 1
         essential = 0, property_index = 5;
MediaDataBox 'mdat' or 'idat':
  HEVC Image (at file offset X, with length Y)
  Region Annotation (at file offset O1, with length L1)
    geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
  Region Annotation (at file offset O2, with length L2)
    geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
  Region Annotation (at file offset O3, with length L3)
    geometry: type = 2, x = x2, y = y2, radius = r2;

This HEIF file is similar to the one corresponding to the fifth embodiment, except for the association between a region annotation item and the image item. In this HEIF file, the region annotation items with the item_ID 2, 3, and 4 are grouped inside a group of grouping_type 'rgan' and with a group_id 100. This group is associated with the image item through an item reference with a referenceType 'cdsc'.
In a variant of this embodiment, the item properties 861 and 862 are associated with the region group 970.
In another variant of this embodiment, the item properties 861 and 862 are associated with the region group 970, the region group 970 is not associated with the image item 810, and the region annotation items 820 and 825 are associated with the image item 810 through item references.
Figure 10 illustrates a seventh embodiment of the invention. This embodiment is similar to the fifth embodiment except for the description of the geometry of the regions. In this embodiment, the two region annotation items 1020 and 1025 do not have their geometry inside their respective contents. Instead, these region annotation items are associated with the geometry item properties 1063 and 1064, respectively, through the entries 851 and 852 of the 'ipma' box 850.
In this embodiment, the geometry item property may be defined as follows:
- Box type: 'geom'
- Property type: Descriptive item property
- Container: ItemPropertyContainerBox
- Mandatory (per item): Yes for an item of type 'rgan'
- Quantity (per item): Zero or one

The syntax of the geometry item property may be:

aligned(8) class GeometryProperty
extends ItemFullProperty('geom', version = 0, flags = 0) {
    Geometry geometry;
}

The semantics of the geometry item property may be:
- geometry is the geometry of the region.
In this embodiment, the annotated image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
  HandlerBox 'hdlr': 'pict'
  PrimaryItemBox 'pitm': item_ID = 1;
  ItemInfoBox 'iinf': entry_count = 4
    'infe': item_ID = 1, item_type = 'hvc1'; // (ref 250)
    'infe': item_ID = 2, item_type = 'rgan', hidden = true; // (ref 260)
    'infe': item_ID = 3, item_type = 'rgan', hidden = true; // (ref 261)
    'infe': item_ID = 4, item_type = 'rgan', hidden = true; // (ref 270)
  ItemLocationBox 'iloc': item_count = 1
    item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
  ItemReferenceBox 'iref':
    referenceType = 'cdsc', from_item_ID = 2, reference_count = 1, to_item_ID = 1;
    referenceType = 'cdsc', from_item_ID = 3, reference_count = 1, to_item_ID = 1;
    referenceType = 'cdsc', from_item_ID = 4, reference_count = 1, to_item_ID = 1;
  ItemPropertiesBox 'iprp':
    ItemPropertyContainerBox 'ipco':
      1) 'hvcC' // Config for annotated image
      2) 'ispe' // Size of annotated image
      3) 'geom' geometry: type = 1, x = x0, y = y0, width = w0, height = h0;
      4) 'geom' geometry: type = 1, x = x1, y = y1, width = w1, height = h1;
      5) 'geom' geometry: type = 2, x = x2, y = y2, radius = r2;
      6) 'udes', lang = en, name = 'face', description = '', tags = face
      7) 'udes', lang = fr, name = 'Jean', description = '', tags = person
      8) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
    ItemPropertyAssociation 'ipma': entry_count = 4
      1) item_ID = 1, association_count = 2
         essential = 1, property_index = 1;
         essential = 0, property_index = 2;
      2) item_ID = 2, association_count = 2
         essential = 0, property_index = 3;
         essential = 0, property_index = 6;
      3) item_ID = 3, association_count = 3
         essential = 0, property_index = 4;
         essential = 0, property_index = 6;
         essential = 0, property_index = 7;
      4) item_ID = 4, association_count = 2
         essential = 0, property_index = 5;
         essential = 0, property_index = 8;
MediaDataBox 'mdat' or 'idat':
  HEVC Image (at file offset X, with length Y)

This HEIF file is similar to the one corresponding to the fifth embodiment, except for the description of the geometry of region annotation items. A region annotation item is associated with a geometry item property to specify its geometry. As a region annotation item doesn't have any content in this HEIF file, it may have no associated entry in the ItemLocationBox, or its associated entry may have no extent.
In a first variant to this seventh embodiment, the syntax of the geometry item property may be:

aligned(8) class GeometryProperty
extends ItemFullProperty('geom', version = 0, flags = 0) {
    unsigned int(8) geometry_count;
    for (i=0; i < geometry_count; i++) {
        Geometry geometry;
    }
}

This first alternative enables to define several geometries inside a single geometry item property. A region annotation item associated with a geometry item property comprises one region for each geometry defined in the geometry item property. All the item properties associated with the region annotation item apply to all the regions of the region annotation item.
In a second variant to this seventh embodiment, the geometry of a region is defined by two item properties: a location item property and a shape item property.
In this second variant, the location item property may be defined as follows:
- Box type: 'relo'
- Property type: Descriptive item property
- Container: ItemPropertyContainerBox
- Mandatory (per item): Yes for an item of type 'rgan'
- Quantity (per item): Zero or one

The syntax for the location property may be:

aligned(8) class LocationProperty
extends ItemFullProperty('relo', version = 0, flags = 0) {
    Location location;
}

The semantics of the location item property may be:
- location is the location of the region.
In this second variant, the shape item property may be defined as follows:
- Box type: 'resh'
- Property type: Descriptive item property
- Container: ItemPropertyContainerBox
- Mandatory (per item): Yes for an item of type 'rgan'
- Quantity (per item): Zero or one

The syntax for the shape property may be:

aligned(8) class ShapeProperty
extends ItemFullProperty('resh', version = 0, flags = 0) {
    Shape shape;
}

The semantics of the shape item property may be:
- shape is the shape of the region.
Alternatively, several location item properties may be associated with the same region annotation item. In this alternative, the region annotation item comprises one region for each location item property. All these regions share the same shape.
Alternatively, several location item properties and/or several shape item properties may be associated with the same region annotation item. In this alternative, the region annotation item comprises one region for each combination of a location item property and a shape item property. For example, if a region annotation item is associated with two location item properties and three shape item properties, it comprises six regions.
In yet another alternative of this second variant, a location item property and/or a shape item property may contain respectively several locations or several shapes. In this alternative, the region annotation item comprises one region for each combination of a location contained in one of the location item properties and a shape contained in one of the shape item properties.
In another alternative, several geometry item properties may be associated with the same region annotation item. In this alternative, the region annotation item comprises one region for each geometry item property.
For all the alternatives where several regions are comprised inside a region annotation item, all the item properties associated with the region annotation item apply to all the regions comprised inside the region annotation item.
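For illustration only, this combination rule may be sketched as follows; the Python representation of locations and shapes as opaque values is an assumption.

from itertools import product

# Sketch of the combination rule: a region annotation item associated with
# several location and shape item properties comprises one region per
# (location, shape) pair; two locations and three shapes give six regions.
def regions_from_properties(locations, shapes):
    return list(product(locations, shapes))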
Figure 11 illustrates the main steps of a process for adding a new region annotation to an image item stored in a HEIF file when the region annotation is described by an item according to embodiments of the invention. This figure shares similarities with Figure 6. These steps can be applied to a HEIF file stored on a disk, stored in memory, or stored with an adapted representation in memory. The new region annotation comprises the geometry of the region and the annotation itself. Possibly, these steps may be modified to add simultaneously several annotations to a region of an image item.
Possibly, these steps may also be modified to add simultaneously an annotation to several regions of an image item.
This process can be used when creating a new HEIF file, or when modifying an existing HEIF file.
In a first step 1100, it is determined whether a region annotation item with the same geometry already exists in the HEIF file.
If a region annotation item with the same geometry already exists, the next step is step 1110, otherwise, the next step is step 1120.
In step 1110, the item_ID value corresponding to the existing region annotation item is selected. The next step is step 1140.
At step 1120, a new region annotation item for representing the region is created.
An 'infe' box describing the region annotation item may be created inside the 'iinf' box of the HEIF file. An entry inside the 'iloc' box may be added to indicate the location of the content of the region annotation item. An item_ID value is associated with this new region annotation item.
In the case of the fifth embodiment or of the sixth embodiment, the geometry of the region is stored inside the content of the region annotation item.
In the case of the seventh embodiment, a new geometry item property may be created to store the geometry of the region. The index of this new item property is memorized. Possibly, if an identical geometry item property exists in the HEIF file its index is memorized and no new item property is created.
In the case of the seventh embodiment, a new location item property and a new shape item property may be created to store the geometry of the region. The indexes of these new item properties are memorized. Possibly, if either or both of these item properties exist in the HEIF file their indexes are memorized and no corresponding item property is created.
Then, at step 1130, the new region annotation item is associated with the image item. In the case of the fifth or seventh embodiment and for some variants of the sixth embodiment, a new reference of type 'cdsc' is created in the 'iref' box of the HEIF file. This reference associates the region annotation item with the image item.
In the case of some variants of the sixth embodiment, it is first determined whether a region group that may be an EntityToGroupBox of grouping_type 'rgan' is associated with the image item. If this is the case, the region annotation item is added to the region group. If this is not the case, a new region group of type 'rgan' is created, containing the new region annotation item. Then a new reference of type 'cdsc' is created in the 'iref' box of the HEIF file. This reference associates the region group with the image item.
In the case of a combination of the sixth embodiment with either the fifth or the seventh embodiment, either of these processes may be applied. Possibly, it is first determined whether a region group that is an EntityToGroupBox of grouping_type 'rgan' is associated with the image item. If this is the case, the region annotation item is added to the region group. If this is not the case, it is determined whether another region annotation item is associated with the image item. If this is the case, a new region group of type 'rgan' is created, containing both the other region annotation item and the new region annotation item. The reference between the other region annotation item and the image item is removed and a new reference of type 'cdsc' between the region group and the image item is created. If this is not the case, a new reference of type 'cdsc' between the new region annotation item and the image item is created.
The next step is step 1140.
At step 1140, it is determined whether an item property corresponding to the annotation already exists in the HEIF file. If the item property already exists, the next step is step 1150, otherwise, the next step is step 1160.
In step 1150, the existing item property is selected.
In step 1160, a new item property is created to represent the annotation. The type of the item property depends on the content of the annotation. The information contained in the annotation is stored inside the item property.
After either step 1150 or step 1160, the next step is step 1170.
At step 1170, the item property is associated with the region annotation item.
If the region annotation item already has an associated entry in the 'ipma' box, then the index of the item property is added to this entry.
If the region annotation item doesn't have an associated entry in the 'ipma' box, then a new entry is created for this item and the index of the item property is added to this entry.
In the case of the seventh embodiment, the indexes of the geometry item property, of the location item property, and/or of the shape item property that were memorized at step 1120 are also added to this entry.
Possibly, if a region annotation item may comprise several regions, several steps may be modified. At step 1100, it is determined whether a region annotation item comprising a single region with the same geometry exists. At step 1120, it is determined whether an existing region annotation item is associated with the image item with a set of properties corresponding to the annotation of the new region annotation. If this is the case, the geometry of the new region is added to the existing region annotation item and steps 1140 to 1170 are not executed. If this is not the case, a new region annotation item is created.
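For illustration only, steps 1100 to 1170 may be sketched as follows for the fifth embodiment, with the HEIF file mocked as a Python dictionary holding 'items', 'irefs', 'ipco' and 'ipma' entries; all names in this sketch are assumptions.

# Illustrative sketch of adding a region annotation described by an item.
def add_region_annotation_item(heif, image_item_id, geometry, annotation):
    # Steps 1100/1110: reuse a region annotation item with the same geometry.
    item_id = next((i for i, item in heif['items'].items()
                    if item == ('rgan', geometry)), None)
    if item_id is None:
        # Step 1120: create a new 'rgan' item whose content is the geometry.
        item_id = max(heif['items'], default=0) + 1
        heif['items'][item_id] = ('rgan', geometry)
        # Step 1130: reference the image item with a 'cdsc' item reference.
        heif['irefs'].append(('cdsc', item_id, image_item_id))
    # Steps 1140/1150/1160: reuse or create the annotation property.
    if annotation not in heif['ipco']:
        heif['ipco'].append(annotation)
    index = heif['ipco'].index(annotation) + 1
    # Step 1170: associate the property with the region annotation item.
    entry = heif['ipma'].setdefault(item_id, [])
    if index not in entry:
        entry.append(index)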
Figure 12 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item according to embodiments of the invention.
In a first step 1200, an image item is extracted from the HEIF file. Possibly, only part of the metadata describing the image item is extracted.
In step 1210, a first item, different from the image item, is extracted from the HEIF file. If no other items exist in the HEIF file, the algorithm continues directly at step 1270.
Then, in step 1220, it is determined whether the other item is a region annotation item. If it is a region annotation item, the next step is step 1230, otherwise, the next step is step 1250.
At step 1230, it is determined whether the region annotation item is associated with the image item by a reference of type 'cdsc' inside the 'iref' box. If this is the case, the next step is step 1240, otherwise the next step is step 1250.
At step 1240, the item properties associated with the region annotation item through an entry of the 'ipma' box are extracted.
Possibly, in the context of the sixth embodiment, it is determined whether the region annotation item is contained in a region group that is an EntityToGroupBox with a grouping_type of 'rgan'. If this is the case, the item properties associated with the region group through an entry of the 'ipma' box are extracted.
Then the geometry of the region annotation item is extracted.
In the context of the fifth embodiment, or of some variants of the sixth embodiment, the geometry of the region annotation item is extracted from the content of the region annotation item.
In the context of the seventh embodiment, the geometry of the region annotation item is extracted from the geometry item properties, the location item properties, and/or the shape item properties associated with the region annotation item.
The region, linked with the extracted item properties, is associated with the image item. The information contained in the item properties may be extracted and associated, together with the geometry of the region, with the image item.
If the region annotation item comprises several regions, then each of these regions, linked with the extracted item properties, is associated with the image item. The information contained in the item properties may be extracted and associated, together with the geometry of each of these regions, with the image item.
The next step is step 1250.
At step 1250, it is determined whether there are other items different from the image item in the HEIF file. If this is the case, the next step is step 1260, otherwise, the next step is step 1270.
At step 1260, another item is extracted from the HEIF file. The next step is step 1220.

At step 1270, the process ends.
Figure 13 illustrates the main steps of a process for reading a HEIF file containing region annotations when the region annotations are described by an item according to some variants of the sixth embodiment. This process is similar to the one illustrated by Figure 12.
In a first step 1300, an image item is extracted from the HEIF file. Possibly, only part of the metadata describing the image item is extracted.
In step 1310, a first group is extracted from the HEIF file. If no group exists in the HEIF file, the algorithm continues directly at step 1370.
Then, in step 1320, it is determined whether the group is a region group that is an EntityToGroupBox with a grouping_type of 'rgan'. If it is a region group, the next step is step 1330, otherwise, the next step is step 1350.
At step 1330, it is determined whether the region group is associated with the image item by a reference of type 'cdsc' inside the 'iref' box. If this is the case, the next step is step 1340, otherwise the next step is step 1350.
At step 1340, the region annotation items contained in the group are retrieved. For each region annotation item, its region and its item properties are extracted and associated with the image item as described in step 1240.
If the region annotation item comprises several regions, then each of these regions, linked with the extracted item properties, is associated with the image item as described in step 1240.
Possibly, the item properties associated to the region group are extracted. For each region annotation item contained in the group, its region is extracted. Then each extracted region combined with all the extracted item properties is associated with the image item as described in step 1240.
The next step is step 1350.
At step 1350, it is determined whether there are other groups in the HEIF file. If this is the case, the next step is step 1360, otherwise, the next step is step 1370.
At step 1360, another group is extracted from the HEIF file. The next step is step 1320.

At step 1370, the process ends.
Figure 14 illustrates a process for processing a HEIF file containing an image and one or more region annotation items associated with this image according to embodiments of the invention. The process may be an image edition process. It may be a process removing parts of an image such as a crop. It may be a process transforming the geometry of the image such as a perspective correction, a rotation, or an affine transformation. It may be a process changing the content of the image such as applying a filter, or drawing on the image. The process may be a metadata edition process. It may be a process for removing private metadata. It may be a process for filtering metadata. It may be a process for translating metadata.
In the first step 1400, the process is applied on the image item. Possibly the image associated with the image item may be modified.
Possibly the result of the process may be stored in another image item as a derived image item. In this case, the region annotation items associated with the original image item are also associated with this derived image item. In the following steps, the processed image item is the derived image item.
In the step 1410, a first region annotation associated with the processed image item is retrieved. If no region annotation is associated with the processed image item, the next step is the step 1470.
In the step 1420, it is determined whether the region annotation should be removed. Depending on the process, different criteria may be used. A process may remove all region annotations. A process may remove any region annotation with a specific type. For example, a privacy preserving filter may remove any region annotation represented by a user-defined item property. A process may remove a region annotation depending on its localization. For example, a crop may remove any region annotation that doesn't intersect with the remaining image.
As an example, in Figure 2b, the region annotation 260 may be removed from the edited image as the region targeted by the region annotation 260 was cropped out of the edited image.
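For illustration only, the removal criterion of step 1420 for a crop may be sketched as a rectangle intersection test, assuming regions and the crop are axis-aligned (x, y, width, height) rectangles; the function names are assumptions.

# Minimal sketch of the crop-based removal criterion of step 1420.
def intersects(region, crop):
    rx, ry, rw, rh = region
    cx, cy, cw, ch = crop
    return rx < cx + cw and cx < rx + rw and ry < cy + ch and cy < ry + rh

def keep_region_after_crop(region, crop):
    # A region annotation that does not intersect the remaining image
    # may be removed at step 1425.
    return intersects(region, crop)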
If it is determined that the region annotation is to be removed, the next step is step 1425. Otherwise the next step is step 1430.
In step 1425, the region annotation is removed. The next step is step 1450.
In step 1430, it is determined whether the region annotation's geometry should be modified. Any process transforming the geometry of the image should also modify the region annotation's geometry. Such processes include rotating the image, scaling the image, applying an affine transform to the image, or correcting the perspective distortion of the image.
If it is determined that the region annotation's geometry should be modified, the next step is step 1435. Otherwise, the next step is step 1440.
In step 1435, the geometry of the region annotation is modified according to the process. Possibly the modified geometry is the exact result of applying the geometry transformation to the geometry of the region annotation. Possibly the modified geometry is an approximate result of applying the geometry transformation to the geometry of the region annotation.
For example, in Figure 2b, the region 261 is initially a rectangle. The perspective correction applied on the image transforms this region from a rectangle to a trapezoid.
The modified geometry may be a polygon representing exactly this trapezoid. It may be a polygon with integer coordinates representing an approximation of the rational coordinates of the trapezoid. It may be a rectangle approximating the shape of the trapezoid.
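For illustration only, step 1435 may be sketched as follows for a perspective correction, assuming the correction is given as a 3x3 homography matrix (a list of three rows); both the exact polygon and a bounding-rectangle approximation with integer coordinates are produced. All names are assumptions.

# Sketch of step 1435: map a rectangle through a homography and derive
# an exact polygon and an approximate axis-aligned rectangle.
def apply_homography(h, point):
    x, y = point
    xn = h[0][0] * x + h[0][1] * y + h[0][2]
    yn = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xn / w, yn / w)

def transform_rectangle(h, x, y, width, height):
    corners = [(x, y), (x + width, y), (x + width, y + height), (x, y + height)]
    polygon = [apply_homography(h, c) for c in corners]        # exact result
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    rectangle = (round(min(xs)), round(min(ys)),               # approximation
                 round(max(xs) - min(xs)), round(max(ys) - min(ys)))
    return polygon, rectangle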
The next step is step 1440.
In step 1440, it is determined whether the annotation of the region annotation should be modified. Depending on the process different criteria may be used. A process translating textual annotation may modify the text representing the annotation. A process filtering the image, for example by applying a blur, may modify the annotation to remove precise parts from it.
If it is determined that the annotation of the region annotation should be modified, the next step is step 1445. Otherwise, the next step is step 1450.
In step 1445, the annotation of the region annotation is modified according to the process.
For example, in Figure 2b, the region 261 has a region annotation corresponding to the description of a person. A privacy preserving process may keep the indication that the region annotation corresponds to a person, but remove the name of the person.
The next step is step 1450.
In step 1450, it is determined whether there are other region annotations to process. If it is determined that there are other region annotations to process, the next step is step 1460. Otherwise, the next step is step 1470.
In step 1460, another region annotation associated with the image item is retrieved. The next step is step 1420.
In step 1470, the process ends.
Annotation description
An annotation may be the focus location inside a captured picture. It may be represented using a user description item property, for example using a specific value for the name field and/or for the tags field, such as for example "focus". It may be represented by a new item property.
An annotation may be a detected face inside an image. It may be represented using a user description item property, for example using a specific value for the name field and/or for the tags field such as for example "face". It may be represented by a new item property.
An annotation may be an object detected inside an image by an object detection tool. It may be represented using a user description item property, for example using a specific value for the name field and/or for the tags field, and/or using a more descriptive value for the description field. For example, the name field may be "building" and the description field may be "House, building or monument". It may be represented by a new item property.
An annotation may be a specific object instance detected inside an image by an object detection tool. It may be represented using a user description item property, for example using a specific value for describing the generic type of the object in the tags field, using a more precise value corresponding to the object instance in the name field, and/or using a descriptive value for the object instance in the description field. For example, the tags field may be "church", the name field may be "Notre Dame" and the description field may be "Notre-Dame de Paris". It may be represented by a new item property.
An annotation may be a GPS location for an object in the image. It may be represented by a new item property.
An annotation may be text extracted from the image through an OCR tool. It may be represented by a user description item property, for example using a specific value for indicating that the annotation corresponds to an OCR result in the name field and/or in the tags field, and representing the OCR output in the description field. For example, the tags field may be "ocr" and the description field may be "Notre-Dame de Paris". It may be represented by a new item property. This new item property may include information about the font used, such as its family, style, weight, and/or size. Possibly the result of the OCR tool is split into several region annotations corresponding to different detected text areas and/or different detected text styling.
An annotation may describe an edition or a modification applied to a region of an image. It may be represented by a user description item property. It may be represented by a new item property.
An annotation may be stored in an item. This item may be associated with a region annotation item by a reference of type 'cdsc'. This item may be associated with a region annotation item property through a new item property associated to the region annotation item and referencing this item. The box type of this new item property may be 'rgcd'. For example, an annotation may be stored in an item of type 'Exif'. As another example, an annotation may be stored in an XMP document contained in an item of type 'mime' and with content type 'application/rdf+xml'.
Possibly, when using a user description item property, the language field is set to an appropriate value. Possibly, when using a user description item property, several instances of this item property are used with different language field values.
Geometry alternatives

Possibly the geometry of a region annotation may include other types. It may include a type for representing an ellipse. It may include a type for representing a 3d box expressed in world coordinates and the associated world to image projection. It may include a type for representing a specific polygon. It may include a type for representing a sphere.
Possibly the geometry of a region annotation may be limited to fewer types. It may be limited to a rectangle area.
Possibly the type field of the Geometry structure and/or of the Shape structure is a four-character code.
Possibly the location inside the geometry of a region annotation may correspond to the center of the shape.
Possibly a location without an associated shape represents a region of type point. Possibly the geometry of a region may be represented as one or more equations and/or inequalities. For example, a line may be represented by an equation. As another example, the inside of a triangle may be represented by three inequalities.
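For illustration only, the triangle example may be sketched as follows, testing a point against the three half-plane inequalities defined by the triangle's edges; the function names are assumptions.

# Sketch: the inside of a triangle (a, b, c) is the set of points for which
# the three edge cross products have the same sign.
def inside_triangle(p, a, b, c):
    def side(p0, p1, q):
        return (p1[0] - p0[0]) * (q[1] - p0[1]) - (p1[1] - p0[1]) * (q[0] - p0[0])
    s1, s2, s3 = side(a, b, p), side(b, c, p), side(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)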
Possibly some or all of the coordinates of the geometry of a region annotation may be expressed as rational numbers instead of integers. For example, a point may be represented by the following fields:

// point
unsigned int(32) xN;
unsigned int(32) xD;
unsigned int(32) yN;
unsigned int(32) yD;

where
- xN and xD are respectively the numerator and the denominator of the abscissa of the point.
- yN and yD are respectively the numerator and the denominator of the ordinate of the point.
Possibly the field_size of the Geometry structure, of the Location structure, and/or of the Shape structure may be fixed, for example using 32 bits. In this case, the field_size may not be specified inside the structure. When several of these structures are contained in a box, the definition of the field_size may be shared by all the structures. When one or more structures are contained in a full box, the definition of the field_size may be based on the flags field or on the version field of the full box. When one or more structures are contained in a box or a full box, the field_size may depend on the box type. For example, two different geometry item property boxes could be defined, each with its own
specific field_size.
Possibly in some embodiments the Geometry structure, the Location structure, and/or the Shape structure may be used together for representing the geometry of a region annotation.
Possibly several geometries sharing the same type could be represented in a compressed form. For example, a list of rectangles could be represented by expressing each field as a difference from the corresponding field of the previous rectangle.
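For illustration only, such a compressed form may be sketched as follows, assuming each rectangle is an (x, y, width, height) tuple; the function names are assumptions.

# Sketch of a delta-based compressed form for a list of rectangles.
def delta_encode(rectangles):
    encoded, previous = [], (0, 0, 0, 0)
    for rect in rectangles:
        encoded.append(tuple(c - p for c, p in zip(rect, previous)))
        previous = rect
    return encoded

def delta_decode(encoded):
    rectangles, previous = [], (0, 0, 0, 0)
    for delta in encoded:
        previous = tuple(p + d for p, d in zip(previous, delta))
        rectangles.append(previous)
    return rectangles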
Possibly a mask is always located at the top-left of the image and no position is specified for it.
Possibly a mask has always the same size as the annotated image.
Possibly a mask has a pixel density different from the one of the annotated image.
For example, one pixel of the mask may correspond to 4 pixels of the annotated image.
As another example, 4 pixels of the mask may correspond to 1 pixel of the annotated image. This difference in pixel density may be described in the Geometry structure or in the Shape structure. This difference may be described using the pixel aspect ratio property.
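For illustration only, the mapping between image pixels and mask pixels when their pixel densities differ may be sketched as follows; the num/den ratio is an assumption standing for the signalled density difference.

# Sketch of the pixel-density mapping: with num = 1, den = 2, one mask
# pixel covers a 2x2 block of image pixels; with num = 2, den = 1, a 2x2
# block of mask pixels covers one image pixel (its top-left pixel is taken).
def image_to_mask_pixel(x, y, num, den):
    return (x * num // den, y * num // den)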
Possibly a color mask may combine the masks corresponding to several region annotations into another mask for another region annotation. For example, the mask for a region annotation describing a first specific person could use the pixel color value of Ox1. The mask for another region annotation describing a second specific person could use the pixel color value of 0x2. The mask for a region annotation describing persons in a generic way and combining these two regions could use the pixel color value of 0x3.
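For illustration only, one possible interpretation of this colour mask example may be sketched as follows, where a pixel belongs to a region when its value shares a bit with the region's value; this reading is an assumption.

# Sketch: the region using the value 0x3 then selects the union of the
# regions using the values 0x1 and 0x2.
def pixel_in_region(mask_value, region_value):
    return (mask_value & region_value) != 0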
Possibly, the type of the Geometry structure or of the Shape structure could be specified by the type of the region annotation item property, of the region annotation item, of the geometry item property, and/or of the shape item property.
For example, a point region annotation item property and a rectangle region annotation item property may be defined by the following syntax:

// Point
aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rpnt', version = 0, flags = 0) {
    unsigned int field_size = ((flags & 1) + 1) * 16;
    unsigned int(field_size) x;
    unsigned int(field_size) y;
    unsigned int(1) flag;
    unsigned int(15) entry_count;
    for (i=0; i < entry_count; i++) {
        if (flag)
            unsigned int(16) property_index;
        else
            unsigned int(8) property_index;
    }
}

// Rectangle
aligned(8) class RegionAnnotationProperty
extends ItemFullProperty('rrec', version = 0, flags = 0) {
    unsigned int field_size = ((flags & 1) + 1) * 16;
    unsigned int(field_size) x;
    unsigned int(field_size) y;
    unsigned int(field_size) width;
    unsigned int(field_size) height;
    unsigned int(1) flag;
    unsigned int(15) entry_count;
    for (i=0; i < entry_count; i++) {
        if (flag)
            unsigned int(16) property_index;
        else
            unsigned int(8) property_index;
    }
}

Possibly a specific region annotation item may be defined for representing a mask. The content of this specific region annotation item may be the image corresponding to the mask. Other information used to define the region represented by the mask, such as the location of the mask or the color used by the mask, may be represented in a specific item property associated with the mask region annotation item. Possibly, the syntax for the content of such an item may be:

aligned(8) class MaskItem {
    bit(8) data[];
}

Possibly, the content of such an item may include the location of the region with the following syntax:

aligned(8) class MaskItem {
    unsigned int(1) flags;
    unsigned int(7) reserved;
    unsigned int field_size = (flags + 1) * 16;
    unsigned int(field_size) x;
    unsigned int(field_size) y;
    bit(8) data[];
}
Possibly the mask can be stored directly inside the geometry of the region.
Possibly the location of a region annotation item may be represented in the content of the item and/or in an associated item property, and the shape of a region annotation item may be represented in the content of the item and/or in an associated item property. For example, the shape of the region may be represented in the content of the item and the location of the region may be represented in an associated item property. In this example, several locations may be associated with the region annotation item. The region annotation item comprises one region for each location, all these regions sharing the same shape.
Possibly the fields defining the number of items contained in a list may be represented with a different size. For example, the region_count field may be represented as a 16-bit unsigned integer.
Item property alternatives

Possibly, the region annotation item property could extend the ItemProperty box instead of extending the ItemFullProperty box.
Possibly, a region annotation item property associated with an item has the essential field set to the value 1 to ensure that the reader understands the region annotation item property.
Possibly, a new type of property, regional property, is defined in addition to the descriptive property and transformative property types. A processing specific to region annotation item property may be associated with the regional property type instead of being associated with the region annotation item property itself. For example, the processing of item properties according to the third embodiment of the invention could be defined based on the regional property type.
Item alternatives

Possibly, two region annotation items may share the same content to represent the same region but with different annotations.
Possibly, the region annotation item may be associated with an image item using a different reference type. The reference type used may be 'tbas' or 'dimg'. A new reference type, for example 'rgan', may be defined for describing such an association.
Possibly different types of reference may be used depending on the type of the annotation. For example, a region annotation item corresponding to the focus area may be associated with its annotated image item by a reference of type 'rfoc' indicating that this is a focus area. In this example, the region annotation item may not be associated with an item property for describing its annotation.
Possibly, the region annotation item is of type 'hvc1' and its content is the annotated image.
Possibly, the region annotation item is a derived image item of type 'iden' without any content.
Possibly, the region annotation item is of type 'hvt1' and its content is a tile of the annotated image. The geometry of the region annotation item may be the area covered by the tile. Possibly, the content of the region annotation item is a set of tiles of the annotated image and the geometry of its region may be the area covered by the tiles. Possibly, the region annotation item corresponds to a sub-picture of the annotated image. The geometry of the region annotation item may be the area covered by the sub-picture. Possibly, the region annotation item corresponds to a set of sub-pictures of the annotated image. The geometry of the region annotation item may be the area covered by the sub-pictures.
Possibly the box type of the geometry item property may be different. For example, it may be 'rcgr'.
Possibly, the location of the region annotation item is described using the 'rloc' item property.
Possibly, the shape of the region annotation item is described as a rectangle using the 'ispe' item property.
Possibly, the geometry of the region annotation item is described using the 'clap' item property.
For example, in Figure 2b, the region 261 may be represented by a region annotation item 'rgan' without any content. This region annotation item is associated with the image item by a reference of type 'rgan'. The location of the region 261 is represented by an 'rloc' item property. The shape of the region 261 is represented by an 'ispe' item property.
Groups alternative

The sixth embodiment of the invention may be combined with the fifth embodiment, enabling to associate region annotation items with image items either directly and/or through a region group.
The sixth embodiment of the invention may be combined with the seventh embodiment, enabling to associate region annotation items with image items either directly and/or through a region group.
Possibly in the sixth embodiment, an item property may be associated with a region group to indicate that the property applies to all the region annotation items comprised in the region group. For example, a shape property may be associated with a region group to describe the shape of all the region annotation items of the group. As another example, a user description item property may be associated with a region group to associate this user description item property to all the region annotation items of the group.
In the sixth embodiment of the invention, the grouping_type used by the region group may be 'rcog'.
Possibly, the region group may be associated with an image item using a different reference type. A new reference type, for example 'rcog', may be defined for describing such an association.
Using some of these alternatives, as an example the image from Figure 2b may be stored in a HEIF file with the following structure:

FileTypeBox 'ftyp': major-brand = 'heic', compatible-brands = 'heic'
MetaBox 'meta': (container)
  HandlerBox 'hdlr': 'pict'
  PrimaryItemBox 'pitm': item_ID = 1;
  ItemInfoBox 'iinf': entry_count = 4
    'infe': item_ID = 1, item_type = 'hvc1';
    'infe': item_ID = 2, item_type = 'rgan', hidden = true;
    'infe': item_ID = 3, item_type = 'rgan', hidden = true;
    'infe': item_ID = 4, item_type = 'rgan', hidden = true;
  ItemLocationBox 'iloc': item_count = 1
    item_ID = 1, extent_count = 1, extent_offset = X, extent_length = Y;
  GroupsListBox 'grpl':
    grouping_type = 'rcog', group_id = 100, num_entities_in_group = 2, entity_id = 2, 3;
  ItemReferenceBox 'iref':
    referenceType = 'rgan', from_item_ID = 2, reference_count = 1, to_item_ID = 1;
    referenceType = 'rgan', from_item_ID = 3, reference_count = 1, to_item_ID = 1;
    referenceType = 'rgan', from_item_ID = 4, reference_count = 1, to_item_ID = 1;
  ItemPropertiesBox 'iprp':
    ItemPropertyContainerBox 'ipco':
      1) 'hvcC' // Config for annotated image
      2) 'ispe' // Size of annotated image
      3) 'rloc', x = x0, y = y0
      4) 'ispe', width = w0, height = h0
      5) 'rloc', x = x1, y = y1
      6) 'ispe', width = w1, height = h1
      7) 'rloc', x = x2, y = y2
      8) 'icir', radius = r2
      9) 'udes', lang = en, name = 'face', description = '', tags = face
      10) 'udes', lang = fr, name = 'Jean', description = '', tags = person
      11) 'udes', lang = fr, name = 'Notre-Dame', description = 'Notre-Dame de Paris', tags = building
    ItemPropertyAssociation 'ipma': entry_count = 5
      1) item_ID = 1, association_count = 2
         essential = 1, property_index = 1;
         essential = 0, property_index = 2;
      2) item_ID = 2, association_count = 2
         essential = 0, property_index = 3;
         essential = 0, property_index = 4;
      3) item_ID = 3, association_count = 3
         essential = 0, property_index = 5;
         essential = 0, property_index = 6;
         essential = 0, property_index = 10;
      4) item_ID = 4, association_count = 3
         essential = 0, property_index = 7;
         essential = 0, property_index = 8;
         essential = 0, property_index = 11;
      5) item_ID = 100, association_count = 1
         essential = 0, property_index = 9;
MediaDataBox 'mdat' or 'idat':
  HEVC Image (at file offset X, with length Y)

Misc alternatives

Possibly, after step 790 of the process illustrated by Figure 7, the transformative properties associated with the image item are extracted. Then, they are applied according to their ordering inside the 'ipma' entry to the annotated image item and to the geometries of its associated region annotation item properties. This application may be realized according to the process illustrated by Figure 14.
Possibly, after step 1370 of the process illustrated by Figure 13, the transformative properties associated with the image item are extracted. Then, they are applied, according to their ordering inside the 'ipma' entry, to the annotated image item and to the geometries of its associated region annotation items. This application may be realized according to the process illustrated by Figure 14.
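The effect of such transformative properties on region geometries can be illustrated with a minimal sketch. The sketch below assumes only two kinds of transformation, a 90-degree anti-clockwise rotation and a horizontal mirror, applied to a rectangular region (x, y, w, h) expressed in the coordinates of an image of size (W, H); the coordinate conventions and the transform labels are assumptions chosen for the illustration and the sketch does not reproduce the process of Figure 14.

def rotate_90_ccw(region, size):
    # 90-degree anti-clockwise rotation: the image size becomes (H, W) and a
    # column x of the original image becomes the row (W - 1 - x) of the result.
    x, y, w, h = region
    width, height = size
    return (y, width - x - w, h, w), (height, width)

def mirror_horizontal(region, size):
    # Mirror about a vertical axis (left-right flip): only x changes.
    x, y, w, h = region
    width, height = size
    return (width - x - w, y, w, h), (width, height)

# Hypothetical labels for the transforms; in the file they would be carried by
# transformative item properties whose order is given by the 'ipma' entry.
TRANSFORMS = {'rotate90': rotate_90_ccw, 'mirror_h': mirror_horizontal}

def apply_transforms(region, size, ordered_transforms):
    # Apply the transformations in their 'ipma' order to the region geometry.
    for name in ordered_transforms:
        region, size = TRANSFORMS[name](region, size)
    return region, size

# Example: a 100x50 region at (10, 20) in a 640x480 image, rotated then mirrored.
region, size = apply_transforms((10, 20, 100, 50), (640, 480), ['rotate90', 'mirror_h'])

The point illustrated here is that the same ordered list of transformations must be applied both to the annotated image and to the geometry of each of its region annotations, otherwise the regions no longer match the displayed image.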
Possibly, region annotations are applied to the output image of the image item. Possibly, a region annotation may be associated with several image items. For example, the same region annotation may apply to several image items corresponding to an image burst.
Possibly, a region annotation may be associated with several image items. In addition, a geometry modification item property is associated with each association in order to adapt the geometry of the region to each image item. The geometry modification item property may be associated with the region and/or with the region annotation and may contain an indication of the image item to which it applies. Alternatively, the geometry modification item property may be associated with the image item and may contain an indication of the region and/or of the region annotation to which it applies. For example, the same region annotation may be associated with several image items corresponding to a burst of images containing a moving object. The geometry modification item properties then change the geometry of the region so that it fits the moving object in each image item.
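A minimal sketch of this per-item adaptation is given below. The geometry modification structure used here, a simple mapping from image item identifiers to replacement rectangles, is purely hypothetical and only illustrates the idea of overriding a shared region geometry for each image item of a burst.

# One region annotation shared by several image items (for example a burst),
# with a hypothetical per-item geometry modification overriding the base
# geometry so that the region follows a moving object.
BASE_GEOMETRY = (120, 80, 64, 64)          # x, y, w, h shared by default

# Hypothetical geometry modifications: image item identifier -> adapted rectangle.
GEOMETRY_MODIFICATIONS = {
    2: (132, 82, 64, 64),                  # the object has moved slightly
    3: (150, 88, 64, 64),
}

def region_geometry_for_item(image_item_id,
                             base=BASE_GEOMETRY,
                             modifications=GEOMETRY_MODIFICATIONS):
    # If no modification targets this image item, the shared geometry applies.
    return modifications.get(image_item_id, base)

# Image item 1 uses the base geometry; items 2 and 3 use their adapted rectangles.
assert region_geometry_for_item(1) == BASE_GEOMETRY
assert region_geometry_for_item(2) == (132, 82, 64, 64)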
Possibly, a region annotation may be associated with a group of items. This association means that the region annotation applies to each item comprised in the group.
Possibly, the reference between a region annotation item or a region group and an image item could be reversed. For example, the association between one or more region annotation items and an image item could use a reference of type 'rgan' from the image item to the one or more region annotation items.
Possibly, a region annotation may be described by a new type of box. The structure of these boxes may be similar to the structure described for geometry item properties, location item properties and/or shape item properties. These boxes may have an identifier associated with them. These boxes may be associated with items inside the 'iref' box or inside a new box. These boxes may be associated with properties inside the 'ipma' box or inside a new box.
Figure 15 is a schematic block diagram of a computing device 150 for implementation of one or more embodiments of the invention. The computing device 150 may be a device such as a microcomputer, a workstation or a light portable device. The computing device 150 comprises a communication bus connected to:
- a central processing unit 151, such as a microprocessor, denoted CPU;
- a random access memory 152, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method according to embodiments of the invention, the memory capacity thereof can be expanded by an optional RAM connected to an expansion port, for example;
- a read-only memory 153, denoted ROM, for storing computer programs for implementing embodiments of the invention;
- a network interface 154, typically connected to a communication network over which digital data to be processed are transmitted or received. The network interface 154 can be a single network interface, or composed of a set of different network interfaces (for instance wired and wireless interfaces, or different kinds of wired or wireless interfaces). Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 151;
- a user interface 155 that may be used for receiving inputs from a user or for displaying information to a user;
- a hard disk 156, denoted HD, which may be provided as a mass storage device;
- an I/O module 157 that may be used for receiving/sending data from/to external devices such as a video source or display.
The executable code may be stored either in read-only memory 153, on the hard disk 156 or on a removable digital medium such as, for example, a disk. According to a variant, the executable code of the programs can be received by means of a communication network, via the network interface 154, in order to be stored in one of the storage means of the computing device 150, such as the hard disk 156, before being executed.
The central processing unit 151 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 151 is capable of executing instructions from main RAM memory 152 relating to a software application after those instructions have been loaded from the program ROM 153 or the hard disk (HD) 156 for example. Such a software application, when executed by the CPU 151, causes the steps of the flowcharts of the invention to be performed.
Any step of the algorithms of the invention may be implemented in software by execution of a set of instructions or program by a programmable computing machine, such as a PC ("Personal Computer"), a DSP ("Digital Signal Processor") or a microcontroller; or else implemented in hardware by a machine or a dedicated component, such as an FPGA ("Field-Programmable Gate Array") or an ASIC ("Application-Specific Integrated Circuit").
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
Each of the embodiments of the invention described above can be implemented solely or as a combination of a plurality of the embodiments. Also, features from different embodiments can be combined where necessary or where the combination of elements or features from individual embodiments in a single embodiment is beneficial.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.
Claims (14)
CLAIMS
- 1. A method of encapsulating image items in a file, the method comprising, for an image item:
  - generating an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item;
  - generating a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure;
  - generating at least one annotation data structure associated with the item portion annotation data structure;
  - embedding the image item, the item portion annotation data structure, the geometry data structure and the at least one annotation data structure in the file.
- 2. The method of claim 1, wherein:
  - the item portion annotation data structure is an item property;
  - the item portion annotation data structure comprises the geometry data structure;
  - the item portion annotation data structure comprises reference information on the at least one annotation data structure generated as item properties.
- 3. The method of claim 1, wherein:
  - the item portion annotation data structure is an item property;
  - the item portion annotation data structure comprises the geometry data structure;
  - the item portion annotation data structure being associated with the image item by an association information in an association container, the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by another association information in the association container.
- 4. The method of claim 1, wherein:
  - the item portion annotation data structure is an item property;
  - the item portion annotation data structure comprises the geometry data structure;
  - the item portion annotation data structure being associated with the image item by an association information in an association container, the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by the association information in the association container.
- 5. The method of claim 1, wherein:
  - the item portion annotation data structure is an item property;
  - the item portion annotation data structure comprises the geometry data structure;
  - the item portion annotation data structure comprises the at least one annotation data structure.
- 6. The method of claim 1, wherein:
  - the item portion annotation data structure is an item;
  - the item portion annotation data structure comprises the geometry data structure;
  - the item portion annotation data structure is associated with the image item by a reference information;
  - the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container.
- 7. The method of claim 1, wherein:
  - the item portion annotation data structure is an item;
  - the item portion annotation data structure comprises the geometry data structure;
  - at least one item portion annotation data structure associated with the image item are grouped, the group is associated with the image item by a reference information;
  - the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container.
- 8. The method of claim 1, wherein:
  - the item portion annotation data structure is an item;
  - the item portion annotation data structure is associated with the image item by a reference information;
  - the item portion annotation data structure is associated with the at least one annotation data structure generated as item properties by an association information in an association container;
  - the item portion annotation data structure is associated with the geometry data structure generated as an item property by the association information in the association container.
- 9. A method of reading a file comprising image items, the method comprising, for an image item:
  - reading an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item;
  - reading a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure;
  - reading at least one annotation data structure associated with the item portion annotation data structure;
  - reading the associated image item.
- 10. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to any one of claims 1 to 9, when loaded into and executed by the programmable apparatus.
- 11. A computer-readable storage medium storing instructions of a computer program for implementing a method according to any one of claims 1 to 9.
- 12. A computer program which upon execution causes the method of any one of claims 1 to 9 to be performed.
- 13. A device for encapsulating image items in a file, the device comprising a processor configured for, for an image item:
  - generating an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item;
  - generating a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure;
  - generating at least one annotation data structure associated with the item portion annotation data structure;
  - embedding the image item, the item portion annotation data structure, the geometry data structure and the at least one annotation data structure in the file.
- 14. A device for reading a file comprising image items, the device comprising a processor configured for, for an image item:
  - reading an item portion annotation data structure related to a portion of the image item, the item portion annotation data structure being associated with the image item;
  - reading a geometry data structure describing the geometry of the portion of the image item associated with the item portion annotation data structure;
  - reading at least one annotation data structure associated with the item portion annotation data structure;
  - reading the associated image item.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2005063.9A GB2593893A (en) | 2020-04-06 | 2020-04-06 | Method and apparatus for encapsulating region related annotation in an image file |
GB2009669.9A GB2593945A (en) | 2020-04-06 | 2020-06-24 | Method and apparatus for encapsulating region related annotation in an image file |
GBGB2019967.5A GB202019967D0 (en) | 2020-04-06 | 2020-12-17 | Method and apparatus for encapsulating region related annotation in an image file |
PCT/EP2021/057355 WO2021204526A1 (en) | 2020-04-06 | 2021-03-23 | Method and apparatus for encapsulating region related annotation in an image file |
US17/917,135 US20230169107A1 (en) | 2020-04-06 | 2021-03-23 | Method and apparatus for encapsulating region related annotation in an image file |
JP2022542789A JP7450729B2 (en) | 2020-04-06 | 2021-03-23 | Method and apparatus for encapsulating region-related annotations in image files |
CN202180040599.XA CN115699780A (en) | 2020-04-06 | 2021-03-23 | Method and apparatus for encapsulating region-related annotations in an image file |
EP21715201.6A EP4133742A1 (en) | 2020-04-06 | 2021-03-23 | Method and apparatus for encapsulating region related annotation in an image file |
KR1020227037564A KR20220160083A (en) | 2020-04-06 | 2021-03-23 | Method and Apparatus for Encapsulating Region-Related Annotation in Image Files |
JP2024030633A JP2024063141A (en) | 2020-04-06 | 2024-02-29 | Method and device for storing information associated with region of image in image file |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2005063.9A GB2593893A (en) | 2020-04-06 | 2020-04-06 | Method and apparatus for encapsulating region related annotation in an image file |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202005063D0 GB202005063D0 (en) | 2020-05-20 |
GB2593893A true GB2593893A (en) | 2021-10-13 |
Family
ID=70768757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2005063.9A Pending GB2593893A (en) | 2020-04-06 | 2020-04-06 | Method and apparatus for encapsulating region related annotation in an image file |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2593893A (en) |
- 2020
- 2020-04-06 GB GB2005063.9A patent/GB2593893A/en active Pending
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
GB202005063D0 (en) | 2020-05-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230169107A1 (en) | Method and apparatus for encapsulating region related annotation in an image file | |
JP7039668B2 (en) | Image data encapsulation | |
KR102465188B1 (en) | Method and apparatus for encapsulating images in files | |
US11853351B2 (en) | Method and apparatus for encapsulating panorama images in a file | |
US10595062B2 (en) | Image data encapsulation | |
US20230300349A1 (en) | Method and apparatus for encapsulating images or sequences of images with proprietary information in a file | |
WO2021259611A1 (en) | Method and apparatus for encapsulating annotated region in isobmff tracks | |
GB2582025A (en) | Method and apparatus for encapsulating groups of images in a file | |
US20230353842A1 (en) | Method, device, and computer program for encapsulating region annotations in media tracks | |
GB2593893A (en) | Method and apparatus for encapsulating region related annotation in an image file | |
GB2575288A (en) | Method and apparatus for encapsulating images or sequences of images with proprietary information in a file | |
WO2023111099A1 (en) | Method and apparatus for encapsulating 3d region related annotation in a media file | |
US20240107129A1 (en) | Method and apparatus for encapsulating image data in a file for progressive rendering | |
GB2573096A (en) | Method and apparatus for encapsulating images with proprietary information in a file | |
WO2024004449A1 (en) | Information processing device, information processing method, and computer program | |
GB2582024A (en) | Method and apparatus for encapsulating groups of images in a file | |
WO2022129235A1 (en) | Method and apparatus for encapsulating image data in a file for progressive rendering | |
GB2602101A (en) | Method and apparatus for encapsulating image data in a file for progressive rendering |