CN100419748C - Supporting advanced coding formats in media files - Google Patents


Info

Publication number
CN100419748C
CN100419748C · CNB038152029A · CN03815202A
Authority
CN
China
Prior art keywords
sample
SEI
media data
Prior art date
Application number
CNB038152029A
Other languages
Chinese (zh)
Other versions
CN1666195A (en)
Inventor
M. Z. Visharam
A. Tabatabai
T. Walker
Original Assignee
Sony Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US37665102P priority Critical
Priority to US37665202P priority
Priority to US60/376,651 priority
Priority to US60/376,652 priority
Priority to US10/371,464 priority patent/US20030163477A1/en
Priority to US10/425,685 priority patent/US20040006575A1/en
Application filed by Sony Electronics Inc.
Publication of CN1666195A publication Critical patent/CN1666195A/en
Application granted granted Critical
Publication of CN100419748C publication Critical patent/CN100419748C/en


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/46 — Embedding additional information in the video signal during the compression process
    • H04N21/235 — Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/23424 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44016 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4621 — Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
    • H04N21/84 — Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8451 — Structuring of content, e.g. decomposing content into time segments, using Advanced Video Coding [AVC]
    • H04N21/85406 — Content authoring involving a specific file format, e.g. MP4 format

Abstract

One or more descriptions pertaining to multimedia data are identified (Fig. 1) and included in supplemental enhancement information (SEI) associated with the multimedia data (Fig. 24). Subsequently, the SEI containing the one or more descriptions is transmitted to a decoding system for optional use in decoding the multimedia data (Fig. 2).

Description

A Computing Method, and a Device and System Employing the Method

Related Applications

This application is related to, and claims the benefit of, U.S. Provisional Patent Applications Serial Nos. 60/376,651 and 60/376,652, both filed on April 29, 2002, which are incorporated herein by reference. This application is also related to U.S. Patent Application Serial No. 10/371,464, filed on February 21, 2003.

Technical field

In general, the present invention relates to the storage and retrieval of audio-visual content in multimedia file formats and, more specifically, to file formats compatible with the ISO media file format.

Copyright notice/permission

A portion of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data described below and in the accompanying drawings: Copyright 2001, Sony Electronics, Inc., All Rights Reserved.

Background

With the rapidly growing demand for networking, multimedia, databases, and other digital capacity, many multimedia coding and storage schemes have been developed. One well-known file format for coding and storing audio-visual data is the QuickTime file format developed by Apple Computer Inc. The QuickTime file format served as the starting point for the ISO (International Organization for Standardization) multimedia file format, ISO/IEC 14496-12, Information Technology — Coding of audio-visual objects — Part 12: ISO media file format (also called the ISO file format). The ISO media file format was in turn used as a template for two standard file formats: (1) the MPEG-4 file format developed by the Moving Picture Experts Group, known as MP4 (ISO/IEC 14496-14, Information Technology — Coding of audio-visual objects — Part 14: MP4 file format); and (2) the file format for JPEG 2000 (ISO/IEC 15444-1), developed by the Joint Photographic Experts Group (JPEG).

The ISO media file format is composed of object-oriented structures called boxes (also called atoms or objects). The two important top-level boxes contain the media data or the metadata. Most boxes form a hierarchy that provides metadata describing the layout, structure, and timing information of the physical media data. This collection of boxes is contained in a box called the movie box. The media data itself may be located in a media data box or externally. The hierarchy of metadata boxes providing information about a particular set of media data is called a track.

The primary metadata object is the movie object. The movie box contains track boxes, which describe temporally presented media data. The media data for a track can be of various types (e.g., video data, audio data, Binary Format for Scenes (BIFS), etc.). Each track is further divided into samples (also called access units or pictures). A sample represents a unit of media data at a particular point in time. Sample metadata is contained in a set of sample boxes. Each track box contains a sample table metadata box, which includes boxes providing the time of each sample, the size of each sample in bytes, and so on. A sample is the smallest data entity that can carry timing, location, and other metadata information. Samples may be grouped into chunks, which contain sets of consecutive samples. Chunks can be of different sizes and can contain samples of different sizes.
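The box layout described above — a 32-bit size and a four-character type, followed by the payload — can be illustrated with a minimal parser. The sketch below is a simplification for illustration only: it assumes the common 32-bit size form and ignores the 64-bit `largesize` and `uuid` cases that a complete ISO media file format reader must also handle.

```python
import struct

def parse_boxes(data, offset=0, end=None):
    """Return (box_type, payload_offset, payload_size) for each top-level box.

    Simplified sketch: assumes the 32-bit size form; real files may also use
    size == 1 (64-bit largesize) or size == 0 (box extends to end of file).
    """
    end = len(data) if end is None else end
    boxes = []
    while offset + 8 <= end:
        size, = struct.unpack(">I", data[offset:offset + 4])   # big-endian size
        box_type = data[offset + 4:offset + 8].decode("ascii")  # 4CC type
        boxes.append((box_type, offset + 8, size - 8))
        offset += size
    return boxes

# A toy file: an empty 'moov' box followed by a 4-byte 'mdat' box.
toy = struct.pack(">I4s", 8, b"moov") + struct.pack(">I4s", 12, b"mdat") + b"abcd"
print(parse_boxes(toy))  # → [('moov', 8, 0), ('mdat', 16, 4)]
```

The same loop can be applied recursively to the payload of a container box such as `moov` to walk down to the track and sample table boxes.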

Recently, the video group of MPEG and the Video Coding Experts Group (VCEG) of the International Telecommunication Union (ITU) began working together as the Joint Video Team (JVT) to develop a new video codec standard, called ITU Recommendation H.264 or MPEG-4 Part 10, "Advanced Video Codec (AVC)", or the JVT codec. These terms and their abbreviations — H.264, JVT, and AVC — are used interchangeably here.

The JVT codec design distinguishes two different conceptual layers: the Video Coding Layer (VCL) and the Network Abstraction Layer (NAL). The VCL contains the coding-related parts of the codec, such as motion compensation, transform coding of coefficients, and entropy coding. The output of the VCL is slices, each of which contains a series of macroblocks and associated header information. The NAL abstracts the VCL from the details of the transport layer used to carry the VCL data. It defines a generic, transport-independent representation of information above the slice level, and it defines the interface between the video codec itself and the outside world. Internally, the NAL uses NAL packets. A NAL packet contains a type field indicating the payload type, plus the payload bytes. The data within a single slice can be further divided into different data partitions.
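The NAL packet's type field lives in its first byte. As a concrete illustration, the sketch below splits that one-byte H.264 NAL unit header into its three fields (forbidden_zero_bit, nal_ref_idc, nal_unit_type); the type names listed are a small subset for illustration.

```python
NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI",
             7: "sequence parameter set", 8: "picture parameter set"}

def parse_nal_header(first_byte):
    """Split the one-byte H.264 NAL unit header into its fields."""
    forbidden_zero_bit = (first_byte >> 7) & 0x1
    nal_ref_idc = (first_byte >> 5) & 0x3   # 2 bits: importance as a reference
    nal_unit_type = first_byte & 0x1F       # 5 bits: payload type
    return forbidden_zero_bit, nal_ref_idc, nal_unit_type

f, ref, t = parse_nal_header(0x67)  # 0x67 is a typical SPS header byte
print(f, ref, t, NAL_TYPES.get(t))  # → 0 3 7 sequence parameter set
```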

In many existing video coding formats, the coded stream data includes various headers containing parameters that control the decoding process. For example, the MPEG-2 video standard includes sequence headers, as well as group of pictures (GOP) and picture headers preceding the corresponding video data. In JVT, the information required to decode VCL data is grouped into parameter sets. Each parameter set is given an identifier, which is subsequently used as a reference from a slice. Instead of being sent within the stream (in-band), parameter sets can be sent outside the stream (out-of-band).
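The reference-by-identifier scheme can be sketched as a small registry: parameter sets are stored once (possibly delivered out-of-band), and each slice carries only the identifier of the set it needs. The dictionary contents below are hypothetical, for illustration only.

```python
# Hypothetical picture parameter sets, keyed by their identifier.
picture_parameter_sets = {
    0: {"entropy_coding": "CAVLC", "slice_groups": 1},
    1: {"entropy_coding": "CABAC", "slice_groups": 1},
}

def params_for_slice(slice_header):
    """Resolve a slice's decoding parameters through its pic_parameter_set_id."""
    return picture_parameter_sets[slice_header["pic_parameter_set_id"]]

slice_header = {"first_mb_in_slice": 0, "pic_parameter_set_id": 1}
print(params_for_slice(slice_header)["entropy_coding"])  # → CABAC
```

Because the slice carries only a small identifier, the bulky parameter data need not be repeated with every slice — which is exactly why a file format benefits from a dedicated place to store the sets and link samples to them.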

Existing file formats do not provide a means of storing parameter sets associated with coded media data; nor do they provide a method for efficiently linking media data (i.e., samples or subsamples) with parameter sets, such that parameter sets can be retrieved and delivered efficiently.

In the ISO media file format, the smallest unit that can be accessed without parsing the media data is a sample, i.e., a whole picture in AVC. In many coding formats, a sample can be further divided into smaller units called subsamples (also called sample fragments or access unit fragments). In the case of AVC, a subsample corresponds to a slice. However, existing file formats do not support access to subdivisions of a sample. For systems that need to flexibly form packets from data stored in a file for streaming, the lack of subsample access hinders flexible packetization of JVT media data.
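Why subsample access matters for packetization can be shown with a toy packetizer. Given a sample's bytes and the sizes of its subsamples (e.g., AVC slices), it groups whole subsamples into packets of at most `mtu` bytes, never splitting a subsample. This is a simplified sketch assuming every subsample fits within the MTU; without subsample size metadata, the sizes passed to it could only be recovered by parsing the coded stream itself.

```python
def packetize(sample_bytes, subsample_sizes, mtu):
    """Greedily group subsamples into packets of at most `mtu` bytes,
    never splitting a subsample across packets (simplified sketch)."""
    packets, current, used, offset = [], [], 0, 0
    for size in subsample_sizes:
        if used + size > mtu and current:
            packets.append(b"".join(current))   # flush the full packet
            current, used = [], 0
        current.append(sample_bytes[offset:offset + size])
        used += size
        offset += size
    if current:
        packets.append(b"".join(current))
    return packets

sample = bytes(range(10))  # one sample split into slices of 4, 3, and 3 bytes
print([len(p) for p in packetize(sample, [4, 3, 3], mtu=7)])  # → [7, 3]
```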

Another limitation of existing storage formats arises when streaming media data while switching between stored streams of different bandwidths to cope with changing network conditions. In a typical streaming scenario, one of the key requirements is to scale the bit rate of the compressed data in response to changing network conditions. This is usually achieved by encoding multiple streams with different bandwidth and quality settings for representative network conditions and storing them in one or more files. The server can then switch among these pre-encoded streams in response to network conditions. In existing file formats, switching between streams is only feasible at samples that do not depend on previous samples for reconstruction. Such samples are called I-frames. No support is currently provided for switching between streams at samples that depend on previous samples for reconstruction (i.e., a P-frame, or a B-frame with multiple reference samples).
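The cost of restricting switching to I-frames can be made concrete with a tiny search over a hypothetical list of frame-type tags: a switch requested between I-frames must be deferred until the next one arrives.

```python
def next_i_frame_switch_point(frame_types, current_index):
    """Return the index of the next sample at which a stream switch is
    possible when only I-frames qualify; None if no I-frame remains.
    Sketch over a hypothetical list of frame-type tags."""
    for i in range(current_index, len(frame_types)):
        if frame_types[i] == "I":
            return i
    return None

# With an I-frame only every 12 samples, a switch requested at sample 3
# is deferred 9 samples — the latency that switching pictures avoid.
gop = (["I"] + ["P"] * 11) * 2
print(next_i_frame_switch_point(gop, 3))  # → 12
```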

The AVC standard provides a tool called switching pictures (called SI-pictures and SP-pictures) to enable efficient switching between streams, random access, error resilience, and other functionality. A switching picture is a special picture whose reconstructed value is identical to that of the picture it is intended to switch to. Switching pictures can use reference pictures different from those used to predict the picture they match, thereby providing more efficient coding than the use of I-frames. To use switching pictures stored in a file effectively, it is necessary to know which sets of pictures are equivalent, and also which pictures are used for prediction. Existing file formats do not provide this information, so it must be extracted by parsing the coded stream, which is inefficient and slow.
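The equivalence information a file format would need can be modeled as a table of switch sample sets: each set lists the samples, across streams, whose reconstructed values are identical, so any member may be decoded in place of another. The stream names and sample numbers below are hypothetical, for illustration only.

```python
# Hypothetical switch-sample-set table: each set holds (stream id, sample
# number) pairs whose reconstructed values are identical.
switch_sample_sets = [
    {("stream_96kbps", 40), ("stream_64kbps", 40), ("sp_96_to_64", 40)},
]

def find_switch_sample(sets, target_stream, sample_number):
    """Return samples equivalent to (target_stream, sample_number)
    that belong to other streams, i.e. candidate switch points."""
    for group in sets:
        if (target_stream, sample_number) in group:
            return [m for m in group if m[0] != target_stream]
    return []

print(sorted(find_switch_sample(switch_sample_sets, "stream_64kbps", 40)))
```

With such a table stored as metadata, the server can locate an equivalent SP-picture by lookup instead of parsing the coded stream.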

Therefore, enhanced storage methods are needed to support the new features provided by emerging video coding standards and to address the existing limitations of those storage methods.

Summary of the invention

One or more descriptions pertaining to multimedia data are identified and included in supplemental enhancement information (SEI) associated with the multimedia data. Subsequently, the SEI containing the descriptions is transmitted to a decoding system for optional use in decoding the multimedia data.

The present invention is intended to overcome the shortcomings of the prior art discussed above, so as to satisfy the rapidly growing demand for networking, multimedia, databases, and other digital capacity.

According to the present invention, a computing method is provided, comprising:

identifying one or more descriptions pertaining to multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure and the semantic relationships between the descriptors; and

including the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize them.

According to the present invention, a device is also provided, comprising:

a memory;

a media file creator that forms a first file containing multimedia data, the media file creator comprising instructions stored in the memory; and

a metadata file creator that identifies one or more descriptions pertaining to the multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure and the semantic relationships between the descriptors, and that includes the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize them, the metadata file creator comprising instructions stored in the memory.

According to the present invention, a device is also provided, comprising:

a processor;

means for configuring the processor to identify one or more descriptions pertaining to multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure and the semantic relationships between the descriptors; and

means for including the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize them.

According to the present invention, a system is also provided, comprising:

a memory; and

at least one processor coupled to the memory, the at least one processor executing a set of instructions that causes the at least one processor to identify one or more descriptions pertaining to multimedia data, the one or more descriptions describing the syntax and semantics of the multimedia data and comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure and the semantic relationships between the descriptors, and

to include the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize them.

Brief Description of the Drawings

In the accompanying drawings, the present invention is described by way of example rather than limitation; like reference numbers denote like elements throughout the figures, in which:

Fig. 1 is a block diagram of one embodiment of an encoding system;

Fig. 2 is a block diagram of one embodiment of a decoding system;

Fig. 3 is a block diagram of a computer environment suitable for practicing the invention;

Fig. 4 is a flow diagram of a method for storing subsample metadata in the encoding system;

Fig. 5 is a flow diagram of a method for utilizing subsample metadata in the decoding system;

Fig. 6 illustrates an extended MP4 media stream model with subsamples;

Figs. 7A-7K illustrate exemplary data structures for storing subsample metadata;

Fig. 8 is a flow diagram of a method for storing parameter set metadata in the encoding system;

Fig. 9 is a flow diagram of a method for utilizing parameter set metadata in the decoding system;

Figs. 10A-10E illustrate exemplary data structures for storing parameter set metadata;

Fig. 11 illustrates an exemplary enhanced group of pictures (GOP);

Fig. 12 is a flow diagram of a method for storing sequence metadata in the encoding system;

Fig. 13 is a flow diagram of a method for utilizing sequence metadata in the decoding system;

Figs. 14A-14E illustrate exemplary data structures for storing sequence metadata;

Figs. 15A and 15B illustrate the use of switch sample sets for bit stream switching;

Fig. 15C is a flow diagram of one embodiment of a method for determining a point at which to switch between two bit streams;

Fig. 16 is a flow diagram of a method for storing switch sample metadata in the encoding system;

Fig. 17 is a flow diagram of a method for utilizing switch sample metadata in the decoding system;

Fig. 18 illustrates an exemplary data structure for storing switch sample metadata;

Figs. 19A and 19B illustrate the use of switch sample sets to provide random access entry points into a bit stream;

Fig. 19C is a flow diagram of one embodiment of a method for determining a random access point for a sample;

Figs. 20A and 20B illustrate the use of switch sample sets to facilitate error recovery;

Fig. 20C is a flow diagram of one embodiment of a method for error recovery when sending a sample;

Figs. 21 and 22 illustrate the storage of parameter set metadata according to some embodiments of the present invention; and

Figs. 23-26 illustrate the storage of supplemental enhancement information (SEI) according to some embodiments of the present invention.

Detailed Description

In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings, in which like reference numbers denote like elements, and which show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, functional, and other changes may be made without departing from the scope of the invention. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.

Overview

Beginning with an overview of the operation of the invention, Fig. 1 illustrates one embodiment of an encoding system 100. The encoding system 100 includes a media encoder 104, a metadata generator 106, and a file creator 108. The media encoder 104 receives media data, which may include video data (e.g., video objects created from a natural-source video scene and other external video objects), audio data (e.g., audio objects created from a natural-source audio scene and other external audio objects), synthetic objects, or any combination of the above. The media encoder 104 may consist of a number of individual encoders, or may include sub-encoders, to process various media data. The media encoder 104 codes the media data and passes it to the metadata generator 106. The metadata generator 106 generates metadata that provides information about the media data according to a media file format. The media file format may be derived from the ISO media file format (or any of its derivatives, such as MPEG-4, JPEG 2000, etc.), QuickTime, or any other media file format, and also includes certain additional data structures. In one embodiment, an additional data structure is defined to store metadata relating to subsamples within the media data. In another embodiment, an additional data structure is defined to store metadata linking portions of the media data (e.g., samples or subsamples) to the corresponding parameter sets, which contain decoding information that would conventionally be stored in the media data. In yet another embodiment, an additional data structure is defined to store metadata relating to various groups of samples in the media data, created on the basis of inter-dependencies between samples. In still another embodiment, an additional data structure is defined to store metadata relating to switch sample sets associated with the media data. A switch sample set refers to a set of samples that have identical decoding values but may depend on different samples. In other embodiments, various combinations of the additional data structures are defined in the file format used. These additional data structures and their functionality are described in greater detail below.

The file creator 108 is responsible for storing the coded media data and the metadata. In one embodiment, the coded media data and the associated metadata (e.g., subsample metadata, parameter set metadata, sample group metadata, or switch sample metadata) are stored in the same file. The structure of this file is defined by the media file format.

In another embodiment, all or some types of the metadata are stored separately from the media data. For example, parameter set metadata may be stored separately from the media data. In particular, the file creator 108 may include: a media data file creator 114, which forms a file with the coded media data; a metadata file creator 112, which forms a file with the metadata; and a synchronizer 116, which synchronizes the media data with the corresponding metadata. The separate storage of metadata and its synchronization with the media data are described in more detail below.

In one embodiment, the metadata file creator 112 is responsible for storing supplemental enhancement information (SEI) messages relating to the media data as metadata separate from the media data. SEI messages represent optional data for the decoding of the media data. A decoder does not have to use the SEI data; its absence does not prevent the decoding operation. In one embodiment, SEI messages are used to contain descriptions of the media data. The descriptions are defined according to the MPEG-7 standard and consist of descriptors and description schemes. A descriptor represents a feature of the audio-visual content and defines the syntax and semantics of each feature representation. Examples of descriptors include color descriptors, texture descriptors, motion descriptors, etc. A description scheme (DS) specifies the structure and semantics of the relationships between its components, which may be descriptors or other description schemes. The use of descriptions improves searching and browsing of the decoded media data. Because of the optional nature of SEI messages, including descriptions in them cannot negatively affect the decoding operation, since a decoder need not use the SEI messages unless it has the capability and a specific configuration that permits such use. The storage of SEI messages as metadata is discussed in greater detail below.
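The serialization of a single SEI message body can be sketched as follows. In H.264, the payload type and payload size are each coded as a run of 0xFF bytes plus a final byte less than 255; payload type 5 (user_data_unregistered, which begins with a 16-byte UUID) is one place where application data such as a description could travel. The UUID and description below are placeholders for illustration, and a full writer must also add the NAL header (type 6), RBSP trailing bits, and emulation-prevention bytes.

```python
def encode_sei_payload(payload_type, payload):
    """Serialize one SEI message body: ff-coded type, ff-coded size, payload.
    Sketch only; omits NAL header, trailing bits, and emulation prevention."""
    def ff_coded(value):
        # Runs of 0xFF followed by the remainder, per the H.264 coding rule.
        return b"\xff" * (value // 255) + bytes([value % 255])
    return ff_coded(payload_type) + ff_coded(len(payload)) + payload

uuid = bytes(16)  # placeholder UUID for illustration
description = b"<Mpeg7><Description/></Mpeg7>"  # hypothetical MPEG-7 fragment
msg = encode_sei_payload(5, uuid + description)
print(msg[0], msg[1])  # first byte is the payload type, second is the size
```

Because a decoder may simply skip SEI payload types it does not understand, such a message can be added without affecting conformant decoding — the property the embodiment above relies on.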

The files created by the file creator 108 can be made available on a channel 110 for storage or transmission.

Fig. 2 illustrates one embodiment of a decoding system 200. The decoding system 200 includes a metadata extractor 204, a media data stream processor 206, a media decoder 210, a compositor 212, and a renderer 214. The decoding system 200 may reside on a client device and be used for local playback. Alternatively, the decoding system 200 may be used for streaming data, and have a server portion and a client portion communicating with each other over a network (e.g., the Internet) 208. The server portion may include the metadata extractor 204 and the media data stream processor 206. The client portion may include the media decoder 210, the compositor 212, and the renderer 214.

The metadata extractor 204 is responsible for extracting metadata from a file stored in a database 216 or received over a network (e.g., from the encoding system 100). The file may or may not include the media data associated with the metadata being extracted. The metadata extracted from the file includes one or more of the additional data structures described above.

The extracted metadata is passed to the media data stream processor 206, which also receives the associated coded media data. The media data stream processor 206 uses the metadata to form a media data stream to be sent to the media decoder 210. In one embodiment, the media data stream processor 206 uses metadata about subsamples to locate subsamples within the media data (e.g., for packetization). In another embodiment, the media data stream processor 206 uses metadata about parameter sets to link portions of the media data to their corresponding parameter sets. In yet another embodiment, the media data stream processor 206 uses metadata defining various groups of samples in the media data to access the samples in a specific group (e.g., to achieve scalability by dropping a group containing samples on which no other samples depend, thereby reducing the transmitted bit rate in response to transmission conditions). In still another embodiment, the media data stream processor 206 uses metadata defining switch sample sets to locate a switch sample that has the same decoding value as the sample it is intended to replace but does not depend on the samples that sample depends on (e.g., to allow switching to a stream with a different bit rate at a P-frame or B-frame).
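The scalability case — dropping a group of samples on which nothing else depends — can be sketched directly. The group labels below are hypothetical, standing in for sample group metadata stored in the file.

```python
def thin_stream(samples, droppable_groups):
    """Drop every sample belonging to a group on which no other sample
    depends (e.g. non-reference frames), reducing the bit rate while
    keeping the remaining stream decodable. Hypothetical group labels."""
    return [s for s in samples if s["group"] not in droppable_groups]

samples = [
    {"id": 0, "group": "ref"}, {"id": 1, "group": "disposable"},
    {"id": 2, "group": "ref"}, {"id": 3, "group": "disposable"},
]
print([s["id"] for s in thin_stream(samples, {"disposable"})])  # → [0, 2]
```

Because the grouping is read from metadata, the server can thin the stream without inspecting the coded media data itself.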

Once the media data stream is formed, it is sent to the media decoder 210 either directly (e.g., for local playback) or over the network 208 (e.g., for streaming) to be decoded. The compositor 212 receives the output of the media decoder 210 and composes a scene, which is then rendered on the user's display device by the renderer 214.

The following description of Fig. 3 is intended to provide an overview of computer hardware and other operating components suitable for implementing the invention, but is not intended to limit the applicable environments. Fig. 3 illustrates one embodiment of a computer system suitable for use as the metadata generator 106 and/or the file creator 108 of Fig. 1, or as the metadata extractor 204 and/or the media data stream processor 206 of Fig. 2.

The computer system 340 includes a processor 350, memory 355, and input/output (I/O) capability 360 coupled to a system bus 365. The memory 355 is configured to store instructions which, when executed by the processor 350, perform the methods described herein. The I/O 360 also encompasses various types of computer-readable media, including any type of storage device that is accessible by the processor 350. One of skill in the art will immediately recognize that the term "computer-readable media" further encompasses a carrier wave that encodes a data signal. It will also be appreciated that the system 340 is controlled by operating system software executing in memory 355. The I/O and related media 360 store the computer-executable instructions for the operating system and the methods of the present invention. Each of the metadata generator 106, the file creator 108, the metadata extractor 204, and the media data stream processor 206 shown in Figs. 1 and 2 may be a separate component coupled to the processor 350, or may be embodied in computer-executable instructions executed by the processor 350. In one embodiment, the computer system 340 may be part of, or coupled through the I/O 360 to, an Internet Service Provider (ISP) in order to transmit or receive media data over the Internet. It is readily apparent that the present invention is not limited to Internet access and Internet-based web sites; directly coupled and private networks are also contemplated.

It will be appreciated that the computer system 340 is one example of many possible computer systems that have different architectures. A typical computer system usually includes at least a processor, memory, and a bus coupling the memory to the processor. One of skill in the art will immediately appreciate that the invention can be practiced with other computer system configurations, including multiprocessor systems, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

Subsample accessibility

Figs. 4 and 5 illustrate processes of storing and retrieving subsample metadata, performed by the encoding system 100 and the decoding system 200, respectively. The processes may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software (such as runs on a general-purpose computer system or a dedicated machine), or a combination of both. For software-implemented processes, the description of a flow diagram enables one skilled in the art to develop such programs, including instructions to carry out the processes on suitably configured computers (the processor of the computer executing the instructions from computer-readable media, including memory). The computer-executable instructions may be written in a computer programming language or may be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments of the present invention are not described with reference to any particular programming language; it will be appreciated that a variety of programming languages may be used to implement the teachings described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, etc.), as taking an action or causing a result; such expressions are merely a shorthand way of saying that execution of the software by a computer causes the processor of the computer to perform an action or produce a result. It will be appreciated that more or fewer operations may be incorporated into the processes illustrated in Figs. 4 and 5 without departing from the scope of the invention, and that no particular order is implied by the arrangement of the blocks shown and described herein.

Fig. 4 is a flow diagram of one embodiment of a method 400 for creating subsample metadata at the encoding system 100. Initially, method 400 begins with processing logic receiving a file with encoded media data (processing block 402). Next, processing logic extracts information identifying the boundaries of subsamples within the media data (processing block 404). Depending on the file format in use, the smallest unit of the data stream to which a time attribute can be attached is referred to as a sample (as defined by the ISO media file format or QuickTime), an access unit (as defined by MPEG-4), a picture (as defined by JVT), etc. A subsample represents a contiguous portion of the data stream below the sample level. The definition of a subsample depends on the coding format, but in general a subsample is a meaningful sub-unit of a sample that may either be decoded as a single entity or be combined with other sub-units to obtain a partial reconstruction of the sample. A subsample may also be called an access unit fragment. Typically, subsamples represent a partitioning of the data stream of a sample such that each subsample has little or no dependency on the other subsamples within the same sample. For example, in JVT a subsample is a NAL packet; likewise, for MPEG-4 video, a subsample is a video packet.

In one embodiment, the encoding system 100 operates at the network abstraction layer defined by JVT, as described above. A JVT media data stream consists of a series of NAL packets, where each NAL packet (also referred to as a NAL unit) contains a header part and a payload part. One type of NAL packet is used to carry the coded VCL data for a slice, or a single data partition of a slice. In addition, a NAL packet may be an information packet containing SEI messages. In JVT, a subsample may be a complete NAL packet with both header and payload.

At processing block 406, processing logic creates subsample metadata that defines the subsamples in the media data. In one embodiment, the subsample metadata is organized into a set of predefined data structures (e.g., a set of boxes). The set of predefined data structures may include a data structure containing information about the size of each subsample, a data structure containing information about the number of subsamples in each sample, a data structure containing information describing each subsample (e.g., what it is defined as), a data structure containing information about the number of subsamples in each chunk, a data structure containing information about the priority of each subsample, and/or any other data structures containing data pertaining to the subsamples.

Next, in one embodiment, processing logic determines whether any of the data structures contain repeated sequences of data (decision box 408). If this determination is positive, processing logic converts each repeated data sequence into a reference to the occurrence of the sequence and the number of times the repeated sequence occurs (processing block 410).
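As an illustration of processing block 410, the following Python sketch compresses repeated data sequences into the first occurrence plus a reference carrying a repeat count. The in-memory tuple representation is purely hypothetical; it stands in for the on-disk encodings described later in connection with Figs. 7I-7K.

```python
def compress(ids):
    """Greedy sketch: find the longest block starting at i that repeats
    immediately afterwards, keep its first occurrence, and replace the
    repeats with a ("repeat", first_index, block_len, repeat_count) entry."""
    out, i, n = [], 0, len(ids)
    while i < n:
        best = None  # (block_len, repeat_count) maximizing elements saved
        for blk in range(1, (n - i) // 2 + 1):
            reps = 0
            while ids[i + (reps + 1) * blk: i + (reps + 2) * blk] == ids[i: i + blk]:
                reps += 1
            if reps and (best is None or blk * reps > best[0] * best[1]):
                best = (blk, reps)
        if best:
            blk, reps = best
            out.extend(ids[i: i + blk])            # keep the first occurrence
            out.append(("repeat", i, blk, reps))   # reference + repeat count
            i += blk * (reps + 1)
        else:
            out.append(ids[i])
            i += 1
    return out

def decompress(entries):
    """Inverse of compress(): expand each reference from the prefix
    reconstructed so far (indices refer to the original sequence)."""
    ids = []
    for e in entries:
        if isinstance(e, tuple):
            _, first, blk, reps = e
            ids.extend(ids[first: first + blk] * reps)
        else:
            ids.append(e)
    return ids
```

For instance, the sequence [1, 2, 3, 1, 2, 3, 1, 2, 3, 4] compresses to [1, 2, 3, ("repeat", 0, 3, 2), 4].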

Then, at processing block 412, processing logic includes the subsample metadata in the file associated with the media data, using a particular media file format (e.g., the JVT file format). Depending on the media file format, the subsample metadata may be stored together with the sample metadata (e.g., the subsample data structures may be included in the sample table box that contains the sample data structures) or separately from the sample metadata.

Fig. 5 is a flow diagram of one embodiment of a method 500 for using subsample metadata at the decoding system 200. Initially, method 500 begins with processing logic receiving a file associated with encoded media data (processing block 502). The file may be received from a database (local or external), from the encoding system 100, or from any other device over a network. The file contains subsample metadata defining the subsamples in the media data.

Next, processing logic extracts the subsample metadata from the file (processing block 504). As discussed above, the subsample metadata may be stored in a set of data structures (e.g., a set of boxes).

Further, at processing block 506, processing logic uses the extracted metadata to identify subsamples in the encoded media data (which may be stored in the same file or in a different file) and to combine various subsamples into packets to be sent to the media decoder, thereby enabling flexible packetization of the media data for streaming (e.g., to support error resilience, scalability, etc.).
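The packetization step in processing block 506 can be sketched as follows. This is not the claimed implementation; the Packet record and the MTU-based grouping policy are illustrative assumptions, and the per-sample subsample counts and sizes are taken to have been recovered from the subsample metadata boxes.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    sample_index: int
    offset: int   # byte offset of the subsample within its sample's data
    size: int

def packetize(subsample_counts, subsample_sizes, mtu=1500):
    """Walk the per-sample subsample counts and the flat list of subsample
    sizes, and emit groups of whole subsamples that fit within `mtu` bytes.
    Subsamples are never split, so a loss affects at most whole subsamples."""
    packets, k = [], 0            # k indexes the flat subsample size list
    for s, count in enumerate(subsample_counts):
        offset, cur, cur_bytes = 0, [], 0
        for _ in range(count):
            size = subsample_sizes[k]; k += 1
            if cur and cur_bytes + size > mtu:
                packets.append(cur)
                cur, cur_bytes = [], 0
            cur.append(Packet(s, offset, size))
            cur_bytes += size
            offset += size
        if cur:
            packets.append(cur)   # do not mix subsamples of different samples
    return packets
```

With two subsamples of 800 and 900 bytes in the first sample and one 100-byte subsample in the second, this yields three packets, since the first two subsamples together exceed the MTU.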

Exemplary subsample metadata structures are described below with reference to the extended ISO media file format (referred to as extended MP4). It will be apparent to one skilled in the art that other media file formats could easily be extended to incorporate similar data structures for storing subsample metadata.

Fig. 6 illustrates the extended MP4 media stream model with subsamples. Presentation data (e.g., a presentation containing synchronized audio and video) is represented by a movie 602. The movie 602 includes a set of tracks 604. Each track 604 represents a media data stream. Each media data stream is divided into samples 606. Each sample 606 represents a unit of media data at a particular point in time. Samples 606 are further divided into subsamples 608. In the JVT standard, a subsample 608 may represent a NAL packet or unit, such as a single slice of a picture, one data partition of a slice with multiple data partitions, an in-band parameter set, or an SEI information packet. Alternatively, a subsample 608 may represent any other structural element of a sample, such as the coded data for a spatial or temporal region in the media. In one embodiment, any partition of the encoded media data according to structural or semantic criteria can be treated as a subsample.
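The movie/track/sample/subsample containment hierarchy of Fig. 6 can be sketched as a set of plain records. The class and field names below are illustrative, not part of the file format; they only mirror the structural relationships described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubSample:               # e.g. one NAL packet in JVT
    data: bytes
    description_id: int = 0

@dataclass
class Sample:                  # media data for one point in time
    time: float
    subsamples: List[SubSample] = field(default_factory=list)

@dataclass
class Track:                   # one media data stream
    samples: List[Sample] = field(default_factory=list)

@dataclass
class Movie:                   # the presentation as a whole
    tracks: List[Track] = field(default_factory=list)

# A one-track movie whose single sample is split into two slice subsamples.
demo = Movie(tracks=[Track(samples=[
    Sample(0.0, [SubSample(b"slice-0"), SubSample(b"slice-1")]),
])])
```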

Movie fragments provide information such as the duration and size of each sample, the degradation priority assigned to each sample, and other sample information; the track extends box identifies default values for the samples in track fragments. The degradation priority defines the importance of a sample, i.e., how its absence (e.g., its loss during transmission) would affect the video quality. In one embodiment, the track extends box is extended to include default information about the subsamples in a track fragment box. This information may include, for example, a default subsample size and a default subsample description reference.

A track can be divided into a plurality of fragments. Each fragment can contain zero or more contiguous runs of samples. The track fragment run box identifies the samples in a track fragment, provides information about the duration and size of each sample in the fragment, and stores other information related to the samples in the fragment. The track fragment header box identifies default data values used by the track fragment run boxes. In one embodiment, the track fragment run box and the track fragment header box are extended to include information about the subsamples in the track fragment. The extended information in the track fragment run box may include, for example, the number of subsamples in each sample stored in the fragment, the size of each subsample, subsample description references, and a set of flags. The flags indicate whether the media data of the track fragment is grouped into chunks of samples or of subsamples, whether subsample data is present in the track fragment run box, and whether each subsample has size data and/or description reference data present in the track fragment run box. The extended information in the track fragment header box may include, for example, default values for the flags indicating whether each subsample has size data and/or description reference data present.

Figs. 7A-7L illustrate exemplary data structures for storing subsample metadata.

Referring to Fig. 7A, the sample table box 700 containing the sample metadata boxes defined by the ISO media file format is extended to include subsample access boxes, such as a subsample size box 702, a subsample description association box 704, a subsample-to-sample box 706, and a subsample description box 708. In one embodiment, the subsample access boxes also include a subsample-to-chunk box and a priority box. In one embodiment, the use of the subsample access boxes is optional.

Referring to Fig. 7B, a sample 710 can be divided into, for example, slices such as slice 712, data partitions such as data partition 714, or regions of interest (ROI) such as ROI 716. Each of these examples represents a different kind of partitioning of a sample into subsamples. The subsamples within a single sample may have different sizes.

The subsample size box 718 contains a version field specifying the version of the box 718, a subsample size field specifying a default subsample size, a subsample count field providing the number of subsamples in the track, and entry size fields specifying the size of each subsample. If the subsample size field is set to 0, the subsamples have different sizes, which are stored in the subsample size table 720. If the subsample size field is not set to 0, it specifies a constant subsample size, and the subsample size table 720 is empty. The table 720 may use a fixed 32-bit field to represent the subsample sizes, or a variable-length field; in the variable-length case, the box includes a field giving the length of the subsample size field in bytes.
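The default-size-or-table semantics of the subsample size box can be sketched as below. This is a minimal model of the rule just described (constant size when the default field is non-zero, otherwise a per-subsample table), with hypothetical parameter names.

```python
def subsample_sizes(default_size, entry_sizes, subsample_count):
    """Resolve per-subsample sizes following the subsample size box rule:
    a non-zero default size applies to every subsample and the table must
    be empty; a zero default means sizes come from the table."""
    if default_size != 0:
        assert not entry_sizes, "table must be empty when a constant size is set"
        return [default_size] * subsample_count
    assert len(entry_sizes) == subsample_count
    return list(entry_sizes)
```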

Referring to Fig. 7C, the subsample-to-sample box 722 contains a version field specifying the version of the box 722 and an entry count field providing the number of entries in table 723. Each entry in the table contains a first-sample field providing the index of the first sample in a run of samples that share the same number of subsamples per sample, and a subsamples-per-sample field providing the number of subsamples in each sample of that run.

Table 723 can be used to obtain the total number of subsamples in the track, by computing the number of samples in each run, multiplying it by the corresponding number of subsamples per sample, and adding the results for all runs.
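The calculation described for table 723 can be sketched as follows; each entry is assumed to be a (first_sample, subsamples_per_sample) pair with 1-based sample indices, which is an illustrative in-memory form rather than the on-disk layout.

```python
def total_subsamples(entries, total_samples):
    """Sum, over the runs encoded in the subsample-to-sample table, the
    run length times the per-sample subsample count. The length of each
    run is the gap to the next entry's first sample (or to the end)."""
    total = 0
    for i, (first, per_sample) in enumerate(entries):
        next_first = entries[i + 1][0] if i + 1 < len(entries) else total_samples + 1
        run_length = next_first - first
        total += run_length * per_sample
    return total
```

For example, with 5 samples where samples 1-3 carry 2 subsamples each and samples 4-5 carry 5 each, the total is 3*2 + 2*5 = 16.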

In other embodiments, subsamples may be grouped within chunks rather than samples. A subsample-to-chunk box is then used to identify the subsamples within a chunk. The subsample-to-chunk box stores information about the index of the first chunk in a run of chunks sharing the same number of subsamples, the number of subsamples in each of those chunks, and the index of the subsample description. The subsample-to-chunk box can be used to find the chunk that contains a specific subsample, the position of the subsample within the chunk, and the description of that subsample. In one embodiment, when subsamples are grouped within chunks, the subsample-to-sample box 722 is not present; likewise, when subsamples are grouped within samples, the subsample-to-chunk box is not present.

As mentioned above, the subsample access boxes may include a priority box specifying the degradation priority of each subsample. The degradation priority defines the importance of a subsample, i.e., how its absence (e.g., its loss during transmission) would affect the quality of the decoded media data. The size of the priority box is determined by the number of subsamples in the track, as derived from the subsample-to-sample box 722 or the subsample-to-chunk box.

Referring to Fig. 7D, the subsample description association box 724 contains a version field specifying the version of the box 724, a description type identifier indicating the type of the subsamples being described (e.g., NAL packets, regions of interest, etc.), and an entry count field providing the number of entries in table 726. Each entry in table 726 contains a subsample description ID field indicating the subsample description, and a first-subsample field providing the index of the first subsample in a run of subsamples that share the same description ID.

The description type identifier controls the use of the subsample description ID field. That is, depending on the type specified in the description type identifier, the subsample description ID field may itself directly encode the description of the subsample, or it may be an index into a separate table (the subsample description table discussed below). For example, if the description type identifier indicates a JVT description, the subsample description ID field may contain a code specifying JVT subsample characteristics. In that case, the subsample description ID may be a 32-bit field in which the 8 least significant bits are used as a bitmask indicating which data partitions are present in the subsample, and the 24 most significant bits represent the NAL packet type or are reserved for later extensions.
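The 32-bit layout just described (low 8 bits as a data-partition bitmask, upper 24 bits as the NAL packet type or extensions) can be sketched with a pack/unpack pair. The function names are illustrative.

```python
def pack_description_id(nal_type, partition_mask):
    """Pack a JVT subsample description ID: partition bitmask in the 8
    least significant bits, NAL packet type in the upper 24 bits."""
    assert 0 <= partition_mask < 1 << 8 and 0 <= nal_type < 1 << 24
    return (nal_type << 8) | partition_mask

def unpack_description_id(desc_id):
    """Return (nal_type, partition_mask) from a packed description ID."""
    return desc_id >> 8, desc_id & 0xFF
```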

Referring to Fig. 7E, the subsample description box 728 contains a version field specifying the version of the box 728, a description type identifier field indicating the description type, a subsample description field providing information about the subsample characteristics, an entry count field providing the number of entries in table 730, and a table containing one or more subsample description entries 730. The description type identifier identifies the type of the descriptor and corresponds to the same field in the subsample description association box 724. Each entry in table 730 is a subsample description entry containing information about the characteristics of the subsamples associated with that entry. The format and content of a description entry depend on the description type field; for example, when the description type is parameter set, each description entry contains the value of a parameter set.

The descriptor may carry parameter set information, information related to ROIs, or any other information needed to characterize the subsamples. For parameter sets, the subsample description association table 724 indicates the parameter set associated with each subsample; in this case, the subsample description ID corresponds to the parameter set identifier. Similarly, subsamples may represent different regions of interest as follows. The subsamples are defined as one or more coded macroblocks, and the subsample description association table is then used to indicate the partitioning of the coded macroblocks of a video frame or image into different regions. For example, the coded macroblocks in a frame can be divided into foreground and background macroblocks with two subsample description IDs (i.e., description IDs 1 and 2), indicating the assignment to the foreground and background regions, respectively.

Fig. 7F illustrates different types of subsamples. A subsample may represent, for example, a slice without data partitions 732, a slice with multiple data partitions 734, a slice header 736, a data partition 738, the last data partition of a slice 740, an SEI information packet 742, etc. Each of these subsample types can be associated with a particular value of the 8-bit bitmask 744 shown in Fig. 7G. The 8-bit bitmask may form the 8 least significant bits of the 32-bit subsample description ID field, as discussed above. Fig. 7H illustrates a subsample description association box 724 with a description type identifier equal to "jvtd"; table 726 contains 32-bit subsample description ID fields storing values such as those shown in Fig. 7G.

Figs. 7I-7K illustrate the compression of the data in the subsample description association table.

Referring to Fig. 7I, the uncompressed table 726 contains a sequence 750 of subsample description IDs that includes a repeated sequence 748. In the compressed table 746, the repetitions have been replaced by a reference to the sequence 748 and the number of times the sequence occurs.

In one embodiment, shown in Fig. 7J, a sequence occurrence can be encoded in the subsample description ID field by using its most significant bit as a run-of-sequence flag 754, the next 23 bits as an occurrence index 756, and its low-order bits as an occurrence length 758. If the flag 754 is set to 1, it indicates that the entry is an occurrence of a repeated sequence; otherwise, the entry is a subsample description ID. The occurrence index 756 is the index into the subsample description association box 724 of the first occurrence of the sequence, and the length 758 indicates the length of the repeated sequence occurrence.
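Assuming the low-order portion is the remaining 8 bits of the 32-bit field, the Fig. 7J entry layout can be sketched as a pack/decode pair; the function names and the tagged-tuple return values are illustrative.

```python
RUN_FLAG = 1 << 31  # most significant bit: run-of-sequence flag

def encode_occurrence(index, length):
    """Pack a repeated-sequence occurrence: MSB flag set, 23-bit
    occurrence index, 8-bit occurrence length."""
    assert 0 <= index < 1 << 23 and 0 <= length < 1 << 8
    return RUN_FLAG | (index << 8) | length

def decode_entry(entry):
    """Interpret a 32-bit table entry as either a sequence occurrence
    or an ordinary subsample description ID, per the MSB flag."""
    if entry & RUN_FLAG:
        return ("occurrence", (entry >> 8) & ((1 << 23) - 1), entry & 0xFF)
    return ("description_id", entry)
```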

In another embodiment, shown in Fig. 7K, a repeated sequence occurrence table 760 is used to represent occurrences of repeated sequences. The most significant bit of the subsample description ID field is used as a run-of-sequence flag 762 indicating whether the entry is a subsample description ID or a sequence index 764 into the repeated sequence occurrence table 760, which is part of the subsample description association box 724. The repeated sequence occurrence table 760 contains an occurrence index field specifying the index in the association box 724 of the first subsample of the repeated sequence, and a length field specifying the length of the repeated sequence.

Parameter set

In some media formats, such as JVT, "header" information containing critical control values required for correctly decoding the media data is separated from the rest of the coded data and stored in parameter sets. Rather than interleaving these control values with the coded data in the stream, the coded data can refer to the necessary parameter set using a mechanism such as a unique identifier. This approach decouples the transmission of the higher-level coding parameters from the coded data. At the same time, it also reduces redundancy by sharing common sets of control values as parameter sets.

To support efficient transmission of media streams that use parameter sets, the sender or player must be able to quickly link the coded data to its corresponding parameter set, in order to know when the parameter set must be transmitted or accessed. One embodiment of the present invention provides this capability by storing, in the media file format, data specifying the associations between parameter sets and the corresponding portions of the media data, as parameter set metadata.

Figs. 8 and 9 illustrate processes of storing and retrieving parameter set metadata, performed by the encoding system 100 and the decoding system 200, respectively. The processes may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software (such as runs on a general-purpose computer system or a dedicated machine), or a combination of both.

Fig. 8 is a flow diagram of one embodiment of a method 800 for creating parameter set metadata at the encoding system 100. Initially, method 800 begins with processing logic receiving a file with encoded media data (processing block 802). The file contains coding parameter sets that specify how the portions of the media data are to be decoded. Next, processing logic examines the relationships between the coding parameter sets, referred to as parameter sets, and the corresponding portions of the media data (processing block 804), and creates parameter set metadata defining the parameter sets and their associations with the media data portions (processing block 806). A media data portion may be represented by a sample or a subsample.

In one embodiment, the parameter set metadata is organized into a set of predefined data structures (e.g., a set of boxes). The set of predefined data structures may include a data structure containing descriptive information about the parameter sets and a data structure containing information defining the associations between samples and their corresponding parameter sets. In one embodiment, the set of predefined data structures also includes a data structure containing information defining the associations between subsamples and their corresponding parameter sets. When present, the data structure containing subsample-to-parameter-set association information may or may not override the data structure containing sample-to-parameter-set association information.

Next, in one embodiment, processing logic determines whether any of the parameter set data structures contain repeated sequences of data (decision box 808). If this determination is positive, processing logic converts each repeated data sequence into a reference to the occurrence of the sequence and the number of times the sequence occurs (processing block 810).

Afterwards, at processing block 812, processing logic includes the parameter set metadata in the file associated with the media data, using a particular media file format (e.g., the JVT file format). Depending on the media file format, the parameter set metadata may be stored together with the track metadata and/or the sample metadata (e.g., the data structure containing descriptive information about the parameter sets may be included in the track box, and the data structures containing association information may be included in the sample table box), or separately from the track metadata and sample metadata.

Fig. 9 is a flow diagram of one embodiment of a method 900 for using parameter set metadata at the decoding system 200. Initially, method 900 begins with processing logic receiving a file associated with encoded media data (processing block 902). The file may be received from a database (local or external), from the encoding system 100, or from any other device over a network. The file contains parameter set metadata defining the parameter sets of the media data and the associations between the parameter sets and corresponding portions (e.g., corresponding samples or subsamples) of the media data.

Next, processing logic extracts the parameter set metadata from the file (processing block 904). As discussed above, the parameter set metadata may be stored in a set of data structures (e.g., a set of boxes).

Further, at processing block 906, processing logic uses the extracted metadata to determine which parameter set is associated with a specific media data portion (e.g., a sample or a subsample). This information can then be used to control the delivery times of the media data portions and the corresponding parameter sets. That is, the parameter set required to decode a specific sample or subsample is sent before, or together with, the packet containing that sample or subsample.
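The delivery rule in processing block 906 (send the required parameter set no later than the media packet that depends on it) can be sketched as a simple scheduler. The tagged-tuple output and the "send each parameter set once" policy are illustrative assumptions, not the claimed transport design.

```python
def schedule(packets, ps_for_packet):
    """Emit a delivery order in which, before any media packet, the
    parameter set it needs has already been emitted. `ps_for_packet`
    maps each packet to its parameter set ID (from the metadata)."""
    sent, out = set(), []
    for pkt in packets:
        ps = ps_for_packet[pkt]
        if ps not in sent:
            out.append(("param_set", ps))
            sent.add(ps)
        out.append(("media", pkt))
    return out
```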

Thus, the use of parameter set metadata allows parameter sets to be transmitted independently over a more reliable channel, reducing the possibility that an error or data loss causes part of the media stream to be lost.

Exemplary parameter set metadata structures are described below with reference to the extended ISO media file format (referred to as extended ISO). It should be noted, however, that other media file formats could be extended to incorporate various data structures for storing parameter set metadata.

Figures 10A-10E illustrate exemplary data structures for storing parameter set metadata.

Referring to Figure 10A, the track box 1002 containing the track metadata boxes defined by the ISO file format is extended to include a parameter set description box 1004. In addition, the sample table box 1006 containing the sample metadata boxes defined by the ISO file format is extended to include a sample-to-parameter-set box 1008. In one embodiment, the sample table box 1006 includes a subsample-to-parameter-set box that may override the sample-to-parameter-set box 1008, as discussed in more detail below.

In one embodiment, the parameter set metadata boxes 1004 and 1008 are mandatory. In another embodiment, only the parameter set description box 1004 is mandatory. In yet another embodiment, all parameter set metadata boxes are optional.

Referring to Figure 10B, the parameter set description box 1010 contains a version field specifying the version of the box 1010, a parameter set description count field providing the number of entries in table 1012, and parameter set entry fields containing the entries for the parameter sets themselves.

Parameter sets can be referenced from either the sample level or the subsample level. Referring to Figure 10C, the sample-to-parameter-set box 1014 provides references to parameter sets from the sample level. The sample-to-parameter-set box 1014 contains a version field specifying the version of the box 1014, a default parameter set ID field specifying a default parameter set ID, and an entry count field providing the number of entries in table 1016. Each entry in table 1016 contains a first-sample field providing the index of the first sample in a run of samples sharing the same parameter set, and a parameter set index specifying the index into the parameter set description box 1010. If the default parameter set ID is equal to 0, the samples have different parameter sets, which are stored in table 1016. Otherwise, a constant parameter set is used and no table follows.
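The sample-to-parameter-set lookup just described can be sketched as follows, assuming 1-based sample indices and entries sorted by first sample, with an illustrative (first_sample, ps_index) tuple form.

```python
def parameter_set_for_sample(sample_index, default_ps_id, entries):
    """Resolve the parameter set for a sample: a non-zero default
    parameter set ID applies to every sample (no table); otherwise each
    (first_sample, ps_index) entry starts a run of samples that share
    the parameter set at ps_index in the description box."""
    if default_ps_id != 0:
        return default_ps_id
    ps = None
    for first, ps_index in entries:   # entries sorted by first_sample
        if first <= sample_index:
            ps = ps_index
        else:
            break
    return ps
```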

In one embodiment, the data in table 1016 is compressed by converting each repeated sequence into a reference to the original sequence and the number of times the sequence occurs, as discussed in more detail above in connection with the subsample description association table.

Parameter sets can be referenced from the subsample level by defining associations between parameter sets and subsamples. In one embodiment, the subsample description association box described above is used to define the associations between parameter sets and subsamples. Figure 10D illustrates a subsample description association box 1018 with a description type identifier referring to parameter sets (e.g., a description type identifier equal to "pars"). With this description type identifier, the subsample description IDs in table 1020 indicate indexes into the parameter set description box 1010.

In one embodiment, when a sub-sample description association box 1018 with a description type identifier referring to parameter sets is present, the sample-to-parameter-set box 1014 is ignored.

A parameter set may change between the time it is created and the time the corresponding portion of the media data is decoded. If such a change occurs, the decoding system 200 receives a parameter update packet indicating the change to the parameter set. The parameter set metadata includes data identifying the state of the parameter set both before and after the update.

With reference to Figure 10E, the parameter set description box 1010 contains an entry for the initial parameter set 1022 created at time t0 and an entry for the updated parameter set 1024 created in response to a parameter update packet 1026 received at time t1. The sub-sample description association box 1018 associates the two parameter sets with the corresponding sub-samples.

Sample groups

The samples within a track can have various logical groupings (partitions) into sequences (possibly non-contiguous) that represent higher-level structures in the media data, but existing file formats provide no convenient mechanism for representing and storing such groupings. For example, higher-level coding formats such as JVT organize the samples within a single track into groups based on their inter-dependencies. These groups, referred to herein as sequences or sample groups, can be used to identify chains of disposable samples when network conditions require, thereby supporting temporal scalability. Storing metadata that defines sample groups in the file format enables the sender of the media to implement such features easily and efficiently.

One example of a sample group is a set of samples whose inter-frame dependencies allow them to be decoded independently of the other samples. In JVT, such a sample group is referred to as an enhanced group of pictures (enhanced GOP). In an enhanced GOP, the samples can be divided into sub-sequences. Each sub-sequence comprises a set of samples that depend on one another and can be disposed of as a unit. In addition, the samples of an enhanced GOP can be hierarchically structured into layers such that samples in a higher layer are predicted only from samples in lower layers, allowing the samples of the highest layer to be disposed of without affecting the ability to decode the other samples. The lowest layer, containing samples that do not depend on samples in any other layer, is referred to as the base layer. Any layer other than the base layer is referred to as an enhancement layer.

Figure 11 illustrates an exemplary enhanced GOP in which the samples are divided into two layers, a base layer 1102 and an enhancement layer 1104, and two sub-sequences 1106 and 1108. Either of the two sub-sequences 1106 and 1108 can be dropped independently of the other.
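Because a sample in layer k is predicted only from layers at or below k, thinning a stream to a layer budget is a simple filter. A minimal sketch, with a hypothetical sample-to-layer assignment in the spirit of Figure 11:

```python
# Map each sample number to its layer (hypothetical assignment:
# layer 0 is the base layer, layer 1 an enhancement layer).
samples = {1: 0, 2: 1, 3: 1, 4: 0, 5: 1, 6: 0}

def thin_to_layer(samples, max_layer):
    """Keep only samples whose layer does not exceed max_layer; safe
    because higher layers are never used as references by lower ones."""
    return sorted(s for s, layer in samples.items() if layer <= max_layer)

assert thin_to_layer(samples, 0) == [1, 4, 6]        # base layer only
assert thin_to_layer(samples, 1) == [1, 2, 3, 4, 5, 6]
```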

Figures 12 and 13 illustrate processes for storing and retrieving sample group metadata, performed by the encoding system 100 and the decoding system 200, respectively. The processes may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., running on a general-purpose computer system or a dedicated machine), or a combination of both.

Figure 12 is a flow diagram of one embodiment of a method 1200 for creating sample group metadata in the encoding system 100. Initially, method 1200 begins with the processing logic receiving a file with encoded media data (processing block 1202). The samples within a track of the media data have certain inter-dependencies. For example, the track may contain I-frames that do not depend on any other sample, P-frames that depend on a single prior sample, and B-frames that depend on two prior samples, in any combination of I-frames, P-frames, and B-frames. Based on these dependencies, the samples in the track can be logically combined into sample groups (e.g., enhanced GOPs, layers, sub-sequences, etc.).

Next, the processing logic examines the media data to identify the sample groups in each track (processing block 1204) and creates sample group metadata that describes the sample groups and defines which samples are contained in each sample group (processing block 1206). In one embodiment, the sample group metadata is organized into a set of predefined data structures (e.g., a set of boxes). The set of predefined data structures may include a data structure containing a descriptor for each sample group, a data structure containing information identifying the samples contained in each sample group, a data structure containing information about sequences of descriptors, and a data structure containing information describing layers. Next, in one embodiment, the processing logic determines whether any sample group data structure contains repeated sequences of data (decision box 1208). If so, the processing logic converts each repeated sequence of data into a reference to an occurrence of the sequence together with the number of times the sequence occurs (processing block 1210).
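The core of processing block 1206, turning a per-sample group assignment into the run-length form the boxes store, can be sketched as follows (illustrative names; not the normative box syntax):

```python
def runs_from_groups(group_of):
    """Convert a per-sample group assignment (one entry per sample,
    samples numbered from 1) into (first_sample, group_description_index)
    runs, the run-length form used by the sample group metadata."""
    runs = []
    for sample, group in enumerate(group_of, start=1):
        if not runs or runs[-1][1] != group:
            runs.append((sample, group))   # a new run starts here
    return runs

# Samples 1-3 in group 1, samples 4-5 in group 2, sample 6 back in group 1.
assert runs_from_groups([1, 1, 1, 2, 2, 1]) == [(1, 1), (4, 2), (6, 1)]
```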

Afterwards, at processing block 1212, the processing logic includes the sample group metadata in the file associated with the media data using a specific media file format (e.g., the JVT file format). Depending on the media file format, the sample group metadata may be stored with the sample metadata (e.g., the sample group data structures may be included in the sample table box) or separately from the sample metadata.

Figure 13 is a flow diagram of one embodiment of a method 1300 for utilizing sample group metadata in the decoding system 200. Initially, method 1300 begins with the processing logic receiving a file associated with encoded media data (processing block 1302). The file may be received from a database (local or external), from the encoding system 100, or from any other device over a network. The file contains sample group metadata defining the sample groups in the media data.

Next, the processing logic extracts the sample group metadata from the file (processing block 1304). As discussed above, the sample group metadata may be stored in a set of data structures (e.g., a set of boxes).

Further, at processing block 1306, the processing logic uses the extracted sample group metadata to identify chains of samples that can be disposed of without affecting the ability to decode the other samples. In one embodiment, this information may be used to access the samples in a particular sample group and to determine which samples can be dropped in response to a change in network capacity. In other embodiments, the sample group metadata is used to filter samples so that only certain samples within a track are processed or presented.

Thus, the sample group metadata facilitates selective access to samples and scalability.

Exemplary sample group metadata structures will now be described with reference to an extended ISO media file format (referred to as extended MP4). It should be noted, however, that other media file formats could be extended to incorporate various data structures for storing sample group metadata.

Figures 14A-14E illustrate exemplary data structures for storing sample group metadata.

With reference to Figure 14A, the sample table box 1400 containing the sample metadata boxes defined by MP4 is extended to include a sample group box 1402 and a sample group description box 1404. In one embodiment, the sample group metadata boxes 1402 and 1404 are optional. In one embodiment (not shown), the sample table box 1400 includes additional optional sample group metadata boxes, such as a sub-sequence description entry box and a layer description entry box.

With reference to Figure 14B, the sample group box 1406 is used to find the set of samples contained in a particular sample group. Multiple instances of the sample group box 1406 are permitted, corresponding to the different types of sample groups (e.g., enhanced GOPs, sub-sequences, layers, parameter sets, etc.). The sample group box 1406 includes a version field specifying the version of the sample group box 1406, an entry count field providing the number of entries in table 1408, a sample group identifier field identifying the type of the sample group, first-sample fields providing the index of the first sample in a run of samples contained in the same sample group, and sample group description indices specifying indices into the sample group description box.
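Decoding this run-length table back into the members of one group can be sketched as follows; a minimal illustration, assuming each run extends to the next run's first sample (field names are not the box's literal syntax):

```python
def samples_in_group(runs, group_index, total_samples):
    """List the samples carrying a given group description index.

    runs: (first_sample, group_description_index) pairs as stored in
    the sample group box, sorted by first_sample."""
    out = []
    for i, (first, desc) in enumerate(runs):
        # A run ends where the next run begins, or at the last sample.
        end = runs[i + 1][0] if i + 1 < len(runs) else total_samples + 1
        if desc == group_index:
            out.extend(range(first, end))
    return out

runs = [(1, 1), (4, 2), (6, 1)]
assert samples_in_group(runs, 1, 7) == [1, 2, 3, 6, 7]
assert samples_in_group(runs, 2, 7) == [4, 5]
```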

With reference to Figure 14C, the sample group description box 1410 provides information about the characteristics of the sample groups. The sample group description box 1410 includes a version field specifying the version of the sample group description box 1410, an entry count field providing the number of entries in table 1412, a sample group identifier field identifying the type of the sample group, and sample group description fields providing the sample group descriptors.

With reference to Figure 14D, the use of the sample group box 1416 for the layer ("layr") sample group type is illustrated. Samples 1 through 11 are divided into three layers based on their inter-dependencies. In layer 0 (the base layer), the samples (samples 1, 6, and 11) depend only on one another, and not on samples in any other layer. In layer 1, the samples (samples 2, 5, 7, and 10) depend on samples in the lower layer (i.e., layer 0) and on samples within layer 1. In layer 2, the samples (samples 3, 4, 8, and 9) depend on samples in the lower layers (i.e., layers 0 and 1) and on samples within layer 2. Accordingly, the samples of layer 2 can be disposed of without affecting the ability to decode the samples of the lower layers 0 and 1.

The data in the sample group box 1416 describes this association between the samples and the layers. As shown, the data includes a repeating layer pattern 1414, which can be compressed by converting each repeated layer pattern into a reference to the original pattern together with the number of times the pattern occurs, as discussed above in more detail.

With reference to Figure 14E, the use of the sample group box 1418 for the sub-sequence ("sseq") sample group type is illustrated. Samples 1 through 11 are divided into four sub-sequences based on their inter-dependencies. Each sub-sequence, except sub-sequence 0 in layer 0, contains samples on which no other sub-sequence depends. Hence, the samples of such a sub-sequence can be disposed of as a unit when needed.

The data in the sample group box 1418 describes the association between the samples and the sub-sequences. This data enables random access to the samples at the beginning of the corresponding sub-sequence.

In one embodiment, a sub-sequence description entry box is used to describe each sub-sequence of the samples within a GOP. The sub-sequence description entry box provides dependency information together with sub-sequence identifier data, average bit rate data, average frame rate data, reference count data, and an array containing information about the referenced sub-sequences.

The dependency information identifies the sub-sequences that serve as references for the sub-sequence described in the entry. The sub-sequence identifier data provides the identifier of the sub-sequence described in the entry. The average bit rate data contains the average bit rate of the sub-sequence (e.g., in bits per second). In one embodiment, the calculation of the average bit rate takes the payload and the payload headers into account. In one embodiment, the average bit rate is equal to zero if it is undefined.

The average frame rate data contains the average frame rate of the frames of the sub-sequence of the entry. In one embodiment, the average frame rate is equal to zero if it is undefined.

The reference count data provides the number of sub-sequences directly referenced by the sub-sequence of the entry. The array of reference data provides identification information for the referenced sub-sequences.

In one embodiment, an additional layer description entry box is used to provide layer information. The layer description entry box provides the number of the layer together with the layer's average bit rate and average frame rate. The layer number may be equal to zero for the base layer and one or higher for each enhancement layer. The average bit rate may be equal to zero when it is undefined, and the average frame rate may be equal to zero when it is undefined.

Stream switching

In a typical streaming scenario, one of the key requirements is to scale down the bit rate of the compressed data in response to changing network conditions. The simplest way to achieve this is to encode multiple streams with different bit rates and quality settings for representative network conditions. The server can then switch among these pre-encoded streams in response to network conditions.

The JVT standard provides a new type of picture, called a switch picture, that allows one picture to be reconstructed identically to another without requiring the two pictures to use the same frames for prediction. Specifically, JVT provides two types of switch pictures: SI-pictures, which, like I-frames, are coded independently of any other pictures; and SP-pictures, which are coded with reference to other pictures. Switch pictures can be used to switch among streams with different bit rates and quality settings in response to changing delivery conditions, to provide error resilience, and to implement trick modes such as fast forward and rewind.

However, in order to use JVT switch pictures effectively for stream switching, error resilience, trick modes, and other features, the sender of the media must know which samples in the media data have alternate representations and what their dependencies are. Existing file formats do not provide this capability.

One embodiment of the present invention addresses this limitation by defining switch sample sets. A switch sample set represents a group of samples whose decoded values are identical but which may use different reference samples. A reference sample is a sample used to predict the value of another sample. Each member of a switch sample set is referred to as a switch sample. Figure 15A illustrates the use of a switch sample set for bit stream switching.

With reference to Figure 15A, stream 1 and stream 2 are two encodings of the same content with different quality and bit rate parameters. Sample S12 is an SP-picture that does not appear in either stream; it is used to implement switching from stream 1 to stream 2 (the switch is directional). Samples S12 and S2 are contained in a switch sample set. Samples S1 and S12 are both predicted from sample P12 in track 1, while S2 is predicted from sample P22 in track 2. Although samples S12 and S2 use different reference samples, their decoded values are identical. Accordingly, switching from stream 1 (sample S1 in stream 1) to stream 2 (sample S2 in stream 2) can be achieved via switch sample S12.

Figures 16 and 17 illustrate processes for storing and retrieving switch sample metadata, performed by the encoding system 100 and the decoding system 200, respectively. The processes may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., running on a general-purpose computer system or a dedicated machine), or a combination of both.

Figure 16 is a flow diagram of one embodiment of a method 1600 for creating switch sample metadata in the encoding system 100. Initially, method 1600 begins with the processing logic receiving a file with encoded media data (processing block 1602). The file contains one or more alternate encodings of the media data (e.g., for different bandwidths and quality settings corresponding to representative network conditions). The alternate encodings include one or more switch pictures. These pictures may be included inside the alternate media data streams or as separate entities implementing specific features such as error resilience or trick modes. The method used to create the tracks and switch pictures is not specified by the present invention, and various possibilities will be apparent to those skilled in the art. For example, switch samples may be placed at regular intervals (e.g., every 1 second) between each pair of tracks containing alternate encodings.

Next, the processing logic examines the file to create switch sample sets that contain samples with identical decoded values but different reference samples (processing block 1604), and creates switch sample metadata that defines the switch sample sets of the media data and describes the samples within each switch sample set (processing block 1606). In one embodiment, the switch sample metadata is organized into a predefined data structure, such as a table box containing a set of nested tables.

Next, in one embodiment, the processing logic determines whether the switch sample metadata structure contains repeated sequences of data (decision box 1608). If so, the processing logic converts each repeated sequence of data into a reference to an occurrence of the sequence together with the number of times the sequence occurs (processing block 1610).

Afterwards, at processing block 1612, the processing logic includes the switch sample metadata in the file associated with the media data using a specific media file format (e.g., the JVT file format). In one embodiment, the switch sample metadata may be stored in a separate track designated for stream switching. In another embodiment, the switch sample metadata is stored with the sample metadata (e.g., the sequence data structures may be included in the sample table box).

Figure 17 is a flow diagram of one embodiment of a method 1700 for utilizing switch sample metadata in the decoding system 200. Initially, method 1700 begins with the processing logic receiving a file associated with encoded media data (processing block 1702). The file may be received from a database (local or external), from the encoding system 100, or from any other device over a network. The file contains switch sample metadata defining the switch sample sets associated with the media data.

Next, the processing logic extracts the switch sample metadata from the file (processing block 1704). As discussed above, the switch sample metadata may be stored in a data structure such as a table box containing a set of nested tables.

Further, at processing block 1706, the processing logic uses the extracted metadata to find a switch sample set containing a particular sample and to select an alternate sample from the switch sample set. The alternate sample, which has a decoded value identical to that of the original sample, can then be used to switch between two differently encoded bit streams in response to changing network conditions, to provide random access entry points into a bit stream, to facilitate error resilience, and so on.

Exemplary switch sample metadata structures will now be described with reference to an extended ISO media file format (referred to as extended MP4). It should be noted, however, that other media file formats could be extended to incorporate various data structures for storing switch sample metadata.

Figure 18 illustrates an exemplary data structure for storing switch sample metadata. The exemplary data structure has the form of a switch sample table box containing a set of nested tables. Each entry in table 1802 identifies one switch sample set. Each switch sample set contains a group of switch samples whose reconstructions are objectively identical (or perceptually identical), but which may be predicted from different reference samples that may or may not be in the same track (stream) as the switch sample. Each entry in table 1802 is linked to a corresponding table 1804. Table 1804 identifies each switch sample contained in the switch sample set. Each entry in table 1804 is in turn linked to a corresponding table 1806, which defines the location of the switch sample (i.e., its track and sample number), the track containing the reference samples used by the switch sample, the total number of reference samples used by the switch sample, and each reference sample used by the switch sample.
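The nested tables of Figure 18 can be modeled as a small object hierarchy; a sketch with illustrative names and hypothetical sample numbers, not the normative box syntax:

```python
from dataclasses import dataclass, field

@dataclass
class SwitchSample:
    track: int             # track and sample number locate the switch sample
    sample_number: int
    reference_track: int   # track holding its reference samples
    references: list       # (track, sample_number) of each reference sample

@dataclass
class SwitchSampleSet:
    members: list = field(default_factory=list)   # SwitchSample entries

# The set {S2, S12} of Figure 15A: identical decoded values, different
# reference samples (sample numbers here are hypothetical).
s2 = SwitchSample(track=2, sample_number=2, reference_track=2,
                  references=[(2, 1)])            # predicted from P22
s12 = SwitchSample(track=3, sample_number=1, reference_track=1,
                   references=[(1, 2)])           # predicted from P12
switch_set = SwitchSampleSet(members=[s2, s12])
assert len(switch_set.members) == 2
```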

As illustrated in Figure 15A, in one embodiment, switch sample metadata is used to switch between differently encoded versions of the same content. In MP4, each alternate encoding is stored as a separate MP4 track, and the "alternate group" in the track header indicates that it is an alternate encoding of specific content.

Figure 15B illustrates a table containing metadata that defines the switch sample set 1502 consisting of samples S2 and S12 of Figure 15A.

Figure 15C is a flow diagram of one embodiment of a method 1510 for determining a point at which a switch between two bit streams is to be performed. Assuming a switch from stream 1 to stream 2 is to be performed, method 1510 begins with searching the switch sample metadata to find all switch sample sets that contain a switch sample with a reference track of stream 1 and a switch sample with a switch track of stream 2 (processing block 1512). Next, the resulting switch sample sets are evaluated to select those in which all reference samples of the switch sample with the reference track of stream 1 are available (processing block 1514). For example, if the switch sample with the reference track of stream 1 is a P-frame, the sample immediately preceding the switch is required to be available. Further, the samples in the selected switch sample set are used to determine the switch point (processing block 1516). That is, switching is performed immediately after the latest reference sample of the switch sample with the reference track of stream 1, via that switch sample, arriving at the sample that immediately follows the switch sample with the switch track of stream 2.
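The selection step of method 1510 can be sketched as follows; a minimal illustration in which each switch sample is a dict with a destination track and its (track, sample) references, all names being assumptions rather than the patent's field layout:

```python
def find_switch_sample(switch_sets, from_track, to_track, received):
    """Sketch of method 1510: locate a switch sample that switches into
    `to_track` and whose reference samples all lie in `from_track` and
    have already been received."""
    for sset in switch_sets:
        for s in sset:
            if (s['dest'] == to_track
                    and all(t == from_track for t, _ in s['refs'])
                    and all(ref in received for ref in s['refs'])):
                return s
    return None

# Figure 15A: S2 is the in-stream sample of stream 2, S12 the SP switch
# sample predicted from stream 1 (sample numbers hypothetical).
s2 = {'name': 'S2', 'dest': 2, 'refs': [(2, 2)]}
s12 = {'name': 'S12', 'dest': 2, 'refs': [(1, 2)]}
found = find_switch_sample([[s2, s12]], from_track=1, to_track=2,
                           received={(1, 1), (1, 2)})
assert found['name'] == 'S12'
```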

In another embodiment, switch sample metadata can be used to provide random access entry points into a bit stream, as illustrated in Figures 19A-19C.

With reference to Figures 19A and 19B, switch sample set 1902 consists of samples S2 and S12. S2 is a P-frame predicted from P22 and used during normal stream playback. S12 serves as a random access point (e.g., for splicing). Once S12 is decoded, stream playback proceeds with the decoding of P24, just as if P24 had been decoded after S2.

Figure 19C is a flow diagram of one embodiment of a method 1910 for determining a random access point for a sample (e.g., sample S on track T). Method 1910 begins with searching the switch sample metadata to find all switch sample sets that contain a switch sample with switch track T (processing block 1912). Next, the resulting switch sample sets are evaluated to select the one whose switch sample with switch track T is the closest sample preceding sample S in decoding order (processing block 1914). Further, the switch sample with switch track T (sample SS) is selected from the chosen switch sample set as the random access point for sample S (processing block 1916). During stream playback, sample SS is decoded instead of sample S (after the decoding of any reference samples specified in the entry for sample SS).
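The selection in method 1910 can be sketched as follows, with switch samples represented as dicts holding a destination track, a position in decoding order, and a reference list (all illustrative assumptions):

```python
def random_access_sample(switch_sets, track, target_sample):
    """Sketch of method 1910: among switch samples switching into
    `track`, pick the one at or closest before `target_sample` in
    decoding order, preferring an independently decodable (SI-like,
    reference-free) member at that position."""
    candidates = [s for sset in switch_sets for s in sset
                  if s['dest'] == track and s['pos'] <= target_sample]
    if not candidates:
        return None
    best_pos = max(s['pos'] for s in candidates)
    at_best = [s for s in candidates if s['pos'] == best_pos]
    return next((s for s in at_best if not s['refs']), at_best[0])

# Figure 19A: S2 is the in-stream P-frame, S12 the reference-free entry
# point at the same position (positions hypothetical).
s2 = {'name': 'S2', 'dest': 1, 'pos': 4, 'refs': [(1, 3)]}
s12 = {'name': 'S12', 'dest': 1, 'pos': 4, 'refs': []}
assert random_access_sample([[s2, s12]], track=1, target_sample=6)['name'] == 'S12'
```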

In yet another embodiment, switch sample metadata can be used to facilitate error resilience, as illustrated in Figures 20A-20C.

With reference to Figures 20A and 20B, switch sample set 2002 consists of samples S2, S12, and S22. Sample S2 is predicted from sample P4. Sample S12 is predicted from sample S1. If an error occurs between samples P2 and P4, switch sample S12 can be decoded instead of S2, and streaming then continues as usual from sample P6. If the error also affects sample S1, switch sample S22 can be decoded instead of sample S2, and streaming again continues as usual from sample P6.

Figure 20C is a flow diagram of one embodiment of a method 2010 for facilitating error recovery when sending a sample (e.g., sample S). Method 2010 begins with searching the switch sample metadata to find all switch sample sets that contain a switch sample equal to sample S or following sample S in decoding order (processing block 2012). Next, the resulting switch sample sets are evaluated to select a switch sample SS that is closest to sample S and whose reference samples are known (via feedback or some other source of information) to be correct (processing block 2014). Further, switch sample SS is sent instead of sample S (processing block 2016).
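Method 2010's selection step can be sketched as follows; reference samples are plain labels here and the feedback channel is modeled as a set of known-intact samples (all assumptions for illustration):

```python
def recovery_sample(switch_sets, lost_sample, intact):
    """Sketch of method 2010: after an error, pick the closest switch
    sample at or after position `lost_sample` in decoding order whose
    reference samples are all known to be intact."""
    best = None
    for sset in switch_sets:
        for s in sset:
            if s['pos'] >= lost_sample and all(r in intact for r in s['refs']):
                if best is None or s['pos'] < best['pos']:
                    best = s
    return best

# Figure 20A: S2 depends on P4 (hit by the error), S12 depends on S1.
s2 = {'name': 'S2', 'pos': 5, 'refs': ['P4']}
s12 = {'name': 'S12', 'pos': 5, 'refs': ['S1']}
best = recovery_sample([[s2, s12]], lost_sample=5, intact={'S1', 'P1', 'P2'})
assert best['name'] == 'S12'
```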

Storage of parameter sets and supplemental enhancement information

As discussed above, certain metadata, such as parameter set metadata, may be stored separately from the associated media data. Figure 21 illustrates the separate storage of parameter set metadata according to one embodiment of the invention. With reference to Figure 21, the media data is stored in a video track 2102, and the parameter set metadata is stored in a separate parameter track 2104, which may be marked as "inert" to indicate that it contains no media data. Timing information 2106 provides synchronization between the video track 2102 and the parameter track 2104. In one embodiment, the timing information is stored in the respective sample table boxes of the video track 2102 and the parameter set track 2104. In one embodiment, each parameter set is represented by a parameter set sample, and synchronization is achieved when the timing information of a media sample equals the timing information of a parameter set sample.
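The timestamp-based synchronization between the two tracks amounts to a lookup of the parameter-set sample in effect at a media sample's decode time. A sketch, under the added assumption that a parameter set stays in effect until replaced (the embodiment above matches on equal timestamps):

```python
import bisect

def parameter_set_at(param_times, param_sets, media_time):
    """Return the parameter set whose sample timestamp is the latest
    at or before media_time (illustrative; times and ids hypothetical)."""
    i = bisect.bisect_right(param_times, media_time) - 1
    return param_sets[i] if i >= 0 else None

param_times = [0, 1000, 5000]          # decode times of parameter samples
param_sets = ['ps0', 'ps1', 'ps2']     # hypothetical parameter set ids
assert parameter_set_at(param_times, param_sets, 1000) == 'ps1'
assert parameter_set_at(param_times, param_sets, 4999) == 'ps1'
assert parameter_set_at(param_times, param_sets, 6000) == 'ps2'
```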

In another embodiment, object descriptor (OD) messages are used to contain the parameter set metadata. According to the MPEG-4 standard, an object descriptor represents one or more elementary stream descriptors that provide configuration and other information for the streams relating to a single object (a media object or a scene description). Object descriptor messages are sent in an object descriptor stream. As shown in Figure 22, the parameter sets are included in the object descriptor stream 2202 as object descriptor messages 2204. The object descriptor stream 2202 is synchronized with the video elementary stream carrying the media data.

The storage of SEI is discussed in more detail below.

In one embodiment, the SEI data is stored with the media data in the elementary stream. Figure 23 illustrates SEI messages 2304 embedded directly with the media data in the elementary stream data 2303.

In another embodiment, the SEI messages are stored as samples in a separate SEI track. Figures 24 and 25 illustrate the storage of SEI messages in a separate track according to some embodiments of the present invention.

With reference to Figure 24, the media data is stored in a video track 2402, and the SEI messages are stored as samples in a separate SEI track 2404. Timing information 2406 provides synchronization between the video track 2402 and the SEI track 2404.

With reference to Figure 25, the media data is stored in a video track 2502, and the SEI messages are stored in an object content information (OCI) track 2504. Timing information 2506 provides synchronization between the video track 2502 and the OCI track 2504. According to the MPEG-4 standard, the OCI track 2504 is designated to contain OCI data, which is typically used to provide textual descriptive information about scene events. Each SEI message is stored in the OCI track 2504 as an object descriptor. In one embodiment, an OCI descriptor field that typically specifies the type of the data stored in the OCI track is used to carry the SEI messages.

In yet another embodiment, the SEI data is stored as metadata, separately from the media data. Figure 26 illustrates the storage of SEI data as metadata according to one embodiment of the invention.

With reference to Figure 26, a user data box 2602 as defined by the ISO media file format is used to store SEI messages. In particular, each SEI message is stored in an SEI user data box 2604 within a user data box 2602 contained in a track or movie box.

In one embodiment, the metadata contained in the SEI messages includes descriptions of the media data. These descriptions may represent descriptors and description schemes defined by the MPEG-7 standard. In one embodiment, the SEI messages support the inclusion of XML-based data, such as XML-based descriptions. In addition, the SEI messages support the registration of different types of enhancement information. For example, the SEI messages may support unregistered data that does not require the registration of a new type. Such data can be designed for a specific application or organization. In one embodiment, in a bit stream environment, the presence of SEI is indicated by a designated start code.

In one embodiment, a decoder signals by external means (e.g., Recommendation H.245 or SDP) whether it provides any or all of the enhancement features described in the SEI messages. A decoder that does not provide the enhancement features may simply discard the SEI messages.

In one embodiment, a specific field in the payload header of an SEI message provides synchronization between the media data (e.g., the video coding layer data) and the SEI messages containing descriptions of the media data, as will be discussed in more detail below.

In one embodiment, the network adaptation layer supports methods for transporting SEI messages in the underlying transport system. The network adaptation may allow in-band (in the same transport stream as the video coding layer) or out-of-band methods for sending SEI messages.

In one embodiment, MPEG-7 metadata is included in SEI messages by using SEI as a transport layer for the MPEG-7 metadata. Specifically, an SEI message encapsulates one or more MPEG-7 system access units (fragments) representing fragments of a description. Synchronization between the MPEG-7 access units and the media data can be provided by a specific field in the payload header of the SEI message.

In another embodiment, MPEG-7 metadata is included in SEI messages by allowing description units to be sent in SEI messages in textual or binary coded form. A description unit can be a single MPEG-7 descriptor or description scheme and can be used to represent partial information from a complete description. For example, the XML syntax of a scalable color descriptor is given below:

<Mpeg7>
  <DescriptionUnit xsi:type="ScalableColorType" numOfCoeff="16"
      numOfBitplanesDiscarded="0">
    <Coeff>1234567890123456</Coeff>
  </DescriptionUnit>
</Mpeg7>

Instances of descriptors or description schemes can be associated with the corresponding portions of the media data (e.g., sub-samples, samples, fragments, etc.) through the SEI message header, as will be discussed in more detail below. This embodiment allows, for example, a binary or textually coded color descriptor of a single frame to be sent as an SEI message. Using SEI messages, an implicit description of the video coding stream can be provided. An implicit description is a complete description of the video coding stream in which the description units are implied. An implicit description may have the following form:

<Mpeg7>
  <Description xsi:type="ContentEntityType">
    <MultimediaContent xsi:type="VideoType">
      <Video>
        <CreationInformation>
          <Creation>
            <Title>Worldcup Soccer</Title>
          </Creation>
        </CreationInformation>
        <MediaTime>
          <MediaTimePoint>T00:00:00</MediaTimePoint>
          <MediaDuration>PT1M30S</MediaDuration>
        </MediaTime>
        <VisualDescriptor xsi:type="GoFGoPColorType" aggregation="Average">
          <ScalableColor numOfCoeff="16" numOfBitplanesDiscarded="0">
            <Coeff>1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6</Coeff>
          </ScalableColor>
        </VisualDescriptor>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
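For illustration, a decoder-side consumer of such an implicit description might extract the title and media duration as follows; the ISO 8601 duration handling covers only the minute/second form shown above and is a simplifying assumption:

```python
import re
import xml.etree.ElementTree as ET

# Sketch: pull the creation title and media duration out of an implicit
# MPEG-7 description like the one above (namespaces omitted for brevity).
doc = """<Mpeg7>
  <Description>
    <MultimediaContent>
      <Video>
        <CreationInformation>
          <Creation><Title>Worldcup Soccer</Title></Creation>
        </CreationInformation>
        <MediaTime>
          <MediaTimePoint>T00:00:00</MediaTimePoint>
          <MediaDuration>PT1M30S</MediaDuration>
        </MediaTime>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>"""

root = ET.fromstring(doc)
title = root.find(".//Title").text
duration = root.find(".//MediaDuration").text

# PT1M30S -> 90 seconds (only the PT..M..S subset is handled here).
m = re.fullmatch(r"PT(?:(\d+)M)?(?:(\d+)S)?", duration)
seconds = int(m.group(1) or 0) * 60 + int(m.group(2) or 0)
```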

In one embodiment, a modified form of the SEI is provided to support the inclusion of descriptions in SEI messages. In particular, the SEI is represented as a set of SEI messages. In one embodiment, the SEI is encapsulated as a chunk of data. Each SEI chunk may contain one or more SEI messages. Each SEI message consists of an SEI header and an SEI payload. The SEI header begins at a byte-aligned position, either at the first byte of the SEI chunk or at the first byte following the previous SEI message. The payload immediately follows the SEI header, beginning at the byte after the header.

The SEI header contains the message type, an optional identifier of the media data portion (for example, a sub-sample, sample or segment), and the payload length. The syntax of the SEI header can be as follows:

aligned(8) SupplementalEnhancementInformation
{
    aligned unsigned int(13) MessageType;
    aligned unsigned int(2) MessageScope;
    if (MessageScope == 0)
    {
        // Message is related to a sample
        unsigned int(16) SampleID;
    }
    else
    {
        // Reserved
    }
    aligned unsigned int(16) PayloadLength;
    aligned unsigned int(8) Payload[PayloadLength];
}
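For illustration only, the header layout above can be exercised with a small bit-level parser. The byte positions assume the `aligned` qualifiers pad to the next byte boundary, which is one possible reading of the syntax:

```python
class BitReader:
    """Minimal MSB-first bit reader (illustrative helper, not from the patent)."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0  # pos counted in bits

    def bits(self, n: int) -> int:
        v = 0
        for _ in range(n):
            byte = self.data[self.pos // 8]
            v = (v << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return v

    def align(self):
        self.pos = (self.pos + 7) // 8 * 8  # advance to next byte boundary

def parse_sei(data: bytes) -> dict:
    """Parse one SEI message per the SupplementalEnhancementInformation syntax."""
    r = BitReader(data)
    msg = {"type": r.bits(13), "scope": r.bits(2)}
    r.align()
    if msg["scope"] == 0:          # message is related to a sample
        msg["sample_id"] = r.bits(16)
    length = r.bits(16)            # PayloadLength, in bytes
    start = r.pos // 8
    msg["payload"] = data[start:start + length]
    return msg
```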

The MessageType field indicates the type of message carried in the payload. Exemplary SEI message type codes are specified in Table 1 below:

MPEG-7 message types:
- MPEG-7 binary access unit
- MPEG-7 textual access unit
- MPEG-7/JVT metadata D/DS fragment, textual
- MPEG-7/JVT metadata D/DS fragment, binary

New message types:
- Arbitrary XML message
- JVT-specified XML message
- Video temporal segment start tag (H.263 Annex I)
- Video temporal segment end tag

H.263 Annex W message codes:
0  Arbitrary binary data
1  Arbitrary text
2  Copyright text
3  Caption text
4  Video description text (human-readable)
5  Uniform resource identifier text
6  Current picture header repetition
7  Previous picture header repetition
8  Next picture header repetition, reliable TR
9  Next picture header repetition, unreliable TR
10 Top interlaced field indication
11 Bottom interlaced field indication
12 Picture number
13 Spare reference picture

Table 1

The PayloadLength field specifies the length of the SEI message in bytes. The SEI header also includes a sample synchronization flag indicating whether the SEI message is related to a specific sample, and a sub-sample synchronization flag indicating whether the SEI message is related to a specific sub-sample (if the sub-sample synchronization flag is set, the sample synchronization flag is also set). The SEI payload further includes an optional sample identifier field specifying the sample to which the message relates, and an optional sub-sample identifier field specifying the sub-sample to which the message relates. The sample identifier field is present only when the sample synchronization flag is set. Similarly, the sub-sample identifier field is present only when the sub-sample synchronization flag is set. The sample identifier and sub-sample identifier fields allow synchronization of SEI messages with the media data.

In one embodiment, each SEI message is carried in an SEI message descriptor. The SEI descriptors are packaged in an SEI unit containing one or more SEI messages. The syntax of the SEI message unit is as follows:

aligned(8) class SEIMessageUnit
{
    SEIMessageDescriptor descriptor[0..255];
}

The syntax of the SEI message descriptor is as follows:

abstract expandable(2**16-1) aligned(8) class SEIMessageDescriptor
    : tag unsigned int(16)
{
    unsigned int(16) type = tag;
}
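The `expandable` qualifier comes from MPEG-4's syntactic description language, where a descriptor's size is carried in a variable-length field: seven size bits per byte, with the most significant bit set on every byte except the last. As background (this encoding is defined in ISO/IEC 14496-1 and is not spelled out in the text above), a sketch:

```python
def encode_size(n: int) -> bytes:
    """Encode a descriptor size as in MPEG-4 SDL 'expandable' classes:
    7 size bits per byte, continuation bit set on all but the final byte."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def decode_size(data: bytes) -> tuple:
    """Decode a size field; return (size, number of bytes consumed)."""
    size, i = 0, 0
    while True:
        b = data[i]
        size = (size << 7) | (b & 0x7F)
        i += 1
        if not b & 0x80:
            return size, i
```

With `expandable(2**16-1)` the size never needs more than three such bytes, since the maximum expressible instance size is capped at 2**16-1.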

The type field indicates the type of the SEI message. Exemplary SEI message types are provided in Table 2 below:

Tag value         Tag name
0x0000            Forbidden
                  SEIMetadataDescriptorTag
                  SEIMetadataRefDescriptorTag
                  SEITextDescriptorTag
                  SEIXMLDescriptorTag
                  SEIStartSegmentTag
                  SEIEndSegmentTag
up to 0x6FFF      Reserved for ISO use
0x7000-0x7FFF     Reserved for private use
0x8000-0xFFFF     Reserved for assignment by the SC29 registration authority

Table 2

The various types of SEI messages shown in Table 2 are described in more detail below.

The SEIXMLDescriptor type refers to a descriptor that encapsulates XML-based data, which may comprise, for example, a complete XML document or an XML fragment from a larger document. The syntax of the SEIXMLDescriptor is as follows:

class SEIXMLDescriptor : SEIMessageDescriptor(SEIXMLDescriptorTag)
{
    unsigned int(8) xmlData[];
}

The SEIMetadataDescriptor type refers to a descriptor containing metadata. The syntax of the SEIMetadataDescriptor is as follows:

class SEIMetadataDescriptor : SEIMessageDescriptor(SEIMetadataDescriptorTag)
{
    unsigned int(8) metadataFormat;
    unsigned int(8) metadataContent[];
}

The metadataFormat field identifies the format of the metadata. Exemplary metadata format values are described in Table 3 below:

Value         Description
0x00-0x0F     Reserved
0x10          Defined by ISO 15938 (MPEG-7)
0x11-0x3F     Reserved
0x40-0xFF     Defined by a registration authority

Table 3

The value 0x10 identifies MPEG-7-defined data. Values in the range 0x40 through 0xFF can be used to signal the use of proprietary formats.

The metadataContent field contains the representation of the metadata in the format specified by the metadataFormat field.
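A minimal sketch of how the two fields above could be packed and unpacked; the helper names are hypothetical and only the format byte and content bytes are modeled:

```python
METADATA_FORMAT_MPEG7 = 0x10  # ISO 15938 (MPEG-7), per Table 3

def build_metadata_payload(content: bytes, fmt: int = METADATA_FORMAT_MPEG7) -> bytes:
    """Pack an SEIMetadataDescriptor body: one metadataFormat byte,
    then the metadataContent bytes."""
    if not 0x00 <= fmt <= 0xFF:
        raise ValueError("metadataFormat is an 8-bit field")
    return bytes([fmt]) + content

def parse_metadata_payload(payload: bytes) -> tuple:
    """Unpack the body back into (metadataFormat, metadataContent)."""
    return payload[0], payload[1:]
```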

The SEIMetadataRefDescriptor type refers to a descriptor containing a URL that points to the location of metadata. The syntax of the SEIMetadataRefDescriptor is as follows:

class SEIMetadataRefDescriptor :
    SEIMessageDescriptor(SEIMetadataRefDescriptorTag)
{
    bit(8) URLString[];
}

The URLString field contains a UTF-8 encoded URL pointing to the location of the metadata.

The SEITextDescriptor type refers to a descriptor containing text that describes, or is otherwise related to, the video content. The syntax of the SEITextDescriptor is as follows:

class SEITextDescriptor : SEIMessageDescriptor(SEITextDescriptorTag)
{
    unsigned int(24) languageCode;
    unsigned int(8) text[];
}

The languageCode field contains the language code of the language of the following text field. The text field contains UTF-8 encoded text data.
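As an illustration, the two fields could be serialized as follows. Treating the 24-bit languageCode as a three-letter code stored one byte per character is an assumption (modeled on similar 24-bit language fields elsewhere in MPEG-4), not something the text above specifies:

```python
def pack_text_descriptor(language: str, text: str) -> bytes:
    """Sketch: 24-bit languageCode assumed to hold a three-letter language
    code (one byte per letter), followed by UTF-8 text."""
    if len(language) != 3 or not language.isascii():
        raise ValueError("expected a three-letter ASCII language code")
    return language.encode("ascii") + text.encode("utf-8")

def unpack_text_descriptor(data: bytes) -> tuple:
    """Reverse of pack_text_descriptor: returns (language, text)."""
    return data[:3].decode("ascii"), data[3:].decode("utf-8")
```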

The SEIURIDescriptor type refers to a descriptor containing a uniform resource identifier (URI) related to the video content. The syntax of the SEIURIDescriptor is as follows:

class SEIURIDescriptor : SEIMessageDescriptor(SEIURIDescriptorTag)
{
    unsigned int(16) uriString[];
}

The uriString field contains the URI of the video content.

The SEIOCIDescriptor type refers to a descriptor for an SEI message that contains an object content information (OCI) descriptor. The syntax of the SEIOCIDescriptor is as follows:

class SEIOCIDescriptor : SEIMessageDescriptor(SEIOCIDescriptorTag)
{
    OCI_Descriptor ociDescr;
}

The ociDescr field comprises the OCI descriptor.

The SEIStartSegmentDescriptor type refers to a descriptor indicating the start of a segment, which can then be referenced in other SEI messages. The segment is associated with the particular layer (for example, a sample group, segment, sample or sub-sample) to which this SEI start descriptor applies. The syntax of the SEIStartSegmentDescriptor is as follows:

class SEIStartSegmentDescriptor :
    SEIMessageDescriptor(SEIStartSegmentDescriptorTag)
{
    unsigned int(32) segmentID;
}

The segmentID field contains a unique binary identifier of the segment within the stream. This value can be used to reference the segment in other SEI messages.

The SEIEndSegmentDescriptor type refers to a descriptor indicating the end of a segment. A preceding SEIStartSegment message containing the same segmentID value must exist. If no match is found, the decoder must ignore this message. The end of the segment is associated with the particular layer (for example, a sample group, segment, sample or sub-sample) to which this SEI descriptor applies. The syntax of the SEIEndSegmentDescriptor is as follows:

class SEIEndSegmentDescriptor :
    SEIMessageDescriptor(SEIEndSegmentDescriptorTag)
{
    unsigned int(32) segmentID;
}

The segmentID field contains the unique binary identifier of the segment within the stream. This value can be used to reference the segment in other SEI messages.
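The start/end matching rule described above (an end message with no matching, still-open start message is ignored) can be sketched on the decoder side as follows; the message representation is hypothetical:

```python
def process_segment_messages(messages):
    """Track SEIStartSegment/SEIEndSegment pairs by segmentID.
    `messages` is a sequence of ("start" | "end", segment_id) tuples.
    Returns (completed segment IDs, still-open segment IDs)."""
    open_segments = set()
    completed = []
    for kind, segment_id in messages:
        if kind == "start":
            open_segments.add(segment_id)
        elif kind == "end":
            if segment_id in open_segments:   # a matching start exists
                open_segments.discard(segment_id)
                completed.append(segment_id)
            # otherwise: per the rule above, the decoder ignores the message
    return completed, open_segments
```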

Storage and retrieval of audio-visual metadata have been described. Although specific embodiments have been illustrated and described herein, those skilled in the art will appreciate that any arrangement adapted to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the present invention.

Claims (18)

1. A computerized method comprising:
identifying one or more descriptions associated with multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure of, and semantic relationships between, the descriptors; and
including the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize the separate tracks.
2. the method for claim 1, it is characterized in that: described SEI is as metadata and described multi-medium data separate storage.
3. method as claimed in claim 2 is characterized in that: described SEI metadata comprises a plurality of SEI message.
4. method as claimed in claim 3 is characterized in that: each in described a plurality of SEI message is stored in the track of movie box as box.
5. method as claimed in claim 2 is characterized in that:
Described multi-medium data is stored in the track of video; And
Described SEI metadata store is in the SEI track.
6. method as claimed in claim 5 is characterized in that also comprising:
Make described SEI track and described track of video synchronous.
7. method as claimed in claim 6 is characterized in that: described SEI track comprises a plurality of SEI message in sample.
8. method as claimed in claim 2 is characterized in that also comprising:
In video-frequency basic flow, transmit described multi-medium data; And
In contents of object information (OCI) stream, transmit described SEI metadata.
9. method as claimed in claim 8 is characterized in that: each in described a plurality of SEI message sends as the OCI descriptor in OCI stream.
10. the method for claim 1, it is characterized in that: each in one or more descriptions is each in descriptor and the descriptor scheme.
11. method as claimed in claim 3 is characterized in that, each in described a plurality of SEI message comprises the payload title, wherein has each data that are associated with the counterpart of described multi-medium data in described a plurality of SEI message.
12. method as claimed in claim 11 is characterized in that: the counterpart of described multi-medium data is each in sample, subsample and the sample group.
13. the method for claim 1 is characterized in that: one or more descriptions are covered comprise among the described SEI MPEG-7 system access unit package to a plurality of SEI message in one of them.
14. the method for claim 1 is characterized in that also comprising:
Transmit in described one or more descriptions each in one of them in a plurality of SEI message.
15. method as claimed in claim 14 is characterized in that: each in described one or more descriptions is with text or binary coding.
16. An apparatus comprising:
a memory;
a media file creator to form a first file containing multimedia data, the media file creator comprising instructions stored in the memory; and
a metadata file creator to identify one or more descriptions associated with the multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure of, and semantic relationships between, the descriptors, and to include the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize the separate tracks, the metadata file creator comprising instructions stored in the memory.
17. An apparatus comprising:
a processor;
means for configuring the processor to identify one or more descriptions associated with multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure of, and semantic relationships between, the descriptors; and
means for including the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize the separate tracks.
18. A system comprising:
a memory; and
at least one processor coupled to the memory, the at least one processor executing a set of instructions that cause the at least one processor to identify one or more descriptions associated with multimedia data, the one or more descriptions describing the syntax and semantics of the multimedia data, the one or more descriptions comprising descriptors and description schemes, wherein the descriptors describe syntactic and semantic features of the audio-visual content represented by the multimedia data, and the description schemes specify the structure of, and semantic relationships between, the descriptors, and
to include the one or more descriptions in supplemental enhancement information (SEI) associated with the multimedia data, the SEI containing the one or more descriptions subsequently being transmitted to a decoding system for optional use in decoding the multimedia data, wherein the SEI and the multimedia data are transmitted to the decoding system in separate tracks, and timing information of the separate tracks is used to synchronize the separate tracks.