CN101079248A - Video processing method, circuit and system - Google Patents

Video processing method, circuit and system

Info

Publication number
CN101079248A
CN101079248A (application CN200710104100A)
Authority
CN
China
Prior art keywords
video
subframe
frame
metadata
frame sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200710104100
Other languages
Chinese (zh)
Other versions
CN100587793C (en)
Inventor
James D. Bennett (詹姆士·D·贝内特)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Broadcom Corp
Zyray Wireless Inc
Original Assignee
Zyray Wireless Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zyray Wireless Inc filed Critical Zyray Wireless Inc
Publication of CN101079248A publication Critical patent/CN101079248A/en
Application granted granted Critical
Publication of CN100587793C publication Critical patent/CN100587793C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a video processing device that generates sub-frame metadata for use in modifying a sequence of original video frames. The device includes a video interface for receiving video data representing the sequence of original video frames; a user interface for receiving, as user input, sub-frame information identifying a sub-frame corresponding to a region of interest within a scene depicted in at least one frame of the sequence; and processing circuitry for generating the sub-frame metadata from the sub-frame information. Editing information associated with each sub-frame may also be included in the sub-frame metadata. Targeted sub-frame metadata can be generated specifically for use with a particular screen or screen size. A first player with a small screen and a second player with a larger screen may each receive a different sub-frame metadata file while both receive the same original video. Pursuant to the differing sub-frame metadata, the original video will be displayed in two different forms on the small and larger screens.

Description

Video processing method, circuit and system
Technical field
The present invention relates to video processing devices and, more particularly, to an interactive video processing system that operates on video data intended for playback on a video display.
Background art
Movies and other video content are often captured on 35 mm film with a 16:9 aspect ratio. When a movie enters the primary theatrical market, the 35 mm film is duplicated and distributed to theaters, which sell the movie to moviegoers. For example, a theater typically projects high-intensity light through the 35 mm film onto a "big screen" for paying viewers. Once a movie leaves the big screen, it typically enters the secondary market, where distribution is completed by selling video discs or tapes containing the movie (e.g., VHS tapes, DVDs, high-definition (HD) DVDs, Blu-ray discs, and other recording media) to individual viewers. Other secondary-market distribution options include Internet downloads and broadcasts by television network providers.
For secondary-market distribution, the content of the 35 mm film is converted, frame by frame, into raw digital video. Because HD resolution requires at least 1920×1080 pixels per film frame, such raw digital video can require roughly 25 GB of storage for a two-hour movie. To avoid such storage requirements, the raw digital video is typically encoded and compressed with an encoder, significantly reducing the storage requirement. Examples of coding standards include, but are not limited to, Motion Picture Experts Group (MPEG)-1, MPEG-2, MPEG-2 for HD, MPEG-4 AVC, H.261, H.263, and Society of Motion Picture and Television Engineers (SMPTE) VC-1.
To accommodate movie playback on telephones, personal digital assistants (PDAs), and other handheld devices, the compressed digital video data is typically downloaded or uploaded via the Internet, or otherwise stored on the handheld device, which decompresses and decodes the video data for display to the user on a video display associated with the device. However, the size of such handheld devices often limits the size of the video display (screen) on the device. For example, small screens on handheld devices are often only about 2 inches diagonal. By comparison, television screens measure 30-60 inches or more. This difference in screen size has a profound effect on the image quality the viewer perceives.
For example, a typical conventional PDA or high-end phone has a screen whose width-to-height ratio matches that of the human eye. On a small screen, the human eye often cannot perceive small details such as text, facial features, and distant objects. For example, in a theater, when viewing a wide screen containing a distant actor and a road sign, it is easy to identify facial expressions and read the text on the sign. Such perception is also possible on an HDTV screen. But after conversion to the small screen of a handheld device, perceiving facial expressions and text is often impossible because of the limitations of the human eye.
Regardless of screen size, screen resolution is limited either by technology or by the human eye, and on small screens these limits have the greatest impact. For example, a typical conventional PDA or high-end phone has a 4:3 screen ratio and can usually play QVGA video at a resolution of 320×240 pixels. By contrast, HD televisions typically have a 16:9 screen ratio and can play resolutions up to 1920×1080 pixels. In the process of converting HD video down to the smaller pixel count of a small screen, pixel data is merged and detail is significantly lost. Increasing the pixel count on the small screen toward that of an HD television would avoid the conversion process, but, as noted above, the human eye would still impose its own limits and detail would still be lost.
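The scale of the detail loss described above can be sketched with a little arithmetic (a rough illustration only; real down-converters merge pixels with filtering rather than simple division):

```python
# Rough pixel-budget comparison between an HD source frame and a QVGA
# handheld-screen frame, illustrating why detail merges away during
# down-conversion.
hd_w, hd_h = 1920, 1080        # 16:9 HD frame
qvga_w, qvga_h = 320, 240      # 4:3 QVGA frame

hd_pixels = hd_w * hd_h        # 2,073,600 pixels
qvga_pixels = qvga_w * qvga_h  # 76,800 pixels

# Each QVGA pixel must stand in for ~27 HD pixels on average.
merge_factor = hd_pixels / qvga_pixels
print(round(merge_factor))     # -> 27
```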
Video transcoding and editing systems are commonly used to convert video from one format and resolution to another for playback on a particular screen. For example, such a system might take DVD video as input and, after performing a conversion process, output video for playback on a QVGA screen. Interactive editing functionality may also be employed along with the conversion process to produce an edited and converted output video. To support a variety of screen sizes, resolutions, and coding standards, multiple output video streams or files must be generated.
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with various aspects of the present invention.
Summary of the invention
The present invention is directed to apparatus and methods of operation that are further described in the Description of drawings, the Embodiments, and the claims.
According to an aspect of the present invention, there is provided a video processing circuit for use with a first video display, for processing an original video frame sequence used by target video player circuitry communicatively coupled to a second video display, the first video display having a larger viewable area than the second video display, the video processing circuit comprising:
driver circuitry communicatively coupled to the first video display;
processing circuitry that interacts via the driver circuitry to display at least one frame of the original video frame sequence on the first video display;
input interface circuitry that delivers signals representing user input to the processing circuitry;
wherein the processing circuitry responds to the signals representing user input by interacting via the driver circuitry to display a sub-frame on the first video display;
the sub-frame corresponding to a region, within the at least one frame of the original video frame sequence, identified by the signals representing user input; and
wherein the processing circuitry generates metadata corresponding to the sub-frame, to be used by the target video player circuitry to modify the original video frame sequence so as to produce, on the second video display, a full-screen presentation corresponding to the sub-frame.
Preferably, the processing circuitry includes, in the metadata, an association between the sub-frame and a plurality of frames in the original video frame sequence.
Preferably, the processing circuitry responds to additional signals representing user input by interacting via the driver circuitry to display, on the first video display, additional sub-frames relating to the original video frame sequence;
the processing circuitry generates additional metadata corresponding to the additional sub-frames, to be used by the target video player circuitry to modify the original video frame sequence so as to produce the full-screen presentation on the second video display; and
the metadata and the additional metadata generated by the processing circuitry together define a set of sub-frames.
Preferably, at least two sub-frames in the set of sub-frames correspond to a single frame in the original video frame sequence.
Preferably, at least two sub-frames in the set of sub-frames contain an object whose spatial position varies over the original video frame sequence.
Preferably, two sub-frames in the set of sub-frames correspond to at least two different frames in the original video frame sequence.
Preferably, at least a portion of the set of sub-frames corresponds to a sub-scene of a scene depicted across the original video frame sequence.
Preferably, the metadata further comprises editing information for editing the sub-frames presented on the second video display by the target video player circuitry.
Preferably, the editing information is identified in the signals representing user input.
Preferably, the editing information comprises a visual modification to be applied to the portion of the original video frame sequence associated with the sub-frame.
Preferably, the editing information comprises motion information to be applied to the sub-frame.
Preferably, the editing information comprises resizing information to be applied to the sub-frame.
Preferably, the editing information comprises media to be applied to the sub-frame.
According to another aspect of the present invention, a video processing system is provided that receives video data representing an original video frame sequence, the video processing system comprising:
a user interface that receives user input;
the user input comprising sub-frame information defining a first sub-frame and a second sub-frame, the first sub-frame corresponding to a first region of interest within at least a first portion of the original video frame sequence, and the second sub-frame corresponding to at least a second region of interest within a second portion of the original video frame sequence;
processing circuitry communicatively coupled to the user interface;
the processing circuitry generating, from the sub-frame information, metadata to be used in modifying the video data so that, after the original video frame sequence is modified according to the sub-frame information, a full-screen presentation of the modified sequence is produced.
Preferably, the first sub-frame corresponds to a subsequence of the original video data sequence.
Preferably, in a configuration that includes a video player, the metadata generated by the processing circuitry is used by the video player to modify the video data according to the sub-frame information.
Preferably, the sub-frame information further comprises first editing information corresponding to the first sub-frame and second editing information corresponding to the second sub-frame.
Preferably, the metadata can be displayed in textual form and edited manually.
Preferably, the processing circuitry creates a metadata file containing the metadata, an entry in the metadata file corresponding to at least one frame in the original video frame sequence.
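The patent does not prescribe a serialization for such a metadata file; as a minimal sketch, assuming a JSON layout with one entry per original frame, the textual, hand-editable form mentioned above might look like this (all field names are illustrative assumptions, not defined by the patent):

```python
import json

# Hypothetical sub-frame metadata file: one entry per original frame,
# locating the sub-frame (region of interest) within that frame.
metadata_file = {
    "target_display": {"name": "cell phone", "width": 320, "height": 240},
    "entries": [
        {"frame": 0, "sub_frame": {"x": 640, "y": 300, "w": 480, "h": 360}},
        {"frame": 1, "sub_frame": {"x": 648, "y": 300, "w": 480, "h": 360}},
    ],
}

text = json.dumps(metadata_file, indent=2)  # displayable and manually editable
parsed = json.loads(text)
print(len(parsed["entries"]))  # -> 2
```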
According to another aspect of the present invention, a method relating to an original video frame sequence is provided, for presenting a modification of the original video frame sequence on a first video display of a plurality of video displays of differing sizes, the method comprising:
receiving video data comprising the original video frame sequence;
displaying at least one frame of the original video frame sequence on the first video display;
receiving user input identifying a sub-frame corresponding to a region of interest within the at least one frame of the original video frame sequence; and
generating sub-frame metadata from the user input, to be used in modifying the original video frame sequence so as to present the modification of the original video frame sequence.
Preferably, the method further comprises repeating the displaying, user-input-receiving, and generating steps for a plurality of sub-frames.
Preferably, the method further comprises decoding the video data.
Preferably, the method further comprises receiving additional user input representing editing information relating to the sub-frame.
Preferably, the editing information comprises a visual modification.
Preferably, the editing information comprises motion information to be applied to the sub-frame.
Preferably, the editing information comprises resizing information to be applied to the sub-frame.
Preferably, the editing information comprises media to be applied to the sub-frame.
Various features and advantages of the present invention will become apparent from the following detailed description, made with reference to the accompanying drawings.
Description of drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a schematic block diagram of a video processing system according to the present invention that generates sub-frame metadata used to modify an original video frame sequence for presentation on video displays of different sizes;
Fig. 2 is a schematic block diagram of an exemplary video processing device for generating sub-frame metadata according to the present invention;
Fig. 3 is a schematic block diagram of the operation of a video processing device according to the present invention in generating sub-frame metadata;
Fig. 4 is a diagram of exemplary original video frames and corresponding sub-frames;
Fig. 5 is a diagram of exemplary sub-frame metadata for a sequence of sub-frames;
Fig. 6 is a diagram of exemplary sub-frame metadata that includes sub-frame editing information;
Fig. 7 is a diagram of an exemplary video processing display providing a graphical user interface that includes video editing tools for editing sub-frames;
Fig. 8 is a schematic block diagram of an exemplary video processing device for generating multiple sets of sub-frame metadata;
Fig. 9 is a schematic block diagram of an exemplary video processing system that generates multiple sets of sub-frame metadata for multiple target video displays; and
Fig. 10 is a logic flow diagram of an exemplary process for generating sub-frame metadata according to the present invention.
Embodiments
Fig. 1 is a schematic block diagram of a video processing system 100 according to the present invention that presents video content on video displays of different sizes. The video processing system 100 includes a video processing device 120, for example a computer or other device capable of processing video data 110, and a display 130 communicatively coupled to the video processing device 120 to display the video data 110.
The incoming video data 110 comprises video content transmitted or stored as a sequence of original video frames in any format. In one embodiment, the video data 110 is HD video data, in which each video frame is formed of, for example, 1920×1080 pixels with a 16:9 aspect ratio. In other embodiments, the video data 110 is standard- or low-definition video data, in which each video frame is formed of a certain number of pixels at a 4:3 aspect ratio. For example, if the standard video data is National Television System Committee (NTSC) video data, each video frame is formed of 720×486 or 720×540 (horizontal × vertical) pixels. As another example, if the standard video data is Phase Alternating Line (PAL) video data, each video frame is formed of 720×576 (horizontal × vertical) pixels. In addition, the video data 110 may be encoded and compressed using any coding standard, for example MPEG-1, MPEG-2, MPEG-2 for HD, MPEG-4 AVC, H.261, H.263, or SMPTE VC-1; the video data 110 may also be uncompressed but encoded, or uncompressed and unencoded.
The video processing device 120 also executes a sub-frame metadata generation application 140. As used herein, the term "sub-frame metadata generation application" refers to any hardware, software, and/or firmware necessary to carry out the functions of the sub-frame metadata generation application 140 discussed below. In general, the sub-frame metadata generation application 140 takes the video data 110 as input and, from the video data 110, generates sub-frame metadata 150 to be used in modifying the video data 110 for presentation on target video displays 165 of different sizes belonging to different video display devices 160.
Examples of video display devices 160 include, but are not limited to, a television 160a, a personal digital assistant (PDA) 160b, a cellular telephone 160c, and a laptop computer 160d. Each video display device 160a-160d is communicatively coupled to its respective video display 165a-165d, each of which has a respective size (or viewable area) 162, 164, 166, and 168. The viewable area 162, 164, 166, and 168 of each video display 165a-165d is measured diagonally across the display. The video displays 165b and 165c of the PDA 160b and the cellular telephone 160c represent small video displays, while the video displays 165a and 165d of the television 160a and the laptop computer 160d represent large video displays. As used herein, the term "small video display" refers to a video display whose viewable area (e.g., 164 and 166) is smaller than the viewable area of the display 130 associated with the video processing device 120 that generates the sub-frame metadata 150.
In typical operation, the sub-frame metadata generation application 140 is operable to receive video data 110 from a video source (e.g., a video camera, video disc, or video tape), display the video data 110 to a user on the display 130, receive user input from the user in response to the displayed video data 110, and generate sub-frame metadata 150 in response to the user input. More specifically, the sub-frame metadata generation application 140 displays, on the display 130, at least one frame of the original video frame sequence in the video data 110; receives, as user input, sub-frame information identifying a sub-frame corresponding to a region of interest within a scene depicted in the displayed frame; and generates the sub-frame metadata 150 from the sub-frame information. As used herein, the term "sub-frame" includes at least a portion of an original video frame, but may also include the entire original video frame. The resulting sub-frame metadata 150 defines a sequence of sub-frames used to modify the original video frame sequence (video data 110) to produce a full-screen presentation of the sub-frames on the target video displays 165a-165d.
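The geometric core of a sub-frame, a rectangular region of interest cut from an original frame, can be sketched as follows (a toy model with frames as lists of rows; field and function names are illustrative, not from the patent):

```python
# Minimal sketch of extracting a sub-frame (region of interest) from an
# original frame, here modeled as a list of rows of pixel values.
def crop_sub_frame(frame, x, y, w, h):
    """Return the w*h region whose top-left corner is (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]

# A toy 4x4 "frame" with distinct pixel values (value = 10*row + col).
frame = [[10 * r + c for c in range(4)] for r in range(4)]
sub = crop_sub_frame(frame, x=1, y=1, w=2, h=2)
print(sub)  # -> [[11, 12], [21, 22]]
```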
The sub-frame metadata 150 generated by the sub-frame metadata generation application 140 may include one or more sets of sub-frame metadata 150, each generated for a particular target video display 165a-165d and/or for a video display 165a-165d of a particular size 162-168. Thus, each display device among the video display devices 160 modifies the original video data 110 for presentation on its particular video display (e.g., display 165a) by means of the one set of sub-frame metadata 150 generated specifically for that video display 165. For example, upon receiving the original video data 110 and one set of sub-frame metadata 150 (namely, sub-frame metadata set C), the cellular telephone 160c uses the received sub-frame metadata 150 to modify the original video data 110 and displays the modified video on its video display (i.e., video display 165c).
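The pairing just described, the same original video everywhere, plus one metadata set per display, amounts to a simple keyed lookup. A minimal sketch (device names and set labels are assumptions for illustration; only "set C" for the cell phone echoes the example above):

```python
# Sketch of a player selecting the metadata set generated for its own
# display; every device receives the same original video plus the one
# set keyed to its display type.
metadata_sets = {
    "television": "set A",
    "laptop": "set B",
    "cell phone": "set C",
    "pda": "set D",
}

def select_metadata(device_type):
    """Return the sub-frame metadata set targeted at this device."""
    return metadata_sets[device_type]

print(select_metadata("cell phone"))  # -> set C
```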
In addition, the sub-frame metadata generation application 140 may also add editing information to the sub-frame metadata 150, to be applied to the original video data 110 for the target video display device. For example, in one embodiment, the editing information is provided by the user as further user input in response to an interactive presentation of the original video data 110. The editing information is received by the sub-frame metadata generation application 140 and added as part of the generated sub-frame metadata 150.
Such editing information includes, but is not limited to, pan direction and pan rate, zoom rate, contrast adjustment, brightness adjustment, filter parameters, and video effect parameters. More specifically, several kinds of editing information may be associated with a sub-frame, applicable as: a) visual modifications, e.g., brightness, filtering, video effects, contrast, and color adjustments; b) motion information, e.g., panning, acceleration, velocity, and direction of sub-frame motion over the original frame sequence; c) resizing information, e.g., zooming (including zoom in, zoom out, and zoom ratio) of the sub-frame over the original frame sequence; and d) supplemental media of any type (e.g., text or graphic overlays, or supplemental audio) related to, merged with, or overlaid on those portions of the original video data that fall within the sub-frame.
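The four categories of editing information listed above can be pictured as one record attached to a sub-frame. A minimal sketch, assuming illustrative field names and units not specified by the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative container for the four kinds of sub-frame editing
# information: (a) visual modifications, (b) motion information,
# (c) resizing information, (d) supplemental media.
@dataclass
class SubFrameEdits:
    # a) visual modifications
    brightness: float = 0.0
    contrast: float = 0.0
    # b) motion information (pan over the original frame sequence)
    pan_direction_deg: float = 0.0
    pan_rate_px_per_frame: float = 0.0
    # c) resizing information
    zoom_rate: float = 1.0
    # d) supplemental media (e.g., a text overlay)
    text_overlay: Optional[str] = None

edits = SubFrameEdits(brightness=0.1, pan_rate_px_per_frame=4.0,
                      zoom_rate=1.5, text_overlay="EXIT 12")
print(edits.zoom_rate)  # -> 1.5
```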
Fig. 2 is a schematic block diagram of an exemplary video processing device 120 for generating sub-frame metadata 150 according to the present invention. The video processing device 120 includes video processing circuitry 200 for processing the video data 110 and generating the sub-frame metadata 150 from the video data 110. The video processing circuitry 200 includes processing circuitry 210 and local storage 230 communicatively coupled to the processing circuitry 210. The local storage 230 stores, and the processing circuitry 210 executes, operational instructions corresponding to at least some of the functions illustrated herein. For example, in one embodiment, the local storage 230 maintains an operating system 240, a sub-frame metadata generation software module 250, a decoder 260, and a pixel translation module 270.
The sub-frame metadata generation software module 250 includes instructions executed by the processing circuitry 210 to generate the sub-frame metadata 150 from the video data 110. Thus, the sub-frame metadata generation software module 250 provides instructions to the processing circuitry 210 to retrieve the original video frame sequence from the video data 110, display the original video frames to the user, process user input entered by the user in response to the displayed original video frames, and generate the sub-frame metadata 150 in response to the user input.
In embodiments in which the video data 110 is encoded, the decoder 260 includes instructions executed by the processing circuitry 210 to decode the encoded video data and produce decoded video data. For example, in discrete cosine transform (DCT)-based encoding/compression formats (e.g., MPEG-1, MPEG-2, MPEG-2 for HD, MPEG-4 AVC, H.261, and H.263), motion vectors are used to construct frame- or field-based predictions from neighboring frames or fields, taking into account the inter-frame or inter-field motion that typically occurs. For example, when the MPEG coding standard is used, the original video frame sequence is encoded into a sequence of three different types of frames: "I" frames, "B" frames, and "P" frames. "I" frames are intra-coded, while "P" frames and "B" frames are inter-coded. Thus, I frames are independent, i.e., they can be reconstructed without reference to any other frame, while P frames and B frames are dependent, i.e., they must be reconstructed with reference to other frames. More specifically, P frames are forward-predicted from the last I frame or P frame, and B frames are both forward- and backward-predicted from the last/next I frame or P frame. The IPB frame sequence is compressed using the DCT by transforming N×N blocks of pixel data in the "I", "P", or "B" frames into the DCT domain, where N is usually set to 8 and quantization is more easily performed. The quantized bitstream is then run-length coded and entropy coded to produce a compressed video bitstream whose bit rate is significantly smaller than that of the original bitstream. The decoder 260 decompresses the compressed video data to produce encoded video data, and then decodes the encoded video data to produce the original video frame sequence (decoded video data).
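The I/P/B dependencies described above force a decoder to process frames out of display order: each B frame needs its *future* I or P anchor reconstructed first. A toy reordering sketch (a simplification of real MPEG bitstream ordering, which is fixed by the encoder):

```python
# Sketch of the I/P/B reference structure: reorder a display-order
# sequence so that every B frame follows both of its anchor frames.
def decode_order(display_order):
    """Reorder an IPB display sequence so anchors precede their B frames."""
    out, pending_b = [], []
    for i, t in enumerate(display_order):
        if t == "B":
            pending_b.append((i, t))   # wait for the next I/P anchor
        else:                          # I or P anchor frame
            out.append((i, t))
            out.extend(pending_b)      # B frames now have both anchors
            pending_b = []
    return out

order = decode_order("IBBPBBP")
print("".join(t for _, t in order))  # -> IPBBPBB
```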
The sub-frame metadata generation software module 250 provides the decoded video data to the processing circuitry 210 for display of the original video frames to the user and generation of the sub-frame metadata 150. For example, in one embodiment, the sub-frame metadata 150 is generated with reference to the original video frame sequence. In another embodiment, if the video data 110 is encoded using, for example, the MPEG coding standard (in which the original video frame sequence is encoded as a sequence of "I", "B", and "P" frames), the sub-frame metadata 150 is generated with reference to the IPB (encoded) sequence of the video frames.
The pixel translation module 270 includes instructions executed by the processing circuitry 210 to convert the pixel resolution of the video data 110 into the pixel resolution of the target video display associated with the sub-frame metadata 150. For example, in an embodiment in which the pixel resolution of the video data 110 is HD resolution (e.g., 1920×1080 pixels per frame) and the resolution of the target video display associated with the sub-frame metadata 150 is only 320×240 pixels per frame, the pixel translation module 270 converts the video data from 1920×1080 pixels per frame to 320×240 pixels per frame for proper presentation on the target video display.
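The resolution mapping such a pixel translation module performs can be sketched with nearest-neighbor sampling (a deliberate simplification; practical converters filter and average rather than pick single source pixels):

```python
# Minimal nearest-neighbor down-conversion sketch, mapping each target
# pixel back to one source pixel. Frames are lists of rows.
def downscale(frame, dst_w, dst_h):
    src_h, src_w = len(frame), len(frame[0])
    return [[frame[r * src_h // dst_h][c * src_w // dst_w]
             for c in range(dst_w)]
            for r in range(dst_h)]

# Toy 6x6 frame reduced to 2x2 (stand-in for 1920x1080 -> 320x240).
frame = [[r * 6 + c for c in range(6)] for r in range(6)]
small = downscale(frame, 2, 2)
print(small)  # -> [[0, 3], [18, 21]]
```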
The processing circuitry 210 may be implemented using a shared processing device, a single processing device, or a plurality of processing devices. Such a processing device may be a microprocessor, microcontroller, digital signal processor, microcomputer, central processing unit, field-programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that processes (analog and/or digital) signals based on operational instructions. The local storage 230 may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. Note that when the processing circuitry 210 implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded within the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
The video processing circuitry 200 also includes a main display interface 220, a first target display interface 222, a second target display interface 224, a user input interface 217, a full-frame video and sub-frame metadata output interface 280, and a full-frame video input interface 290, each communicatively coupled to the local storage 230 and the processing circuitry 210. The main display interface 220 provides an interface to the primary display of the video processing device, while the first target display interface 222 and the second target display interface 224 each provide an interface to a respective target video display on which the video data 110, as modified by the sub-frame metadata 150, will be presented. The user input interface 217 provides one or more interfaces for receiving user input from a user operating the video processing device via one or more input devices (e.g., a mouse, a keyboard, etc.). For example, such user input may include sub-frame information identifying a region of interest (sub-frame) within a scene depicted in a displayed frame, and editing information for editing the sub-frame information.
The video data and sub-frame metadata output interface 280 provides one or more interfaces for outputting the video data 110 and the generated sub-frame metadata 150. For example, the video data and sub-frame metadata output interface 280 may include an interface to a storage medium (e.g., a video disc, video tape, or other storage medium) for storing the video data 110 and the sub-frame metadata 150, an interface to a transmission medium for transmitting the video data 110 and the sub-frame metadata 150 (e.g., via Internet, Ethernet, or other network transmission), and/or an interface to additional processing circuitry that further processes the video data 110 and the sub-frame metadata 150. The video data input interface 290 includes one or more interfaces for receiving the video data 110 in compressed or uncompressed format. For example, the video data interface 290 may include an interface to a storage medium storing the original video data and/or an interface to a transmission medium for receiving the video data 110 via Internet, Ethernet, or another network.
In operation, once launched, the subframe metadata generation software module 250 instructs the processing circuitry 210 to receive the video data 110 via the video input interface 290 or to retrieve previously stored video data 110 from the local storage 230. If the video data 110 is encoded, the subframe metadata generation software module 250 further instructs the processing circuitry 210 to access the decoder 260 and to decode the encoded video data using instructions provided by the decoder 260.
The subframe metadata generation software module 250 then instructs the processing circuitry 210 to retrieve at least one frame of the original video frame sequence from the video data 110 and to display that original video frame to the user via the main display interface 220. In response to user input received through the user input interface 217 that identifies a subframe corresponding to a region of interest within the scene depicted in the displayed frame, the subframe metadata generation software module 250 instructs the processing circuitry 210 to generate subframe metadata 150 from the user input and to store the generated subframe metadata in the local storage 230. In embodiments requiring pixel conversion, the subframe metadata generation software module 250 also instructs the processing circuitry 210 to access the pixel conversion module 270 in order to generate subframe metadata 150 having the appropriate pixel resolution.
Depending on the type of target video display for which the subframe metadata generation software module 250 is programmed, the subframe metadata 150 it generates may include one or more sets of subframe metadata 150, each set generated for a particular target video display. For example, in one embodiment, for display on a particular video display (e.g., a first target video display), the processing circuitry 210 outputs the original video data 110 together with the subframe metadata 150 generated for the first target video display via the first target display interface 222. In another embodiment, the processing circuitry 210 outputs the original video data 110 and one or more sets of subframe metadata 150 via the output interface 280 for subsequent processing, storage, or transmission.
Fig. 3 is a schematic block diagram of the operation of the video processing device 120 in generating subframe metadata 150 in accordance with the present invention. In Fig. 3, the video data 110 is represented as an original video frame sequence 310. Each frame 310 of the original video frame sequence (video data 110) is input to the subframe metadata generation application 140, which generates the subframe metadata 150. In addition, each frame 310 of the original video frame sequence may be presented on the display 130 of the video processing device 120 for viewing and manipulation by the user, as shown in Fig. 2.
For example, a user-operable input device 320, such as a mouse, controls the position of a pointer 330 on the display 130. The pointer 330 may be used to identify a subframe 315 corresponding to a region of interest within the current frame 310 presented on the display 130. For example, the user may use the pointer 330 to create a window on the display, controlling the size and position of the window on the display 130 through a series of click-and-drag operations with the mouse 320. Once the user has created the window on the display 130 with the input device 320, the user may further use the input device 320 to indicate that the window defines a subframe 315, an operation performed by providing a user signal 325 to the subframe metadata generation application 140 via the user interface 217. From the user signal 325, the subframe metadata generation application 140 generates the subframe metadata 150. For example, the subframe metadata 150 may identify the spatial position of the center of the window on the current frame 310 (e.g., the pixel location on the current frame 310 corresponding to the center of the window) and the size of the window (e.g., the length and width of the window expressed in numbers of pixels).
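The per-subframe record described above (identifiers, window center, window size) can be sketched as a simple data structure. The field names below are illustrative assumptions for this sketch, not the patent's actual encoding:

```python
from dataclasses import dataclass

@dataclass
class SubFrame:
    sf_id: str       # subframe identifier assigned on creation
    frame_id: str    # original video frame the subframe was taken from
    center: tuple    # (x, y) pixel position of the window center
    size: tuple      # (width, height) of the window in pixels

    def bounds(self):
        """Return (left, top, right, bottom) pixel bounds within the frame."""
        cx, cy = self.center
        w, h = self.size
        return (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)

# A hypothetical subframe centered in a 640x480 original frame
sf = SubFrame("A", "frame-001", center=(320, 240), size=(160, 120))
print(sf.bounds())  # -> (240, 180, 400, 300)
```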
The subframe metadata generation application 140 includes a subframe identification module 340, a subframe editing module 350, and a metadata generation module 360. Upon receiving a user signal 325 creating a subframe 315, the subframe identification module 340 assigns a subframe identifier 345 to that subframe. The subframe identifier 345 identifies the subframe within the sequence of subframes defined by the subframe metadata 150.
The subframe editing module 350 responds to further user signals 325 by editing the subframe. For example, once the user has created a subframe 315 with the input device 320, the user may further use the input device 320 to edit the subframe 315 and provide user signals 325 representing the edits to the subframe metadata generation application 140 via the user interface 217. These user signals are input to the subframe editing module 350 to produce editing information 355 describing the edits applied to the subframe 315. The editing information 355 is included in the subframe metadata 150 and is used to edit the subframe 315 on a target display device before it is presented on the target video display. Although editing information may be designated to apply to all of the video data, most editing information applies to one or more specific subframes.
The editing information 355 includes, but is not limited to, a pan direction and pan rate, a zoom rate, contrast adjustment, brightness adjustment, filter parameters, and video effect parameters. Video effects include, but are not limited to, wipes, fades, dissolves, surface and object morphing, spotlights and highlights, color and pattern fills, video or graphic overlays, color correction, 3D perspective correction, and 3D texturing. Another example of a video effect is a "time shift." Merely by adding editing information to the metadata directing that a first subframe be slowed, a first sequence defined by the first subframe will play back at reduced speed. A second sequence associated with a second subframe may receive normal playback, while the playback of a third sequence associated with a third subframe may be accelerated. Time shifting may be implemented by increasing or decreasing the frame rate, by simply duplicating or discarding selected frames of the original video frame sequence, or, in a more sophisticated manner, by blending frames to generate additional frames or to reduce the total frame count.
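The frame-duplication/frame-dropping variant of the "time shift" effect described above can be sketched as follows. The factor semantics (a factor above 1 slows playback by duplicating frames, below 1 speeds it up by dropping frames) are an assumption for illustration:

```python
def time_shift(frames, factor):
    """Resample a frame list: factor > 1 duplicates frames (slow motion),
    factor < 1 drops frames (fast motion)."""
    out = []
    n = round(len(frames) * factor)  # new length after time shifting
    for i in range(n):
        # map each output slot back to a source frame index
        out.append(frames[int(i / factor)])
    return out

frames = ["f0", "f1", "f2", "f3"]
print(time_shift(frames, 2.0))  # each frame duplicated -> 8 frames
print(time_shift(frames, 0.5))  # every other frame dropped -> 2 frames
```

Blending-based time shifting, mentioned as the more sophisticated alternative, would interpolate new frames instead of repeating existing ones.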
The subframe identifier 345 assigned by the subframe identification module 340, the editing information 355 generated by the subframe editing module 350, the current original video frame 310, and the user signals 325 defining the size and position of the subframe 315 are all input to the subframe metadata generation module 360 for use in generating the subframe metadata 150. In general, for each subframe 315, the subframe metadata 150 includes the subframe identifier 345, an identifier of the original video frame 310 from which the subframe 315 is taken, the position and size of the subframe 315 relative to the original video frame 310, and any editing information 355 associated with the subframe 315.
The subframe metadata generation module 360 generates the subframe metadata 150 for each subframe 315 and outputs aggregate subframe metadata 150 defining a sequence of subframes 315. The sequence of subframes 315 may include a single subframe 315 for each original video frame 310, multiple subframes 315 displayed in succession for each original video frame 310, multiple subframes 315 corresponding to sub-scenes of a scene depicted in the sequence of original video frames 310, or multiple subframes 315 for multiple sub-scenes depicted in the sequence of original video frames 310. For example, the subframe metadata 150 may include sequencing metadata that both identifies a series of sub-scenes and identifies, for each sub-scene in the series, the subframes 315 associated with it.
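The sequencing metadata described above, a series of sub-scenes each naming its associated subframes, might be organized as in the following sketch; the structure and key names are illustrative assumptions, not the patent's actual layout:

```python
# Ordered list of sub-scenes; each entry names the subframes shown for it.
sequencing = [
    {"sub_scene": "406", "subframes": ["A", "B", "C"]},
    {"sub_scene": "407", "subframes": ["G", "H", "I"]},
]

def playback_order(sequencing):
    """Flatten the sequencing metadata into the display order of subframes:
    all subframes of the first sub-scene, then those of the second, etc."""
    return [sf for group in sequencing for sf in group["subframes"]]

print(playback_order(sequencing))  # -> ['A', 'B', 'C', 'G', 'H', 'I']
```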
The subframe metadata 150 may also indicate relative differences in the positions of the subframes 315 within a sub-scene. For example, in one embodiment, the subframe metadata 150 may indicate that each subframe 315 of a sub-scene is located at the same fixed spatial position on the video display 130 (e.g., each subframe 315 covers the same pixel locations). In another embodiment, the subframe metadata 150 may indicate that the spatial position of each subframe 315 in the sub-scene varies from subframe to subframe. For example, each subframe 315 in the subframe sequence of the sub-scene may contain an object whose spatial position varies over the corresponding sequence of original video frames.
Fig. 4 is a diagram of exemplary original video frames 310 and corresponding subframes 315. In Fig. 4, a first scene 405 is depicted by a first sequence 410 of original video frames 310, and a second scene 408 is depicted by a second sequence 420 of original video frames 310. Thus, each scene 405 and 408 includes a corresponding sequence 410 and 420 of original video frames 310, and is viewed by sequentially displaying each of the original video frames 310 of the respective sequence 410 or 420.
However, to present each scene 405 and 408 on a small video display without degrading the video quality perceived by the viewer, each scene 405 and 408 may be divided into sub-scenes that are displayed separately. For example, as shown in Fig. 4, the first scene 405 contains two sub-scenes 406 and 407, and the second scene 408 contains one sub-scene 409. Just as each scene 405 and 408 may be viewed by sequentially displaying each sequence 410 and 420 of original video frames 310, each sub-scene 406, 407, and 409 may be viewed by displaying a corresponding sequence of subframes 315.
For example, viewing the first frame 310a of the first sequence 410 of original video frames, a user may identify two subframes 315a and 315b, each containing video data representing a different sub-scene 406 and 407. Assuming the sub-scenes 406 and 407 persist throughout the first sequence 410 of original video frames 310, the user may further identify the two corresponding subframes 315, one for each sub-scene 406 and 407, in each of the subsequent original video frames 310 of the first sequence 410. The result is a first subframe sequence 430 and a second subframe sequence 440, in which each subframe 315a of the first subframe sequence 430 contains video content representing the sub-scene 406, and each subframe 315b of the second subframe sequence 440 contains video content representing the sub-scene 407. Each subframe sequence 430 and 440 may be displayed sequentially. For example, all of the subframes 315a corresponding to the first sub-scene 406 may be displayed sequentially, followed by the sequential display of all of the subframes 315b corresponding to the second sub-scene 407. In this manner, the movie preserves the logical flow of the scene 405 while allowing the viewer to perceive small details within the scene 405.
Similarly, viewing the first frame 310b of the second sequence 420 of original video frames, a user may identify a subframe 315c corresponding to the sub-scene 409. Again, assuming the sub-scene 409 persists throughout the second sequence 420 of original video frames 310, the user may further identify the subframe 315c containing the sub-scene 409 in each of the subsequent original video frames 310 of the second sequence 420. The result is a subframe sequence 450, in which each subframe 315c contains video content representing the sub-scene 409.
Fig. 5 is a diagram of exemplary subframe metadata 150 for a sequence of subframes. The subframe metadata 150 shown in Fig. 5 includes sequencing metadata 500 that indicates the order (i.e., the display order) of the subframes. For example, the sequencing metadata 500 may identify a series of sub-scenes and, for each sub-scene, a series of subframes. Using the example shown in Fig. 4, the sequencing metadata 500 may be divided into groups 520 of subframe metadata, each group 520 corresponding to a particular sub-scene.
For example, in the first group 520, the sequencing metadata 500 begins with the first subframe of the first subframe sequence (e.g., sequence 430), followed by the remaining subframes of the first sequence 430. In Fig. 5, the first subframe of the first sequence is labeled subframe A of original video frame A, and the last subframe of the first sequence is labeled subframe F of original video frame F. After the last subframe of the first sequence 430, the sequencing metadata 500 continues with the second group 520, which begins with the first subframe of the second subframe sequence (e.g., sequence 440, subframe 315b) and ends with the last subframe of the second sequence 440. In Fig. 5, the first subframe of the second sequence is labeled subframe G of original video frame A, and the last subframe of the second sequence is labeled subframe L of original video frame F. The final group 520 begins with the first subframe of the third sequence (e.g., sequence 450, subframe 315c) and ends with the last subframe of the third sequence 450. In Fig. 5, the first subframe of the third sequence is labeled subframe M of original video frame G, and the last subframe of the third sequence is labeled subframe P of original video frame I.
Within each group 520 is the subframe metadata for each subframe of that group 520. For example, the first group 520 includes the subframe metadata 150 for each of the subframes of the first subframe sequence 430. In an exemplary embodiment, the subframe metadata 150 may be organized as a metadata text file containing a number of entries 510. Each entry 510 in the metadata text file contains the subframe metadata 150 for a particular subframe. Thus, each entry in the metadata text file contains a subframe identifier identifying the particular subframe associated with that metadata, and references one of the frames of the original video frame sequence.
Fig. 6 is a diagram of exemplary subframe metadata 150 for a particular subframe. Fig. 6 shows the various subframe metadata 150 that may be found in an entry 510 of the metadata text file discussed with reference to Fig. 5. The subframe metadata 150 for each subframe includes general subframe information 600, such as the subframe identifier (SF ID) assigned to the subframe, information associated with the original video frame from which the subframe is taken (OF ID, OF count, playback offset), the subframe position and size (SF location, SF size), and the aspect ratio (SF ratio) of the display on which the subframe is to be presented. In addition, as shown in Fig. 6, the subframe information 150 for a particular subframe may include editing information 355 for use in editing the subframe. The editing information 355 shown in Fig. 6 includes a pan direction and pan rate, a zoom rate, a color adjustment, filter parameters, a supplemental image or video sequence, and other video effects and associated parameters.
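One possible, purely illustrative text serialization of a metadata entry 510 carrying the fields named above is sketched below; the key names and `key=value` format are assumptions, since the description does not fix an actual syntax:

```python
# A hypothetical entry 510 for one subframe, using the field names from
# the description (SF ID, OF ID, position, size, aspect ratio, edits).
entry = {
    "sf_id": "A",
    "of_id": "frame-001",    # identifier of the originating frame
    "position": (240, 180),  # top-left of the subframe within the frame
    "size": (160, 120),
    "sf_ratio": "4:3",       # aspect ratio of the target display
    "edits": {"zoom_rate": 1.0, "pan_direction": None},
}

def to_text(entry):
    """Serialize one metadata entry as key=value lines of a text file."""
    return "\n".join(f"{k}={v}" for k, v in entry.items())

print(to_text(entry))
```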
Fig. 7 is a diagram of an exemplary video processing display 130 providing a graphical user interface (GUI) 710 that contains video editing tools for editing subframes 315. Displayed on the video processing display 130 are a current frame 310 and a subframe 315 of the current frame 310. The subframe 315 contains video data within a region of interest identified by the user, as described with reference to Fig. 3. Once the subframe 315 has been identified, the user may edit it using one or more of the video editing tools made available through the GUI 710. For example, as shown in Fig. 7, the user may apply filters, color correction, overlays, or other editing tools to the subframe 315 by clicking on or otherwise selecting one of the editing tools within the GUI 710. In addition, the GUI 710 may allow the user to move between original frames and/or subframes in order to view and compare the sequence of original frames with the sequence of subframes.
Fig. 8 is a schematic block diagram of an exemplary video processing device 120 for generating multiple sets of subframe metadata. Depending on the number and type of target video displays for which the video processing device 120 is to generate subframe metadata, the processing circuitry 210 of the video processing device 120 may produce one or more sets of subframe metadata 150a, 150b, ..., 150N from the original video data 110, where each set of subframe metadata 150a, 150b, ..., 150N is generated for a particular target video display. For example, in one embodiment, for display on a first target video display, the processing circuitry 210 generates a first set of subframe metadata 150a defining a sequence of subframes. The first set of subframe metadata 150a is used to modify the original video data 110 so as to produce a full-screen presentation of that subframe sequence on the first target video display.
Fig. 9 is a schematic block diagram of an exemplary video processing system 100 for generating multiple sets of subframe metadata 150 for a plurality of target video displays 165. As in Fig. 1, the video processing system 100 includes a video processing device 120, such as a computer or other device capable of processing the video data 110, which executes the subframe metadata generation application 140. The subframe metadata generation application 140 takes the original video data 110 as input and generates subframe metadata 150 that defines a sequence of subframes for use in modifying the original video frame sequence (video data 110), so as to produce a full-screen presentation of the subframes on the target video displays 165 of the video display devices 160.
Fig. 9 shows the following exemplary video display devices: a television 160a, a personal digital assistant (PDA) 160b, a cellular telephone 160c, and a laptop computer 160d. Each video display device 160a-160d is communicatively coupled to a respective video display 165a-165d. In addition, each video display device 160a-160d is communicatively coupled to a respective media player 910a-910d. Each media player 910a-910d contains video player circuitry for processing video content and presenting it on the respective video display 165a-165d. The media player 910 may be included within the video display device 160 or may be communicatively coupled to the video display device 160. For example, the media player 910a associated with the television 160a may be a VCR, a DVD player, or other similar device.
As described above with reference to Fig. 1, the subframe metadata 150 generated by the subframe metadata generation application 140 may include one or more sets of subframe metadata 150a-150d, each set generated for a particular target video display 165a-165d. For example, as shown in Fig. 9, the subframe metadata generation application 140 generates four sets of subframe metadata 150a-150d, each corresponding to a respective video display 165a-165d. Thus, for display on a particular video display (e.g., display 165a), the original video data 110 is modified by the set of subframe metadata 150a generated specifically for that video display 165a.
In operation, each media player 910 is communicatively coupled to receive the original video data 110 containing the original video frame sequence and a set of subframe metadata 150 defining a sequence of subframes. The original video data 110 and the set of subframe metadata 150 may be received by downloading from the Internet or another network, by broadcast, or by uploading from a storage device (e.g., a VHS tape, DVD, or other storage medium) communicatively coupled to the media player 910. The media player 910 uses the subframe metadata 150 to modify the original video frame sequence so as to produce, on the target video display 165, a full-screen presentation corresponding to the subframe sequence. For example, the media player 910a may be communicatively coupled to receive the original video data 110 and the subframe metadata 150a; the media player 910b may be communicatively coupled to receive the original video data 110 and the subframe metadata 150b; the media player 910c may be communicatively coupled to receive the original video data 110 and the subframe metadata 150c; and the media player 910d may be communicatively coupled to receive the original video data 110 and the subframe metadata 150d.
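The player-side modification, cropping the subframe region out of each original frame and scaling it to fill the target display, can be sketched as follows. Frames are modeled here as nested lists of pixel values; a real media player would of course operate on decoded video buffers:

```python
def crop(frame, left, top, width, height):
    """Extract the subframe region from an original frame."""
    return [row[left:left + width] for row in frame[top:top + height]]

def scale_nearest(frame, out_w, out_h):
    """Nearest-neighbor scale of a frame to the target display resolution."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

# An 8x8 "frame" whose pixels record their own (row, col) coordinates
frame = [[(y, x) for x in range(8)] for y in range(8)]
sub = crop(frame, 2, 2, 4, 4)    # 4x4 region of interest per the metadata
full = scale_nearest(sub, 8, 8)  # scaled to fill the "full screen"
print(len(full), len(full[0]))   # -> 8 8
```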
Fig. 10 is a logic diagram of an exemplary process 1000 for generating subframe metadata in accordance with the present invention. The process begins at step 1010, in which original video data containing video content is received from any video source (e.g., a video camera, video disc, or video tape). The original video data contains a sequence of original video frames holding video content in any format. In addition, the received video data may be encoded and compressed using any coding standard, uncompressed but encoded, or uncompressed and unencoded. If the original video data has been compressed and encoded, the video data is decompressed and decoded to produce the original video frame sequence.
The process proceeds to step 1020, in which a first frame of the original video frame sequence is presented to a user. For example, the first frame may be shown on a display viewable by the user. The process then proceeds to decision step 1030, in which a determination is made as to whether a subframe of the first frame has been identified. For example, the user may provide user input identifying a subframe corresponding to a region of interest within the first frame. If a subframe has been identified (the "Yes" branch of step 1030), the process proceeds to step 1040, in which subframe metadata is generated for the identified subframe. For example, the subframe metadata for a particular subframe may include an identifier of the subframe, an identifier of the original video frame from which the subframe is taken, the position and size of the subframe relative to the original video frame, and any editing information for use in editing the subframe. This processing is repeated at step 1050 for each subframe identified within the first frame. Thus, if another subframe is identified within the first frame (the "Yes" branch of step 1050), the process returns to step 1040 to generate subframe metadata for the additional subframe.
If no subframe is identified within the first frame (the "No" branch of step 1030), or if there are no further identified subframes within the first frame (the "No" branch of step 1050), the process proceeds to decision step 1060, in which a determination is made as to whether there are more frames in the original video frame sequence. If there are more original video frames (the "Yes" branch of step 1060), the process proceeds to step 1070, in which the next frame of the original video frame sequence is presented to the user, and the process repeats from step 1030. However, if there are no more original video frames (the "No" branch of step 1060), the process proceeds to step 1080, in which the subframe metadata generated for each identified subframe is stored in a metadata file.
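The overall flow of process 1000 can be condensed into the following sketch, in which `identify_subframes` is a hypothetical callback standing in for the interactive steps 1030-1050, not an element of the patent itself:

```python
def generate_metadata(frames, identify_subframes):
    """Iterate over original frames (steps 1020/1060/1070), collect the
    user-identified subframes of each (steps 1030-1050), and return the
    aggregate metadata to be stored in a file (steps 1040/1080)."""
    metadata = []
    for frame_id, frame in enumerate(frames):
        for region in identify_subframes(frame):
            metadata.append({"of_id": frame_id, **region})
    return metadata

# Stand-in frames and a callback that "identifies" one subframe per frame
fake_frames = ["frame0", "frame1"]
md = generate_metadata(fake_frames,
                       lambda f: [{"sf_id": f + "-sf", "pos": (0, 0)}])
print(len(md))  # -> 2
```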
As one of ordinary skill in the art will appreciate, the terms "operably coupled" and "communicatively coupled," as used herein, include direct coupling and indirect coupling via another component, element, circuit, or module where, for indirect coupling, the intervening component, element, circuit, or module does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As one of ordinary skill in the art will also appreciate, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two elements in the same manner as "operably coupled" and "communicatively coupled."
The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these method steps have been arbitrarily defined for convenience of description. Alternate boundaries and sequences may be defined so long as the specified functions and sequences are appropriately performed. Any such alternate boundaries or sequences are therefore within the scope and spirit of the claimed invention.
The present invention has also been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks and the relationships among them have been arbitrarily defined for convenience of description. Alternate boundaries or relationships may be defined so long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks have been defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and relationships could have been defined otherwise while still performing the certain significant functionality. Such alternate definitions of functional building blocks and flow diagram blocks are therefore within the scope and spirit of the claimed invention.
One of ordinary skill in the art will also recognize that the functional building blocks and other illustrative blocks and components herein may be implemented as discrete components, application-specific integrated circuits, processors executing appropriate software, and any combination thereof.
Moreover, although the invention has been described above in terms of certain embodiments, those skilled in the art will recognize that the invention is not limited to these embodiments, and that various changes and equivalent substitutions may be made to the described features and embodiments without departing from the spirit and scope of the invention. The scope of protection of the invention is limited only by the appended claims of this application.

Claims (10)

1. A video processing circuit for use with a first video display, for processing an original video frame sequence for use by target video player circuitry, the target video player circuitry being communicatively coupled to a second video display, the first video display having a larger viewable area than the second video display, the video processing circuit comprising:
driver circuitry communicatively coupled to the first video display;
processing circuitry that interacts via the driver circuitry to display at least one frame of the original video frame sequence on the first video display;
input interface circuitry that delivers signals representing user input to the processing circuitry;
the processing circuitry, in response to the signals representing the user input, interacting via the driver circuitry to display a subframe on the first video display;
the subframe corresponding to a region, identified by the signals representing the user input, within the at least one frame of the original video frame sequence; and
the processing circuitry generating metadata corresponding to the subframe, the metadata being used by the target video player circuitry to modify the original video frame sequence so as to produce, on the second video display, a full-screen presentation corresponding to the subframe.
2. The video processing circuit of claim 1, wherein the processing circuitry includes in the metadata an association of the subframe with a plurality of frames in the original video frame sequence.
3. The video processing circuit of claim 1, wherein the processing circuitry, in response to additional signals representing user input, interacts via the driver circuitry to display, on the first video display, an additional subframe related to the original video frame sequence;
the processing circuitry generates additional metadata corresponding to the additional subframe, the additional metadata being used by the target video player circuitry to modify the original video frame sequence so as to produce the full-screen presentation on the second video display; and
the metadata and the additional metadata generated by the processing circuitry together define a group of subframes.
4. The video processing circuit of claim 3, wherein at least two subframes of the group of subframes correspond to a single frame of the original video frame sequence.
5. The video processing circuit of claim 3, wherein at least two subframes of the group of subframes contain an object whose spatial position varies over the original video frame sequence.
6. The video processing circuit of claim 3, wherein two subframes of the group of subframes correspond to at least two different frames of the original video frame sequence.
7. A video processing system that receives video data representing an original video frame sequence, the video processing system comprising:
a user interface that receives user input;
the user input containing subframe information defining a first subframe and a second subframe, the first subframe corresponding to a first region of interest within at least a first portion of the original video frame sequence, and the second subframe corresponding to at least a second region of interest within a second portion of the original video frame sequence;
processing circuitry communicatively coupled to receive the user input;
the processing circuitry generating metadata from the subframe information for use in modifying the video data, such that, after the original video frame sequence is modified according to the subframe information, a full-screen presentation of the modified original video frame sequence is produced.
8. The video processing system of claim 7, wherein the first subframe corresponds to a subsequence of the original video data sequence.
9. A method relating to an original video frame sequence, for displaying a modification of the original video frame sequence on a first video display of a plurality of video displays, the plurality of video displays being of differing sizes, the method comprising:
receiving video data containing the original video frame sequence;
displaying at least one frame of the original video frame sequence on the first video display;
receiving user input identifying a subframe corresponding to a region of interest within the at least one frame of the original video frame sequence; and
generating subframe metadata from the user input, the subframe metadata being used to modify the original video frame sequence so as to display the modification of the original video frame sequence.
10. The method of claim 9, further comprising repeating the displaying, receiving, and generating steps for a plurality of subframes.
CN200710104100A 2006-05-22 2007-05-21 Method for processing video frequency, circuit and system Expired - Fee Related CN100587793C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US80242306P 2006-05-22 2006-05-22
US60/802,423 2006-05-22
US11/474,032 2006-06-23

Publications (2)

Publication Number Publication Date
CN101079248A true CN101079248A (en) 2007-11-28
CN100587793C CN100587793C (en) 2010-02-03

Family

ID=38906688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200710104100A Expired - Fee Related CN100587793C (en) 2006-05-22 2007-05-21 Method for processing video frequency, circuit and system

Country Status (1)

Country Link
CN (1) CN100587793C (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262439A (en) * 2010-05-24 2011-11-30 三星电子株式会社 Method and system for recording user interactions with a video sequence
CN106576191A (en) * 2013-09-13 2017-04-19 沃科公司 Video production sharing apparatus and method
CN106576191B (en) * 2013-09-13 2020-07-14 英特尔公司 Video production sharing device and method
US10812781B2 (en) 2013-09-13 2020-10-20 Intel Corporation Video production sharing apparatus and method
CN104123112A (en) * 2014-07-29 2014-10-29 联想(北京)有限公司 Image processing method and electronic equipment
CN104123112B (en) * 2014-07-29 2018-12-14 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN110073662A (en) * 2016-11-17 2019-07-30 英特尔公司 The suggestion viewport of panoramic video indicates
CN110073662B (en) * 2016-11-17 2023-07-18 英特尔公司 Method and device for indicating suggested viewport of panoramic video
US11792378B2 (en) 2016-11-17 2023-10-17 Intel Corporation Suggested viewport indication for panoramic video

Also Published As

Publication number Publication date
CN100587793C (en) 2010-02-03

Similar Documents

Publication Publication Date Title
KR100912599B1 Processing of removable media that stores full frame video &amp; sub-frame metadata
KR100904649B1 (en) Adaptive video processing circuitry and player using sub-frame metadata
KR100906957B1 (en) Adaptive video processing using sub-frame metadata
KR100836667B1 (en) Simultaneous video and sub-frame metadata capture system
KR100909440B1 (en) Sub-frame metadata distribution server
KR100915367B1 (en) Video processing system that generates sub-frame metadata
JP6562992B2 (en) Trick playback in digital video streaming
CN1278550C (en) Method and apparatus for regenerating image and image recording device
AU2006211475A1 (en) Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
CN101043600A (en) Playback apparatus and playback method using the playback apparatus
CN101094407B (en) Video circuit, video system and video processing method
CN101079248A (en) Video processing method, circuit and system
JP6838201B2 (en) Backward compatibility Display management metadata compression
Beach et al. Video compression handbook
CN1236598C (en) Device and method for changeable speed reproduction of movable picture stream
TWI826400B (en) Information processing device, information processing method, recording medium, reproduction device, reproduction method, and program
US20230319347A1 (en) Method, apparatus, and program product for authoring video content using stream embedded indicators
CN1529514A (en) Layering coding and decoding method for video signal
CN1529513A (en) Layering coding and decoding method for video signal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1115218

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1115218

Country of ref document: HK

C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100203

Termination date: 20110521