CN1344106A - Edition method for non-linear edition system based on MPEG-2 code stream - Google Patents

Edition method for non-linear edition system based on MPEG-2 code stream

Info

Publication number
CN1344106A
CN1344106A (application CN00124793A)
Authority
CN
China
Prior art keywords
frame
pes
gop
code stream
pts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 00124793
Other languages
Chinese (zh)
Other versions
CN100353750C (en)
Inventor
高文
罗森林
袁禄军
彭泽山
成华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SUANTONG DIGITAL TECHNOLOGY RESEARCH CENTER Co Ltd BEIJING
Original Assignee
SUANTONG DIGITAL TECHNOLOGY RESEARCH CENTER Co Ltd BEIJING
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SUANTONG DIGITAL TECHNOLOGY RESEARCH CENTER Co Ltd BEIJING filed Critical SUANTONG DIGITAL TECHNOLOGY RESEARCH CENTER Co Ltd BEIJING
Priority to CNB00124793XA priority Critical patent/CN100353750C/en
Publication of CN1344106A publication Critical patent/CN1344106A/en
Application granted granted Critical
Publication of CN100353750C publication Critical patent/CN100353750C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

An editing method for a non-linear editing system based on MPEG-2 code streams is disclosed. The frame-based editing method includes determining the edit point and the position of the corresponding frame, labelling the last image frame before the edit point as F_last and the first image frame after the edit point as F_first, determining the position of the edit point in the elementary audio-video streams and the frame type, performing the edit, and updating the header information of the affected GOP. Its advantages are high editing efficiency and high image quality.

Description

Editing method for a non-linear editing system based on the MPEG-2 code stream
The present invention relates to an editing method, and in particular to an editing method for a non-linear editing system based on MPEG-2 code streams that can operate accurately on each layer of the data stream.
From the composition point of view, the MPEG-2 code stream is layered into pixel, frame, GOP, PES, TS and PS levels. In a non-linear editing system, depending on the required editing accuracy and editing speed, objects at different code-stream levels have to be edited. The structure of the MPEG-2 video data stream is shown in Fig. 1: the video and audio elementary streams are packetized into packetized elementary stream (PES) packets at constant time intervals, each packet carries a PES header, and a presentation time stamp (PTS) and a decoding time stamp (DTS) are added. The elementary stream is the code stream formed directly by compressing and encoding the video or audio; the video code stream is organized into GOPs according to the high-level bitstream syntax of the standard, with a header added at the front of the stream, and the audio code stream likewise carries headers. The structure of a GOP is shown in Fig. 2; a GOP contains I frames, B frames and P frames.
Current editing methods for the layered data stream can be accurate either to the frame or to the GOP. The frame-accurate methods are generally full-decoding methods: the original code stream is fully decoded and the edit is performed on the decoded stream. Because the decoded data are re-encoded, image quality is inevitably degraded; real-time performance and practicality are poor, and a large amount of storage is needed. At the same time this approach imposes a heavy decoding load, so editing efficiency is low. GOP-level editing in the prior art only operates on sequences of whole GOPs: complete GOPs are moved or inserted in the code stream one by one, without any editing that analyses the GOP itself. Moreover, whether for video or for audio, there is at present no editing method based on the PES stream layer.
In addition, the most basic operation in non-linear editing of MPEG-2 streams is cutting and joining material: two clips are cut at given times, the content before or after the cut is removed, and the two clips are then merged into a new clip. An MPEG stream, however, is an organic whole of moving pictures and their accompanying sound; cutting or joining only the picture segments is of little use, and the operation only becomes practical when the accompanying audio is processed together with it. To preserve audio-video synchronization after editing, the choice of the audio editing method is very important, but the prior art contains no synchronized audio-video editing method based on the PES stream layer.
One object of the present invention is to provide, for a non-linear editing system based on MPEG-2 code streams, a frame-level editing method that, by building a frame index file, offers an approximately frame-accurate edit requiring no encoding or decoding, as well as a frame-accurate edit requiring only partial decoding of frames of certain types.
Another object of the present invention is to provide a GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams that uses a GOP index file to edit GOPs directly, improving image quality and editing efficiency.
A further object of the present invention is to provide an editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams which, using video and audio index files, performs video editing efficiently while keeping audio and video synchronized at the PES stream layer.
The objects of the present invention are achieved as follows:
A frame-level editing method for a non-linear editing system based on MPEG-2 code streams comprises the following steps:
1. Locate the edit point and determine the position of the frame corresponding to the edit point;
2. Mark the last image frame of the data before the cut point as F_last, and the first image frame of the data after the cut point as F_first;
3. Determine, by means of the decoding part or an index file, the position of the edit point in the video and audio elementary streams, and determine the frame type;
4. Perform the frame-level editing according to the chosen editing method;
5. Modify the header of the affected GOP to form a new legal GOP.
The editing methods include a frame-accurate editing method and an approximately frame-accurate editing method.
The frame-level editing operations include: cutting the code stream before the marked point and keeping the code stream after it; cutting the code stream after the marked point and keeping the code stream before it; cutting both ends and keeping the middle; or keeping both ends and cutting the middle.
The content of the index file comprises: the frame number of each frame, the start position of the sequence header of the frame, the start position of the GOP header containing the frame, the start position of the picture header of the frame, the coding type of the frame (I, B or P), and a flag bit.
The flag bit indicates whether the frame is cut out, re-encoded, or left unchanged.
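As an illustration only, one record of the frame index file described above might be represented as follows; the class and field names are assumptions made for this sketch, not the patent's actual file layout.

from dataclasses import dataclass

@dataclass
class FrameIndexEntry:
    frame_number: int          # number of the frame in the stream
    sequence_header_pos: int   # byte offset of the sequence header for this frame
    gop_header_pos: int        # byte offset of the GOP header containing this frame
    picture_start_pos: int     # byte offset of the picture start code of this frame
    coding_type: str           # 'I', 'B' or 'P'
    flag: str                  # 'cut', 'recoded' or 'unchanged'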
The frame-accurate editing method that cuts the code stream before the marked point and keeps the code stream after it comprises:
1. If F_first is an I frame, no frame needs to be rearranged; directly remove all frames before F_first, then modify the relevant information of the GOP containing F_first to form a new legal GOP;
2. If F_first is a P frame, decode F_first and re-encode it as an I frame, then remove all frames before F_first and modify the relevant information of the re-encoded GOP containing F_first to form a new legal GOP;
3. If F_first is a B frame, the GOP containing F_first must be decoded up to F_first; F_first is then encoded as an I frame, and the relevant information of the re-encoded GOP containing F_first is modified to form a new legal GOP.
The frame-accurate editing method that cuts the code stream after the marked point and keeps the code stream before it comprises:
1. If F_last is an I frame, no frame needs to be rearranged; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP;
2. If F_last is a P frame, no frame needs to be rearranged; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP;
3. If F_last is a B frame, the GOP containing F_last must be decoded up to F_last; F_last is then encoded as a P frame, and the relevant information of the re-encoded GOP containing F_last is modified to form a new legal GOP.
The approximately frame-accurate editing method that cuts the code stream before the marked point and keeps the code stream after it comprises:
1. If F_first is an I frame, no encoding or decoding is needed; directly remove all frames before F_first, then modify the relevant information of the GOP containing F_first to form a new legal GOP;
2. If F_first is a P frame, decode F_first and re-encode it as an I frame, then remove all frames before F_first and modify the relevant information of the re-encoded GOP containing F_first to form a new legal GOP;
3. If F_first is a B frame, locate the nearest I or P frame before or after F_first within its GOP (in general there is an I or P frame every one or two frames). If the nearest frame is an I frame, remove all frames before that I frame and modify the header of the GOP containing F_first to form a new legal GOP. If the nearest frame is a P frame, decode it and re-encode it as an I frame, remove the data before the new I frame, and modify the new GOP header to form a new legal GOP.
The approximately frame-accurate editing method that cuts the code stream after the marked point and keeps the code stream before it comprises:
1. If F_last is an I frame, no encoding or decoding is needed; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP;
2. If F_last is a P frame, no encoding or decoding is needed; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP;
3. If F_last is a B frame, locate the nearest I or P frame before or after F_last within its GOP (in general there is an I or P frame every one or two frames). If the nearest frame is an I frame, directly remove all data after that I frame and modify the GOP header to form a new legal GOP. If the nearest frame is a P frame, only the data after that P frame need be removed; modify the new GOP header to form a new legal GOP.
A GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams comprises the following steps:
1. Build the GOP index file;
2. Determine the position of each GOP in the code stream from the index file;
3. Determine the position, within its GOP, of the frame containing the edit point;
4. Decide, according to how many frames lie before and after the edit point, whether to discard or keep the leading or trailing segment of this GOP;
5. Modify the corresponding fields in the header structure of the adjacent GOP.
If the code stream before the edit point is cut and the segment after it is kept, the corresponding fields in the header structure of the first GOP after this GOP are modified; if the code stream after the edit point is cut and the segment before it is kept, the corresponding fields in the header structure of the first GOP before this GOP are modified.
The GOP index file comprises: the sequence number of the GOP in the code stream, the start position of the video sequence header of the GOP in the code stream, the start position of the GOP in the code stream, and a flag bit.
The flag bit indicates whether the GOP is cut out or left unchanged.
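A minimal sketch of the GOP index record and of the keep-or-discard decision of step 4; the names and the half-GOP threshold are illustrative assumptions, since the text only states that the decision depends on how many frames lie before and after the edit point.

from dataclasses import dataclass

@dataclass
class GopIndexEntry:
    gop_number: int            # sequence number of the GOP in the code stream
    sequence_header_pos: int   # byte offset of the video sequence header
    gop_start_pos: int         # byte offset of the GOP header
    flag: str                  # 'cut' or 'unchanged'

def edit_point_goes_with_leading_segment(frames_before: int, frames_in_gop: int) -> bool:
    # Assumed rule: attach the whole GOP to the side that holds most of its frames.
    return frames_before >= frames_in_gop - frames_before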
An editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams comprises the following steps:
1. Build the index file;
2. Determine from the index file the positions of the leading PES packets and the trailing PES packets of materials 1 and 2;
3. For the front PES packet, compute the PES packet length and rewrite the PES packet length field of the damaged PES header;
4. For the rear PES packet, compute the PES packet length, compute the PTS and DTS values, rewrite the PES packet length field of the damaged PES header, and modify the PTS and DTS values of all packets in the kept material.
The index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
The flag bit indicates whether the packet is cut out or left unchanged.
The DTS values are modified using the following formulas:
DTS2(0)=DTS1(end)+C1×N1
DTS2(1)=DTS2(0)+C2×N2
DTS2(n)=DTS2(n-1)+[DTS2(1)-DTS2(0)]
Wherein: C1 is the decoding time per frame of material 1 (a constant); N1 is the number of frames in the last PES packet of material 1; C2 is the decoding time per frame of material 2 (a constant); N2 is the number of frames in the first PES packet of material 2.
The PTS values are modified using the following formulas:
PTS2(0)=PTS1(end)+C1×N1
PTS2(1)=PTS2(0)+C2×N2
PTS2(n)=PTS2(n-1)+[PTS2(1)-PTS2(0)]
An editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams, given two PES streams PES1 and PES2, comprises the following steps: build the index files; determine the positions of the PES packets from the index files and mark them; obtain from the video PES the video PTS values at the required cut points, i.e. the video PTS values PTS_V1 and PTS_V2 of the cut points of the two PES streams; keep the segment of the leading stream PES1 before PTS_V1 and the segment of the trailing stream PES2 after PTS_V2; find in each PES stream the audio frame whose PTS is closest to the PTS of the video frame at the given edit point; and merge and link the two audio PES streams into a new audio PES stream that remains synchronized with the edited video PES stream.
The index files comprise a video index file and an audio index file.
The video index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
The audio index file comprises: the coding layer of the PES packet, the sample frequency of the PES code stream, the start position of the audio sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
The step of finding the closest audio frame comprises: first, from the condition PTS_P1 ≤ PTS_V1 < PTS_P2, find the packet (Packet) containing the cut point and locate its first audio frame; then compute the PTS increment of each frame as ΔPTS_A = f × sample_count / sample_frequency until PTS_A + ΔPTS_A ≥ PTS_V1; determine PTS_A3 and PTS_A4 of the audio stream PES2 in the same way. Wherein: PTS_V1, PTS_V2 are the video PTS values of the cut points of the leading and trailing PES streams respectively;
PTS_P1, PTS_P3: the PTS values of the packets (Packets) closest to the cut points in the leading and trailing PES streams respectively;
PTS_P2, PTS_P4: the PTS values of the packets following those closest packets in the leading and trailing PES streams respectively;
PTS_A1, PTS_A3: the PTS values of the audio frames closest to the cut points in the leading and trailing PES streams respectively; PTS_A2, PTS_A4: the PTS values of the frames following those closest frames.
The linking modes include: keeping both audio frames at the edit point of the two audio streams.
The linking modes include: keeping the frame of audio stream 1 and discarding the frame of audio stream 2.
The linking modes include: discarding the frame of audio stream 1 and keeping the frame of audio stream 2.
The linking modes include: discarding both audio frames at the edit point of the two audio streams.
Modifying the relevant parameters comprises: processing the last kept PES packet, processing the first kept PES packet, and processing the subsequent PES packets.
Processing the last kept PES packet comprises: locating the audio cut point of sequence 1 by the sync word of the audio frame (0xFFF); removing the audio data after the cut point in the sequence; if the cut point lies before the first frame of this packet, merging this packet with the previous packet of the stream; modifying the length field in the packet header so that it equals the length of the data remaining in this packet; and computing ESCR1 and the audio ES rate R1_ES: ESCR1 = ESCR_base × 300 + ESCR_ext,
where R1_ES = ES_rate × 50.
Processing the first kept PES packet comprises: locating the audio cut point of sequence 2 by the sync word of the audio frame (0xFFF); removing the audio data before the cut point in sequence 2; if the cut point lies in the last frame of this packet, merging the remainder into the next packet, otherwise packing the remaining data into a new packet and modifying the packet_length, PTS and ESCR fields (if any of these fields does not exist, adding it and setting the corresponding flags); the ESCR is modified as:
ESCR_new = ESCR1 + L1 × f ÷ R1_ES
f = 27 MHz
where L1 is the number of bytes from the ESCR1 field to this ESCR field.
Processing the subsequent PES packets comprises: recording the changes of PTS and ESCR relative to the original data:
ΔPTS = PTS_new - PTS_old
ΔESCR = ESCR_new - ESCR_old
and correcting the corresponding values of all later frames with ΔPTS and ΔESCR, until the end.
The method may further add an estimated overlap (aliasing) time to the corrected PTS of the trailing material.
From the analysis of the above technical scheme it can be seen that, as far as the frame-accurate editing method is concerned, the method of the present invention reduces the amount of encoding and decoding while keeping the editing precision high. The frame-accurate method introduces no error, that is, no frame is dropped or kept in excess; the approximately frame-accurate method does introduce an error, because whole frames may be discarded or kept, but the error amounts to at most the content of two frames, while editing efficiency is greatly improved. For GOPs, the GOP-accurate method is simple and fast and can edit the GOP itself. Another main advantage of the present invention is the PES-accurate editing method: it can operate directly on the PES stream layer, can also edit at the ES stream layer where the code stream structure should be changed as little as possible, and achieves synchronized editing of video and audio based on the PES stream layer.
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of the MPEG-2 code stream structure;
Fig. 2 is a schematic diagram of the GOP frame sequence structure;
Fig. 3 is a schematic diagram of the frame-level editing method of the present invention;
Fig. 4 shows the parameter setting of the video PES stream layer editing method of the present invention;
Fig. 5 shows the parameter setting of the audio PES stream editing method of the present invention;
Fig. 6 is a schematic diagram of the four linking modes after audio editing according to the present invention.
Referring to Fig. 3, which illustrates the frame-level editing of the present invention: the last image frame of the data before the cut point is marked F_last, and the first image frame of the data after the cut point is marked F_first. The cutting modes can be divided into two kinds, cutting the front end and keeping the rear end, and cutting the rear end and keeping the front end; keeping the middle while cutting both ends, or keeping both ends while cutting the middle, are easily derived from these two methods.
The steps of the frame-accurate editing method of the present invention are: (1) locate the edit point and determine the position of the frame corresponding to the edit point; (2) determine, by means of the decoding part or an index file, the position of the edit point in the video and audio elementary streams, and determine the frame type; (3) have the decoding part quickly locate the I frame of the GOP and decode the image data as required; (4) have the encoding part convert the first frame after the edit point into an I frame or a P frame, forming a complete GOP; (5) modify the header of the affected GOP to form a new legal GOP.
With this method, when the data before the cut point are to be cut, i.e. F_first and the data after it are to be kept:
a. If F_first is an I frame, no frame needs to be rearranged; directly remove all frames before F_first, then modify the relevant information of the GOP containing F_first to form a new legal GOP.
b. If F_first is a P frame, decode F_first and re-encode it as an I frame, then remove all frames before F_first and modify the relevant information of the re-encoded GOP containing F_first to form a new legal GOP.
c. If F_first is a B frame, the GOP containing F_first must be decoded up to F_first; F_first is then encoded as an I frame, and the relevant information of the re-encoded GOP containing F_first is modified to form a new legal GOP.
When the data after the cut point are to be cut, i.e. F_last and the earlier data are to be kept:
a. If F_last is an I frame, no frame needs to be rearranged; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP.
b. If F_last is a P frame, no frame needs to be rearranged; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP.
c. If F_last is a B frame, the GOP containing F_last must be decoded up to F_last; F_last is then encoded as a P frame, and the relevant information of the re-encoded GOP containing F_last is modified to form a new legal GOP.
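A minimal sketch of the case analysis above, returning the sequence of operations rather than performing them, since the actual decoding and re-encoding lie outside the scope of this illustration; the function names and operation labels are assumptions.

def plan_keep_tail(f_first_type: str) -> list:
    # Cut the data before the edit point and keep F_first and everything after it.
    if f_first_type == 'I':
        return ['drop frames before F_first', 'rewrite GOP header']
    if f_first_type == 'P':
        return ['decode F_first', 're-encode F_first as I frame',
                'drop frames before F_first', 'rewrite GOP header']
    # B frame: the GOP must be decoded up to F_first before it can become an I frame.
    return ['decode GOP up to F_first', 're-encode F_first as I frame',
            'drop frames before F_first', 'rewrite GOP header']

def plan_keep_head(f_last_type: str) -> list:
    # Cut the data after the edit point and keep F_last and everything before it.
    if f_last_type in ('I', 'P'):
        return ['drop frames after F_last', 'rewrite GOP header']
    # B frame: re-encode it as a P frame so it no longer references a dropped future frame.
    return ['decode GOP up to F_last', 're-encode F_last as P frame',
            'drop frames after F_last', 'rewrite GOP header']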
The cases of keeping the middle while cutting both ends, or keeping both ends while cutting the middle, are easily derived from the two methods above.
The approximately frame-accurate editing method of the present invention: (1) locate the edit point and determine the position of the frame corresponding to the edit point; (2) determine from the index file the position of the edit point in the video and audio elementary streams, and determine the frame type; (3) perform the approximately frame-accurate editing according to the chosen method; (4) modify the header of the affected GOP to form a new legal GOP. The approximately frame-accurate method likewise has to locate the frame, but performs no encoding or decoding operation; it keeps the editing error as small as possible (at most two frames) while greatly improving editing efficiency.
With this method, when the data before the cut point are to be cut, i.e. F_first and the data after it are to be kept:
a. If F_first is an I frame, no encoding or decoding is needed; directly remove all frames before F_first, then modify the relevant information of the GOP containing F_first to form a new legal GOP.
b. If F_first is a P frame, decode F_first and re-encode it as an I frame, then remove all frames before F_first and modify the relevant information of the re-encoded GOP containing F_first to form a new legal GOP.
c. If F_first is a B frame, locate the nearest I or P frame before or after F_first within its GOP (in general there is an I or P frame every one or two frames). If the nearest frame is an I frame, remove all frames before that I frame and modify the header of the GOP containing F_first to form a new legal GOP. If the nearest frame is a P frame, decode it and re-encode it as an I frame, remove the data before the new I frame, and modify the new GOP header to form a new legal GOP.
When the data after the cut point are to be cut, i.e. F_last and the earlier data are to be kept:
a. If F_last is an I frame, no encoding or decoding is needed; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP.
b. If F_last is a P frame, no encoding or decoding is needed; directly remove all frames after F_last, then modify the relevant information of the GOP containing F_last to form a new legal GOP.
c. If F_last is a B frame, locate the nearest I or P frame before or after F_last within its GOP (in general there is an I or P frame every one or two frames). If the nearest frame is an I frame, directly remove all data after that I frame and modify the GOP header to form a new legal GOP. If the nearest frame is a P frame, only the data after that P frame need be removed; modify the new GOP header to form a new legal GOP.
The cases of keeping the middle while cutting both ends, or keeping both ends while cutting the middle, are easily derived from the two methods above.
A GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams according to the present invention:
a. Build the GOP index file. The index file comprises: the sequence number of the GOP in the code stream, the start position of the video sequence header of the GOP in the code stream, the start position of the GOP in the code stream, and a flag bit; the flag bit indicates whether the GOP is cut out or left unchanged. b. Determine the position of each GOP in the code stream from the index file. c. Determine the position, within its GOP, of the frame containing the edit point. d. Decide, according to how many frames lie before and after the edit point, whether to discard or keep the leading or trailing segment of this GOP. e. Modify the corresponding fields in the header structure of the adjacent GOP.
If the code stream before the edit point is cut and the segment after it is kept, the corresponding fields in the header structure of the first GOP after this GOP are modified; if the code stream after the edit point is cut and the segment before it is kept, the corresponding fields in the header structure of the first GOP before this GOP are modified. The GOP index file comprises: the sequence number of the GOP in the code stream, the start position of the video sequence header of the GOP in the code stream, the start position of the GOP in the code stream, and a flag bit; the flag bit indicates whether the GOP is cut out or left unchanged.
The GOP-accurate video editing method of the present invention needs no encoding or decoding at all: it only has to determine the position, within its GOP, of the frame containing the edit point and then, according to how many frames lie before and after the edit point, decide whether to discard or keep this GOP.
For the non-linear editing method, the concrete operations are performed on the elementary stream, and the parameters of part of the syntactic structure of the elementary stream have to be changed. Starting from the edit point, for the leading material only the value of the PES packet length needs to be changed; the syntactic parameters of the PS and TS stream layers need not be altered beyond this change to part of the elementary stream syntax. For the material after the joining point, as can be seen from the syntactic structure above, the PTS and DTS values and the PES packet length need to be changed. The PES packet length can be obtained by operating on the basic data, and the PTS and DTS values can be determined from the last values of the leading material.
An editing method according to the present invention for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams comprises the following steps:
a. Build the index file;
b. Determine from the index file the positions of the leading PES packets and the trailing PES packets of materials 1 and 2;
c. For the front PES packet, compute the PES packet length and rewrite the PES packet length field of the damaged PES header (a sketch of this field rewrite is given below);
d. For the rear PES packet, compute the PES packet length, compute the PTS and DTS values, rewrite the PES packet length field of the damaged PES header, and modify the PTS and DTS values of all packets in the kept material.
The index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit; the flag bit indicates whether the packet is cut out or left unchanged.
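Rewriting the PES packet length after a cut can be sketched as below. This relies only on the standard MPEG-2 PES packet layout (3-byte start-code prefix, 1-byte stream_id, 2-byte PES_packet_length counting the bytes that follow the length field); the function name is an assumption, and oversized video packets that use a zero length field are not handled.

def patch_pes_packet_length(pes_packet: bytearray) -> None:
    # The packet must still begin with the 0x000001 start-code prefix.
    assert pes_packet[0:3] == b'\x00\x00\x01', 'not a PES packet'
    new_length = len(pes_packet) - 6           # bytes remaining after the length field
    assert new_length <= 0xFFFF, 'length does not fit the 16-bit field'
    pes_packet[4] = (new_length >> 8) & 0xFF   # high byte of PES_packet_length
    pes_packet[5] = new_length & 0xFF          # low byte of PES_packet_length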
For a fixed frame rate (which can be obtained from the sequence header), the PTS advances by one frame display period per frame; for PAL this is 40 ms per frame.
Method for modifying the DTS of the trailing material:
DTS2(0)=DTS1(end)+C1×N1
DTS2(1)=DTS2(0)+C2×N2
DTS2(n)=DTS2(n-1)+[DTS2(1)-DTS2(0)]
Wherein: C1 is the decoding time per frame of material 1 (a constant); N1 is the number of frames in the last PES packet of material 1; C2 is the decoding time per frame of material 2 (a constant); N2 is the number of frames in the first PES packet of material 2.
Method for modifying the PTS of the trailing material:
Since the first frame of the PES packet of material 2 after the edit point is set to be an I frame:
PTS2(0)=DTS2(0). As the PTS does not necessarily equal the DTS exactly, there may be a difference, so the PTS of material 2 can be corrected according to the difference between the PTS and the DTS of material 1. Alternatively it can be modified as: PTS2(0)=PTS1(end)+C1×N1
PTS2(1)=PTS2(0)+C2×N2
PTS2(n)=PTS2(n-1)+[PTS2(1)-PTS2(0)]
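The recurrences above can be turned into a small restamping routine; a sketch under the assumption that C1 and C2 are expressed in 90 kHz PTS/DTS ticks (3600 ticks = 40 ms per frame for 25 fps PAL) and that the same recurrence is applied to the PTS values starting from PTS2(0).

def restamp_tail_dts(dts1_end: int, c1: int, n1: int,
                     c2: int, n2: int, packet_count: int) -> list:
    # DTS2(0) = DTS1(end) + C1*N1; DTS2(1) = DTS2(0) + C2*N2;
    # thereafter the step DTS2(1) - DTS2(0) = C2*N2 stays constant.
    dts = [dts1_end + c1 * n1]
    step = c2 * n2
    for _ in range(1, packet_count):
        dts.append(dts[-1] + step)
    return dts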
For a segment of MPEG video-audio stream, people usually choose a cut time according to the pictures they see and rarely cut according to the sound; the video edit is therefore taken as the reference and the audio data are cut accordingly. Because the periods of video frames and audio frames are different and have no integer relationship (for example, ordinary PAL video is 40 milliseconds per frame while 44.1 kHz Layer II audio is about 26 milliseconds per frame), the correspondence between video and audio is never exact; when cutting, only the audio frame nearest to the video frame can be found, and the errors after the two clips are merged must then be considered together so that the distortion of the processed material is minimized. The PES packet header carries the PTS field, the basic time-stamp information for audio-video synchronization in an MPEG stream, with which precisely synchronized playback of the video and audio streams can be maintained. At the same time, PES is the bridge for conversion between the system PS stream and the TS stream, so PS or TS streams can be produced from it easily and at will. The ES layer of the audio, by contrast, has lost the time-stamp information of the stream and therefore the original synchronization information. Even if synchronization were based on the corresponding playback times of video and audio, the video and audio generally do not start playing at exactly the same moment; at the ES layer this initial time difference is lost, and it accumulates in the stream after cutting, increasing the error. If the output is finally in PS or TS form, the data would have to be packed into PES again and then into PS or TS streams, duplicating work. As for PS and TS streams, in which the video and audio data are multiplexed together, editing on such streams is of rather high complexity, mainly because video and audio are heavily interleaved in PS and TS streams: a video picture and the audio data played with it are usually a considerable distance apart, which makes the data processing and re-packing at the video and audio cut points highly complex. For these reasons the present invention edits the audio stream on the audio PES stream. It first builds index files, comprising a video index file and an audio index file. The video index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit. The audio index file comprises: the coding layer of the PES packet, the sample frequency of the PES code stream, the start position of the audio sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit. The positions of the PES packets are determined from the index files and marked. The video PTS values at the required cut points are obtained from the video PES, i.e. the video PTS values PTS_V1 and PTS_V2 of the cut points of the two PES streams; the segment of the leading stream PES1 before PTS_V1 and the segment of the trailing stream PES2 after PTS_V2 are kept; in each PES stream the audio frame whose PTS is closest to the PTS of the video frame at the given edit point is found; and the two audio PES streams are merged and linked into a new audio PES stream that remains synchronized with the edited video PES stream.
The parameters of the PES-layer audio synchronization editing method are set as shown in Figs. 4 and 5:
PTS_V1, PTS_V2: the video PTS values of the cut points of the leading and trailing PES streams respectively;
PTS_P1, PTS_P3: the PTS values of the packets closest to the cut points in the leading and trailing PES streams respectively;
PTS_P2, PTS_P4: the PTS values of the packets following those closest packets in the leading and trailing PES streams respectively;
PTS_A1, PTS_A3: the PTS values of the audio frames closest to the cut points in the leading and trailing PES streams respectively;
PTS_A2, PTS_A4: the PTS values of the frames following those closest frames in the leading and trailing PES streams respectively.
That is, given two audio PES streams PES1 and PES2 and the video PTS values of the required cut points, PTS_V1 and PTS_V2 (taken from the video PES), the part of stream PES1 before PTS_V1 and the part of stream PES2 after PTS_V2 are kept, and the two audio PES streams are merged into a new audio PES stream that remains synchronized with the edited video PES stream.
For the given cut points and the two clips to be edited, the corresponding audio frames can be located in the audio PES streams from the PTS values of the cut points. The focus of the edit then lies in the handling of these two audio frames in the two streams: the possible processing modes are weighed against each other and the best one is chosen. Because this handling involves keeping or discarding data in the streams, the length field, the time-stamp fields and so on in the packet headers must also be modified so that a new legal PES stream is formed.
Finding in each PES stream the audio frame closest to the PTS of the given video edit point is illustrated below, taking PES stream 1 as an example.
(1) Find the closest packet from the PTS value of the cut point. For each audio packet, read the PTS value in its packet header, PTS_P1, and the PTS value of the adjacent next packet, PTS_P2, and compare them with PTS_V1 until a pair PTS_P1, PTS_P2 is found for which:
PTS_P1 ≤ PTS_V1 < PTS_P2
(2) Find the closest audio frame within the packet located in (1). First analyse the structure of this packet and find its first audio frame, then compute the PTS increment of that frame:
ΔPTS_A = f × sample_count / sample_frequency
where (all of the following can be read from the audio frame header):
f = 90 kHz;
sample_count: 1152 (Layer II, Layer III) or 384 (Layer I);
sample_frequency: the audio sample rate.
Let PTS_A denote the PTS value of the audio frame currently being analysed and N its sequence number within the packet; at the start, obviously:
PTS_A = PTS_P1; N = 0.
If PTS_A + ΔPTS_A < PTS_V1, continue with the next audio frame, compute its ΔPTS_A and update the values of PTS_A and N, until the following holds:
PTS_A + ΔPTS_A ≥ PTS_V1
At this point the audio frame satisfying the condition has been found, and:
PTS_A1 = PTS_A; PTS_A2 = PTS_A + ΔPTS_A.
PTS_A3 and PTS_A4 of the audio stream PES2 are determined in the same way.
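A sketch of the search just described; packet parsing is assumed to be done elsewhere, so the function takes the packet PTS values and per-packet frame descriptions as plain Python data.

def find_cut_audio_frame(packet_pts, packet_frames, pts_v1):
    # packet_pts[i]: PTS of audio packet i; packet_frames[i]: list of
    # (sample_count, sample_frequency) pairs for the frames in packet i.
    f = 90000                                   # 90 kHz PTS clock
    i = 0                                       # step (1): PTS_P1 <= PTS_V1 < PTS_P2
    while i + 1 < len(packet_pts) and not (packet_pts[i] <= pts_v1 < packet_pts[i + 1]):
        i += 1
    pts_a = packet_pts[i]                       # step (2): walk the frames of that packet
    for sample_count, sample_frequency in packet_frames[i]:
        delta = f * sample_count // sample_frequency
        if pts_a + delta >= pts_v1:
            return pts_a, pts_a + delta         # PTS_A1, PTS_A2
        pts_a += delta
    return pts_a, pts_a                         # cut point past the last frame of the packet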
Selecting the best linking mode:
Since a PTS interval is proportional to the playing time, the playing time of each region can be measured by the PTS span it occupies:
PlayTimeA ≈ PTS1 - PTS_A1; PlayTimeB ≈ PTS_A2 - PTS1
PlayTimeC ≈ PTS2 - PTS_A3; PlayTimeD ≈ PTS_A4 - PTS2
Considering the possible choices from the editing point of view, the two audio code streams can be linked in four modes, as shown in Fig. 6. The errors of the linking modes are, respectively: (1) mode one: keep both audio frames at the edit point of the two streams, error E = B + C; (2) mode two: keep the frame of audio stream 1 and discard the frame of audio stream 2, error E = |B - D| (absolute value); (3) mode three: discard the frame of audio stream 1 and keep the frame of audio stream 2, error E = |C - A| (absolute value); (4) mode four: discard both audio frames at the edit point of the two streams, error E = A + D.
The method of the invention links the two code streams using whichever of these linking modes has the smallest error.
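A sketch of that selection, taking PTS1 and PTS2 in the PlayTime expressions above to be the video cut values PTS_V1 and PTS_V2 (an assumption based on the parameter definitions given earlier).

def choose_link_mode(pts_v1, pts_a1, pts_a2, pts_v2, pts_a3, pts_a4):
    a = pts_v1 - pts_a1        # PlayTimeA
    b = pts_a2 - pts_v1        # PlayTimeB
    c = pts_v2 - pts_a3        # PlayTimeC
    d = pts_a4 - pts_v2        # PlayTimeD
    errors = {1: b + c,        # keep both boundary frames
              2: abs(b - d),   # keep stream 1's frame, drop stream 2's
              3: abs(c - a),   # drop stream 1's frame, keep stream 2's
              4: a + d}        # drop both boundary frames
    mode = min(errors, key=errors.get)
    return mode, errors[mode]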
Modifying the relevant parameters:
Since the packet header of the audio PES stream carries time-stamp information such as PTS and ESCR, and the time bases of the two sequences may be inconsistent, the time of the trailing sequence must be corrected according to the time of the leading sequence so that the time stamps of the merged sequence remain continuous. In addition, parameters such as the layer, the bit rate or the sample frequency of the two audio sequences may differ, so each of the two incomplete PES packets at the junction is packed into a complete PES packet.
(1) Processing of the last PES packet to be kept: locate the audio cut point of sequence 1 by the sync word of the audio frame (0xFFF) and remove the audio data after the cut point in sequence 1. If the cut point lies before the first frame of this packet, so that the packet no longer contains any audio frame, merge the packet with the previous packet of the stream. Modify the length field in the packet header so that it equals the length of the data remaining in this packet. Compute ESCR1 and the audio ES rate R1_ES for use in step (2):
ESCR1 = ESCR_base × 300 + ESCR_ext
R1_ES = ES_rate × 50
(2) Processing of the first PES packet to be kept: locate the audio cut point of sequence 2 by the sync word of the audio frame (0xFFF) and remove the audio data before the cut point in sequence 2. If the cut point lies in the last frame of this packet, merge the remainder into the next packet; otherwise pack the remaining data into a new packet and modify the packet_length, PTS and ESCR fields (if any of these fields does not exist, add it and set the corresponding flags). Depending on the chosen mode, PTS_new here may equal PTS_A1 or PTS_A2 of the figures. The ESCR is modified as follows:
ESCR_new = ESCR1 + L1 × f ÷ R1_ES
f = 27 MHz
where ESCR1 and R1_ES are as computed in (1) (see the standard), and L1 is the number of bytes from the ESCR1 field to this ESCR field.
(3) Processing of the subsequent PES packets: continue to compute PTS_new and ESCR_new by the method shown in (2), except that the data in the formulas are all replaced by the values of the first kept packet of sequence 2. Record the changes of PTS and ESCR relative to the original data:
ΔPTS = PTS_new - PTS_old
ΔESCR = ESCR_new - ESCR_old
and correct the corresponding values of all later frames with ΔPTS and ΔESCR, until the end.
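The time-stamp corrections of steps (1) to (3) can be summarized as below; a sketch that assumes the ESCR and PTS fields have already been extracted from the packet headers, with integer arithmetic standing in for the exact rounding of a real implementation.

def splice_timestamp_corrections(escr1_base, escr1_ext, es_rate, l1_bytes,
                                 pts_old, pts_new, escr_old):
    f = 27_000_000                              # ESCR runs on the 27 MHz system clock
    escr1 = escr1_base * 300 + escr1_ext        # ESCR1 from the last kept packet of clip 1
    r1_es = es_rate * 50                        # R1_ES, audio ES rate in bytes per second
    escr_new = escr1 + l1_bytes * f // r1_es    # ESCR_new = ESCR1 + L1 x f / R1_ES
    delta_pts = pts_new - pts_old               # dPTS, applied to every later packet of clip 2
    delta_escr = escr_new - escr_old            # dESCR, applied likewise
    return escr_new, delta_pts, delta_escr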
To reduce the error further, and to keep the accumulated error as small as possible, an estimated overlap (aliasing) time can be added to the corrected PTS of the trailing material; if the estimate is fairly accurate, the error can be reduced to zero.

Claims (32)

1. A frame-level editing method for a non-linear editing system based on MPEG-2 code streams, characterized in that it comprises the following steps:
a. locating the edit point and determining the position of the frame corresponding to the edit point;
b. marking the last image frame of the data before the cut point as F_last, and the first image frame of the data after the cut point as F_first;
c. determining, by means of the decoding part or an index file, the position of the edit point in the video and audio elementary streams, and determining the frame type;
d. performing the frame-level editing according to the chosen editing method;
e. modifying the header of the affected GOP to form a new legal GOP.
2. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 1, characterized in that the editing methods comprise a frame-accurate editing method and an approximately frame-accurate editing method.
3. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 1, characterized in that the frame-level editing operations comprise: cutting the code stream before the marked point and keeping the code stream after it, or cutting the code stream after the marked point and keeping the code stream before it, or cutting both ends and keeping the middle, or keeping both ends and cutting the middle.
4. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 1, characterized in that the content of the index file comprises: the frame number of each frame, the start position of the sequence header of the frame, the start position of the GOP header containing the frame, the start position of the picture header of the frame, the coding type of the frame (I, B or P), and a flag bit.
5. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 4, characterized in that the flag bit indicates whether the frame is cut out, re-encoded, or left unchanged.
6. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 2 or 3, characterized in that the frame-accurate editing method that cuts the code stream before the marked point and keeps the code stream after it comprises:
a. if F_first is an I frame, no frame needs to be rearranged; directly removing all frames before F_first, then modifying the relevant information of the GOP containing F_first to form a new legal GOP;
b. if F_first is a P frame, decoding F_first and re-encoding it as an I frame, then removing all frames before F_first and modifying the relevant information of the re-encoded GOP containing F_first to form a new legal GOP;
c. if F_first is a B frame, decoding the GOP containing F_first up to F_first, then encoding F_first as an I frame and modifying the relevant information of the re-encoded GOP containing F_first to form a new legal GOP.
7. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 2 or 3, characterized in that the frame-accurate editing method that cuts the code stream after the marked point and keeps the code stream before it comprises:
a. if F_last is an I frame, no frame needs to be rearranged; directly removing all frames after F_last, then modifying the relevant information of the GOP containing F_last to form a new legal GOP;
b. if F_last is a P frame, no frame needs to be rearranged; directly removing all frames after F_last, then modifying the relevant information of the GOP containing F_last to form a new legal GOP;
c. if F_last is a B frame, decoding the GOP containing F_last up to F_last, then encoding F_last as a P frame and modifying the relevant information of the re-encoded GOP containing F_last to form a new legal GOP.
8. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 2 or 3, characterized in that the approximately frame-accurate editing method that cuts the code stream before the marked point and keeps the code stream after it comprises:
a. if F_first is an I frame, no encoding or decoding is needed; directly removing all frames before F_first, then modifying the relevant information of the GOP containing F_first to form a new legal GOP;
b. if F_first is a P frame, decoding F_first and re-encoding it as an I frame, then removing all frames before F_first and modifying the relevant information of the re-encoded GOP containing F_first to form a new legal GOP;
c. if F_first is a B frame, locating the nearest I or P frame before or after F_first within its GOP (in general there is an I or P frame every one or two frames); if the nearest frame is an I frame, removing all frames before that I frame and modifying the header of the GOP containing F_first to form a new legal GOP; if the nearest frame is a P frame, decoding it and re-encoding it as an I frame, removing the data before the new I frame, and modifying the new GOP header to form a new legal GOP.
9. The frame-level editing method for a non-linear editing system based on MPEG-2 code streams according to claim 2 or 3, characterized in that the approximately frame-accurate editing method that cuts the code stream after the marked point and keeps the code stream before it comprises:
a. if F_last is an I frame, no encoding or decoding is needed; directly removing all frames after F_last, then modifying the relevant information of the GOP containing F_last to form a new legal GOP;
b. if F_last is a P frame, no encoding or decoding is needed; directly removing all frames after F_last, then modifying the relevant information of the GOP containing F_last to form a new legal GOP;
c. if F_last is a B frame, locating the nearest I or P frame before or after F_last within its GOP (in general there is an I or P frame every one or two frames); if the nearest frame is an I frame, directly removing all data after that I frame and modifying the GOP header to form a new legal GOP; if the nearest frame is a P frame, only removing the data after that P frame and modifying the new GOP header to form a new legal GOP.
10. A GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams, characterized in that it comprises the following steps:
a. building the GOP index file;
b. determining the position of each GOP in the code stream from the index file;
c. determining the position, within its GOP, of the frame containing the edit point;
d. deciding, according to how many frames lie before and after the edit point, whether to discard or keep the leading or trailing segment of this GOP;
e. modifying the corresponding fields in the header structure of the adjacent GOP.
11. The GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams according to claim 10, characterized in that if the code stream before the edit point is cut and the segment after it is kept, the corresponding fields in the header structure of the first GOP after this GOP are modified; and if the code stream after the edit point is cut and the segment before it is kept, the corresponding fields in the header structure of the first GOP before this GOP are modified.
12. The GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams according to claim 10, characterized in that the GOP index file comprises: the sequence number of the GOP in the code stream, the start position of the video sequence header of the GOP in the code stream, the start position of the GOP in the code stream, and a flag bit.
13. The GOP-accurate editing method for a non-linear editing system based on MPEG-2 code streams according to claim 12, characterized in that the flag bit indicates whether the GOP is cut out or left unchanged.
14. An editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams, characterized in that its steps comprise:
a. building the index file;
b. determining from the index file the positions of the leading PES packets and the trailing PES packets of materials 1 and 2;
c. for the front PES packet, computing the PES packet length and rewriting the PES packet length field of the damaged PES header;
d. for the rear PES packet, computing the PES packet length, computing the PTS and DTS values, rewriting the PES packet length field of the damaged PES header, and modifying the PTS and DTS values of all packets in the kept material.
15. The editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 14, characterized in that the index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
16. The editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 14, characterized in that the flag bit indicates whether the packet is cut out or left unchanged.
17. The editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 14, characterized in that the DTS values are modified using the following formulas:
DTS2(0)=DTS1(end)+C1×N1
DTS2(1)=DTS2(0)+C2×N2
DTS2(n)=DTS2(n-1)+[DTS2(1)-DTS2(0)]
Wherein: C1 is the decoding time per frame of material 1 (a constant); N1 is the number of frames in the last PES packet of material 1; C2 is the decoding time per frame of material 2 (a constant); N2 is the number of frames in the first PES packet of material 2.
18. The editing method for the video PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 14, characterized in that the PTS values are modified using the following formulas:
PTS2(0)=PTS1(end)+C1×N1
PTS2(1)=PTS2(0)+C2×N2
PTS2(n)=PTS2(n-1)+[PTS2(1)-PTS2(0)]
19. An editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams, given two PES streams PES1 and PES2, characterized in that its steps comprise: building the index files; determining the positions of the PES packets from the index files and marking them; obtaining from the video PES the video PTS values at the required cut points, i.e. the video PTS values PTS_V1 and PTS_V2 of the cut points of the two PES streams; keeping the segment of the leading stream PES1 before PTS_V1 and the segment of the trailing stream PES2 after PTS_V2; finding in each PES stream the audio frame whose PTS is closest to the PTS of the video frame at the given edit point; and merging and linking the two audio PES streams into a new audio PES stream that remains synchronized with the edited video PES stream.
20. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the index files comprise a video index file and an audio index file.
21. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the video index file comprises: the sequence number of the PES packet in the code stream, the start position of the video sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
22. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the audio index file comprises: the coding layer of the PES packet, the sample frequency of the PES code stream, the start position of the audio sequence header of the PES packet in the code stream, the start position of the PES packet in the code stream, and a flag bit.
23. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the step of finding the closest audio frame comprises: first, from the condition PTS_P1 ≤ PTS_V1 < PTS_P2, finding the packet (Packet) containing the cut point and locating its first audio frame; then computing the PTS increment of each frame as ΔPTS_A = f × sample_count / sample_frequency until PTS_A + ΔPTS_A ≥ PTS_V1; and determining PTS_A3 and PTS_A4 of the audio stream PES2 in the same way; wherein: PTS_V1, PTS_V2 are the video PTS values of the cut points of the leading and trailing PES streams respectively;
PTS_P1, PTS_P3: the PTS values of the packets (Packets) closest to the cut points in the leading and trailing PES streams respectively;
PTS_P2, PTS_P4: the PTS values of the packets following those closest packets in the leading and trailing PES streams respectively;
PTS_A1, PTS_A3: the PTS values of the audio frames closest to the cut points in the leading and trailing PES streams respectively; PTS_A2, PTS_A4: the PTS values of the frames following those closest frames.
24. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the linking modes comprise: keeping both audio frames at the edit point of the two audio streams.
25. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the linking modes comprise: keeping the frame of audio stream 1 and discarding the frame of audio stream 2.
26. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the linking modes comprise: discarding the frame of audio stream 1 and keeping the frame of audio stream 2.
27. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that the linking modes comprise: discarding both audio frames at the edit point of the two audio streams.
28. The editing method for the audio-video synchronized PES stream layer of a non-linear editing system based on MPEG-2 code streams according to claim 19, characterized in that modifying the relevant parameters comprises: processing the last kept PES packet, processing the first kept PES packet, and processing the subsequent PES packets.
29. The editing method for the audio-video synchronization PES stream layer of an MPEG-2 code stream non-linear editing system according to claim 28, characterized in that: said retaining of the last PES packet comprises: locating the audio cut point of sequence 1 by means of the audio frame syncword (0xFFF), and removing the audio data after the cut point in the sequence; if the cut point lies before the first frame of this packet, also merging this packet with the previous packet in the stream; modifying the length field in the packet header so that it equals the data length remaining in this packet; and calculating ESCR1 and the audio ES rate R1_ES as: ESCR1 = ESCR_base × 300 + ESCR_ext,
wherein R1_ES = ES_rate × 50.
30. The editing method for the audio-video synchronization PES stream layer of an MPEG-2 code stream non-linear editing system according to claim 28, characterized in that: said retaining of the first PES packet comprises: locating the audio cut point of sequence 2 by means of the audio frame syncword (0xFFF), and removing the audio data before the cut point in sequence 2; if the cut point lies in the last frame of this packet, merging the remainder into the next packet; otherwise packing the remaining data into a new packet and modifying the packet_length, PTS and ESCR fields; if any of these fields does not exist, adding the field and setting the corresponding flag; the ESCR is modified as:
ESCR_new = ESCR1 + L1 × f ÷ R1_ES
f = 27 MHz
wherein: L1 is the number of bytes from the ESCR1 field to this ESCR field.
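The timing arithmetic of claims 29 and 30 can be collected into a short sketch; the formulas are those stated in the claims (ESCR_base, ESCR_ext and ES_rate being the PES header fields, L1 the byte distance between the two ESCR fields), while the function names and the integer rounding are assumptions:

F_ESCR = 27_000_000  # ESCR clock, f = 27 MHz

def escr1_value(escr_base, escr_ext):
    # ESCR1 = ESCR_base * 300 + ESCR_ext (converts the base to 27 MHz units)
    return escr_base * 300 + escr_ext

def audio_es_rate(es_rate_field):
    # R1_ES = ES_rate * 50 (the ES_rate field counts units of 50 bytes per second)
    return es_rate_field * 50

def new_escr(escr1, l1_bytes, r1_es):
    # ESCR_new = ESCR1 + L1 * f / R1_ES, truncated to integer clock ticks
    return escr1 + (l1_bytes * F_ESCR) // r1_es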
31. The editing method for the audio-video synchronization PES stream layer of an MPEG-2 code stream non-linear editing system according to claim 28, characterized in that: the processing of the subsequent PES packets comprises: recording the variation of the PTS and the ESCR with respect to the original data:
ΔPTS = PTS_new − PTS_old
ΔESCR = ESCR_new − ESCR_old
and correcting the corresponding values of all later frames with ΔPTS and ΔESCR, until the end.
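A minimal sketch of the correction in claim 31, assuming each later packet header is modeled by a small object carrying pts and escr values (names are illustrative):

from dataclasses import dataclass

@dataclass
class PesPacketTiming:
    pts: int
    escr: int

def propagate_offsets(later_packets, pts_new, pts_old, escr_new, escr_old):
    delta_pts = pts_new - pts_old       # ΔPTS = PTS_new - PTS_old
    delta_escr = escr_new - escr_old    # ΔESCR = ESCR_new - ESCR_old
    for pkt in later_packets:           # correct every subsequent frame until the end
        pkt.pts += delta_pts
        pkt.escr += delta_escr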
32. The editing method for the audio-video synchronization PES stream layer of an MPEG-2 code stream non-linear editing system according to claim 19, characterized in that: said step further comprises that an aliasing (overlap) time estimate may be added to the corrected PTS of the latter segment of material.
CNB00124793XA 2000-09-15 2000-09-15 Edition method for non-linear edition system based on MPEG-2 code stream Expired - Fee Related CN100353750C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB00124793XA CN100353750C (en) 2000-09-15 2000-09-15 Edition method for non-linear edition system based on MPEG-2 code stream

Publications (2)

Publication Number Publication Date
CN1344106A true CN1344106A (en) 2002-04-10
CN100353750C CN100353750C (en) 2007-12-05

Family

ID=4590663

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB00124793XA Expired - Fee Related CN100353750C (en) 2000-09-15 2000-09-15 Edition method for non-linear edition system based on MPEG-2 code stream

Country Status (1)

Country Link
CN (1) CN100353750C (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0615244B1 (en) * 1993-03-11 1999-07-21 Matsushita Electric Industrial Co., Ltd. System for non-linear video editing
JP3276596B2 (en) * 1997-11-04 2002-04-22 松下電器産業株式会社 Video editing device
US6104441A (en) * 1998-04-29 2000-08-15 Hewlett Packard Company System for editing compressed image sequences

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577829B (en) * 2003-01-17 2011-04-13 松下电器产业株式会社 Image encoding method
CN100508584C (en) * 2004-09-02 2009-07-01 索尼株式会社 Recording and reproducing device and method and program thereof
CN100377589C (en) * 2005-04-07 2008-03-26 北京北大方正电子有限公司 A method for quick generation of video file
CN100454981C (en) * 2007-11-19 2009-01-21 新奥特(北京)视频技术有限公司 Method of generating snapshot document of engineering
CN101448096B (en) * 2007-11-28 2012-05-23 新奥特(北京)视频技术有限公司 Method for multi-channel video signal synthetic processing multi-track special skill
CN101472118B (en) * 2007-12-24 2011-12-28 新奥特(北京)视频技术有限公司 Method for cutting document during acceptance process of acceptance system
CN101820540B (en) * 2009-12-25 2011-09-14 北京惠信博思技术有限公司 MPEG-2 code multiplexing method
CN103053134B (en) * 2010-07-30 2016-08-03 德国电信股份有限公司 Estimate the method for the type of the image group structure of multiple frame of video in video flowing
CN103053134A (en) * 2010-07-30 2013-04-17 德国电信股份有限公司 Method for estimating type of group of picture structure of plurality of video frames in video stream
US9241156B2 (en) 2010-07-30 2016-01-19 Deutsche Telekom Ag Method for estimating the type of the group of picture structure of a plurality of video frames in a video stream
CN102857747A (en) * 2011-06-27 2013-01-02 北大方正集团有限公司 Method and device for local recoding
CN102857747B (en) * 2011-06-27 2015-02-25 北大方正集团有限公司 Method and device for local recoding
CN103024394A (en) * 2012-12-31 2013-04-03 传聚互动(北京)科技有限公司 Video file editing method and device
CN103167342A (en) * 2013-03-29 2013-06-19 天脉聚源(北京)传媒科技有限公司 Audio and video synchronous processing device and method
CN103167342B (en) * 2013-03-29 2016-07-13 天脉聚源(北京)传媒科技有限公司 A kind of audio-visual synchronization processing means and method
CN105592356A (en) * 2014-10-22 2016-05-18 北京拓尔思信息技术股份有限公司 Audio-video online virtual editing method and system
CN105578265A (en) * 2015-12-10 2016-05-11 杭州当虹科技有限公司 Timestamp compensation or correction method based on H264/H265 video analysis
CN105578265B (en) * 2015-12-10 2019-03-05 杭州当虹科技有限公司 A kind of timestamp compensation or modified method based on H264, H265 video analysis
CN108550369A (en) * 2018-04-14 2018-09-18 全景声科技南京有限公司 A kind of panorama acoustical signal decoding method of variable-length
CN108550369B (en) * 2018-04-14 2020-08-11 全景声科技南京有限公司 Variable-length panoramic sound signal coding and decoding method
CN110740344A (en) * 2019-09-17 2020-01-31 浙江大华技术股份有限公司 Video extraction method and related device
CN112351308A (en) * 2020-10-30 2021-02-09 杭州当虹科技股份有限公司 Method for realizing rapid transcoding based on local transcoding technology
CN115577684A (en) * 2022-12-07 2023-01-06 成都华栖云科技有限公司 Method, system and storage medium for connecting nonlinear editing system

Also Published As

Publication number Publication date
CN100353750C (en) 2007-12-05

Similar Documents

Publication Publication Date Title
CN1344106A (en) Edition method for non-linear edition system based on MPEG-2 code stream
CN1251493C (en) Recording apparatus and method, reproducing apparatus and method, and its recording carrier
CN1255800C (en) Method and equipment for producing recording information signal
CN1125543C (en) Multiplexing method and equipment for digital signal and recording medium for digital signal
CN1201572C (en) Frame-accurate editing of encoded A/V sequences
CN1152569C (en) Multiplexed data producing, encoded data reproducing, and clock conversion apparatus and method
CN1186930C (en) Recording appts. and method, reproducing appts. and method, recorded medium, and program
CN1172531C (en) Information transmitting method, encoder/decoder of information transmitting system using the method, and encoding multiplexer/decoding inverse multiplexer
CN1245022C (en) Data processing method/equipment and data regenerateion method/equipment and recording medium
CN1157727C (en) Edit system, edit control device and edit control method
CN1270315C (en) Recording/reproducing apparatus and method and program offering medium
CN101036391A (en) Picture coding apparatus and picture decoding apparatus
CN1165178C (en) Compress coding data reproduction method and device
CN1942962A (en) Audio reproducing apparatus, audio reproducing method, and program
CN1336764A (en) Device and method for image coding, device for decoding image
BR0305434A (en) Methods and arrangements for encoding and decoding a multichannel audio signal, apparatus for providing an encoded audio signal and a decoded audio signal, encoded multichannel audio signal, and storage medium
CN1615659A (en) Audio coding
CN1244106C (en) Information recording device and method, and information recording medium recorded with recording control program
CN1946183A (en) Image encoding apparatus, picture encoding method and image editing apparatus
CN1942931A (en) Audio bitstream format in which the bitstream syntax is described by an ordered transveral of a tree hierarchy data structure
CN1879163A (en) Frame-based audio transmission/storage with overlap to facilitate smooth crossfading
CN1833439A (en) Data processing device and data processing method
CN1421859A (en) After-recording apparatus
CN1248496C (en) Magnetic tape recording, equipment and method for magnetic tape reproducing, and magnetic tape format and storage carrier
CN1463440A (en) Information signal edition appts., information signal edition method, and information signal edition program

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20071205

Termination date: 20091015