
Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer-readable recording medium storing an editing program

Info

Publication number
MXPA99004448A
Authority
MX
Mexico
Prior art keywords
audio
video
data
video object
units
Application number
MXPA/A/1999/004448A
Other languages
Spanish (es)
Inventor
Tsuga Kazuhiro
Saeki Shinichi
Okada Tomoyuki
Hamasaka Hiroshi
Original Assignee
Matsushita Electric Industrial Co Ltd
Application filed by Matsushita Electric Industrial Co Ltd
Publication of MXPA99004448A


Abstract

One or more video objects are recorded on an optical disc. When a user indicates a linking edit that links sections of the video objects, video object units (VOBUs) that include picture data at the end of a former section and VOBUs that include picture data at the start of a latter section are read from the optical disc, and the audio packs and video packs are separated from these read VOBUs. Next, the video packs are re-encoded, and some of the audio packs that were originally in the former section are multiplexed into the latter section. The result of the multiplexing is then recorded onto the optical disc.

Description

VIDEO DATA EDITING APPARATUS, OPTICAL DISC FOR USE AS A RECORDING MEDIUM OF A VIDEO DATA EDITING APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM STORING AN EDITING PROGRAM

FIELD OF THE INVENTION

The present invention relates to a video data editing apparatus that uses an optical disc as an editing medium for video data, a computer-readable recording medium that stores an editing program, an optical disc for use as a recording medium of a video data editing apparatus, and a reproduction apparatus for an optical disc.
BACKGROUND OF THE INVENTION

Video editors in the film and broadcast industries make full use of their skill and experience when editing the wide variety of video productions that come to market. While movie fans and producers of home videos may not possess this skill or experience, many are still inspired by professional editing to try to edit video for themselves. This creates a demand for a home video editing apparatus that can perform advanced video editing while still being easy to use. While video editing generally comprises a variety of operations, the domestic video editing apparatuses that are likely to appear on the market in the near future will especially require an advanced scene-linking function. This function links a number of scenes to form a single work. When scenes are linked using conventional domestic equipment, users connect two video cassette recorders to form a copy system. The operations performed when scenes are linked using this kind of copy system are described below.

Figure 1A shows a video editing arrangement using video cassette recorders that are respectively capable of reproducing and recording video signals. The arrangement of Figure 1A includes the video cassette 301 which records the source material, the video cassette 302 for recording the editing result, and the two video cassette recorders 303 and 304 for reproducing and recording video images on the video cassettes 301 and 302. In this example, the user attempts to perform the editing operation shown in Figure 1B using the arrangement of Figure 1A. Figure 1B shows the relationship between the material that is edited and the editing result. Here, the user reproduces scene 505, which is located between time t5 and time t10 of the source material, scene 506, which is located between times t13 and t21, and scene 507, which is located between times t23 and t25, and tries to produce an editing result that consists only of these scenes.

With the arrangement of Figure 1A, the user sets the video cassette 301 holding the source material into the video cassette recorder 303 and the video cassette 302 for recording the editing result into the video cassette recorder 304. After loading the video cassettes 301 and 302, the user presses the fast-forward button on the operation panel of the video cassette recorder 303 (shown by (1) in Figure 1A) to search for the start of scene 505. The user then presses the play button on the operation panel of the video cassette recorder 303 (shown by (2) in Figure 1A) to play scene 505, and at the same time presses the record button on the operation panel of the video cassette recorder 304 (shown by (3) in Figure 1A) to begin recording. When scene 505 is finished, the user stops both video cassette recorders 303 and 304. The user then fast-forwards the video cassette to the start of scene 506, and again simultaneously starts playback on the video cassette recorder 303 and recording on the video cassette recorder 304. After finishing the above process for scenes 506 and 507, the user has the video cassette recorders 303 and 304 rewind the video cassettes 301 and 302, respectively, to complete the editing operation. If the scene-linking operation described above could be performed easily in the home, users would be able to easily edit programs that have been recorded on a large number of magnetic tape cassettes.
A first problem with the video editing arrangement described above is that the source material and the editing result need to be recorded on separate recording media, meaning that two video cassette recorders are needed to play and record the respective recording media. This greatly increases the scale of the video editing arrangement. A second problem is that the need to reproduce the video images between times t5-t10, t13-t21, and t23-t25 using the video cassette recorder 303 makes video editing very time-consuming. Here, the longer the video extracts that make up the editing result, the longer the playback time and hence the editing time, meaning that the editing of long source materials can take an extremely long time.

In order to complete the above linking process in a short time with small-scale equipment, it would be ideal if the parts of the recording medium that record the desired video images could simply be linked together, as with the conventional splicing of sections of video tape. When the source materials are stored as analog video signals, there are no significant problems when the sections of magnetic tape storing the desired materials are spliced together. However, when linking system streams that have been highly compressed in accordance with MPEG techniques, there is the problem that video playback may be interrupted or disturbed at the junctions between the spliced sections. Here, the term "system stream" refers to video data and audio data that are multiplexed together, with such streams also being called "audio-visual data (AV data)" in this specification.

One of the causes of the above problem is the assignment of variable-length code to the video frames in a video stream. When a video stream is encoded, an optimal amount of code is assigned to each display cycle in order to strike a good balance between the complexity of the image to be displayed and the amount of data that is already stored in the buffer of the video decoder. Since precise calculations are made when assigning the code within a video stream, it can be ensured that underflows or overflows will not occur in the buffer of the video decoder when an individual video stream is played back in its original form. However, when former and latter video streams that were encoded separately are linked together, the latter video stream will be input into the video decoder's buffer without taking into account the amount of data already accumulated in the buffer of the video decoder at the end of the playback of the former video stream. When this occurs, there is a clear possibility of an underflow or an overflow occurring in the buffer of the video decoder. When partial sections of a system stream are linked in the same manner as in Figure 1B, there is likewise the possibility of an overflow or an underflow occurring in the buffer of the video decoder when reproduction proceeds from the former section to the latter section.

Video playback without interruption or disturbance is called seamless playback. To achieve seamless reproduction of linked sections, it is necessary to temporarily convert the former section and the latter section into video signals and audio signals and then re-encode these signals to convert the signals of the former section and the latter section into a single video stream and audio stream. The time taken by this re-encoding is proportional to the amount of data in the video streams and audio streams of the edited source materials.
As a result, when the source materials contain a large amount of data, this process will be very time-consuming. To achieve AV synchronization during playback of an MPEG system stream, the timestamps that show the respective playback times of a video stream and an audio stream must be consecutive. Conventionally, the MPEG standards have focused on consecutive playback of a stream from start to end, so that seamless playback has not been possible for two MPEG streams that do not have consecutive timestamps. As a result, during editing it has been necessary to give at least one of the linked MPEG streams timestamps that are continuous with the timestamps of the other MPEG stream, meaning that the latter MPEG stream has had to be re-encoded in its entirety.
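The buffer problem described above can be made concrete with a small simulation. The following is a minimal sketch, not part of the invention, that models the video decoder buffer filling at a constant channel rate while each picture's data is removed instantaneously at its decoding time; the frame sizes and fill rate are hypothetical, and the buffer capacity shown is the standard MPEG-2 MP@ML VBV size.

    FRAME_INTERVAL = 1 / 29.97        # NTSC frame period in seconds
    FILL_RATE = 8_000_000             # channel rate into the buffer, bits/s (illustrative)
    BUFFER_CAPACITY = 1_835_008       # MPEG-2 MP@ML VBV buffer size in bits

    def check_splice(former_tail, latter_head):
        """Frame sizes in bits, in decoding order; returns problems found."""
        problems = []
        occupancy = 0.0
        for i, frame_bits in enumerate(former_tail + latter_head):
            # data keeps arriving during one frame interval ...
            occupancy += FILL_RATE * FRAME_INTERVAL
            if occupancy > BUFFER_CAPACITY:
                problems.append((i, "overflow"))
                occupancy = BUFFER_CAPACITY   # clamp and continue checking
            # ... and the whole picture is removed instantaneously when decoded
            if frame_bits > occupancy:
                problems.append((i, "underflow"))
            occupancy -= min(frame_bits, occupancy)
        return problems

    # A large I picture at the head of the latter stream, after a former
    # stream that leaves the buffer nearly empty at the splice point:
    print(check_splice([266_000] * 15, [1_500_000, 120_000, 120_000]))
    # [(15, 'underflow')]

Run on these example numbers, the oversized I picture at the head of the latter stream is reported as an underflow, even though each stream is safe when played on its own; this is exactly the kind of disruption that the re-encoding described below is designed to prevent.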
DESCRIPTION OF THE INVENTION

It is a first object of the present invention to provide a video data editing apparatus that can perform an editing operation that seamlessly links video streams or video stream portions in a short time using only a single recording medium. It is a second object of the present invention to provide a video data editing apparatus that can seamlessly link video streams or video stream portions even when these are system streams that were each encoded using precise calculations ensuring that no underflow or overflow occurs in a buffer.
The main points worthy of attention for the realization of the first and second objects are as follows. To seamlessly link video streams or parts of video streams on a single recording medium in a short time, the part of the data that is to be re-encoded to achieve the seamless link should be as short as possible. However, due to the data construction of system streams, this cannot be achieved by simply reducing the size of the re-encoded part. System streams are composed of an arrangement of a plurality of video packs and audio packs. In this data construction, the video data and the audio data to be played back simultaneously at a given time are not necessarily arranged close together in the video packs and audio packs. As a result, if the re-encoded area is simply decreased, audio data that is to be reproduced together with the re-encoded video data, and so must itself be re-encoded, may lie outside the area subject to the re-encoding. In the present invention, therefore, the re-encoded area is set as a whole number of video object units. Each video object unit includes a set of picture data with a playback period of approximately 0.5 seconds in the embodiments, and in many cases will also include the audio data that is to be reproduced simultaneously with the picture data. A "one-second rule" (described in the embodiments) applies under the MPEG standards, so that the audio data to be played back simultaneously with given video data will be included within the same video object unit. By performing re-encoding on an integer multiple of video object units, the part that needs to be re-encoded for the seamless link can be greatly reduced.

The first object can be achieved by a video data editing apparatus that performs editing to allow seamless reproduction of video objects that are recorded on an optical disc, each video object including a plurality of video object units, and each video object unit including picture data sets, the video data editing apparatus including: a reading unit for reading at least one of a former video object unit sequence and a latter video object unit sequence from a video object recorded on the optical disc, the former video object unit sequence being composed of a predetermined number of video object units located at the end of a former video object that is to be played back first, and the latter video object unit sequence being composed of a predetermined number of video object units located at the start of a latter video object that is to be played back second; an encoding unit for re-encoding the picture data sets included in at least one of the former video object unit sequence and the latter video object unit sequence to allow the former video object and the latter video object to be played back seamlessly; and a writing unit for rewriting one of the former video object and the latter video object onto the optical disc after encoding by the encoding unit.

With the stated construction, the edited materials are video objects recorded on a single optical disc, with the re-encoded data being video object unit sequences that are smaller than the video objects. As a result, when the video objects to be linked seamlessly are extremely long, for example, it is sufficient to read and re-encode only the video object units at the end of the former video object before recording the resulting data back onto the same optical disc.
As a result, an editing operation that allows seamless reproduction of the video objects can be completed in a short time. The second object can be achieved by a video data editing apparatus where the encoding unit re-encodes at least one of the picture data sets included in the former video object unit sequence and the picture data sets included in the latter video object unit sequence using a target amount of code, the target amount of code being an amount with which overflow will not occur in a video buffer of a video decoder, even when the picture data sets included in the former video object unit sequence are present in the video buffer at the same time as the picture data sets included in the latter video object unit sequence. With the stated construction, the case is considered where picture data included in the latter video object accumulates in the decoder buffer while picture data included in the former video object is still present in the decoder buffer, with the re-encoding being performed so as to ensure that no overflow occurs in the video decoder's buffer. As a result, when separately encoded video objects are linked, seamless reproduction of the resulting single video object will be possible. Here, each picture data set may include data to be decoded for one video frame, with a generating unit additionally adding, to the seamless link information, a presentation end time at which the reproduction of the picture data sets in the former video object unit sequence ends and a presentation start time at which the reproduction of the picture data sets in the latter video object unit sequence starts, a certain offset being found by subtracting the presentation start time of the latter video object unit sequence from the presentation end time of the former video object unit sequence.
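As a rough illustration, the offset just described can be computed as follows. This is a sketch with hypothetical field names (the embodiments later refer to values such as VOB_V_S_PTM and an STC offset); the only firm assumption is that MPEG presentation times are counted in 90 kHz clock ticks.

    CLOCK = 90_000  # MPEG presentation times are counted in 90 kHz ticks

    def stc_offset(former_end_ptm: int, latter_start_ptm: int) -> int:
        """Offset added to the latter VOB's time axis so that its first
        video frame is presented exactly when the former VOB's video ends."""
        return former_end_ptm - latter_start_ptm

    # Example: the former VOB's video ends at 10.0 s on its own time axis,
    # and the latter VOB's video starts at 0.4 s on its own time axis.
    offset = stc_offset(10 * CLOCK, 36_000)   # 36_000 ticks = 0.4 s
    print(36_000 + offset)                    # 900000, i.e. the 10.0 s mark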
With the stated construction, the presentation end time information for the picture data included in the former video object and the presentation start time information for the picture data included in the latter video object are generated and written onto the optical disc. If a reproduction apparatus is of the extended STD type, where either a standard time measured by an STC or the sum of the standard time and an offset is used by the video decoder during decoding operations, the reproduction apparatus will read the presentation end time information and the presentation start time information from the optical disc and use them to calculate the offset that is to be added to the standard time. As a result, even when the SCR, DTS, and PTS are not continuous at the link boundary of a former video object and a latter video object, seamless playback is possible at this boundary. Here, each video object unit may include a plurality of picture data sets and a plurality of audio data sets, and the video data editing apparatus may additionally include: a separation unit for separating picture data sets and audio data sets from the former video object unit sequence and the latter video object unit sequence read by the reading unit; and a multiplexing unit for multiplexing the picture data sets, being one of re-encoded picture data and original picture data, separated from the former video object unit sequence with the audio data sets separated from the former video object unit sequence, and for multiplexing the picture data sets, being one of re-encoded picture data and original picture data, separated from the latter video object unit sequence with the audio data sets separated from the latter video object unit sequence, the writing unit writing the data multiplexed by the multiplexing unit onto the optical disc.
With the stated construction, while a plurality of picture data sets of the latter video object are being accumulated in the decoder buffer, a number of audio data sets included in the former video object are read from the optical disc, and, to allow simultaneous reproduction, the picture data sets in the re-encoded video object units of the latter video object are multiplexed with the plurality of audio data sets of the former video object. As a result, even if the video stream is encoded at a variable bit rate (VBR) while the audio stream is encoded at a constant bit rate (CBR), successive reproduction of a plurality of audio data sets will be possible while the video stream waits in the buffer for its decoding time to be reached.

Here, a plurality of audio data sets to be reproduced for a plurality of audio frames from a first audio frame to a second audio frame may be stored as a first audio pack group, where, if the data size of the first audio pack group is not an integer multiple of 2 kilobytes (KB), one of padding data and a padding packet may be used to make the data size of the first audio pack group an integer multiple of 2 KB, and where the plurality of audio data sets to be reproduced for a plurality of audio frames starting from a third audio frame may be stored as a second audio pack group, with the multiplexing unit multiplexing the picture data sets and the audio data sets so that the first audio pack group is located before the second audio pack group. With the stated construction, it is possible to prevent the audio reproduction of the plurality of audio data sets in the former video object from overlapping with the audio reproduction of the audio data sets in the latter video object. Also, synchronization between audio playback and video playback for the latter video object can be maintained.

It is also possible for the video data editing apparatus to perform editing that allows seamless reproduction of a former section and a latter section that are located in at least one video object recorded on an optical disc, each video object including a plurality of video object units and each video object unit including picture data sets, the video data editing apparatus including: a reading unit for reading a former video object unit sequence and a latter video object unit sequence from a video object recorded on the optical disc, the former video object unit sequence being composed of video object units located at the end of the former section that is to be played back first, and the latter video object unit sequence being composed of video object units located at the start of the latter section that is to be played back second; an encoding unit for re-encoding the picture data sets included in at least one of the former video object unit sequence and the latter video object unit sequence to allow the former section and the latter section to be reproduced seamlessly; and a writing unit for writing at least one of the former section and the latter section onto the optical disc after re-encoding by the encoding unit.

With the stated construction, when the edited materials are parts of video objects recorded on the same optical disc, the re-encoding is performed for video object units that are smaller than the parts of the video objects. As a result, when the parts of the video objects to be linked seamlessly are extremely long, for example, it is sufficient to read and re-encode only the video object units at the ends of the sections before recording the resulting data back onto the same optical disc. As a result, an editing operation that allows seamless reproduction of the sections can be completed in a short time.

Here, when the picture type of the final picture data set in the display order of the former section is a Bidirectionally Predictive picture (B picture), the encoding unit may perform the re-encoding so as to convert this final picture data set into a Predictive picture (P picture) whose information components depend only on picture data sets that are reproduced earlier than the final picture data set. With the stated construction, if it is necessary to convert a picture type when linking video objects having a coding order and a display order that comply with the MPEG standard, the transition of the buffer state is properly estimated without ignoring the increase in buffer occupancy resulting from this picture type conversion. As a result, the re-encoding can be performed using a more appropriate amount of code.
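As a side note to the audio pack grouping described above, rounding the first group up to a whole number of 2 KB units can be sketched as follows; the zero bytes here merely stand in for the MPEG padding packet or stuffing data that a real multiplexer would insert, and the sizes are hypothetical.

    PACK_SIZE = 2 * 1024  # one pack occupies one 2 KB sector

    def pad_to_pack_multiple(audio_bytes: bytes) -> bytes:
        """Append padding so the group occupies an integer number of packs."""
        shortfall = -len(audio_bytes) % PACK_SIZE
        # In a real stream this would be a padding packet or stuffing bytes
        # inside the last pack; plain zero bytes stand in for it here.
        return audio_bytes + b"\x00" * shortfall

    group = pad_to_pack_multiple(b"\x01" * 5000)   # 5000 B of audio data
    print(len(group), len(group) // PACK_SIZE)     # 6144 3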
BRIEF DESCRIPTION OF THE DRAWINGS

These and other objects, advantages, and features of the invention will become apparent from the following description taken in conjunction with the accompanying drawings, which illustrate a specific embodiment of the invention. In the drawings:

Figure 1A shows a conventional video editing arrangement using video cassette recorders that are capable of reproducing and recording video signals; Figure 1B shows the relationship between the source materials and the editing result; Figure 2A shows the outer appearance of a DVD-RAM disc, which is the rewritable optical disc used in the embodiments of the present invention; Figure 2B shows the recording areas of a DVD-RAM; Figure 2C shows the cross-section and surface of a DVD-RAM, cut at a sector header; Figure 3A shows zones 0 to 23 on a DVD-RAM; Figure 3B shows zones 0 to 23 arranged in a horizontal sequence; Figure 3C shows the logical sector numbers (LSN) in the volume area; Figure 3D shows the logical block numbers (LBN) in the volume area; Figure 4A shows the contents of the data recorded in the volume area; Figure 4B shows the hierarchical structure of the data definitions used in the MPEG standard; Figure 5A shows a plurality of picture data sets arranged in the display order and a plurality of picture data sets arranged in the coding order; Figure 5B shows the correspondence between the audio frames and the audio data; Figure 6A shows the detailed hierarchy of the logical formats in the data construction of a VOB (video object); Figure 6B shows the partial deletion of a VOB; Figure 6C shows the logical format of the video pack arranged at the start of a VOBU; Figure 6D shows the logical format of the other video packs arranged in a VOBU; Figure 6E shows the logical format of an audio pack; Figure 6F shows the logical format of a pack header; Figure 6G shows the logical format of a system header; Figure 6H shows the logical format of a packet header; Figure 7A shows a video frame and the occupancy of the video buffer; Figure 7B shows an audio frame and an ideal transition in the buffer state of the audio buffer; Figure 7C shows an audio frame and the ideal transition in the buffer state of the audio buffer; Figure 7D shows the detailed transfer period of each picture data set; Figure 8A shows how audio packs, which store the audio data to be reproduced over a plurality of audio frames, and video packs, which store the picture data to be reproduced over a plurality of video frames, can be recorded; Figure 8B shows a key to the notation used in Figure 8A; Figure 9 likewise shows how the audio data to be reproduced over a plurality of audio frames and the video data storing the picture data to be reproduced over a plurality of video frames can be recorded; Figure 10A shows the transition in the buffer state during the first part of a video stream; Figure 10B shows the transition in the buffer state during the last part of a video stream; Figure 10C shows the transition in the buffer state across two VOBs when the video stream whose last part causes the buffer state shown in Figure 10B is seamlessly linked to the video stream whose first part causes the buffer state shown in Figure 10A; Figure 11A is a graph where the SCRs of the video packs included in a VOB are plotted in the order in which the video packs are arranged; Figure 11B shows an example where the first SCR in section B corresponds to the last SCR in section A; Figure 11C shows an example where the first SCR in section D is higher than the last SCR in section C; Figure 11D shows an example where the last SCR in section E is higher than the first SCR in section F; Figure 11E shows the plot of Figure 11A drawn for two specific VOBs; Figure 12A shows a detailed expansion of the data hierarchy in the RTRW management file; Figure 12B shows the format of the PTM descriptor; Figure 12C shows the data construction of the audio gap location information; Figure 13 shows the buffer occupancy for each of a former VOB and a latter VOB; Figure 14A shows examples of audio frames and video frames; Figure 14B shows the time difference g1 appearing at the ends of the audio data and the picture data when the playback time of the picture data and the playback time of the audio data are aligned at the start of a VOB; Figure 14C shows the audio pack G3 including the audio gap and the audio pack G4, the audio pack G3 including (i) the audio data sets y-2, y-1, and y, which are located at the end of VOB#1, and (ii) a Padding_Packet, and the audio pack G4 including the audio data sets u, u+1, and u+2, which are located at the start of VOB#2; Figure 14D shows in which of VOBU#1, VOBU#2, and VOBU#3 at the start of VOB#2 the audio pack G3 including the audio gap is arranged; Figures 15A to 15D show the procedure for re-creating the audio gap when the VOBUs located at the start of VOB#2, out of VOB#1 and VOB#2 which are to be played back seamlessly, are erased; Figure 16 shows a sample configuration of a system using the video data editing apparatus of the first embodiment; Figure 17 is a block diagram showing the hardware construction of the DVD recorder 70; Figure 18 shows the construction of the MPEG encoder 2; Figure 19 shows the construction of the MPEG decoder 4; Figure 20 is a timing chart showing the timing for switching the switches SW1 to SW4; Figure 21 is a flowchart showing the seamless linking procedure; Figure 22 is also a flowchart showing the seamless linking procedure; Figures 23A and 23B show the analysis of the transition in the buffer state for the audio packs; Figure 23C shows the area to be read from the former VOB in step S106; Figure 23D shows the area to be read from the latter VOB in step S107; Figure 24A shows the audio data in the audio stream corresponding to the audio frames x, x+1, y, u, u+1, and u+2 used in Figure 22; Figure 24B shows the case when First_SCR + STC_offset corresponds to a boundary between audio frames in the former VOB; Figure 24C shows the case when the video playback start time VOB_V_S_PTM + STC_offset corresponds to a boundary between audio frames in the former VOB; Figure 24D shows the case when the presentation end time of the video frame corresponds to a boundary between audio frames in the latter VOB; Figure 25 shows how audio packs that store the audio data for a plurality of audio frames and video packs that store the video data for each video frame are multiplexed; Figure 26 shows an example of the section of a VOB that is specified using the time information for a pair of C_V_S_PTM and C_V_E_PTM; Figure 27A shows the area to be read from the former cell in step S106; Figure 27B shows the area to be read from the latter cell in step S107; Figure 28A shows an example of the linking of cell information sets that are specified as the editing boundaries in a VOBU; Figure 28B shows the processing for the three rules for GOP reconstruction when correcting the display order and the coding order; Figure 29A shows the processing when the picture type of the picture data in the former cell is changed; Figure 29B shows the procedure for measuring the change in buffer occupancy when the picture type in the former cell is changed; Figure 30A shows the processing when the picture type in the latter cell is changed; Figure 30B shows the procedure for measuring the change in buffer occupancy when the picture type in the latter cell is changed; Figure 31 is a flowchart showing the procedure for seamless processing; Figure 32 is also a flowchart showing the procedure for seamless processing; Figure 33 is also a flowchart showing the procedure for seamless processing; Figure 34 shows the audio frames in the audio stream corresponding to the audio frames x, x+1, and y used in the flowchart of Figure 31; Figure 35 shows the hierarchical directory structure; Figure 36 shows the information, in addition to the sector management table and the AV block management table shown in Figure 6, in the management information for the file system; Figure 37 shows the link relationships, shown by the arrows in Figure 6, within the directory structure; Figure 38A shows the data construction of the file entries in greater detail; Figure 38B shows the data construction of the allocation descriptors; Figure 38C shows the meanings recorded in the upper 2 bits of the extent length field; Figure 39A shows the detailed data construction of the file identification descriptor for a directory; Figure 39B shows the detailed data construction of the file identification descriptor for a file; Figure 40 is a model showing the buffering, in the buffer memory, of the AV data read from the DVD-RAM; Figure 41 is a functional block diagram showing the construction of the DVD recorder 70 divided by function; Figure 42 shows an example of an interactive screen displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12; Figure 43 is a flowchart showing the processing by the recording-editing-playback control unit 12 for a virtual edit and for a real edit; Figures 44A to 44F show a supplementary example illustrating the processing of the AV data editing unit 15 in the flowchart of Figure 43; Figures 45A to 45E show a supplementary example illustrating the processing of the AV data editing unit 15 in the flowchart of Figure 43; Figures 46A to 46F show a supplementary example illustrating the processing of the AV data editing unit 15 in the flowchart of Figure 43; Figure 47A shows the relationship between the extents and the data in memory, in terms of time; Figure 47B shows the positional relationship between the extents, the Input area, and the Output area;
Figure 48A is a flowchart showing the processing by the AV file system unit 11 when a "DIVIDE" command is executed; Figure 48B is a flowchart showing the processing when the execution of a "SHORTEN" command is indicated; Figure 49 is a flowchart showing the processing when the execution of an "APPEND" command is indicated; Figure 50 is a flowchart for the case when the former extent is shorter than an AV block but the latter extent is at least equal to the length of an AV block; Figures 51A-51B are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 50; Figures 52A-52C are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 50; Figures 53A-53D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 50; Figures 54A-54D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 50; Figure 55 is a flowchart for the case when the former extent is at least equal to the length of an AV block but the latter extent is shorter than an AV block; Figures 56A-56B are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 55; Figures 57A-57C are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 55; Figures 58A-58D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 55; Figures 59A-59D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 55; Figure 60 is a flowchart for the case when both the former extent and the latter extent are shorter than an AV block; Figures 61A-61D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 60; Figures 62A-62C are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 60; Figures 63A-63C are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 60; Figures 64A-64D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 60; Figure 65 is a flowchart for the case when both the former extent and the latter extent are at least equal to the length of an AV block; Figures 66A-66D are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 65; Figure 67 is a flowchart showing the case when both the former extent and the latter extent are at least equal to the length of an AV block but the data sizes in the Input area and the Output area are insufficient; Figures 68A-68E are a supplementary example showing the processing of the AV file system unit 11 in the flowchart of Figure 67; Figures 69A-69D are a supplementary example showing the processing of the fragmentation unit 16;

Figure 70A shows the detailed hierarchical content of the RTRW management file in the fourth embodiment; Figure 70B shows the logical format of the original PGC information in the fourth embodiment; Figure 70C shows the logical format of the user-defined PGC information in the fourth embodiment; Figure 70D shows the logical format of the title search pointer; Figure 71 shows the interrelations between the AV file, the extents, the VOBs, the VOB information, the original PGC information, and the user-defined PGC information, with related elements enclosed in boxes drawn with thick lines; Figure 72 shows an example of a user-defined PGC and an original PGC; Figure 73 shows the part corresponding to the cell to be erased using diagonal shading; Figure 74A shows the ECC blocks that are freed into empty areas by a real edit using the user-defined PGC information #2; Figure 74B shows examples of the VOBs, VOB information, and PGC information after a real edit; Figure 75 is a functional block diagram showing the construction of the DVD recorder 70 divided by function; Figure 76 shows an example of the original PGC information that has been generated by the PGC information generator 25 when recording an AV file; Figure 77A shows an example of the graphics data that is displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12; Figure 77B shows an example of the PGC information and the cell information displayed as a list of operation targets; Figure 78A is a flowchart showing the processing during the partial reproduction of a title; Figure 78B shows how only the section between the presentation start time C_V_S_PTM and the presentation end time C_V_E_PTM is played back, out of the VOBUs between VOBU(START) and VOBU(END); Figures 79A and 79B show the user pressing the mark key while viewing video images on the TV monitor 72; Figures 80A and 80B show how data is input and transferred between the components shown in Figure 75 when a marking operation is performed; Figure 81 is a flowchart showing the processing of the multi-stage editing control unit 26 when the user-defined PGC information is defined; Figure 82 is also a flowchart showing the processing of the multi-stage editing control unit 26 when the user-defined PGC information is defined; Figure 83 is a flowchart showing the processing of the recording-editing-playback control unit 12 during a virtual edit and a real edit; Figure 84 is a flowchart showing the update processing for the PGC information after a real edit; Figure 85 shows an example of the interactive screen that is displayed on the TV monitor 72 to have the user select cell information as an element of a set of user-defined PGC information during a virtual edit; Figures 86A and 86B show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figures 87A and 87B show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figures 88A and 88B show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figures 89A and 89B show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figure 90 shows an example of the interactive screen on which the user selects a set of user-defined PGC information and then either a preview (using the playback key) or a real edit (using the real edit key); Figure 91 shows an example of the original PGC information table and the user-defined PGC information table when the user-defined PGC information #2, composed of CELL#2B, CELL#4B, CELL#10B, and CELL#5B, and the user-defined PGC information #3, composed of CELL#3C, CELL#6C, CELL#8C, and CELL#9C, have been defined; Figures 92A-92B show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figures 93A-93C show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; Figures 94A-94C show the relationship between the user's operation of the remote controller 71 and the display processing that accompanies the user's operation; and Figure 95 shows the original PGC information table and the user-defined PGC information table after the processing of the VOBs in a real edit.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following embodiments describe a video data editing apparatus and the optical disc which the video data editing apparatus uses as its recording medium. For ease of explanation, the description is divided into four embodiments that deal with the physical structure of the optical disc, the logical structure, the hardware construction of the video data editing apparatus, and the functional construction of the video data editing apparatus. The first embodiment explains the physical structure of the optical disc and the hardware construction of the video data editing apparatus, as well as the seamless linking of video objects as the first basic example of video editing. The second embodiment explains the seamless linking of partial sections of video objects as the second basic example. The third embodiment deals with the functional construction of the video data editing apparatus and the procedure for performing video editing within a file system. The fourth embodiment describes the data and process structures of the video data editing apparatus when it performs a two-stage editing process composed of virtual editing and real editing using two types of program chain, called a user-defined PGC and an original PGC.

(1-1) Physical Structure of a Rewritable Optical Disc

Figure 2A shows the external appearance of a DVD-RAM disc, which is a rewritable optical disc. As shown in this drawing, the DVD-RAM is loaded into the video data editing apparatus having been placed inside a cartridge 75. This cartridge 75 protects the recording surface of the DVD-RAM, and has a shutter 76 which opens and closes to allow access to the DVD-RAM held inside. Figure 2B shows the recording areas of the DVD-RAM disc. As shown in the figure, the DVD-RAM has a lead-in area at its innermost periphery and a lead-out area at its outermost periphery, with a data area between them. The lead-in area records the reference signals necessary for stabilizing a servo during access by an optical pickup, and identification signals to prevent confusion with other media. The lead-out area records the same types of reference signals as the lead-in area. The data area, meanwhile, is divided into sectors, which are the smallest units through which the DVD-RAM can be accessed. Here, the size of each sector is set at 2 KB. Figure 2C shows the cross-section and surface of the DVD-RAM, cut at the header of a sector. As shown in the figure, each sector is composed of a pit sequence that is formed on the surface of a reflective film, such as a metal film, and a concave-convex part. The pit sequence is composed of 0.4 μm to 1.87 μm pits that are cut into the surface of the DVD-RAM to show the address of the sector. The concave-convex part is composed of a concave part called a "groove" and a convex part called a "land". Each groove and land has a recording mark composed of a metal film capable of phase change attached to its surface. Here, the expression "capable of phase change" means that the recording mark may be put into a crystalline state or a non-crystalline state depending on how the metal film is exposed to a light beam. Using this phase-change characteristic, data can be recorded onto this concave-convex part.
While it is possible to record data only on the land part of an MO (Magneto-Optical) disc, data can be recorded on both the lands and the grooves of a DVD-RAM, meaning that the recording density of a DVD-RAM exceeds that of an MO disc. Error correction information is provided on a DVD-RAM for each group of 16 sectors. In this specification, each group of 16 sectors that is given an ECC (Error Correction Code) is called an ECC block. On a DVD-RAM, the data area is divided into several zones in order to perform the rotation control called Z-CLV (Zoned Constant Linear Velocity) during recording and playback. Figure 3A shows the plurality of zones provided on a DVD-RAM. As shown in the figure, a DVD-RAM is divided into 24 zones, numbered zone 0 to zone 23. Each zone is a group of tracks that are accessed using the same angular velocity. In this embodiment, each zone includes 1888 tracks. The rotational angular velocity of the DVD-RAM is set separately for each zone, being higher the closer the zone is to the inner periphery of the disc. Dividing the data area into zones ensures that the optical pickup can move at a constant velocity while accessing an individual zone. By doing so, the recording density of the DVD-RAM is increased, and rotation control during recording and playback becomes easier. Figure 3B shows a horizontal arrangement of the lead-in area, the lead-out area, and zones 0 to 23 shown in Figure 3A. The lead-in area and the lead-out area each include a defect management area (DMA: Defect Management Area). This defect management area records position information showing the positions of sectors found to include defects, and replacement position information showing whether the sectors used to replace defective sectors are located in any of the replacement areas. Each zone has a user area, in addition to a replacement area and an unused area that are provided on the boundary with the next zone. A user area is an area that the file system can use as a recording area. The replacement area is used to replace defective sectors when such sectors are found. An unused area is an area that is not used for recording data. Only two tracks are used as the unused area, the unused area being provided to prevent the erroneous identification of sector addresses. The reason for this is that while sector addresses are recorded at the same position on adjacent tracks within the same zone, under Z-CLV the recording positions of sector addresses differ for adjacent tracks on the boundaries between zones.
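For concreteness, the relationship between sectors and ECC blocks described above can be sketched as follows (hypothetical helper functions; only the 2 KB sector size and the 16-sector ECC block grouping are taken from the text). Because error-correction data is generated per block, rewriting any one sector involves its whole 16-sector block.

    SECTOR_SIZE = 2 * 1024   # bytes per sector
    ECC_SECTORS = 16         # sectors per ECC block

    def ecc_block_of(sector_number: int) -> int:
        """ECC block that a given sector belongs to."""
        return sector_number // ECC_SECTORS

    def ecc_block_range(block: int) -> range:
        """Sectors that must be rewritten together when one of them changes."""
        first = block * ECC_SECTORS
        return range(first, first + ECC_SECTORS)

    print(ecc_block_of(35))            # 2
    print(list(ecc_block_range(2)))    # sectors 32..47
    print(ECC_SECTORS * SECTOR_SIZE)   # 32768 bytes per ECC block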
Because of this, sectors that are not used for recording data exist at the boundaries between zones. On a DVD-RAM, logical sector numbers (LSN: Logical Sector Number) are assigned to the physical sectors of the user areas in order, starting from the inner periphery, so as to consecutively indicate only the sectors used for data recording. As shown in Figure 3C, the area that records user data and is composed of sectors that have been assigned an LSN is called the volume area. The volume area is used to record AV files, which are each composed of a plurality of VOBs, and an RTRW (Real-Time Rewritable) management file that holds the management information for the AV files. These AV files and the RTRW management file are actually recorded using a file system complying with ISO/IEC 13346, although this will not be explained in the present embodiment. The file system is described in detail in the third embodiment, which comes later.

(1-2) Data Recorded in the Volume Area

Figure 4A shows the content of the data recorded in the volume area of a DVD-RAM. The video stream and the audio stream shown on the fifth level of Figure 4A are divided into units of approximately 2 KB, as shown on the fourth level. The units obtained through this division are interleaved into VOB#1 and VOB#2 in the AV file shown on the third level as video packs and audio packs complying with the MPEG standard. The AV file is divided into a plurality of extents, as shown on the second level, in accordance with ISO/IEC 13346, and these extents are each stored in an empty area within a zone in the volume area, as shown on the first level of Figure 4A. The information for VOB#1 to VOB#3 is recorded in the RTRW management file as the VOB#1 information, VOB#2 information, and VOB#3 information shown on the fifth level. In the same way as an AV file, this RTRW file is divided into a plurality of extents that are recorded in empty areas in the volume area. The following explanation deals with video streams, audio streams, and VOBs separately, after first explaining the hierarchical structure of the MPEG standard and the DVD-RAM standard that defines the data structures of these elements. Figure 4B shows the hierarchical structure of the data definitions used under the MPEG standard. The data structure of the MPEG standard is composed of an elementary stream layer and a system layer. The elementary stream layer shown in Figure 4B includes a video layer defining the data structure of video streams, an MPEG-Audio layer that defines the data structure of an MPEG-Audio stream, an AC-3 layer that defines the data structure of an audio stream under the Dolby-AC3 method, and a linear PCM layer that defines the data structure of an audio stream under the linear PCM method. A presentation start time (Presentation_Start_Time) and a presentation end time (Presentation_End_Time) are defined within the elementary stream layer, although, as shown by the separate frames for the video layer, MPEG-Audio layer, AC-3 layer, and linear PCM layer, the data structures of the video stream and the audio streams are independent of one another. The presentation start time and presentation end time of a video frame and the presentation start time and presentation end time of an audio frame are likewise not synchronized. The system layer shown in Figure 4B defines the packs, packets, DTS, and PTS described below.
In Figure 4B, the system layer is shown in a separate frame from the video layer and the audio layers, which shows that the packs, packets, DTS, and PTS are independent of the data structures of the video streams and the audio streams.
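This independence can be pictured as follows: a rough sketch, not the actual bit syntax of the MPEG system layer, in which the system layer carries timestamps and an opaque payload regardless of whether the payload bytes belong to a video or an audio elementary stream.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:              # system layer: one packet
        pts: Optional[int]     # presentation time stamp, 90 kHz ticks
        dts: Optional[int]     # decoding time stamp, 90 kHz ticks
        payload: bytes         # elementary-stream bytes, not interpreted here

    @dataclass
    class Pack:                # system layer: one 2 KB pack
        scr: int               # system clock reference of the pack
        packet: Packet         # under the DVD-RAM standard, one packet per pack

    # The same container works for either stream type:
    video_pack = Pack(scr=0, packet=Packet(pts=3003, dts=0, payload=b"..."))
    audio_pack = Pack(scr=1200, packet=Packet(pts=0, dts=None, payload=b"..."))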
While the above layer structure is used for the MPEG standard, the DVD-RAM standard includes the system layer of the MPEG standard shown in Figure 4B and an elementary stream layer. In addition to the packs, packets, DTS, and PTS described above, the DVD standard defines the data structures of the VOBs shown in Figure 4A.

(1-2-1) Video Stream

The video stream shown in Figure 5A has a data structure that is defined by the video layer shown in Figure 4B. Each video stream is composed of an arrangement of a plurality of picture data sets, each corresponding to one frame of video images. This picture data is a video signal complying with the NTSC (National Television System Committee) or PAL (Phase Alternation Line) standard that has been compressed using MPEG techniques. Picture data sets produced by compressing a video signal under the NTSC standard are displayed as video frames having a frame interval of approximately 33 ms (1/29.97 seconds, to be precise), while picture data sets produced by compressing a video signal under the PAL standard are displayed as video frames having a frame interval of 40 ms. The top level of Figure 5A shows example video frames. In Figure 5A, the sections indicated between the symbols "<" and ">" are each one video frame, with the "<" symbol showing the presentation start time (Presentation_Start_Time) of each video frame and the ">" symbol showing the presentation end time (Presentation_End_Time) of each video frame. This notation for video frames is also used in the following drawings. The sections enclosed by these symbols each include a plurality of fields. When compression is performed in accordance with the MPEG standards, the spatial frequency characteristics within the image of a frame and the time-related correlation with images that are displayed before or after the frame are used. In doing so, each picture data set becomes one of a Bidirectionally Predictive (B) picture, a Predictive (P) picture, or an Intra (I) picture. A B picture is compressed using the time-related correlation with images that are reproduced both before and after the present image. A P picture is compressed using the time-related correlation with images that are reproduced before the present image. An I picture is compressed using the spatial frequency characteristics within a single frame, without using any time-related correlation with other images. Figure 5A shows the B pictures, P pictures, and I pictures as all having the same size, although it should be noted that in reality their sizes vary considerably. When decoding a B picture or a P picture using the time-related correlation between frames, it is necessary to refer to the images that are reproduced before or after the picture being decoded. For example, when a B picture is decoded, the decoder has to wait until the decoding of the following picture has been completed. As a result, an MPEG video stream defines a coding order for the pictures as well as defining the display order of the pictures. In Figure 5A, the second and third levels respectively show the picture data sets arranged in the display order and in the coding order.
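The difference between the two orders can be illustrated with a small sketch, using a hypothetical GOP pattern: each I or P picture must be decoded before the B pictures that precede it in display order, so the encoder emits the reference picture first (real streams express this reordering through the DTS and PTS timestamps).

    def display_to_coding_order(display_order):
        """Reorder pictures so every reference precedes the B pictures
        that need it for decoding."""
        coding_order, pending_b = [], []
        for picture in display_order:
            if picture.startswith("B"):
                pending_b.append(picture)     # wait for the next reference
            else:                             # I or P reference picture
                coding_order.append(picture)  # reference comes first ...
                coding_order += pending_b     # ... then the Bs that use it
                pending_b = []
        return coding_order + pending_b

    gop = ["B1", "B2", "I3", "B4", "B5", "P6", "B7", "B8", "P9"]
    print(display_to_coding_order(gop))
    # ['I3', 'B1', 'B2', 'P6', 'B4', 'B5', 'P9', 'B7', 'B8']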
In Figure 5A, the reference target of one of the B pictures is shown by the broken line that leads to the following I picture. In the display order, this I picture comes after the B picture, but since the B picture is compressed using the time-related correlation with the I picture, the decoding of the B picture has to wait until the decoding of the I picture has finished. As a result, the coding order puts the I picture before the B picture. This rearrangement of the display order of the pictures when the coding order is generated is called "reordering". As shown on the third level of Figure 5A, each picture data set is divided into 2 KB units after being arranged in the coding order. The resulting 2 KB units are stored as a sequence of video packs, as shown on the bottom level of Figure 5A. When a sequence of B pictures and P pictures is used, problems can be caused for special playback functions that start decoding from a midpoint in the video stream. To prevent these problems, an I picture is inserted into the video data at intervals of approximately 0.5 seconds. Each picture data sequence that starts from an I picture and continues until the next I picture is called a GOP (Group Of Pictures), with GOPs being defined in the video layer of the MPEG standard as the unit for MPEG compression. On the third level of Figure 5A, the dotted vertical line shows the boundary between the present GOP and the next GOP. In each GOP, the picture type of the picture data that comes last in the display order is a P picture, while the picture type of the picture data that comes first in the coding order must be an I picture.

(1-2-2) Audio Stream

The audio stream is data that has been compressed according to one of the Dolby-AC3 method, the MPEG method, and the linear PCM method. Like a video stream, an audio stream is generated using audio frames that have a fixed frame interval. Figure 5B shows the correspondence between audio frames and audio data. In detail, the playback period of one audio frame is 32 ms for Dolby-AC3, 24 ms for MPEG, and approximately 1.67 ms (1/600 seconds, to be precise) for linear PCM. The top level of Figure 5B shows example audio frames. In Figure 5B, each section indicated between the symbols "<" and ">" is one audio frame, with the "<" symbol showing the presentation start time and the ">" symbol showing the presentation end time. This notation for audio frames is also used in the following drawings. The audio data that is to be played back for an audio frame must be input into the decoder before the presentation start time of the audio frame and is taken from the buffer by the decoder at the presentation start time.
The bottom level of Figure 5B shows an example of how the audio data to be played back in each frame is stored in audio packs. In this figure, the audio data to be played back for the audio frames f81 and f82 is stored in the audio pack A71, the audio data to be played back for the audio frame f84 is stored in the audio pack A72, and the audio data to be played back for the audio frames f86 and f87 is stored in the audio pack A73. The audio data to be played back for the audio frame f83 is divided between the audio pack A71, which comes first, and the audio pack A72, which comes later. In the same way, the audio data to be played back for the audio frame f85 is divided between the audio pack A72, which comes first, and the audio pack A73, which comes later. The reason the audio data to be played back for an audio frame can be stored divided between two audio packs is that the boundaries between audio frames do not correspond to the boundaries between packs. These boundaries do not correspond because the data structure of the packs under the MPEG standard is independent of the data structures of the video streams and the audio streams.

(1-2-3) Data Structure of a VOB

The VOBs (Video Objects) #1, #2, #3 ... shown in Figure 4A are program streams complying with ISO/IEC 13818-1 that are obtained by multiplexing a video stream and an audio stream, although these VOBs do not have a Program_End_Code at their end. Figure 6A shows the detailed hierarchy of the logical construction of a VOB. In this figure, the logical format located at the highest level is shown in more detail at the lower levels. The video stream located at the highest level of Figure 6A is shown divided into a plurality of GOPs on the second level, these GOPs having been described with reference to Figure 5A. As in Figure 5A, the picture data in GOP units is divided into a large number of 2 KB units. Meanwhile, the audio stream shown at the left of the highest level in Figure 6A is divided into a large number of units of approximately 2 KB on the third level, in the same way as in Figure 5B. The picture data for a GOP unit that has been divided into 2 KB units is interleaved with the audio stream that has been similarly divided into units of approximately 2 KB. This produces the pack sequence on the fourth level of Figure 6A. This pack sequence forms a plurality of VOBUs (Video Object Units) that are shown on the fifth level, with the VOB (video object) shown on the sixth level being composed of a plurality of these VOBUs arranged in a time series. In Figure 6A, the guide lines drawn using broken lines show the relationships between the data structures on adjacent levels. By referring to the guide lines in Figure 6A, it can be seen that the VOBUs on the fifth level correspond to the pack sequences on the fourth level and to the picture data in GOP units shown on the second level. As can be seen by tracing the guide lines, each VOBU is a unit that includes at least one GOP composed of picture data with a playback period of approximately 0.4 to 1.0 seconds and the audio data that has been interleaved with this picture data. At the same time, each VOBU is composed of an arrangement of video packs and audio packs complying with the MPEG standard.
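How an audio frame comes to straddle two packs, as in Figure 5B, can be sketched as follows: the frames are simply laid end to end and cut into fixed-size pack payloads. The payload size and the 1792-byte frame size are illustrative only, since real packs lose part of their 2 KB to headers and AC-3 frame sizes depend on the bit rate.

    PAYLOAD = 2000  # usable bytes per 2 KB pack after headers, illustrative

    def packetize(frame_sizes):
        """Map each frame to the pack number(s) its bytes fall into."""
        placement, offset = {}, 0
        for n, size in enumerate(frame_sizes):
            first_pack = offset // PAYLOAD
            last_pack = (offset + size - 1) // PAYLOAD
            placement[f"frame{n}"] = list(range(first_pack, last_pack + 1))
            offset += size
        return placement

    # Three audio frames of 1792 bytes each:
    print(packetize([1792, 1792, 1792]))
    # {'frame0': [0], 'frame1': [0, 1], 'frame2': [1, 2]}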
(1-2-3) Data Structure of a VOB

The VOBs (Video Objects) #1, #2, #3 ... shown in Figure 4A are program streams under ISO/IEC 13818-1 that are obtained by multiplexing a video stream and an audio stream, although these VOBs have no Program_End_Code at the end. Figure 6A shows the detailed hierarchy of the logical construction of a VOB: the logical format located at the highest level of Figure 6A is shown in progressively more detail at the lower levels. The video stream located at the highest level in Figure 6A is shown divided into a plurality of GOPs in the second level, these GOPs having been described with reference to Figure 5A. As in Figure 5A, the picture data in GOP units is divided into a large number of 2 KB units. Meanwhile, the audio stream shown to the left of the highest level in Figure 6A is divided into a large number of units of approximately 2 KB at the third level, in the same way as in Figure 5B. The picture data for a GOP unit that has been divided into 2 KB units is interleaved with the audio stream that has been similarly divided into units of approximately 2 KB. This produces the pack sequence in the fourth level of Figure 6A. This pack sequence forms the plurality of VOBUs (Video Object Units) shown at the fifth level, and the VOB (Video Object) shown at the sixth level is composed of a plurality of these VOBUs arranged in a time series. In Figure 6A, the guide lines drawn with dashes show the relationships between the data structures at adjacent levels. By referring to the guide lines in Figure 6A, it can be seen that the VOBUs at the fifth level correspond to the pack sequences at the fourth level and to the picture data in the GOP units shown at the second level. As can be seen by tracing the guide lines, each VOBU is a unit that includes at least one GOP composed of picture data with a playback period of approximately 0.4 to 1.0 seconds, together with the audio data that has been interleaved with this picture data. At the same time, each VOBU is composed of an arrangement of video packs and audio packs under the MPEG standard.

The unit called a GOP under the MPEG standard specifies only the video data, so while the video data in a VOBU forms a GOP, as shown in the second level of Figure 6A, the audio data and other data (such as sub-picture data and control data) that are multiplexed with the video data are not covered by the GOP. Under the DVD-RAM standard, the expression "VOBU" is used for a unit that corresponds to a GOP, this unit being a general name for at least one GOP composed of picture data with a playback period of approximately 0.4 to 1.0 seconds, plus the audio data that has been interleaved with this picture data. Here, it is possible for parts of a VOB to be erased, with the minimum unit of erasure being one VOBU. As an example, the video stream recorded on a DVD-RAM as a VOB may contain pictures for a commercial that is not wanted by the user. The VOBUs in this VOB include at least one GOP composing the commercial and the audio data interleaved with this picture data, so if just the VOBUs in the VOB that correspond to the commercial are deleted, the user will be able to watch the video stream without having to watch the commercial. Even if a VOBU is erased, the VOBUs on either side of the deleted VOBU will each include a portion of the video stream in GOP units with an I picture located at the front. This means that normal decoding and playback remain possible even after a VOBU has been erased. Figure 6B shows an example where part of a VOB is deleted. This VOB originally includes VOBU#1, VOBU#2, VOBU#3, VOBU#4 ... VOBU#7. When the deletion of VOBU#2, VOBU#4, and VOBU#6 is indicated, the areas originally occupied by these VOBUs are freed and so are shown as empty areas in the second level of Figure 6B. When the VOB is played thereafter, the playback order is VOBU#1, VOBU#3, VOBU#5, and VOBU#7. The video packs and audio packs included in a VOBU each have a data length of 2 KB. This 2 KB size corresponds to the sector size of a DVD-RAM, so each video pack and audio pack is recorded in a separate sector. The arrangement of video packs and audio packs corresponds to an arrangement of an equal number of consecutive logical sectors, from which the data held within these packs is read from the DVD-RAM. That is, the arrangement of video packs and audio packs reflects the order in which these packs are read from the DVD-RAM. Since each video pack is approximately 2 KB in size, if the data size of the video stream for a VOBU is several hundred KB, for example, the video stream will be stored divided into several hundred video packs.

(1-2-3-1) Data Structures of Video Packs and Audio Packs

Figures 6C to 6E show the logical format of the video packs and audio packs stored in a VOBU. Normally, a plurality of packets are inserted into one pack in an MPEG system stream, although under the DVD-RAM standard the number of packets that can be inserted into a pack is restricted to one. Figure 6C shows the logical format of the video pack arranged at the start of a VOBU. As shown in Figure 6C, the first video pack in a VOBU is composed of a pack header, a system header, a packet header, and video data that is part of the video stream.
Figure 6D shows the logical format of video packs that are not first in a VOBU. As shown in Figure 6D, these video packs are each composed of a pack header, a packet header, and video data, without the system header. Figure 6E shows the logical format of the audio packs. As shown in Figure 6E, each audio pack is composed of a pack header, a packet header, a Sub_stream_id that shows whether the compression method used for the audio stream included in the present pack is Linear PCM or Dolby AC-3, and audio data that is part of the audio stream and has been compressed according to the indicated method.

(1-2-3-2-1) Buffer Control within a VOB

The video stream and the audio stream are stored in video packs and audio packs as described above. However, in order to play VOBs seamlessly, it is not sufficient merely to store the video stream and the audio stream in video packs and audio packs; it is also necessary to arrange the video packs and audio packs so that buffer control is uninterrupted. The buffers referred to here are input buffers that temporarily store the video stream and the audio stream before input into a decoder. Hereinafter, the separate buffers are referred to as the video buffer and the audio buffer, with specific examples shown as the video buffer 4b and the audio buffer 4d in Figure 19. Uninterrupted buffer control refers to input control for the buffers that ensures that no overflow or underflow will occur in either input buffer. This is described in more detail later, but it is achieved primarily by assigning the timestamps (which show the correct times for the input, output, and display of data) standardized for an MPEG stream to the pack header and the packet header shown in Figure 6D and Figure 6E. If no underflows or overflows occur in the video buffer and the audio buffer, there will be no interruptions in the playback of the video stream and the audio stream. As will become clear from this specification, it is very important that buffer control be uninterrupted. There is a time limitation whereby each set of audio data needs to be transferred to the audio buffer and decoded by the presentation start time of the audio frame to be played using that data. Since audio streams are encoded using fixed-length coding with a relatively small amount of data, the data required for playback of each audio frame can easily be stored in audio packs. These audio packs are transferred to the audio buffer during playback, meaning that the time limitation described above can be easily observed. Figure 7A shows the ideal buffer operation for the audio buffer. This figure shows how the buffer occupancy changes over a sequence of audio frames. In this specification, the term "buffer occupancy" refers to the extent to which the capacity of a buffer to store data is being used. The vertical axis of Figure 7A shows the occupancy of the audio buffer, while the horizontal axis represents time. The time axis is divided into sections of 32 msec, corresponding to the playback period of each audio frame under the Dolby AC-3 method. By referring to this graph, it can be seen that the buffer occupancy changes over time in a sawtooth pattern.
The height of each triangular tooth making up the sawtooth pattern represents the amount of data in the part of the audio stream to be played in each audio frame. The gradient of each triangular tooth represents the transfer rate of the audio stream; this transfer rate is the same for all audio frames. During the period corresponding to one triangular tooth, audio data accumulates at a constant transfer rate during the playback period (32 msec) of the audio frame preceding the audio frame to be played using this data. At the presentation end time of the preceding audio frame (which is the decoding time for the present frame), the audio data for the present frame is taken from the audio buffer instantaneously. The sawtooth pattern arises because this processing, from buffering to transfer out of the buffer, is continuously repeated. As an example, suppose that the transfer of an audio stream to the audio buffer starts at time T1. This audio data is to be played at time T2, so the amount of data stored in the audio buffer gradually increases between time T1 and time T2 due to the transfer of this audio data. However, because this audio data is taken from the buffer at the presentation end time of the preceding audio frame, the audio buffer is cleared of the audio data at that point, and the occupancy of the audio buffer returns to zero. In Figure 7A, the same pattern is repeated between time T2 and time T3, between time T3 and time T4, and so on. The buffer operation shown in Figure 7A is the ideal buffer operation under the premise that the audio data to be played in each audio frame is stored in one audio pack. In reality, however, it is normal for the audio data to be played in several different audio frames to be stored in one audio pack, as shown in Figure 5B. Figure 7B shows a more realistic operation of the audio buffer. In this figure, audio pack A31 stores the audio data A21, A22, and A23 that must be decoded by the presentation end times of audio frames f21, f22, and f23 respectively. As shown in Figure 7B, only the decoding of audio data A21 is finished at the presentation time of audio frame f21, with the decoding of the other audio data sets A22 and A23 being finished by the presentation end times of the following audio frames f22 and f23 respectively. Of the audio data included in this audio pack, A21 must be decoded first, with the decoding of this audio data needing to be finished by the presentation end time of audio frame f21. Therefore, this audio pack must be read from the DVD-RAM during the playback period of audio frame f21.
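The sawtooth operation described above can be reproduced numerically: data accumulates at a constant rate, and one frame's worth of data is removed at each decode instant. A minimal sketch, assuming the AC-3 frame period and an illustrative transfer rate chosen so that each tooth exactly empties:

```python
FRAME_PERIOD_MS = 32        # Dolby AC-3 audio frame period
RATE_BYTES_PER_MS = 24      # assumed transfer rate into the audio buffer
FRAME_BYTES = FRAME_PERIOD_MS * RATE_BYTES_PER_MS   # 768 bytes per frame

occupancy = 0
trace = []
for ms in range(1, 5 * FRAME_PERIOD_MS + 1):
    occupancy += RATE_BYTES_PER_MS      # constant-rate fill (rising edge of a tooth)
    trace.append(occupancy)             # occupancy just before any decode instant
    if ms % FRAME_PERIOD_MS == 0:       # decode instant of the next frame:
        occupancy -= FRAME_BYTES        # the frame's data is removed at once

print(max(trace), occupancy)   # 768 0: each tooth peaks at one frame of data, then empties
```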
Video streams are encoded with variable-length coding due to the large differences in code size between the different picture types (I pictures, P pictures, and B pictures) used in compression methods that exploit time-based correlation. Video streams also contain a large amount of data, so it is difficult to complete the transfer of the picture data for a video frame, especially the picture data for an I picture, by the presentation end time of the preceding video frame. Figure 7C is a graph showing the video frames and the occupancy of the video buffer. In Figure 7C, the vertical axis represents the occupancy of the video buffer, while the horizontal axis represents time. The horizontal axis is divided into sections of 33 msec, each corresponding to the video frame playback period under the NTSC standard. By referring to this graph, it can be seen that the occupancy of the video buffer changes over time in a sawtooth pattern. The height of each triangular tooth making up the sawtooth pattern represents the amount of data in the part of the video stream to be played in each video frame. As mentioned above, the amount of data is not equal for every video frame, since the amount of code for each video frame is assigned dynamically according to the complexity of the frame. The gradient of each triangular tooth shows the transfer rate of the video stream. The approximate transfer rate of the video stream is calculated by subtracting the output rate of the audio stream from the output rate of the track buffer. This transfer rate is the same during every frame period. During the period corresponding to a triangular tooth in Figure 7C, picture data accumulates at a constant transfer rate during the playback period (33 msec) of the video frame that precedes the video frame to be played using this picture data. At the presentation end time of the preceding video frame (this time representing the decoding time for the present picture data), the picture data for the present picture is taken from the video buffer instantaneously. The sawtooth pattern arises because this processing, from storage in the video buffer to transfer out of the video buffer, is constantly repeated. When the picture to be displayed in a given video frame is complex, a large amount of code needs to be assigned to this frame. When a large amount of code is assigned, the storage of the data into the video buffer needs to be started correspondingly early. The period from the transfer start time, at which the transfer of the picture data into the video buffer begins, to the decoding time of the picture data is called the VBV (Video Buffering Verifier) delay. In general, the more complex the picture, the greater the amount of code assigned and the longer the VBV delay. As can be seen from Figure 7C, the transfer of the picture data that is decoded at the presentation end time T16 of the preceding video frame starts at time T11. The transfer of the picture data that is decoded at the presentation end time T18 of the preceding video frame, meanwhile, starts at time T12. The transfers of picture data for the other video frames can be seen to start at times T14, T15, T17, T19, T20, and T21. Figure 7D shows the transfer of sets of picture data in more detail. Continuing from the situation in Figure 7C, the transfer of the picture data to be decoded at time T24 in Figure 7D needs to be completed within the "Tf_Period" between the start time T23 of the VBV delay and the start of the transfer of the picture data for the next video frame to be played. The increase in buffer occupancy from this Tf_Period onward is caused by the transfer of the picture data for the picture to be displayed in the next video frame. The picture data accumulated in the video buffer waits for the time T24 at which it is to be decoded. At the decoding time T24, picture A, which is part of the picture data stored in the video buffer, is decoded, thereby reducing the total occupancy of the video buffer.
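The relation between picture size, transfer rate, and VBV delay can be sketched directly: transfer must begin early enough for the whole picture to be in the buffer by its decode time. The rate and picture sizes below are assumptions chosen only for illustration:

```python
V_RATE = 8_000_000 / 8      # assumed video transfer rate into the buffer, bytes/s

def transfer_start(decode_time, picture_bytes):
    """Latest time transfer may begin so the picture is fully buffered by
    decode_time. The interval decode_time - start is the VBV delay: larger
    pictures (typically I pictures) need an earlier start."""
    return decode_time - picture_bytes / V_RATE

print(transfer_start(1.000, 250_000))  # large I picture: start at 0.75 s
print(transfer_start(1.033,  20_000))  # small B picture: start at about 1.013 s
```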
From the above situation it can be seen that while it is sufficient for the transfer of the audio data to be played in a given audio frame to start around one frame in advance, the transfer of the picture data for a given video frame needs to start well before the decoding time of that picture data. In other words, the audio data to be played in a given audio frame is input into its buffer at around the same time as the picture data for a video frame that lies well ahead of that audio frame. This means that when the audio stream and the video stream are multiplexed into an MPEG stream, the video data needs to be multiplexed ahead of the audio data with which it will be presented. As a result, a VOBU is in fact composed of video data that will be played later than the audio data stored alongside it. The arrangement of the plurality of video packs and audio packs described above reflects the order in which the data included in the packs is transferred. Accordingly, to have the audio data to be played in an audio frame read at about the same time as the picture data to be played in a video frame that lies well ahead of that audio frame, the audio packs and the video packs storing the audio data and video data in question need to be arranged in the same part of the VOB. Figure 8A shows how the audio packs, which store the audio data to be played in each audio frame, and the video packs, which store the picture data to be played in each video frame, should be arranged. In Figure 8A, the rectangles marked "V" and "A" represent individual video packs and audio packs. Figure 8B shows the meaning of the width and height of these rectangles. As shown in Figure 8B, the height of each rectangle shows the bitrate used to transfer the pack. Tall packs are transferred at a high bitrate, which means the pack can be input into a buffer relatively quickly. Short packs are transferred at a low bitrate, and so take a relatively long time to be transferred into the buffer. The picture data V11 that is decoded at time T11 in Figure 8A is transferred during period k11. Since the transfer and decoding of the audio data A11 take place during this period k11, the video packs storing the video data V11 and the audio packs storing the audio data A11 are arranged in similar positions, as shown at the bottom of Figure 8A. The picture data V12 that is decoded at time T12 in Figure 8A is transferred during period k12. Since the transfer and decoding of the audio data A12 take place during this period k12, the video packs storing the video data V12 and the audio packs storing the audio data A12 are likewise arranged in similar positions, as shown at the bottom of Figure 8A. In the same way, the audio data A13, A14, and A15 are arranged in positions similar to the picture data V13 and V14 whose transfer starts at the transfer times of these audio data sets. Note that when video data with a large amount of assigned code, such as the picture data V16, is being accumulated in the buffer, a plurality of audio data sets A15, A16, and A17 are multiplexed during k16, the period for transferring the video data V16. Figure 9 shows how audio packs that store sets of audio data to be played in a plurality of audio frames and video packs that store the picture data to be played in each video frame can be arranged.
In Figure 9, audio pack A31 stores the audio data A21, A22, and A23 to be played for audio frames f21, f22, and f23. Of the audio data stored in audio pack A31, the first to be decoded is the audio data A21. Since the audio data A21 needs to be decoded by the presentation end time of audio frame f20, this audio data A21 needs to be read from the DVD-RAM together with the picture data V11 that is transferred during the same period (period k11) as audio frame f20. As a result, audio pack A31 is arranged close to the video packs storing the picture data V11. Considering that an audio pack can store audio data to be decoded over several audio frames, and that audio packs are arranged in positions similar to video packs whose picture data is to be decoded further in the future, it follows that audio data and video data to be decoded at the same time may be stored in audio packs and video packs located at different positions within a VOB. However, there will be no case where video packs storing picture data to be decoded one second or more later are arranged together with audio data that is to be decoded at a given time. This is because the MPEG standard defines an upper limit on the time data can be accumulated in a buffer: all data must be transferred out of the buffer within one second of being input into the buffer. This restriction is called the "one-second rule" of the MPEG standard. Due to the one-second rule, even if audio data and video data to be decoded at the same time are arranged at different positions, the audio pack storing the audio data to be decoded at a given time will definitely be located within a range of three VOBUs of the VOBU storing the picture data to be decoded at that time.
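The one-second rule itself reduces to a simple check on the timestamps carried by each pack: the decode time (DTS) of the data in a pack may not exceed its buffer input time (SCR) by more than one second. A sketch, with times in seconds and invented names:

```python
MAX_RESIDENCE = 1.0   # MPEG limit on how long data may sit in a decoder buffer

def obeys_one_second_rule(packs):
    """packs: iterable of (scr, dts) pairs, both in seconds."""
    return all(dts - scr <= MAX_RESIDENCE for scr, dts in packs)

print(obeys_one_second_rule([(0.0, 0.4), (0.1, 0.9)]))   # True
print(obeys_one_second_rule([(0.0, 1.3)]))               # False: held for 1.3 s
```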
(1-2-3-2-2) Buffer Control between VOBs

The following explanation deals with the buffer control performed when two or more VOBs are played in succession. Figure 10A shows the buffer state for the first part of a video stream. In Figure 10A, the input of the pack that includes the picture data starts at the point indicated as First_SCR during video frame f71, with the amount of data shown as BT2 having been transferred by the presentation end time of video frame f72. Similarly, the amount of data BT3 has been accumulated in the buffer by the presentation end time of video frame f73. This data is read from the video buffer by the video decoder at the presentation end time of video frame f74, this time being indicated hereinafter by the notation First_DTS. In this way, the state of the buffer changes as shown in Figure 10A, with no data from a preceding video stream present at the start and the accumulated amount of data gradually increasing to trace a triangular shape. Note that Figure 10A is plotted on the premise that a video pack is input at the First_SCR time; when the pack placed at the front of a VOB is a different kind of pack, the start of the increase in the amount of stored data will not match the First_SCR time. Also, the reason the Last_SCR falls at a midpoint within a video frame is that the data structure of the packs is unrelated to the data structure of the video data. Figure 10B shows the buffer state during the last part of a video stream. In this drawing, the input of data into the video buffer is finished by the Last_SCR time, which falls at a midpoint within video frame f61. After this, the amount Δ3 of the accumulated video data is taken from the video buffer at the presentation end time of video frame f61. It can then be seen that the amount Δ4 is taken from the video buffer at the presentation end time of video frame f62, and the amount Δ5 at the presentation end time of video frame f63. This last time is also called the Last_DTS. For the last part of a VOB, the input of the video packs and audio packs is finished by the time shown as Last_SCR in Figure 10B, so the amount of data stored in the video buffer subsequently decreases in steps as the video frames f61, f62, f63, and f64 are decoded. As a result, the buffer occupancy decreases in steps at the end of a video stream, as shown in Figure 10B. Figure 10C shows the buffer state across two VOBs. In more detail, this drawing shows the case where the last part of a video stream producing the buffer state shown in Figure 10B is seamlessly linked to the front of another video stream producing the buffer state shown in Figure 10A. When these two video streams are seamlessly linked, the First_DTS of the second video stream to be played needs to come one video frame after the Last_DTS of the first video stream. In other words, the decoding of the first video frame in the second video stream needs to be performed after the decoding of the video frame with the final decoding time in the first video stream. If the interval between the Last_DTS of the first video stream and the First_DTS of the second video stream is equivalent to one video frame, the picture data of the last part of the first video stream will coexist in the video buffer with the picture data of the first part of the second video stream, as shown in Figure 10C. In Figure 10C, it is assumed that the video frames f71, f72, and f73 shown in Figure 10A correspond to the video frames f61, f62, and f63 shown in Figure 10B. Under these conditions, at the presentation end time of video frame f71, the picture data BE1 of the last part of the first video stream and the picture data BT1 of the first part of the second video stream are present in the video buffer. At the presentation end time of video frame f72, the picture data BE2 of the last part of the first video stream and the picture data BT2 of the first part of the second video stream are present in the video buffer. At the presentation end time of video frame f73, the picture data BE3 of the last part of the first video stream and the picture data BT3 of the first part of the second video stream are present in the video buffer. As the decoding of the video frames progresses, the picture data of the last part of the first video stream decreases in steps, while the picture data of the first part of the second video stream increases gradually. These decreases and increases occur concurrently, so the buffer state shown in Figure 10C exhibits a sawtooth pattern that closely resembles the buffer state shown for a single VOB in Figure 7C.
Note here that each of the totals BT1+BE1 (the sum of the amounts of data BT1 and BE1), BT2+BE2, and BT3+BE3 must be below the capacity of the video buffer. If any of these totals BT1+BE1, BT2+BE2, or BT3+BE3 exceeds the capacity of the video buffer, an overflow will occur in the video buffer. If the highest of these totals is expressed as Bv1+Bv2, this value Bv1+Bv2 must be within the capacity of the buffer.
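This overflow condition can be checked by summing the coexisting occupancies at each frame boundary. A sketch, where the buffer capacity and the BE/BT values are illustrative assumptions (in a real check they would be derived from the streams themselves):

```python
BUFFER_CAPACITY = 224 * 1024   # assumed video buffer size (224 KB) for this sketch

def link_fits(be, bt):
    """be: remaining bytes of the former VOB at each frame boundary;
    bt: accumulated bytes of the latter VOB at the same boundaries.
    True when the coexisting data never exceeds the buffer capacity."""
    return all(b1 + b2 <= BUFFER_CAPACITY for b1, b2 in zip(be, bt))

# BE decreases in steps while BT grows; their sum must never exceed capacity.
print(link_fits([180_000, 120_000, 40_000], [30_000, 90_000, 150_000]))  # True
```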
(1-2-3-3) Pack Header, System Header, Packet Header

The information for the buffer control described above is written as timestamps in the pack header, system header, and packet header shown in Figures 6F to 6H, which show the logical formats of these headers. As shown in Figure 6F, the pack header includes a Pack_Start_Code, an SCR (System Clock Reference) that shows the time at which the data stored in the present pack is to be input into the video buffer or the audio buffer, and a Program_Mux_Rate. In a VOB, the first SCR is set as the initial value of the STC (System Time Clock) that is provided as a standard feature of a decoder under the MPEG standard. The system header shown in Figure 6G is appended only to the video pack located at the start of a VOBU. This system header includes maximum rate information (shown as "Rate.Bound.Info" in Figure 6G) that shows the transfer rate to be requested of the playback apparatus when the data is input, and buffer size information (shown as "Buffer.Bound.Info" in Figure 6G) that shows the largest buffer size to be requested of the playback apparatus when the data in the VOBU is input. The packet header shown in Figure 6H includes a DTS (Decoding Time Stamp) that shows the decoding time and, for a video stream, a PTS (Presentation Time Stamp) that shows the time at which the decoded data is to be output after the reordering of the decoded video stream. The PTS and the DTS are set based on the presentation start time of a video frame or an audio frame. The data structure allows a PTS and a DTS to be set for every packet, but it is rare for this information to be set for the picture data of every video frame; it is common for this information to be designated once per GOP, which is to say once per 0.5 seconds of playback time. Every video pack and every audio pack, however, is assigned an SCR. For a video stream, it is common for a PTS to be assigned to each video frame in a GOP, while for an audio stream it is common for a PTS to be assigned every one or two audio frames. For an audio stream there is no difference between the display order and the coding order, so no DTS is required. When an audio pack stores the audio data to be played for two or more audio frames, a PTS is written at the start of the audio pack. As an example, the audio pack A71 shown in Figure 5B can be given the presentation start time of audio frame f81 as its PTS. On the other hand, the audio pack A72, which stores part of the divided audio frame f83, must be given the presentation start time of audio frame f84, not the presentation time of audio frame f83, as its PTS. This is also the case for audio pack A73, which must be given the presentation start time of audio frame f86, not the presentation start time of audio frame f85, as its PTS.

(1-2-3-4) Continuity of Timestamps

The following is an explanation of the values set as the PTS, DTS, and SCR for the video packs and the audio packs, as shown in Figures 6F to 6H.
Figure 11A is a graph showing the SCR values of the packs included in a VOB, in the order the packs are arranged in the VOB. The horizontal axis shows the order of the video packs, with the vertical axis showing the SCR value assigned to each pack. The first SCR value in Figure 11A is not zero, but rather a predetermined value shown as Init1. The reason the first SCR value is not zero is that the VOBs processed by a video data editing apparatus undergo many editing operations, so there are many cases where the first part of a VOB will already have been deleted. The initial SCR value of a VOB that has just been encoded will of course be zero, but this embodiment assumes that the initial SCR value of a VOB is not zero, as shown in Figure 11A. In Figure 11A, the closer a video pack is to the start of the VOB, the lower the SCR value of that video pack; the farther a video pack is from the start of the VOB, the higher the SCR value of that video pack. This characteristic is referred to as the "continuity of timestamps", and the same continuity is exhibited by the DTS. Although the coding order of the video packs is such that a later video pack may actually be displayed before an earlier video pack, meaning that the PTS of the later pack has a lower value than that of the earlier pack, the PTS still exhibits approximate continuity in the same way as the SCR and the DTS. The SCRs of the audio packs exhibit continuity in the same way as those of the video packs. The continuity of the SCR, DTS, and PTS is a precondition for the proper decoding of a VOB. The following is an explanation of the values used for the SCR to maintain this continuity. In Figure 11B, the straight line showing the SCR values in section B is an extension of the straight line showing the SCR values in section A. This means there is continuity in the SCR values between section A and section B. In Figure 11C, the first SCR value in section D is greater than the highest value on the straight line showing the SCR values in section C. However, in this case too, the closer a pack is to the start of the VOB, the lower its SCR value, and the farther a video pack is from the start of the VOB, the higher its SCR value. This means there is continuity of the timestamps between section C and section D. Here, when the difference between timestamps is large, the timestamps are naturally non-continuous. Under the MPEG standard, the difference between pairs of timestamps, such as the SCRs, must not exceed 0.7 seconds, so areas in the data where this value is exceeded are treated as non-continuous. In Figure 11D, the last SCR value in section E is higher than the first value on the straight line showing the SCR values in section F. In this case, the rule that the closer a pack is to the start of the VOB the lower its SCR value, and the farther a video pack is from the start of the VOB the higher its SCR value, no longer holds, so there is no continuity in the timestamps between section E and section F. When there is no continuity in the timestamps, as in the example of section E and section F, the former and latter sections are managed as separate VOBs.
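The continuity condition on the SCRs, values that increase along the VOB with no jump exceeding 0.7 seconds, can be written as a direct check. A sketch over SCR values already converted to seconds:

```python
MAX_JUMP = 0.7   # timestamp differences above this are treated as non-continuous

def scrs_continuous(scrs):
    """scrs: SCR values of consecutive packs, in seconds. Continuous when the
    values never decrease and no step between neighbours exceeds MAX_JUMP."""
    return all(0 <= b - a <= MAX_JUMP for a, b in zip(scrs, scrs[1:]))

print(scrs_continuous([5.0, 5.1, 5.3, 5.9]))   # True: like sections C and D
print(scrs_continuous([5.0, 5.1, 4.2]))        # False: like sections E and F
```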
Note that the buffer control between VOBs and the multiplexing method are described in detail in the PCT applications WO 97/13367 and WO 97/13363.

(1-2-4) AV Files

An AV file is a file that records at least one VOB to be played consecutively. When a plurality of VOBs are held within one AV file, these VOBs are played in the order in which they are stored in the AV file. For the example in Figure 4, the three VOBs, VOB#1, VOB#2, and VOB#3, are stored in one AV file, and these VOBs are played in the order VOB#1, VOB#2, VOB#3. When VOBs are stored in this manner, the buffer state for the video stream placed at the end of the first VOB to be played and the video stream placed at the start of the next VOB to be played will be as shown in Figure 10C. Here, if the maximum amount of data Bv1+Bv2 stored in the buffer exceeds the capacity of the buffer, or if the first timestamp in the VOB to be played second is not continuous with the last timestamp of the VOB to be played first, there is a danger that seamless playback of the first and second VOBs will not be possible.

(1-3) Logical Construction of the RTRW Management File

The following is an explanation of the RTRW management file. The RTRW management file holds information showing the attributes of each VOB stored in an AV file. Figure 12A shows the detailed hierarchical structure in which data is stored in the RTRW management file. The logical format shown on the right in Figure 12A is a detailed expansion of the data shown on the left, with broken lines serving as guides to clarify which parts of the data structure are being expanded. As can be seen from the data structure in Figure 12A, the RTRW management file records VOB information for VOB#1, VOB#2, VOB#3, ... VOB#6, and this VOB information is composed of general VOB information, stream attribute information, a time map table, and seamless link information.

(1-3-1) General VOB Information

The "general VOB information" refers to the VOB-ID that is uniquely assigned to each VOB in an AV file and to the VOB playback period information of each VOB.

(1-3-2) Stream Attribute Information

The stream attribute information is composed of video attribute information and audio attribute information. The video attribute information includes video format information indicating one of MPEG2 and MPEG1, and a display method indicating one of NTSC and PAL/SECAM. When the display method indicates NTSC, an indication such as "720 x 480" or "352 x 240" can be given as the display resolution, and an indication such as "4:3" or "16:9" can be given as the aspect ratio. The presence or absence of copy protection for an analog video signal can also be indicated, such as the presence or absence of copy protection that defeats recording by a video cassette recorder by acting on the AGC circuit of the VTR, changing the signal amplitude during the blanking period of the video signal. The audio attribute information shows the coding method, which can be one of MPEG2, Dolby Digital (AC-3), and Linear PCM, the sampling frequency (such as 48 kHz), and the bitrate, given as a fixed value when a fixed bitrate is used or marked "VBR" when a variable bitrate is used. The time map table shows the size of each VOBU that makes up the VOB and the playback period of each VOBU.
To improve access performance, representative VOBUs are selected at a predetermined interval, such as a multiple of ten seconds, and the addresses and playback times of these representative VOBUs are given relative to the start of the VOB.

(1-3-3) Seamless Link Information

The seamless link information is the information that allows consecutive playback of the plurality of VOBs in an AV file to be performed seamlessly. This seamless link information includes a seamless flag, the video presentation start time VOB_V_S_PTM, the video presentation end time VOB_V_E_PTM, the First_SCR, the Last_SCR, the audio gap start time A_STP_PTM, the audio gap length A_GAP_LEN, and the audio gap location A_GAP_LOC.

(1-3-3-1) Seamless Flag

The seamless flag shows whether the VOB corresponding to the present seamless link information is to be played seamlessly following the end of playback of the VOB placed immediately before the present VOB in the AV file. When this flag is set to "01", playback of the present VOB (the latter VOB) is performed seamlessly; when the flag is set to "00", playback of the present VOB is not performed seamlessly. In order to play a plurality of VOBs seamlessly, the relationship between the former VOB and the latter VOB must be as follows: (1) both VOBs must use the same display method (NTSC, PAL, etc.) for the video stream, as given in the video attribute information; (2) both VOBs must use the same coding method (AC-3, MPEG, Linear PCM) for the audio stream, as given in the audio attribute information. Failure to satisfy the above conditions prevents seamless playback. When a different display method is used for a video stream or a different coding method is used for an audio stream, the video decoder and the audio decoder will have to interrupt their respective operations to switch the display method, the decoding method, and/or the bitrate. As an example, when two audio streams to be played consecutively are such that the former has been encoded according to the AC-3 method and the latter according to the MPEG method, the audio decoder will have to halt decoding in order to switch the stream attributes when the stream switches from AC-3 to MPEG. A similar situation occurs for a video decoder when the video stream changes. The seamless flag is set to "01" only when both of the above conditions (1) and (2) are satisfied. If either of conditions (1) and (2) is not satisfied, the seamless flag is set to "00".

(1-3-3-2) Video Presentation Start Time VOB_V_S_PTM

The video presentation start time VOB_V_S_PTM shows the time at which playback of the first video field in the video stream making up a VOB is to start. This time is given in the PTM descriptor format. The PTM descriptor format is a format by which time is expressed with an accuracy of 1/27,000,000 seconds, in units of 1/90,000 seconds (= 300/27,000,000 seconds). The accuracy of 1/90,000 seconds is chosen by considering the common multiples of the frame rates of NTSC signals, PAL signals, Dolby AC-3, and MPEG audio, while the accuracy of 1/27,000,000 seconds is chosen by considering the frequency of the STC. Figure 12B shows the PTM descriptor format. As shown in this drawing, the PTM descriptor format is composed of a base element (PTM_base) that shows the quotient when the presentation start time is divided by 1/90,000 seconds and an extension element (PTM_extension) that shows the remainder, expressed with an accuracy of 1/27,000,000 seconds.
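The split into base and extension is straightforward integer arithmetic on a time expressed in 27 MHz ticks, since one 1/90,000-second unit equals 300 such ticks. A sketch of the conversion described above (function names are invented for this example):

```python
def to_ptm(ticks_27mhz):
    """Split a time in 27 MHz ticks into (PTM_base, PTM_extension).
    One 1/90,000 s unit equals 300 ticks of the 27 MHz clock."""
    return ticks_27mhz // 300, ticks_27mhz % 300

def from_ptm(base, ext):
    """Recombine a PTM descriptor into 27 MHz ticks."""
    return base * 300 + ext

base, ext = to_ptm(27_000_000)       # exactly one second
print(base, ext)                     # 90000 0
assert from_ptm(base, ext) == 27_000_000
```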
(1-3-3-3) Video Presentation End Time VOB_V_E_PTM

The video presentation end time VOB_V_E_PTM shows the time at which playback of the last video field in the video stream making up a VOB ends. This time is also given in the PTM descriptor format.

(1-3-3-4) Relationship between the Video Presentation Start Time VOB_V_S_PTM and the Video Presentation End Time VOB_V_E_PTM

The following is an explanation of the relationship between the VOB_V_E_PTM of a former VOB and the VOB_V_S_PTM of a latter VOB when the former VOB and the latter VOB are to be played seamlessly. The latter VOB is fundamentally played after all the video packs included in the former VOB, so if the VOB_V_S_PTM of the latter VOB is not equal to the VOB_V_E_PTM of the former VOB, the timestamps will not be continuous, meaning the former VOB and the latter VOB cannot be played seamlessly. However, when the two VOBs have been encoded completely separately, the encoder will have assigned independent timestamps to each video pack and audio pack during encoding, so the condition that the VOB_V_S_PTM of the latter VOB equal the VOB_V_E_PTM of the former VOB becomes problematic. Figure 13 shows the buffer state for the former VOB and the latter VOB. In the graphs in Figure 13, the vertical axis shows the buffer occupancy while the horizontal axis represents time. The times representing the SCR, the PTS, the video presentation end time VOB_V_E_PTM, and the video presentation start time VOB_V_S_PTM have been plotted. In the upper part of Figure 13, the picture data to be played last in the former VOB is input into the video buffer by the time indicated as the Last_SCR of the video pack composed of this picture data, with the output processing of this data waiting until the PTS, the presentation start time, is reached (if the last pack input into an MPEG decoder is an audio pack or some other pack, this does not hold). Here, the video presentation end time VOB_V_E_PTM shows the point at which the display period h1 of this final picture, starting from this PTS, has elapsed. This display period h1 is the period taken to draw the picture, from the first field making up a full-screen image to the final field. In the lower part of Figure 13, the picture data to be displayed first in the latter VOB is input into the video buffer at the First_SCR time, with the playback of this data waiting until the PTS that indicates the presentation start time. In this drawing, the video packs of the former and latter VOBs have each been assigned SCRs starting from the value "0", a video presentation end time VOB_V_E_PTM, and a video presentation start time VOB_V_S_PTM. In this example, it can be seen that the VOB_V_S_PTM of the latter VOB < the VOB_V_E_PTM of the former VOB. The following is an explanation of why seamless playback is possible even under the condition VOB_V_S_PTM of the latter VOB < VOB_V_E_PTM of the former VOB.
Under the DVD-RAM standard, an extended STD model (hereinafter "E-STD") is defined as the standard model for the playback apparatus, as shown in Figure 19. In general, an MPEG decoder has an STC (System Time Clock) to measure the standard time, with video decoding and audio decoding referring to the standard time shown by the STC to synchronize decoding processing and playback processing. In addition to the STC, however, the E-STD has an adder for adding an offset to the standard time output by the STC, so that either the standard time output by the STC or the addition result of the adder can be selected and supplied to the video decoder and the audio decoder. With this construction, even if the timestamps of different VOBs are not continuous, the output of the adder can be supplied to the decoders to make them behave as if the VOB timestamps were continuous. As a result, seamless playback is still possible even when the VOB_V_E_PTM of the former VOB and the VOB_V_S_PTM of the latter VOB are not continuous, as in the previous example. The difference between the VOB_V_S_PTM of the latter VOB and the VOB_V_E_PTM of the former VOB can be used as the offset to be added by the adder. This is usually referred to as the "STC_offset". Accordingly, a playback apparatus of the E-STD model finds the STC_offset according to the formula shown below, using the VOB_V_S_PTM of the latter VOB and the VOB_V_E_PTM of the former VOB. After finding the STC_offset, the playback apparatus sets the result in the adder. STC_offset = VOB_V_E_PTM of the former VOB - VOB_V_S_PTM of the latter VOB. The reason the VOB_V_S_PTM of the latter VOB and the VOB_V_E_PTM of the former VOB are written in the seamless link information is to allow the decoder to perform the above calculation and set the STC_offset in the adder. Figure 11E is a graph plotted for two VOBs, in each of which the timestamps are continuous, as shown in Figure 11A. The timestamp of the first pack in VOB#1 has the initial value Init1, with the packs that follow having increasingly higher values as their timestamps. In the same way, the timestamp of the first pack in VOB#2 has the initial value Init2, with the packs that follow having increasingly higher values as their timestamps. In Figure 11E, the final timestamp value in VOB#1 is greater than the first timestamp value in VOB#2, so it can be seen that the timestamps are not continuous across the two VOBs. When the decoding of the first pack in VOB#2 is to follow the final pack of VOB#1 despite the non-continuity of the timestamps, the STC_offset can be added to the timestamps in VOB#2, thereby shifting the timestamps in VOB#2 from the solid line shown in Figure 11E to the broken line that continues as an extension of the timestamps in VOB#1. As a result, the shifted timestamps in VOB#2 can be seen to be continuous with the timestamps in VOB#1.
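The offset calculation and the resulting shift of the latter VOB's timestamps can be sketched as follows, with times in 1/90,000-second units and the example values chosen freely:

```python
def stc_offset(prev_vob_v_e_ptm, next_vob_v_s_ptm):
    """Offset the E-STD adder applies so the latter VOB's timestamps continue
    from the former VOB's, per: STC_offset = VOB_V_E_PTM - VOB_V_S_PTM."""
    return prev_vob_v_e_ptm - next_vob_v_s_ptm

def remap(timestamps, offset):
    """Shift the latter VOB's timestamps onto the former VOB's time axis."""
    return [t + offset for t in timestamps]

offset = stc_offset(prev_vob_v_e_ptm=540_000, next_vob_v_s_ptm=90_000)
print(remap([90_000, 93_003, 96_006], offset))  # continues on from 540000
```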
(1-3-3-5) First_SCR

The First_SCR shows the SCR of the first pack in a VOB, written in the PTM descriptor format.

(1-3-3-6) Last_SCR

The Last_SCR shows the SCR of the last pack in a VOB, written in the PTM descriptor format.

(1-3-3-7) Relationship between First_SCR and Last_SCR

As described above, since the playback of the VOBs is performed by an E-STD decoder, the Last_SCR of the former VOB and the First_SCR of the latter VOB need not satisfy the condition that the Last_SCR of the former VOB = the First_SCR of the latter VOB. However, when an STC_offset is used, the following relationship must be satisfied.
Last_SCR of the former VOB + time required to transfer one pack < STC_offset + First_SCR of the latter VOB
Here, if the Last_SCR of the former VOB and the First_SCR of the latter VOB do not satisfy the above relationship, it means that the packs making up the former VOB would be transferred into the video buffer and the audio buffer at the same time as the packs making up the latter VOB. This violates the MPEG standard and the E-STD decoder model, in which packs are transferred one at a time in the pack sequence. By referring to Figure 10, it can be seen that the Last_SCR of the former VOB corresponds to the First_SCR of the latter VOB + the STC_offset, so the above relationship is satisfied.
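This ordering constraint can be checked on the common time axis. A sketch, in seconds; the pack transfer time used in the example assumes a 2 KB pack at a 10.08 Mbps transfer rate, which takes roughly 1.6 msec:

```python
def transfer_order_ok(last_scr_prev, first_scr_next, stc_offset, pack_transfer_time):
    """True when the former VOB's final pack finishes entering the buffers
    before the latter VOB's first pack starts, on the common time axis."""
    return last_scr_prev + pack_transfer_time < stc_offset + first_scr_next

# Illustrative values only: former VOB ends at 60.0 s, latter starts at 0.5 s.
print(transfer_order_ok(60.000, 0.500, 59.510, 0.0016))   # True
```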
When VOBs are played using an E-STD decoder, of particular note is the time at which switching is performed between the standard time output by the STC and the standard time with the offset added by the adder. Since no information for this switching is given in the timestamps of a VOB, there is a risk that improper timing will be used for switching to the output value of the adder. The First_SCR and the Last_SCR are effective for informing the decoder of the correct timing for switching to the output value of the adder. While the STC is counting, the decoder compares the standard time output by the STC with the First_SCR and Last_SCR. When the standard time output by the STC matches the First_SCR or Last_SCR, the decoder switches from the standard time output by the STC to the output value of the adder. When VOBs are played, normal playback plays the latter VOB after playing the former VOB, while "rewind playback" (backward picture search) plays the former VOB after the latter VOB. Accordingly, the Last_SCR is used for the switch performed by the decoder during normal playback, and the First_SCR is used for the switch performed by the decoder during rewind playback. During rewind playback, the latter VOB is decoded starting from its last VOBU toward its first VOBU, and when the first video pack in the latter VOB has been decoded, the former VOB is decoded starting from its last VOBU toward its first VOBU. In other words, during rewind playback, the time at which the decoding of the first video pack in the latter VOB is complete is the time at which the value used by the decoder needs to be switched. To inform an E-STD playback apparatus of this time, the First_SCR of each VOB is provided in the RTRW management file. A more detailed explanation of the techniques used for the E-STD and the STC_offset is given in PCT publication WO 97/13364.

(1-3-3-8) Audio Gap Start Time A_STP_PTM

When there is a gap in audio playback in a VOB, the audio gap start time A_STP_PTM shows the time at which the audio decoder must stop its operation. This audio gap start time is given in the PTM descriptor format. One audio gap start time A_STP_PTM is indicated per VOB.

(1-3-3-9) Audio Gap Length A_GAP_LEN

The audio gap length A_GAP_LEN shows how long the audio decoder must stop its operation, starting from the stop start time indicated as the audio gap start time A_STP_PTM. This audio gap length A_GAP_LEN is restricted to being less than the length of one audio frame.

(1-3-3-10) Inevitability of Audio Gaps

The following is an explanation of why a period in which an audio gap occurs needs to be specified by the audio gap start time A_STP_PTM and the audio gap length A_GAP_LEN. Since video streams and audio streams are played with different cycles, the total playback time of the video stream contained in a VOB does not match the total playback time of the audio stream. For example, if the video stream follows the NTSC standard and the audio stream is Dolby AC-3, the total playback time of the video stream will be an integer multiple of 33 msec and the total playback time of the audio stream will be an integer multiple of 32 msec, as shown in Figure 14A. If seamless playback of two VOBs is performed without considering these differences in total playback time, it will be necessary to align the playback time of the picture data and the playback time of the audio data to synchronize the playback of the picture data with the audio data. In order to align these playback times, a difference in total time appears at either the start or the end, between the picture data and the audio data.
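The size of this difference is simply the gap between integer multiples of the two frame periods. A sketch using NTSC and Dolby AC-3 values (the frame counts are arbitrary examples); exact fractions avoid rounding the NTSC period:

```python
from fractions import Fraction

VIDEO_FRAME = Fraction(1001, 30000)  # NTSC frame period (about 33.37 msec)
AUDIO_FRAME = Fraction(32, 1000)     # Dolby AC-3 frame period (32 msec)

def end_mismatch(n_video, n_audio):
    """Difference between the total video and audio playback times (seconds).
    This residue is the kind of time difference (g1) that appears at one end
    of a VOB when the two streams are aligned at the other end."""
    return n_video * VIDEO_FRAME - n_audio * AUDIO_FRAME

g1 = end_mismatch(n_video=300, n_audio=313)
print(float(g1) * 1000)   # -6.0: here the audio runs 6 msec longer than the video
```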
In Figure 14B, the playback time of the picture data is aligned with the playback time of the audio data at the start of the VOB, so that the time difference g1 appears between the picture data and the audio data at the end. Since the time difference g1 is present at the end of VOB#1, when seamless playback of VOB#1 and VOB#2 is attempted, the playback of the audio stream in VOB#2 must wait for the time difference g1 to elapse, meaning that the playback of the audio stream in VOB#2 starts at time g0. The audio decoder uses a fixed frame cycle when an audio stream is played, so the decoding of audio streams is performed continuously with a fixed cycle. When the VOB#2 to be played after VOB#1 has already been read from the DVD-RAM, the audio decoder could begin decoding VOB#2 as soon as the decoding of the audio stream in VOB#1 is complete. To prevent the audio stream in the next VOB from being played too soon during seamless playback, the audio gap information for the stream is managed on the host side of a playback apparatus, so that during the audio gap period the host stops the operation of the audio decoder. This playback stop period is the audio gap, which starts at the audio gap start time A_STP_PTM and continues for the period indicated as A_GAP_LEN. Processing to specify audio gaps is also performed within a stream. More specifically, the PTS of the audio frame immediately after an audio gap is written in the packet header of an audio packet, so that it is possible to specify when the audio gap ends. However, problems appear with this specification method when several sets of audio data to be played in several audio frames are stored in a single audio packet. In more detail, when several sets of audio data to be played in several audio frames are stored in a single audio packet, it is only possible to provide a PTS for the first of the plurality of audio frames in this packet. In other words, a PTS cannot be provided for the remaining audio frames in the packet. If the audio data to be played for the audio frames located both before and after an audio gap is arranged in the same packet, it will not be possible to provide a PTS for the audio frame located immediately after the audio gap. As a result, it will not be possible to specify the audio gap, meaning the audio gap will be lost. To avoid this, the audio frame located immediately after an audio gap is processed so as to be arranged at the front of the next audio pack, so that the PTS of the audio frame immediately after the audio gap (and with it the audio gap start time A_STP_PTM and the audio gap length A_GAP_LEN) can be specified within the stream. Wherever necessary, a padding packet, as prescribed by the MPEG standard, can be inserted immediately after the audio data in an audio pack that stores the audio data to be played immediately before an audio gap. Figure 14C shows the audio pack G3, which includes an audio gap: it contains the audio data to be played for audio frames y-2, y-1, and y located in the last part of VOB#1 shown in Figure 14B, together with a padding packet. This drawing also shows the audio pack G4, which includes the audio data for the audio frames u+1, u+2, and u+3 placed at the front of VOB#2.
The aforementioned audio pack G4 is the pack that includes the audio data to be played for the audio frame immediately after the audio gap, while the audio pack G3 is the pack located immediately before this pack. When the audio data to be played for the audio frame located immediately after an audio gap is included in a pack, the pack located immediately before that pack is called an "audio pack that includes an audio gap". Here, the audio pack G3 is placed toward the end of the video pack sequence of a VOBU, since no picture data to be played at a later point is included in VOB#1. However, it is assumed that the playback of VOB#2 will follow the playback of VOB#1, so the picture data corresponding to the audio data of frames y-2, y-1, and y is included in VOB#2. This being the case, the audio pack G3 that includes the audio gap can be placed within any of the first three VOBUs in VOB#2 without violating the "one-second rule". Figure 14D shows that this audio pack G3 that includes the audio gap can be placed within any of VOBU#1, VOBU#2, and VOBU#3 at the start of VOB#2. The operation of the audio decoder needs to be stopped temporarily during the audio gap period. This is because the audio decoder would otherwise attempt to perform decoding processing even during the audio gap, so the host control unit that performs the core control processing in a playback apparatus has to indicate an audio pause to the decoder, thereby temporarily stopping the audio decoder. This indication is shown as the ADPI (Audio Decoder Pause Information) in Figure 19. By doing so, the operation of the audio decoder can be stopped during the audio gap period. However, this does not mean that audio playback can be stopped regardless of how the audio gaps appear in the data. This is because the control unit is normally composed of a standard microcomputer and software, so if audio gaps occur repeatedly within a short period of time, there is a possibility that the control unit will not be able to issue the stop indication early enough. As an example, when VOBs of approximately one second in length are played consecutively, it becomes necessary to give the stop indication to the audio decoder at intervals of about one second. When the control unit is composed of a standard microcomputer and software, there is a possibility that the control unit will not be able to stop the audio decoder during the periods where the audio gaps are present.
When VOBs in which the playback time of the picture data and the playback time of the audio data have been aligned at several points are played back, it is necessary to provide the audio decoder with a stop indication each time. When the control unit is composed of a standard microcomputer and software, there is a possibility that the control unit will not be able to stop the audio decoder during the periods where the audio gaps are present. For this reason, the following restrictions are imposed so that audio gaps will occur only once within a certain period. First, to allow the control unit to perform the stop operation with ease, the VOB playback period is set to 1.5 seconds or more, thereby reducing the frequency with which audio gaps can occur. Second, the alignment of the playback time of the picture data and the playback time of the audio data is performed only once in each VOB. By doing so, there will be only one audio gap in each VOB. Third, the period of each audio gap is restricted to being less than one audio frame. Fourth, the audio gap start time A_STP_PTM is set with the video presentation start time VOB_V_S_PTM of the next VOB as its reference, with the audio gap start time A_STP_PTM restricted to lying within one audio frame of the next video presentation start time VOB_V_S_PTM. As a result: VOB_V_S_PTM - the playback period of one audio frame < A_STP_PTM < VOB_V_S_PTM. If an audio gap satisfying this formula occurs, only the first picture in the next VOB will be displayed during the gap, so even if there is no audio output at this time, it will not be particularly noticeable. With the above restrictions, when audio gaps appear during seamless playback, the interval between audio gaps will be at least "1.5 seconds - the playback period of two audio frames". Substituting actual values, the playback period of an audio frame is 32 msec when Dolby AC-3 is used, so the minimum interval between audio gaps is 1436 msec. There is accordingly a high probability that the control unit will be able to perform the stop control processing within the processing deadline.
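These restrictions translate into simple checks, and the 1436 msec figure follows by direct arithmetic. A sketch using the Dolby AC-3 values, with all times in seconds and invented function names:

```python
AUDIO_FRAME = 0.032        # Dolby AC-3 audio frame period
MIN_VOB_LEN = 1.5          # minimum VOB playback period imposed above

def gap_valid(a_gap_len, a_stp_ptm, vob_v_s_ptm):
    """Third and fourth restrictions: the gap is shorter than one audio frame,
    and the gap start lies within one audio frame before the next VOB's
    video presentation start time."""
    return (a_gap_len < AUDIO_FRAME
            and vob_v_s_ptm - AUDIO_FRAME < a_stp_ptm < vob_v_s_ptm)

# Minimum interval between gaps: 1.5 s minus two audio frame periods.
print(MIN_VOB_LEN - 2 * AUDIO_FRAME)        # 1.436 s, i.e. 1436 msec
print(gap_valid(0.020, 10.470, 10.480))     # True
```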
As described above, the audio separation is determined according to the video presentation start time VOB_V_S_PTM of the latter VOB. Naturally, when VOBUs are deleted from the start of the latter VOB, the image data having the video presentation start time VOB_V_S_PTM that determines the audio separation, and the VOBUs containing this image data, may be erased. The audio separation is multiplexed into one of the first three VOBUs at the start of a VOB. Accordingly, when a part of a VOB, such as the first VOBU, is deleted, it is not immediately clear whether the audio separation has been destroyed as a result of this deletion. Since the number of audio separations that can be provided within a VOB is limited to one, it is also necessary to erase a previous audio separation that is no longer needed once a new audio separation has been generated. As shown in Figure 14D, the G3 audio packet that includes the audio separation needs to be inserted into one of VOBU#1 to VOBU#3 in VOB#2 to comply with the one-second rule, so that the audio packet that includes this audio separation needs to be found from among the packets included in VOBU#1 to VOBU#3. While this narrows the search to a maximum of three VOBUs, the immediate extraction of just the one G3 audio packet that includes the audio separation is still technically very difficult, since the packets must sometimes be scanned in sequence from the stream. Here, each VOBU includes several hundred packets, so a significant amount of processing would be required to examine the contents of all of these packets. The audio separation location information A_GAP_LOC uses a 3-bit flag to show in which of the three VOBUs at the start of the latter VOB an audio packet that includes an audio separation has been inserted, so that only one VOBU needs to be searched when looking for the audio separation. This facilitates the extraction of the G3 audio packet that includes the audio separation. Figures 15A to 15E show a procedure for the regeneration of the audio separation by the video data editing apparatus when VOBUs located at the start of VOB#2 have been erased, for two VOBs, VOB#1 and VOB#2, that are to be played seamlessly. As shown in Figure 15A, the VOBUs "VOBU#98", "VOBU#99", and "VOBU#100" are located at the end of VOB#1, and the VOBUs "VOBU#1", "VOBU#2", and "VOBU#3" are located at the beginning of VOB#2. In this example, the user instructs the video data editing apparatus to perform a partial erasure that deletes VOBU#1 and VOBU#2 in VOB#2. The G3 audio packet that includes the audio separation was generated from the audio data stored in VOBU#100, but it is only known that this G3 audio packet has been placed in one of VOBU#1, VOBU#2, and VOBU#3 in VOB#2. To find the VOBU in which the G3 packet including the audio separation has been placed, the video data editing apparatus refers to the audio separation location information A_GAP_LOC. When the audio separation location information A_GAP_LOC is set as shown in Figure 15B, it can be seen that the G3 audio packet including the audio separation is located in VOBU#3 of VOB#2. Once the video data editing apparatus knows that the G3 audio packet including the audio separation is located in VOBU#3, the video data editing apparatus can judge whether the audio separation was multiplexed in the area that was subjected to the partial erasure.
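A minimal sketch of this lookup follows. The mapping of the first, second, and third bits of A_GAP_LOC to VOBU#1, VOBU#2, and VOBU#3 is taken from the description above, but the concrete bit ordering (bit 0 = VOBU#1) is an assumption made for illustration:

```python
def separation_vobu_index(a_gap_loc):
    """Return which of the first three VOBUs holds the audio separation.

    a_gap_loc is the 3-bit location value; this sketch assumes
    bit 0 -> VOBU#1, bit 1 -> VOBU#2, bit 2 -> VOBU#3.
    Returns 1..3, or None when no audio separation is recorded.
    """
    for i in range(3):
        if a_gap_loc & (1 << i):
            return i + 1
    return None

def separation_survives_erasure(a_gap_loc, erased_leading_vobus):
    """Judge whether a partial erasure of leading VOBUs destroyed the
    audio separation (the judgment described for Figure 15B)."""
    idx = separation_vobu_index(a_gap_loc)
    return idx is not None and idx > erased_leading_vobus

# Example of Figures 15A-15B: separation in VOBU#3, VOBU#1-#2 erased.
assert separation_survives_erasure(0b100, 2)
```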
In the present example, the audio separation is not included in the deleted area, so the value of A_GAP_LOC is simply amended to reflect the number of VOBUs that were erased. This ends the explanation of the VOBs, video stream, audio stream, and VOB information that are stored on an optical disc for the present invention.

(1-4) Construction of the Video Data Editing System
The video data editing apparatus of the present embodiment combines the functions of a DVD-RAM playback apparatus and a DVD-RAM recording apparatus. Figure 16 shows an example of the construction of a system that includes the video data editing apparatus of the present embodiment. As shown in Figure 16, this system includes a video data editing apparatus (hereafter, the DVD recorder 70), a remote controller 71, a TV monitor 72 that is connected to the DVD recorder 70, and an antenna 73. The DVD recorder 70 is conceived as a device to be used in place of a conventional video cassette recorder for recording television broadcasts, but one that also incorporates editing functions. The system illustrated in Figure 16 shows the case where the DVD recorder 70 is used as a home video editing apparatus. The DVD-RAM described above is used by the DVD recorder 70 as the recording medium for recording television broadcasts. When a DVD-RAM is loaded into the DVD recorder 70, the DVD recorder 70 compresses a video signal received via the antenna 73 or a conventional NTSC signal and records the result onto the DVD-RAM as VOBs. The DVD recorder 70 also decompresses the video streams and audio streams included in the VOBs recorded on a DVD-RAM and transfers the resulting video signal or NTSC signal and audio signal to the TV monitor 72.

(1-4-1) Hardware Construction of the DVD Recorder 70

Figure 17 is a block diagram showing the hardware construction of the DVD recorder 70. As shown in Figure 17, the DVD recorder 70 is composed of a control unit 1, an MPEG encoder 2, a disc access unit 3, an MPEG decoder 4, a video signal processing unit 5, a remote controller 71, a bus 7, a remote control signal receiving unit 8, and a receiver 9. The arrows drawn with solid lines in Figure 17 show the physical connections that are achieved by the circuit wiring inside the DVD recorder 70. The dashed lines, meanwhile, show the logical connections that indicate the input and output of the various kinds of data over the connections shown with the solid lines during a video editing operation. The numbers (1) to (5) assigned to the dashed lines show how the VOBUs, and the image data and audio data that make up the VOBUs, are transferred over the physical connections when the DVD recorder 70 re-encodes VOBUs. The control unit 1 is the host-side control unit and includes the CPU 1a, the processor bus 1b, the bus interface 1c, the main storage 1d, and the ROM 1e. By executing programs stored in the ROM 1e, the control unit 1 records, plays back, and edits VOBs. The MPEG encoder 2 operates as follows. When the receiver 9 receives an NTSC signal via the antenna 73, or when a video signal transferred by a home video camera is received via the video input terminals provided on the back of the DVD recorder 70, the MPEG encoder 2 encodes the NTSC signal or the video signal to produce VOBs and transfers the generated VOBs to the disc access unit 3 and the bus 7. As a process that is particularly related to video editing, the MPEG encoder 2 receives an input of the decoding result of the MPEG decoder 4 from the connection line C1 via the bus 7, as shown by dashed line (4), and transfers the encoding result for this data to the disc access unit 3 via the bus 7, as shown by dashed line (5).
The disc access unit 3 includes a track buffer 3a, an ECC processing unit 3b, and a drive mechanism 3c for a DVD-RAM, and accesses the DVD-RAM according to control by the control unit 1. In more detail, when the control unit 1 gives an indication for recording onto the DVD-RAM and the VOBs encoded by the MPEG encoder 2 are transferred successively as shown by dashed line (5), the disc access unit 3 stores the received VOBs in the track buffer 3a and, once ECC processing has been performed by the ECC processing unit 3b, controls the drive mechanism 3c to successively record these VOBs onto the DVD-RAM. On the other hand, when the control unit 1 indicates a data read from the DVD-RAM, the disc access unit 3 controls the drive mechanism 3c to successively read the VOBs from the DVD-RAM and, once the ECC processing unit 3b has performed ECC processing on these VOBs, stores the result in the track buffer 3a. The drive mechanism 3c mentioned here includes a tray for seating the DVD-RAM, a spindle motor for holding and rotating the DVD-RAM, an optical pickup for reading a signal recorded on the DVD-RAM, and an actuator for the optical pickup. The read and write operations are achieved by controlling these components of the drive mechanism 3c, although this control is not part of the essence of the present invention; since it can be achieved using well-known methods, no further explanation is given in this specification. When the VOBs that have been read from the DVD-RAM by the disc access unit 3 are transferred as shown by dashed line (1), the MPEG decoder 4 decodes these VOBs to obtain uncompressed digital video data and an audio signal. The MPEG decoder 4 transfers the uncompressed digital video data to the video signal processing unit 5 and transfers the audio signal to the TV monitor 72. During a video editing operation, the MPEG decoder 4 transfers the decoding results for a video stream and an audio stream to the bus 7 via the connection lines C2 and C3, as shown by dashed lines (2) and (3) in Figure 17. The decoding results transferred to the bus 7 are then transferred to the MPEG encoder 2 over the connection line C1, as shown by dashed line (4). The video signal processing unit 5 converts the image data transferred by the MPEG decoder 4 into a video signal for the TV monitor 72. On receiving graphics data from outside, the video signal processing unit 5 converts the graphics data into an image signal and performs signal processing to combine this image signal with the video signal. The remote control signal receiving unit 8 receives a signal from the remote controller 71 and informs the control unit 1 of the key code included in the signal, so that the control unit 1 can perform control that reflects the user's operations of the remote controller 71.

(1-4-1-1) Internal Construction of the MPEG Encoder 2

Figure 18 is a block diagram showing the construction of the MPEG encoder 2.
As shown in Figure 18, the MPEG encoder 2 is composed of a video encoder 2a, a video buffer 2b for storing the output of the video encoder 2a, an audio encoder 2c, an audio buffer 2d for storing the output of the audio encoder 2c, a stream encoder 2e for multiplexing the video stream stored in the video buffer 2b and the audio stream stored in the audio buffer 2d, an STC (System Time Clock) unit 2f for generating the synchronization clock of the MPEG encoder 2, and an encoder control unit 2g for controlling and managing these components of the MPEG encoder 2.

(1-4-1-2) Internal Construction of the MPEG Decoder 4

Figure 19 shows the construction of the MPEG decoder 4. As shown in Figure 19, the MPEG decoder 4 is composed of a demultiplexer 4a, a video buffer 4b, a video decoder 4c, an audio buffer 4d, an audio decoder 4e, a reordering buffer 4f, an STC unit 4g, switches SW1 to SW4, and a decoder control unit 4k. The demultiplexer 4a refers to the header of each pack that has been read from a VOB and judges whether the packets it contains are video packets or audio packets. The demultiplexer 4a transfers the data in packets judged to be video packets to the video buffer 4b, and the data in packets judged to be audio packets to the audio buffer 4d. The video buffer 4b is a buffer for accumulating the video data transferred by the demultiplexer 4a. Each set of image data in the video buffer 4b is stored until its decoding time, at which point it is taken from the video buffer 4b.
The video decoder 4c takes the image data sets from the video buffer 4b at their respective decoding times and instantaneously decodes the data. The audio buffer 4d is a buffer for accumulating the audio data transferred by the demultiplexer 4a. The audio decoder 4e successively decodes the audio data stored in the audio buffer 4d in frame units. On receiving the ADPI (Audio Decoder Pause Information) issued by the control unit 1, the audio decoder 4e stops the decoding processing for the audio frames. The ADPI is output by the control unit 1 when the present time reaches the audio separation start time A_STP_PTM shown by the seamless link information. The reordering buffer 4f is a memory for storing the decoding result of the video decoder 4c when an I-picture or a P-picture has been decoded. The reason the decoding results for I-pictures and P-pictures are stored is that the coding order was originally produced by rearranging the display order. Accordingly, once each B-picture that is to be displayed before the decoding results stored in the reordering buffer 4f has been decoded, the reordering buffer 4f transfers the decoding results of the I-pictures and P-pictures stored so far as an NTSC signal. The STC unit 4g generates the synchronization clock that serves as the system clock for the MPEG decoder 4. The adder 4h outputs, as the offset standard clock, a value produced by adding the STC_compensation to the standard clock generated by the STC unit 4g. The control unit 1 calculates the STC_compensation by finding the difference between the video presentation completion time VOB_V_E_PTM and the video presentation start time VOB_V_S_PTM given in the seamless link information, and sets the STC_compensation in the adder 4h. The switch SW1 supplies the demultiplexer 4a with either the standard time measured by the STC unit 4g or the offset standard time output by the adder 4h. The switch SW2 supplies the audio decoder 4e with either the standard time measured by the STC unit 4g or the offset standard time output by the adder 4h; the supplied standard time or offset standard time is used to check the decoding time and the presentation start time of each audio frame. The switch SW3 supplies the video decoder 4c with either the standard time measured by the STC unit 4g or the offset standard time output by the adder 4h; the supplied standard time or offset standard time is used to check the decoding time of each image data set.
The switch SW4 supplies the reordering buffer 4f with either the standard time measured by the STC unit 4g or the offset standard time output by the adder 4h; the supplied standard time or offset standard time is used to check the presentation start time of each set of image data. The decoder control unit 4k receives a decoding processing request from the control unit 1 for a whole-number multiple of VOBUs, which is to say a whole-number multiple of GOPs, and has the decoding processing performed by all the components from the demultiplexer 4a to the reordering buffer 4f. Also, on receiving a valid/invalid indication for the output of the decoding results, the decoder control unit 4k has the decoding results of the video decoder 4c and the audio decoder 4e transferred to the outside if the indication is valid, or prohibits the transfer of the decoding results of the video decoder 4c and the audio decoder 4e to the outside if the indication is invalid. The valid/invalid indication can be given in units smaller than a video stream, such as video fields. The information that indicates the valid playback output section in video field units is called the valid playback section information.

(1-4-1-2-1) Timing for the Switching of the Switches SW1 to SW4

Figure 20 is a timing diagram for the switching of the switches SW1 to SW4. This timing diagram shows the switching of the switches SW1 to SW4 when seamless playback of VOB#1 and VOB#2 is performed. The upper part of Figure 20 shows the packet sequences that make up VOB#1 and VOB#2, while the middle part shows the video frames and the bottom part shows the audio frames. The timing for the switching of the switch SW1 is the point where the packet sequence being transferred to the MPEG decoder 4 changes from VOB#1 to VOB#2. This time is indicated by the Last_SCR in the seamless link information of VOB#1. The timing for the switching of the switch SW2 is the point where all the audio data in the VOB that was being stored in the audio buffer 4d before the switching of the switch SW1, which is to say VOB#1, has been decoded. The timing for the switching of the switch SW3 is the point where all the video data in the VOB that was stored in the video buffer 4b before the switching time (T1) of the switch SW1, which is to say VOB#1, has been decoded. The timing for the switching of the switch SW4 is the point during the playback of VOB#1 where the last video frame has been played.
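A minimal sketch of this clock handover is given below, assuming 90 kHz tick values; the switch time passed in stands for whichever of the four conditions just described applies, and the object layout is only an illustration of Figure 19:

```python
# Sketch of the offset standard clock (adder 4h) and the per-switch
# handover from the plain STC to the offset clock, per Figure 20.
class OffsetClock:
    def __init__(self, stc, stc_compensation):
        self.stc = stc                    # callable returning STC unit 4g time
        self.offset = stc_compensation    # set by the control unit 1

    def offset_time(self):                # output of the adder 4h
        return self.stc() + self.offset

def supplied_time(now, switch_time, stc, offset_clock):
    """Each of SW1-SW4 supplies the plain standard time until its own
    handover time, and the offset standard time afterwards."""
    return stc() if now < switch_time else offset_clock.offset_time()
```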
Programs stored in the ROM 1e include modules that allow two VOBs that have been recorded on a DVD-RAM to be played back seamlessly.

(1-4-1-2-2) Procedure for the Seamless Linking of VOBs

Figures 21 and 22 are flowcharts showing the procedure for seamlessly linking two VOBs in an AV file. Figures 23A and 23B show an analysis of the buffer state for each video packet. Figures 24A and 25 show the audio frames in the audio stream corresponding to the audio frames x, x+1, y-1, y, u, u+1, and u+2 mentioned in Figure 22. The following is an explanation of the re-encoding of the VOBs. In step S102 of Figure 21, the control unit 1 performs the calculation "VOB_V_E_PTM of the former VOB minus VOB_V_S_PTM of the latter VOB" to obtain the STC_compensation. In step S103, the control unit 1 analyzes the changes in the buffer occupancy from the First_SCR of the former VOB to the completion time of the decoding of all the data in the former VOB. Figures 23A and 23B show the analysis process for the buffer occupancy performed in step S103. When the video packs #1 and #2 are included in the former VOB as shown in Figure 23A, the SCR#1, SCR#2, and DTS#1 included in these video packs are plotted on the same axis. After this, the sizes of the data included in video pack #1 and video pack #2 are calculated. A line is plotted starting from SCR#1, with the bitrate information in the pack header as the gradient, until the data size of video pack #1 has been plotted. After this, the data size of video pack #2 is plotted starting from SCR#2. Then, the data size of the image data P1 to be decoded is subtracted at DTS#1. This data size of the image data P1 is obtained by analyzing the bitstream. After plotting the data sizes of the video packs and the image data in this manner, the buffer state of the video buffer 4b from the First_SCR to the DTS can be plotted as a graph. By using the same procedure for all the video data and audio data in a VOB, a graph showing the state of the buffer can be obtained, as shown in Figure 23B. In step S104, the control unit 1 performs the same analysis as in step S103 for the latter VOB, and thus analyzes the changes in the occupancy of the video buffer from the First_SCR of the latter VOB to the decoding completion time Last_DTS of all the data in the latter VOB. In step S105, the control unit 1 analyzes the changes in the occupancy of the video buffer from "the First_SCR of the latter VOB plus the STC_compensation" to the Last_DTS of the former VOB. This period, from the First_SCR of the latter VOB plus the STC_compensation to the Last_DTS of the data in the former VOB, is when the first image data of the latter VOB is being transferred into the video buffer 4b while the last image data of the former VOB is still stored in the video buffer 4b. When the video data of the former VOB and the latter VOB coexist in the buffer, the buffer state will be as shown in Figure 10C. In Figure 10C, the video buffer 4b stores the video data of both the former VOB and the latter VOB during the period from the First_SCR + STC_compensation to the Last_DTS, with Bv1 + Bv2 representing the highest occupancy of the video buffer 4b during this period. In step S106, the control unit 1 controls the disc access unit 3 to read the three VOBUs that are located at the end of the former VOB. After this, in step S107, the control unit 1 controls the disc access unit 3 to read the three VOBUs located at the front of the latter VOB.
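A minimal sketch of the occupancy analysis of steps S103 to S105 follows, under simplifying assumptions: times in 90 kHz ticks, pack arrival treated as instantaneous at each SCR (rather than plotted along the bitrate gradient), and pack and picture events supplied as pre-extracted lists instead of a parsed bitstream:

```python
def buffer_trace(pack_events, picture_events):
    """Occupancy of the video buffer 4b over time (steps S103/S104).

    pack_events:    (scr, size_bytes) pairs - data entering the buffer.
    picture_events: (dts, size_bytes) pairs - pictures removed at decoding.
    Returns a sorted list of (time, occupancy) points.
    """
    events = [(t, +s) for t, s in pack_events] + \
             [(t, -s) for t, s in picture_events]
    occupancy, trace = 0, []
    for t, delta in sorted(events):
        occupancy += delta
        trace.append((t, occupancy))
    return trace

def peak_joint_occupancy(former_trace, latter_trace, stc_compensation):
    """Bv1 + Bv2: peak occupancy while both VOBs coexist (step S105)."""
    shifted = [(t + stc_compensation, occ) for t, occ in latter_trace]

    def occ_at(trace, t):
        last = 0
        for tt, occ in trace:
            if tt > t:
                break
            last = occ
        return last

    times = sorted({t for t, _ in former_trace} | {t for t, _ in shifted})
    return max(occ_at(former_trace, t) + occ_at(shifted, t) for t in times)
```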
Figure 23C shows the area to be read from the former VOB in step S106. In Figure 23C, the former VOB includes VOBU#98 to #105, so that VOBU#103 to #105 are read as the VOBUs that include the image data V_END that is to be decoded last. Figure 23D shows the area to be read from the latter VOB in step S107. In Figure 23D, the latter VOB includes VOBU#1 to #7, so that VOBU#1 to #3 are read as the VOBUs that include the image data V_TOP that is to be decoded first. According to the one-second rule, the audio data and the image data that must be reproduced within the space of one second may be stored across three VOBUs. By reading the three VOBUs at the end of the former VOB in step S106, all the image data and audio data to be reproduced between a point one second before the presentation completion time of the image data V_END located at the end of the former VOB and this presentation completion time itself can therefore be read together. Likewise, in step S107, all the image data and audio data to be reproduced between the presentation start time of the image data V_TOP located at the start of the latter VOB and a point one second after this presentation start time can be read together. It should be noted that the reads in this flowchart are made in VOBU units, although the reads may instead be made for the image data and audio data to be reproduced within one second, out of all the image data and audio data included in the VOBUs. In this embodiment, the number of VOBUs corresponding to one second is given as three, although any number of VOBUs may be read. The reads may alternatively be performed for image data and audio data that will be reproduced within a period of no more than one second. Then, in step S108, the control unit 1 controls the demultiplexer 4a to separate the VOBUs of the first part and the last part into a video stream and an audio stream, and has the video decoder 4c and the audio decoder 4e decode these streams. During normal playback, the decoding results of the video decoder 4c and the audio decoder 4e would be output as video and audio. When re-encoding is performed, however, these decoding results must be input into the MPEG encoder 2, so the control unit 1 causes the video stream and the audio stream that are the decoding results to be transferred to the bus 7, as shown by the arrows (2) and (3) drawn with dashed lines in Figure 17. The video stream and the audio stream that are the decoding results are transferred via the bus 7 to the MPEG encoder 2, as shown by dashed line (4). After this, the control unit 1 calculates the amount of code for the re-encoding of the video stream and the audio stream decoded by the MPEG decoder 4. First, in step S109, the control unit 1 judges whether the accumulated amount of data in the buffer exceeds the upper limit of the buffer at any point in the decoding when the former VOB and the latter VOB coexist in the buffer. In the present embodiment, this is achieved by judging whether the value Bv1 + Bv2 calculated in step S105 exceeds the upper limit of the buffer. If this value does not exceed the upper limit, the processing proceeds to step S112; if the value exceeds the upper limit, the control unit 1 subtracts the excess amount of code A from the calculated amount and assigns the resulting amount of code to the VOBU sequence to be re-encoded.
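A minimal sketch of the overflow check and code reduction of steps S109 and S110 is given below, assuming byte-denominated sizes; the buffer limit is device-specific and is therefore passed in rather than assumed:

```python
def allocate_code(bv1_plus_bv2, buffer_limit, original_allocation):
    """Steps S109-S110: reduce the allocated amount of code when the
    joint occupancy Bv1 + Bv2 would overflow the video buffer 4b."""
    if bv1_plus_bv2 <= buffer_limit:
        return original_allocation       # no overflow: proceed to step S112
    excess_a = bv1_plus_bv2 - buffer_limit
    return max(original_allocation - excess_a, 0)
```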
If the amount of code is decreased, the image quality of the video stream decreases during the playback of these VOBUs. However, overflows of the video buffer 4b must be prevented when two VOBs are seamlessly linked, so this method of decreasing the image quality is used. In step S111, the control unit 1 controls the MPEG encoder 2 to re-encode the decoding results of the video decoder 4c and the audio decoder 4e according to the amount of code assigned in step S110. Here, the MPEG decoder 4 performs decoding that temporarily converts the values of the picture elements in the video data into digital data in a YUV coordinate system. The digital data in the YUV coordinate system is digital data for the signals (the luminance signal (Y) and the chrominance signals (U, V)) that specify the colors for a color TV, with the MPEG encoder 2 re-encoding this digital data to produce image data sets. The technique used for assigning an amount of code is that described in the MPEG DIS (Draft International Standard) Test Model 3. The re-encoding to reduce the amount of code is achieved by processes such as the replacement of the quantization coefficients. It should be noted that the amount of code from which the excess amount A has been subtracted may be assigned to only the latter VOB or only the former VOB. In step S112, the control unit 1 calculates which part of the decoding result for the audio data taken from the former VOB corresponds to the audio frame x that includes the time given by the First_SCR of the latter VOB + STC_compensation. In Figure 24A, the graph shows the buffer state for the former VOB and the latter VOB, while the lower part shows the audio frames of the audio data separated from the former VOB and the audio frames of the audio data separated from the latter VOB. The sequences of audio frames in the lower part of Figure 24A show the correspondence between each audio frame and the time axis of the graph in the upper part. A line drawn downward from the point marked First_SCR + STC_compensation in the graph crosses an audio frame in the audio frame sequence of the former VOB.
The audio frame that this downward line crosses is the audio frame x, and the audio frame x+1 that follows immediately after it holds the final audio data included in the former VOB. It should be noted that the data in the audio frames x and x+1 is included in the audio data that must be played during the period marked by points 1.0 seconds before and after the reproduction period of the final image data V_END, and is thus included in the three VOBUs read in step S106. Figure 24B shows the case where the First_SCR + STC_compensation corresponds to an audio frame boundary in the former VOB. In this case, the audio frame immediately before the boundary is set as the audio frame x. In step S113, the control unit 1 calculates the audio frame y+1 that includes the time given by the VOB_V_S_PTM of the latter VOB + STC_compensation. In Figure 24A, a line drawn upward from the video presentation start time VOB_V_S_PTM + STC_compensation in the graph crosses an audio frame in the audio frame sequence of the former VOB. The audio frame that this upward line crosses is the audio frame y+1. Here, the audio frames up to and including the preceding audio frame y are the valid audio frames, out of the original audio data included in the former VOB, that are still used after the edit has been made. Figure 24C shows the case where the video presentation start time VOB_V_S_PTM + STC_compensation corresponds to an audio frame boundary in the former VOB. In this case, the audio frame immediately before the presentation start time VOB_V_S_PTM + STC_compensation is set as the audio frame y. In step S114, the audio data of the audio frames x+2 to y is taken from the audio data of the former VOB. In Figure 24A, the audio frames from the audio frame y+1 onward are drawn with a dashed line, showing the part that is not multiplexed into the VOB. It should be noted that the audio frames that are moved to the latter VOB have their time stamps reassigned for the latter VOB. In step S115, the audio frame u is detected from the audio frame sequence of the latter VOB as the audio frame immediately after the audio frame that includes the boundary between the audio frames y and y+1. When a downward line is drawn from the boundary between the audio frames y and y+1, this line crosses one of the audio frames in the audio frame sequence of the latter VOB; the audio frame u is the frame that follows it. Figure 24D shows the case where the presentation end time of the audio frame y corresponds to an audio frame boundary in the latter VOB. In this case, the audio frame immediately after this presentation end time is set as the audio frame u. In step S116, the audio packet G4, which includes a sequence of audio data in which the audio data to be reproduced for the audio frame u is arranged at the front, is generated from the audio stream of the latter VOB. In Figure 24A, the audio frames preceding the audio frame u are drawn with a dashed line, with this audio data shown with a dashed line not being multiplexed into the latter VOB. As a result of the above steps S114 to S116, the audio data from the first audio frame up to the audio frame x+1 remains multiplexed in the former VOB, while the audio data from the audio frame x+2 to the audio frame y and the audio data from the audio frame u to the final audio frame are multiplexed into the latter VOB.
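A minimal sketch of the frame calculations in steps S112, S113, and S115, assuming 90 kHz ticks, a constant audio frame duration, and audio frame sequences starting at known times; the helper names are illustrative:

```python
AC3_FRAME = 32 * 90   # one 32 msec AC-3 audio frame in 90 kHz ticks

def frame_containing(t, sequence_start, frame_len=AC3_FRAME):
    """Index of the audio frame whose interval contains time t."""
    return (t - sequence_start) // frame_len

# Step S112: frame x of the former VOB contains First_SCR + STC_compensation.
def audio_frame_x(first_scr_latter, stc_compensation, former_audio_start):
    return frame_containing(first_scr_latter + stc_compensation,
                            former_audio_start)

# Step S113: frame y+1 of the former VOB contains
# VOB_V_S_PTM + STC_compensation, so y is the frame before it.
def audio_frame_y(vob_v_s_ptm_latter, stc_compensation, former_audio_start):
    return frame_containing(vob_v_s_ptm_latter + stc_compensation,
                            former_audio_start) - 1

# Step S115: frame u of the latter VOB immediately follows the frame
# that contains the presentation end time of frame y.
def audio_frame_u(y_end_time, latter_audio_start):
    return frame_containing(y_end_time, latter_audio_start) + 1
```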
When the multiplexing is performed in this way, the audio frames at the end of the audio data of the former VOB will be read from the DVD-RAM at the same time as the image data that is to be played later in the playback. At this point, when the audio data of the former VOB is not present up to the audio frame y, which is to say the audio data falls short, silent audio frame data is inserted to compensate for the insufficient number of frames. In the same way, when the audio data of the latter VOB is not present starting from the audio frame u, which is to say the audio data falls short, silent audio frame data is inserted to compensate for the insufficient number of frames. When the audio data in the audio frames x+2 to y of the former VOB and the audio data from the audio frame u to the final audio frame of the latter VOB are multiplexed into the latter VOB, attention needs to be paid to AV synchronization. As shown in Figure 24A, a reproduction gap occurs between the audio frame y and the audio frame u, and if the multiplexing were performed without considering this reproduction gap, a loss of synchronization would occur whereby the audio frame u is played before the corresponding video frame. To prevent such time lags between the audio and the video from accumulating, the audio packet can be assigned a time stamp that correctly shows the playback timing of the audio frame u.
To do so, in step S117, a padding packet or stuffing bytes are inserted into the packet that includes the data of the audio frame y, so that the audio frame u is not stored in the packet that stores the audio frame y. As a result, the audio frame u is located at the beginning of the next packet. In step S118, the VOBU sequence located at the end of the former VOB is generated by multiplexing the audio data up to the audio frame x+1, out of the audio data extracted from the VOBUs located at the end of this former VOB, with the video data that has been re-encoded. In step S119, the audio data from the audio frame x+2 onward is multiplexed with the video data extracted from the VOBUs located at the start of the latter VOB, to generate the VOBUs to be arranged at the front of the latter VOB. In detail, the control unit 1 takes the audio packet G3, which includes the audio data sequence from the audio frame x+2 to the audio frame y together with the padding, and the audio packet G4, which includes the audio data sequence in which the audio frame u is placed at the front, has them multiplexed with the re-encoded video data, and causes the stream encoder 2e to generate the VOBUs to be placed at the start of the latter VOB. As a result of this multiplexing, the audio frames at the end of the audio data of the former VOB will be read from the DVD-RAM at the same time as sets of image data that will be played at a later time. Figure 25 shows how the audio packets that store a plurality of audio data sets to be reproduced for a plurality of audio frames are multiplexed with the video packets that store the image data to be reproduced for a plurality of video frames. In Figure 25, the transfer of the V_TOP image data to be decoded at the start of the latter VOB is completed within the period Tf_Period. The packet sequence arranged below this period Tf_Period in Figure 25 shows the packets that make up the V_TOP image data. In Figure 25, the audio packet G3 that includes the audio separation stores the audio data sets x+2 to y-1 and y that are to be played for the audio frames x+2 to y-1 and y. Of the audio data sets stored in this audio packet, the first to be decoded is the audio data x+2. This audio data x+2 must be decoded at the presentation end time of the audio frame x+1, and must therefore be read from the DVD-RAM together with the image data V_TOP whose packet sequence is transferred during the same period (Tf_Period) as the audio frame x+1. As a result, this audio packet is inserted between the video packet sequence P51, which stores the image data V_TOP, and the video packet sequence P52, as shown at the bottom of Figure 25. In the audio packet G4, which stores the audio data sets u, u+1, and u+2 to be played for the audio frames u, u+1, and u+2, the audio data u is to be decoded first. This audio data u must be decoded at the presentation end time of the audio frame u-1, so this audio data u must be read from the DVD-RAM together with the image data V_NXT whose packet sequence is transferred during the same period. As a result, this audio packet is inserted between the video packet sequence P52, which stores the V_TOP image data, and the video packet sequence P53, which stores the V_NXT image data, as shown at the bottom of Figure 25.
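A minimal sketch of this interleaving decision, assuming each audio packet carries the decode deadline of its first audio data set; the rule applied (place an audio packet after the video run whose transfer period covers that deadline) is a simplification of Figure 25:

```python
def interleave(video_runs, audio_packets):
    """Insert audio packets between runs of video packets (step S119).

    video_runs:    (run_id, transfer_start, transfer_end) triples covering
                   periods such as Tf_Period, in transfer order.
    audio_packets: (packet_id, decode_deadline) pairs such as G3 and G4.
    Returns the multiplexed order of ids.
    """
    order = []
    pending = sorted(audio_packets, key=lambda p: p[1])
    for run_id, start, end in video_runs:
        order.append(run_id)
        while pending and pending[0][1] <= end:
            order.append(pending.pop(0)[0])
    order.extend(pid for pid, _ in pending)
    return order

# Figure 25: G3 lands between P51 and P52, G4 between P52 and P53.
runs = [("P51", 0, 100), ("P52", 100, 200), ("P53", 200, 300)]
assert interleave(runs, [("G3", 90), ("G4", 180)]) == \
       ["P51", "G3", "P52", "G4", "P53"]
```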
As shown above, the audio packet G3 including the audio separation is inserted between the video packet sequences P51 and P52, while the audio packet G4 is inserted between the video packet sequences P52 and P53, which completes the multiplexing. After this, in step S120, the control unit 1 writes the First_SCR and Last_SCR of the former VOB and the latter VOB, the seamless mark, the VOB_V_E_PTM, and the VOB_V_S_PTM into the seamless link information for the former VOB. In steps S121 and S122, the control unit 1 writes all the information that relates to the audio separation, which is to say the audio separation start time A_STP_PTM, the audio separation length A_GAP_LEN, and the audio separation location information A_GAP_LOC, into the seamless link information. After the above processing, the control unit 1 has the end of the former VOB, the start of the latter VOB, and the seamless link information written to the DVD-RAM. The video packets and audio packets that store the video data and audio data obtained through the above re-encoding are assigned SCRs with ascending values. The initial value of the assigned SCRs is the value of the SCR of the packet originally located at the beginning of the area subjected to re-encoding. Since the SCRs show the times at which the respective video packets and audio packets are to be input into the video buffer 4b and the audio buffer 4d, if there is a change in the amount of data before and after the re-encoding, it becomes necessary to update the values of the SCRs. Even so, the decoding process will still be performed correctly provided that the SCRs for the re-encoded first part of the latter VOB are below the SCRs of the video packets in the remaining part of the latter VOB that was not re-encoded. The PTS and the DTS are assigned according to the video frames and the audio frames, so there is no significant change in their values when the re-encoding is performed. As a result, the continuity of the DTS and PTS is maintained between the data not subjected to re-encoding and the data in the re-encoded area. To reproduce two VOBs seamlessly, non-continuity in the time stamps must be avoided. Accordingly, the control unit 1 judges in step S123 of Figure 22 whether an overlap of the SCRs has appeared. If this judgment is negative, the processing of the flowchart of Figure 22 ends. If an overlap of the SCRs has appeared, the control unit 1 proceeds to step S124, where it calculates the excess amount A based on the number of packets that have an overlapping SCR. The control unit 1 then returns to step S110 to repeat the re-encoding, basing the amount of code assigned for the repeated re-encoding on this excess amount A. As shown by the arrow (5) in Figure 17, the six VOBUs that have been multiplexed anew by the processing in Figure 22 are transferred to the disc access unit 3. The disc access unit 3 then writes this VOBU sequence to the DVD-RAM. It should be noted that while the flowcharts of Figures 21 and 22 describe the seamless linking of two VOBs, the same processing can be used to link two sections of the same VOB. For the example shown in Figure 6B, when VOBU#2, #4, #6, and #8 are deleted, the VOBU located before each erased part can be seamlessly linked to the VOBU located after the erased part by the processing of Figures 21 and 22.
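A minimal sketch of the SCR reassignment and the overlap check of steps S123 and S124, assuming simple per-packet SCR lists and a device-specific multiplexing rate:

```python
def reassign_scrs(packet_sizes, first_scr, bytes_per_tick):
    """Assign ascending SCRs to the re-encoded packets, starting from the
    SCR of the packet originally at the head of the re-encoded area."""
    scrs, t = [], first_scr
    for size in packet_sizes:
        scrs.append(t)
        t += size / bytes_per_tick       # transfer time of this packet
    return scrs

def scr_overlap_count(reencoded_scrs, first_untouched_scr):
    """Steps S123/S124: count re-encoded packets whose SCR is not below
    the SCRs of the non-re-encoded remainder of the latter VOB. The
    excess amount A is then derived from this count."""
    return sum(1 for scr in reencoded_scrs if scr >= first_untouched_scr)
```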
The following is a description of the playback procedure for seamlessly reproducing two VOBs that have been linked by the processing described above. When the user indicates seamless playback of two or more VOBs recorded in an AV file, the control unit 1 first refers to the seamless mark in the seamless link information of the latter VOB. If this seamless mark is "on", the control unit 1 sets, as the STC_compensation, the time obtained by subtracting the video presentation start time VOB_V_S_PTM of the latter VOB from the video presentation completion time VOB_V_E_PTM of the former VOB. The control unit 1 then causes the adder 4h to add the STC_compensation to the standard time measured by the STC unit 4g.
After this, the buffer input completion time Last_SCR of the former VOB indicated by the seamless link information is compared with the standard time measured by the STC unit 4g. When the standard time reaches this Last_SCR, the control unit 1 controls the switch SW1 so that it outputs the offset standard time transferred by the adder 4h instead of the standard time transferred by the STC unit 4g. After this, the control unit 1 switches the states of the switches SW2 to SW4 according to the timing diagram in Figure 20. With the present embodiment, seamless reproduction of a plurality of VOBs can be achieved by reading and re-encoding only the respective ends and starts of the VOBs. Since the re-encoded data consists only of the VOBUs located at the start and end of the VOBs, the re-encoding of the VOBs can be achieved in a very short time. It should be noted that while the present embodiment describes a case where the seamless link information is administered for each VOB, the information that is required for the seamless linking of VOBs may instead be provided collectively. As an example, the video presentation completion time VOB_V_E_PTM and the video presentation start time VOB_V_S_PTM that are used to calculate the STC_compensation were described as appearing in two separate sets of VOB information, although both may be given in the seamless link information of the latter VOB. When this is done, it is desirable for the VOB information to include information giving the presentation completion time of the former VOB (PREV_VOB_V_E_PTM). In the same way, it is preferable for information giving the final SCR of the former VOB (PREV_VOB_LAST_SCR) to be included in the seamless link information of the latter VOB. In the present embodiment, the DVD recorder 70 was described as a device that takes the place of a conventional domestic (non-portable) VCR, although when a DVD-RAM is used as a recording medium for a computer, the following system arrangement may be used. The disc access unit 3 may function as a DVD-RAM drive device and may be connected to a computer bus via an interface that complies with the SCSI, IDE, or IEEE 1394 standard. In this case, the DVD recorder 70 will include a control unit 1, an MPEG encoder 2, a disc access unit 3, an MPEG decoder 4, a video signal processing unit 5, a remote controller 71, a bus 7, a remote control signal receiving unit 8, and a receiver 9. In the above embodiment, the VOBs were described as a multiplexed combination of a video stream and an audio stream, although sub-picture data produced by subjecting subtitle data to run-length encoding may also be multiplexed into the VOBs. A video stream composed of still image data sets may also be multiplexed.
In addition, the above embodiment describes the case where the re-encoding is performed with the MPEG encoder 2 after the VOBs have been decoded by the MPEG decoder 4. However, during the re-encoding, the VOBs may instead be input directly from the disc access unit 3 into the MPEG encoder 2 without being decoded first. The present embodiment describes the case where one picture is represented using one frame, although there are cases where one picture is actually represented using 1.5 frames, such as for a video stream where a 3:2 pulldown is used, with pictures for 24 frames per second being subjected to compression, in the same way as with film material. The processing modules represented by the flowcharts in this first embodiment (Figures 21-22) can be realized by a machine-language program that may be distributed and sold having been recorded on a recording medium. Examples of this recording medium are an IC card, an optical disc, or a flexible disk. The machine-language program recorded on the recording medium may then be installed on a standard personal computer. By executing the installed machine-language programs, the standard personal computer can achieve the functions of the video data editing apparatus of the present embodiment.
Second Embodiment

While the first embodiment was premised on seamless linking being performed in whole-VOB units, this second embodiment describes the seamless linking of a plurality of parts of VOBs. In this second embodiment, these parts of a VOB are specified using time information expressed in video fields. The video fields referred to here are units that are smaller than a video frame, with the time information for the video fields being expressed using the PTS of the video packets.
The parts of a VOB that are specified using the time information for the video fields are called cells, and the information used to indicate these cells is called cell information. The cell information is recorded in the RTRW management file as an element of the PGC information. The details of the data construction and the generation of the cell information and PGC information are given in the fourth embodiment. Figure 26 shows examples of cells located by the video fields of their start and end. In Figure 26, the time information sets C_V_S_PTM and C_V_E_PTM specify the video fields at the start and end of a cell. In Figure 26, the time information C_V_S_PTM is the presentation start time of the video field in which the P-picture in VOBU#100, which forms part of the present VOB, is to be reproduced. In the same way, the time information C_V_E_PTM is the presentation end time of the video field in which the B-picture B1 in VOBU#105, which forms part of the same VOB, is to be reproduced. As shown in Figure 26, the time information C_V_S_PTM and C_V_E_PTM specify a section from a P-picture to a B-picture as a cell.

(2-1) Reconstruction of GOPs

When seamlessly linking parts of a VOB that are indicated by time information, two processes become necessary that were not required in the first embodiment. First, the construction of the GOPs has to be reconstructed to convert the section indicated by the time information into a separate VOB, and second, the increase in the buffer occupancy due to the reconstruction of the GOPs has to be estimated. The reconstruction of the GOPs refers to a process that changes the construction of the GOPs so that the section indicated as a cell has an appropriate display order and coding order.
More specifically, when a section to be linked is indicated by the cell information, there may be cases where an edit boundary is defined in the middle of a VOBU, as shown in Figure 28A. If this is the case, the two cells to be linked will not have an appropriate display order or an appropriate coding order. In order to rectify the display order and the coding order, the reconstruction of the GOPs is performed using processing based on the three rules shown in Figure 28B. When the final image data in the display order of the former cell is a B-picture, processing based on the first rule re-encodes this image data to convert it into a P-picture (or an I-picture). The forward P-picture that is referred to by this B-picture is located before the B-picture in the coding order. However, this P-picture will not be displayed after the edit, and is thus deleted from the VOB.
When the first image data in the coding order of the latter cell is a P-picture, processing based on the second rule re-encodes this image data to convert it into an I-picture. When the first set or consecutive sets of image data in the display order of the latter cell are B-pictures, processing based on the third rule re-encodes this image data to convert it into image data whose display does not depend on correlation with other images that have been reproduced previously. Hereafter, pictures formed of image data that depend only on correlation with images that are still to be displayed will be called forward-B-pictures.

(2-2) Estimation of the Increase in Buffer Occupancy

When the picture types of certain images have been changed by the processing based on the three rules described above, the processing for estimating the increase in buffer occupancy estimates the sizes of these converted sets of image data. When the reconstruction described above is performed for the former cell, the final image data in the reproduction order of the former cell is converted from a B-picture into a P-picture or an I-picture, thereby increasing the size of this data. When the reconstruction described above is performed for the latter cell, the image data located at the start of the coding order of the latter cell is converted from a P-picture into an I-picture, and the picture type of the image data located at the front of the display order is converted into a forward-B-picture. This also increases the size of the data. The following is an explanation of the procedure for estimating the increases in data size that accompany the conversion of picture types, using Figures 29A and 29B. In Figure 29A, the former cell continues up to the B-picture B3. According to the above rules, the video data editing apparatus has to convert this B-picture B3 into the P-picture P1'. When the B-picture B3 is dependent on the P-picture P2 that is reproduced after the B-picture B3, the conversion process for the picture type incorporates the necessary information of the P-picture P2 into the P-picture P1' produced by the conversion process. In view of this procedure, the video data editing apparatus can estimate the data size of the P-picture P1' obtained by the conversion process as the sum of the size of the B-picture B3 and the size of the P-picture P2. This estimation method represents only one potential method, however, and other methods are equally possible. By determining the amount of code for use in the re-encoding based on the estimated buffer occupancy, the video data editing apparatus can allocate an optimal amount of code to the former cell and the latter cell. Figures 30A and 30B show how the increases in buffer occupancy that accompany changes in picture type within the latter cell are estimated. In Figure 30A, the image data from the B-picture B3 onward corresponds to the latter cell. Each cell is determined based on the display time of the start of the cell, so the B-picture B3 is the image data located at the start of the display order of the latter cell. As a result, the video data editing apparatus needs to convert the B-picture B3 into the forward-B-picture B' according to the rules given above. When this B-picture B3 has an information component that is dependent on the previously reproduced P-picture P2, this information component of the P-picture P2 is incorporated into the forward-B-picture B' during the conversion of the picture type.
In view of this procedure, the video data editing apparatus can estimate the data size of the forward-B-picture B' obtained by the conversion process as the sum of the size of the B-picture B3 and the size of the P-picture P2. For the latter cell, the video data editing apparatus also needs to convert the picture type of the image data located at the start of the coding order. Referring to the picture order of the latter cell in Figure 28A, it can be seen that the P-picture P3 is the image data to be displayed immediately after the B-picture B3. The P-picture P3 is stored in the reordering buffer 4f of the video data editing apparatus until the decoding of the B-picture B3 is completed, and is thus only displayed after the decoding of the B-picture B3 has been performed. Because the reordering buffer 4f reorders the image data in this manner, the P-picture P3 precedes the B-picture B3 in the coding order even though the P-picture P3 is displayed after the B-picture B3. According to the rules described above, the video data editing apparatus needs to convert the P-picture P3, detected as the first image data in the coding order, into an I-picture. When this P-picture has an information component that depends on the I-picture reproduced before the P-picture P3, this information component of the I-picture is incorporated into the converted picture during the conversion of the picture type. In view of this procedure, the video data editing apparatus can estimate the data size of the I-picture I1' obtained by the conversion process as the sum of the size of the P-picture P3 and the size of the preceding I-picture. Based on the buffer occupancy estimated in this way, the video data editing apparatus can then allocate the optimal amounts of code to the former and latter cells for use in the re-encoding.

(2-3) Procedure for the Seamless Linking of Cells

Figures 31 to 33 are flowcharts showing the procedure that links two cells so as to allow seamless reproduction of the two. It should be noted that many of the steps in these flowcharts are the same as the steps in the flowcharts shown in Figures 21 and 22, with the term "VOB" replaced by the term "cell". These steps have been given the same reference numbers as in the first embodiment, and their explanation is omitted. Figure 34 shows the audio frames in the audio stream corresponding to the audio frame x, the audio frame x+1, and the audio frame y used in Figure 31. In step S102, the control unit 1 refers to the time information that specifies the end of the cell to be played first (hereafter, the "former cell") and the time information that specifies the start of the cell to be played second (hereafter, the "latter cell"), and subtracts the C_V_S_PTM of the latter cell from the C_V_E_PTM of the former cell to obtain the STC_compensation. In step S103, the control unit 1 analyzes the changes in the buffer occupancy from the First_SCR of the former cell to the decoding completion time, the Last_DTS, of all the data in the former cell. In step S104, the control unit 1 performs the same analysis as in step S103 for the latter cell, and thus analyzes the changes in the buffer occupancy from the First_SCR of the latter cell to the decoding completion time Last_DTS of all the data in the latter cell. In step S130, the control unit 1 estimates the increase α in the buffer occupancy that accompanies the changes in picture type for the latter cell, according to the procedure shown in Figures 30A and 30B.
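A minimal sketch of these size estimates, using the picture names of Figures 29 and 30; the heuristic (the converted picture's size is the sum of the two source pictures' sizes) follows the text above and is only one possible estimation method:

```python
def estimate_increases(size_b3_former, size_p2_former,
                       size_p3_latter, size_ref_i_latter,
                       size_b3_latter, size_ref_p_latter):
    """Estimated buffer-occupancy increases from the type conversions.

    Former cell (rule 1): B3 -> P1', estimated as size(B3) + size(P2),
    so the increase is size(P2).
    Latter cell (rule 2): P3 -> I1', increase size(preceding I);
                (rule 3): B3 -> forward-B B', increase size(P2).
    Returns (beta_former, alpha_latter) in bytes.
    """
    beta_former = (size_b3_former + size_p2_former) - size_b3_former
    alpha_latter = ((size_p3_latter + size_ref_i_latter) - size_p3_latter) \
                 + ((size_b3_latter + size_ref_p_latter) - size_b3_latter)
    return beta_former, alpha_latter
```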
In step S131, the control unit 1 estimates the increase β in the buffer occupancy that accompanies the changes in picture type for the former cell, according to the procedure shown in Figures 29A and 29B. In step S132, the control unit 1 adds the estimated increases α and β to the buffer occupancies of the latter and former cells respectively. In step S105, the control unit 1 analyzes the changes in the buffer occupancy from the First_SCR of the latter cell + STC_compensation to the Last_DTS of the former cell. As shown in Figure 10C for the first embodiment, the highest occupancy Bv1 + Bv2 of the video buffer 4b is obtained during the period where the video data of both the former cell and the latter cell is stored in the video buffer 4b. In step S106, the control unit 1 controls the disc access unit 3 to read from the DVD-RAM the three VOBUs that are expected to include the image data located at the end of the former cell. After this, in step S107, the control unit 1 controls the disc access unit 3 to read the three VOBUs that are expected to include the image data located at the start of the latter cell. Figure 27A shows the area to be read from the former cell in step S106. Figure 27A shows a VOB that includes VOBU#98 to #107, with VOBU#99 to #105 indicated as the former cell. When the image data to be reproduced last in the former cell is the image data Bend, this image data will be included in one of VOBU#103 to #105 according to the one-second rule, so that VOBU#103 to #105 are read as the VOBU sequence that includes the image data to be played last. The VOB shown in Figure 27B includes VOBU#498 to #507, and of these, VOBU#500 to #506 are indicated as the latter cell. When the image data to be displayed first in this latter cell is the image data Ptop, this image data Ptop will be included in one of VOBU#500 to #502, so that VOBU#500 to #502 are read as the VOBU sequence that includes the image data to be displayed first. These VOBUs include all the image data that depends on the image data Ptop and the image data Bend, in addition to the audio data to be played at the same time as the image data Ptop and the image data Bend. As a result, all the image data required for the conversion of the picture types is read by this operation. It should be noted that the reads in this flowchart are made in VOBU units, although these reads may instead be made for the image data and audio data to be reproduced within one second, out of all the image data and audio data included in the VOBUs. In the present embodiment, the number of VOBUs corresponding to one second of reproduction is given as three, although any number of VOBUs may be used. The reads may alternatively be performed for the image data and audio data to be reproduced within a period longer than one second.
After these reads are completed, in step S108 the control unit 1 controls the demultiplexer 4a to separate the video data and the audio data from the VOBUs located at the end of the former cell and at the start of the latter cell. In step S109, the control unit 1 judges whether the accumulated amount of data in the buffer exceeds the upper limit of the buffer at any point in the decoding when the former cell and the latter cell coexist in the buffer. More specifically, this is achieved by judging whether the value Bv1 + Bv2 calculated in step S105 exceeds the upper limit of the buffer. If this value does not exceed the upper limit, the processing proceeds to step S133; if the value exceeds the upper limit, the control unit 1 assigns an amount of code based on the excess amount A to the former cell and the latter cell in step S110. It should be noted that the re-encoding performed in this case may be performed for only one of the former and latter cells, or for both. In step S111, the video data obtained from the two cells is re-encoded according to the amount of code assigned in step S110. In step S133, the First_SCR that has been newly assigned to the re-encoded video data of the latter cell is obtained. In this latter cell, the first image data in the display order and the first image data in the coding order will have been converted into picture types with larger amounts of data, so the value First_SCR + STC_compensation will indicate an earlier time than before. In step S112, the control unit 1 calculates which part of the audio data separated from the former cell corresponds to the audio frame x that includes the sum of the STC_compensation and the First_SCR newly assigned to the video data of the latter cell. In Figure 34, the upper and lower graphs respectively show the transitions in the buffer occupancy due to the video data of the former cell and of the latter cell. The sequence of audio frames below the lower graph in Figure 34 shows the audio frames of the audio data separated from the former cell, with each audio frame shown against the time axis of the graph above. The buffer occupancy of the new latter cell obtained as a result of the re-encoding is increased by the amount α'. It should be noted that this amount α' may differ from the estimated increase used in step S132. Because of this amount, the First_SCR newly assigned to the video data of the latter cell indicates an earlier time. As can be seen in the lower graph of Figure 34, the new value of First_SCR + STC_compensation is placed at a time Tα earlier than before. In Figure 34, the guide line drawn downward from the new value of First_SCR + STC_compensation crosses an audio frame in the audio frame sequence of the former cell. This crossed audio frame is the audio frame x, with the following audio frame x+1 being the final audio frame in the former cell. Since the sum of the STC_compensation and the new First_SCR of the latter cell indicates an earlier time, an earlier frame is indicated as the audio frame x. As a result, when the reading of the video data of the latter cell begins, the amount of audio data to be read from the former cell together with this video data is comparatively larger than in the first embodiment. Subsequently, the processing of steps S113 to S119 is performed, so that the stream encoder 2e performs the multiplexing shown in Figure 25.
After this, in step S120, the First_SCR, the Last_SCR, the seamless mark, the C_V_E_PTM, and the C_V_S_PTM for the previous and last cells are written into the seamless link information of the previous cell. The control unit 1 then performs the processing in steps S121 and S122. Of the data of the six VOBUs obtained through the re-encoding, the three VOBUs arranged at the start (the first VOBUs) originally formed part of the previous cell, and so they are appended to the end of the previous cell. Similarly, the three VOBUs arranged at the end (the last VOBUs) originally formed part of the last cell, and so are inserted at the start of the last cell. While one of the previous and last cells that has been given the re-encoded data is managed as having been assigned the same identifier as the VOB from which it was taken, the other of the two cells is managed as having been assigned an identifier different from that of the VOB from which it was taken. This means that after this division, the previous and last cells are managed as separate VOBs. This is because there is a high possibility that the timestamps are not continuous at the boundary between the previous cell and the last cell.
As in the first embodiment, in step S123, the control unit 1 judges whether the values of the SCRs are continuous. If so, the control unit 1 ends the processing in the flowcharts of Figures 31 to 33. If not, the control unit 1 calculates the excess amount A based on the number of packets given for the overlapping SCRs, determines an amount of code based on the excess amount A, and returns to step S109 to repeat the re-encoding. As a result of the above processing, the cells are re-encoded, with the cells indicated by the cell information being set as separate VOBs. This means that the VOB information for the newly generated VOBs needs to be provided in the RTRW management file. The following is an explanation of how this VOB information is defined for the cells. The "video stream attribute information" includes the compression mode information, the TV system information, the aspect ratio information, and the resolution information; this information can be set to correspond to the information for the VOB(s) from which the cells were taken. The "audio stream attribute information" includes a coding mode, the presence/absence of dynamic range control, a sampling frequency, and the number of channels; this information can also be set to correspond to the information for the VOB(s) from which the cells were taken. The "time map table" is composed of the size of each VOBU composing the VOB and the display period of each VOBU; a corresponding part of the information given for the VOB(s) from which the cells were taken can be used, with the sizes and display periods amended only for the VOBUs that have been re-encoded. The following is an explanation of the "seamless link information" that was generated in step S133. This seamless link information is composed of a seamless mark, a video presentation start time VOB_V_S_PTM, a video presentation end time VOB_V_E_PTM, a First_SCR, a Last_SCR, an audio gap start time A_STP_PTM, and an audio gap length A_GAP_LEN. These elements are written into the seamless link information in turn. Only when the relationship between the previous cell and the last cell satisfies conditions (1) and (2) below is the seamless mark set to "01". If either condition is not satisfied, the seamless mark is set to "00". (1) Both cells must use the same display method (NTSC, PAL, etc.) for the video stream, as given in the video attribute information. (2) Both cells must use the same coding method (AC-3, MPEG, Linear PCM) for the audio stream, as given in the audio attribute information. The "video presentation start time VOB_V_S_PTM" is updated to the presentation start time after re-encoding. The "video presentation end time VOB_V_E_PTM" is updated to the presentation end time after re-encoding. The "First_SCR" is updated to the SCR of the first packet after re-encoding. The "Last_SCR" is updated to the SCR of the final packet after re-encoding. The "audio gap start time A_STP_PTM" is set to the presentation end time of audio frame y, which is the final audio frame to be played using the audio data moved to the last cell in Figure 34.
The "audio separation length A_GAP_LEN" is set as the period from the end time of presentation of the last audio frame and to be played "using the audio data moving to the last cell in Figure 34 at the start time of presentation of the audio frame u.Once the VOB information has been generated as described above, a RTRW administration file included with this new VOB information is recorded in the DVD-RAM. By doing so, the two cells indicated by the cell information can be recorded on the DVD-RAM as two VOBs to be played seamlessly. As described above, this second mode can process the cells in a VOB or VOBs to make the cells reproduced seamlessly when reading only and when re-encoding the end of the previous cell and the start of the last cell. Since only the VOBUs located at the beginning and end of the respective cell are re-encoded, this recoding of the cells can be achieved in a very short time. It should be noted that while the present embodiment describes the case where the video fields are used as the unit when the cells are indicated, video frames may be used instead.
The processing of the present embodiment represented by the flowcharts (Figures 31 to 33) can be realized by a machine language program that may be distributed and sold having been recorded on a recording medium. Examples of this recording medium are an IC card, an optical disc, or a floppy disk. The machine language program recorded on the recording medium can then be installed on an ordinary personal computer. By executing the installed machine language program, the ordinary personal computer can achieve the functions of the video data editing apparatus of the present embodiment.
THIRD EMBODIMENT
The third embodiment of the present invention manages AV files in a file system and allows greater freedom in video editing.
(3-1) Directory Structure on a DVD-RAM
The RTRW management file and the AV files of the first embodiment are arranged in the directories shown in Figure 35, within a file system that complies with ISO/IEC 13346. In Figure 35, ovals represent directories and rectangles represent files. The root directory includes one directory, "RTRW", and two files called "File1.DAT" and "File2.DAT". The RTRW directory includes three files called "Movie1.VOB", "Movie2.VOB", and "RTRWM.IFO".
(3-1-1) File System Management Information for the Directories
The following is a description of the management information used to manage the RTRW management file and the AV files in the directory structure shown in Figure 35. Figure 36 shows the file system management information for the directory structure of Figure 35.
Figure 36 shows the volume area shown in Figure 3D, the sectors, and the stored contents of the sectors in a hierarchy. The arrows (1) to (5) in this drawing show the order in which the storage position of the "Movie1.VOB" file is specified by this management information. The first level in the hierarchy in Figure 36 shows the volume area shown in Figure 3D. The second level in the hierarchy shows the file set descriptor, the terminating descriptor, the file entries, and the directories out of the complete management information. The information in this second level complies with a file system standardized under ISO/IEC 13346. File systems standardized under ISO/IEC 13346 manage directories in a hierarchy, and the management information in Figure 36 is arranged according to the directory structure. However, a recording region is shown only for the AV file "Movie1.VOB". The file set descriptor (LBN 80) in the second level shows information such as the LBN of the sector that stores the file entry for the root directory. The terminating descriptor (LBN 81) shows the end of the file set descriptor. A file entry (such as LBN 82, 584, 3585) is stored for each file (or directory), and shows a storage position for that directory or file. The file entries for files and the file entries for directories have the same format, so hierarchical directories can be constructed freely. A directory (such as LBN 83, 585) shows the storage positions of the file entries of the files and directories included in that directory. Three file entries and two directories are shown at the third level of the hierarchy. The file entries and directories are traced by the file system, and have a data construction that allows the storage position of a specified file to be found regardless of the depth of the hierarchy in the directory structure. Each file entry includes an allocation descriptor that shows a storage position of a directory or file. When the data recorded in a file is divided into a plurality of extensions, the file entry includes one allocation descriptor for each extension. The expression "extension" refers to a section of data included in a file that should preferably be stored in consecutive regions. When, for example, the size of a VOB to be recorded in an AV file is large but there is no consecutive region large enough to store the VOB, the AV file cannot be recorded onto the DVD-RAM as it is. However, when a plurality of small consecutive regions are distributed across the partition area, by dividing the VOB to be recorded in the AV file, the resulting divided sections of the VOB can be stored in the distributed consecutive regions. By dividing VOBs in this way, the probability of being able to store the VOBs as AV files is increased, even with limits on the number of consecutive regions and the length of the partition area. To improve the efficiency with which data is recorded on a DVD-RAM, the VOBs recorded in an AV file are divided into a plurality of extensions, with these extensions recorded in separate consecutive regions on the disc. It should be noted that the expression "consecutive regions" refers here to a region composed of ECC blocks that are logically or physically consecutive.
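The chain from the file set descriptor to the extensions of Movie1.VOB can be illustrated with the following minimal Python sketch, which resolves "RTRW/Movie1.VOB" through simplified stand-ins for the file entries and directories. The classes and LBN values are illustrative; real ISO/IEC 13346 descriptors carry many more fields.

from dataclasses import dataclass

@dataclass
class AllocationDescriptor:
    first_lbn: int      # first logical block number of the extension
    length_blocks: int  # number of occupied logical blocks

@dataclass
class FileEntry:
    allocation: list    # one AllocationDescriptor per extension

@dataclass
class Directory:
    entries: dict       # file identification descriptors: name -> entry LBN

# Toy volume: LBN -> stored structure (loosely mirrors Figure 36).
volume = {
    82:   FileEntry([AllocationDescriptor(83, 1)]),      # root directory entry
    83:   Directory({"RTRW": 584}),
    584:  FileEntry([AllocationDescriptor(585, 1)]),     # RTRW directory entry
    585:  Directory({"Movie1.VOB": 3585}),
    3585: FileEntry([AllocationDescriptor(4000, 500),    # extension #1
                     AllocationDescriptor(9000, 700)]),  # extension #2
}

def resolve(path, root_entry_lbn=82):
    """Follow file entries and directories down to a file's extensions."""
    entry = volume[root_entry_lbn]
    for name in path.split("/"):
        directory = volume[entry.allocation[0].first_lbn]
        entry = volume[directory.entries[name]]
    return entry.allocation

print(resolve("RTRW/Movie1.VOB"))  # the two extensions of Movie1.VOB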
As an example, the file entries at LBN 82 and 584 in Figure 36 each include a single allocation descriptor, which means that the corresponding file is not divided into a plurality of extensions (that is, it is composed of a single extension). The file entry at LBN 3585, meanwhile, has two allocation descriptors, which means that the data stored in the file is composed of two extensions. Each directory includes a file identification descriptor that shows the storage position of a file entry for each file and for each directory included in that directory. When a route is traced through the file entries and directories, the storage position of "root/RTRW/Movie1.VOB" can be found by following the order: file set descriptor → file entry (root) → directory (root) → file entry (RTRW) → directory (RTRW) → file entry (Movie1.VOB) → file (extensions #1 and #2 of Movie1.VOB). Figure 37 shows the links between the file entries and the directories on this route in another format that reflects the construction of the directories. In this drawing, the root directory includes the file identification descriptors for the parent directory (the parent of the root being the root itself), the RTRW directory, File1.DAT, and File2.DAT. The RTRW directory includes the file identification descriptors for each of the parent directory (root), the file Movie1.VOB, the file Movie2.VOB, and the file RTRWM.IFO. In the same way, the storage position of the file Movie1.VOB is specified by tracing the route (1) to (5).
(3-1-2) Data Construction of a File Entry
Figure 38A shows the data construction of a file entry in more detail. As shown in Figure 38A, a file entry includes a descriptor tag, an ICB tag, an allocation descriptor length, extended attributes, and an allocation descriptor. In this figure, the legend "BP" represents "byte position", while the legend "RBP" represents "relative byte position". The descriptor tag is a tag showing that the present entry is a file entry. For a DVD-RAM, a variety of tags are used, such as the file entry descriptor and the space bitmap descriptor. For a file entry, the value "261", indicating a file entry, is used as the descriptor tag. The ICB tag shows attribute information for the file entry itself. The extended attributes are information showing attributes with a higher-level content than the content specified by the attribute information field in the file entry. The allocation descriptor field stores as many allocation descriptors as there are extensions composing the file. Each allocation descriptor shows the logical block number (LBN) indicating the storage position of an extension of a file or directory. The data construction of an allocation descriptor is shown in Figure 38B. The allocation descriptor of Figure 38B includes data showing the extension length and a logical block number showing the storage position of the extension. The two upper bits of the data indicating the extension length show the recording state of the extension's storage area. The meanings of the various values are as shown in Figure 38C.
(3-1-3) Data Construction of the File Identification Descriptors for Directories and Files
Figures 39A and 39B show the detailed data construction of the file identification descriptors for directories and files in the various directories.
These two types of file identification descriptors have the same format, each including management information, identification information, the length of the file or directory name, an address showing the logical block number that stores the file entry for the file or directory, extension information, and the file or directory name itself. In this way, the address of a file entry is associated with a directory name or a file name.
(3-1-4) Minimum Size of an AV Block
When a VOB to be recorded in an AV file is divided into a plurality of extensions, the data length of each extension must exceed the data length of an AV block. The expression "AV block" refers here to the minimum amount of data for which there is no danger of underflow in the track buffer 3a when a VOB is read from the DVD-RAM. To guarantee uninterrupted playback, the minimum size of an AV block is defined in relation to the track buffer provided in the playback apparatus. The following explains how the minimum size of an AV block is determined.
(3-1-5) Minimum Size of an AV Block Area
First, the reason why the minimum size of an AV block needs to be determined to guarantee uninterrupted playback is described. Figure 40 shows a model of how a playback apparatus that plays video objects buffers the AV data read from the DVD-RAM in the track buffer. This model shows the minimum requirements of a playback apparatus for uninterrupted playback to be guaranteed. In the upper part of Figure 40, the playback apparatus subjects the AV data read from the DVD-RAM to ECC processing, temporarily accumulates the resulting data in the track buffer, which is a FIFO memory, and then transfers the data from the track buffer to the decoder. In the illustrated example, Vr is the input transfer rate of the track buffer (in other words, the rate at which data is read from the optical disc), and V0 is the output transfer rate of the track buffer (the decoder input rate), where Vr > V0. In the present model, Vr = 11 Mbps. The lower part of Figure 40 is a graph showing the changes in the amount of data in the track buffer for the present model. In this graph, the vertical axis represents the amount of data in the buffer, while the horizontal axis represents time. This graph assumes that AV block #k, which includes a defective sector, is read after AV block #j, which includes no defective sectors. The period T1 shown on the time axis is the time required to read all the AV data in AV block #j, which includes no defective sectors. During this period T1, the amount of data in the track buffer increases at the rate (Vr − V0). Period T2 (hereinafter called the "jump period") is the time required for the optical reader to jump from AV block #j to AV block #k. This jump period includes the seek time of the optical reader and the time taken for the rotation of the optical disc to stabilize. In the worst-case scenario of a jump from the inner periphery to the outer periphery of the optical disc, the jump time is assumed to be around 1500 ms for the present model. During the jump period T2, the amount of data in the track buffer decreases at the rate V0. Periods T3 to T5 show the time taken to read all the AV data in AV block #k, which includes a defective sector. Of these periods, period T4 shows the time taken to jump from a present ECC block that includes a defective sector to the next ECC block.
This jump operation comprises skipping a present ECC block when one or more of its 16 sectors are defective and jumping to the next ECC block. This means that within an AV block, instead of merely logically replacing each defective sector in an ECC block with a replacement sector (or a replacement ECC block), use of the entire ECC block (all 16 sectors) containing a defective sector is abandoned. This method is called the ECC block jump method. Period T4 is the disc rotation waiting time, which, in the worst-case scenario, is the time taken for one disc revolution. This is assumed to be around 105 ms for the present model. In periods T3 and T5, the amount of data in the buffer increases at the rate Vr − V0, while during period T4 the amount decreases at the rate V0. When "N_ecc" represents the total number of ECC blocks in an AV block, the size of an AV block is given by the formula "N_ecc × 16 × 8 × 2048 bits". The minimum value of N_ecc needed to guarantee uninterrupted playback is derived as follows.
In period T2, the AV data is read only out of the track buffer, without concurrent replenishment of AV data. During this period T2, if the amount of data in the buffer reaches zero, an underflow will occur at the decoder. In this case, uninterrupted playback of the AV data cannot be guaranteed. As a result, the relationship shown as Equation 1 below needs to be satisfied to guarantee uninterrupted playback of the AV data (that is to say, to guarantee that no underflow occurs).
Equation 1: (amount of buffered data B) ≥ (amount of data R consumed) The amount of buffered data B is the amount of data stored in the buffer at the end of period T1. The amount of data R consumed is the amount of data read out of the buffer during period T2.
The amount of buffered data B is given by Equation 2 below.
Equation 2: (amount of buffered data B) = T1 × (Vr − V0) = (N_ecc × 16 × 8 × 2048) × (1 − V0/Vr) since T1, the time taken to read one AV block, equals (N_ecc × 16 × 8 × 2048)/Vr. The amount of data R consumed is given by Equation 3 below.
Equation 3: (amount of data R consumed) = T2 × V0 Substituting Equations 2 and 3 into the respective sides of Equation 1 gives Equation 4 below.
Equation 4: (N_ecc × 16 × 8 × 2048) × (1 − V0/Vr) ≥ T2 × V0 By rearranging Equation 4, it can be seen that the number N_ecc of ECC blocks that guarantees uninterrupted playback must satisfy Equation 5 below.
Equation 5: N_ecc ≥ T2 × V0 / ((16 × 8 × 2048) × (1 − V0/Vr)) In Equation 5, T2 is the jump period described above, which has a maximum of 1.5 s. Meanwhile, Vr has a fixed value, which for the model in the upper part of Figure 40 is 11 Mbps. V0 is expressed by Equation 6 below, which takes into account the variable bit rate of the AV block composed of the N_ecc ECC blocks under consideration. It is noted that V0 is not the maximum value of the logical transfer rate for transfers from the track buffer, but is given by the following equation as the effective input rate of the variable-rate AV data into the decoder. The length of the AV block here is given by the number of packets N_packet in a block composed of N_ecc ECC blocks ((N_ecc − 1) × 16 < N_packet ≤ N_ecc × 16).
Equation 6: V0 = AV block length (bits) / AV block playback time (seconds) = (N_packet × 2048 × 8) × (27M / (SCR_first_next − SCR_first_current)) In the above equation, SCR_first_next is the SCR of the first packet in the next AV block, while SCR_first_current is the SCR of the first packet in the present AV block. Each SCR shows the time at which the corresponding packet must be transferred from the track buffer to the decoder. The unit for the SCRs is 1/27M seconds (a 27 MHz clock). As shown in Equations 5 and 6, the minimum size of an AV block can theoretically be calculated according to the actual bit rate of the AV data. Equation 5 applies to the case where there are no defective sectors on the optical disc. When such sectors are present, the number N_ecc of ECC blocks required to guarantee uninterrupted playback is as described below. It is assumed here that the AV block area includes ECC blocks with defective sectors, the number of which is represented as "dN_ecc". AV data is not recorded in the dN_ecc ECC blocks because of the ECC block jumps described above. The time Ts lost in jumping over the dN_ecc defective ECC blocks is represented as "T4 × dN_ecc", where "T4" represents the ECC block jump time for the model shown in Figure 40.
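As a numerical illustration of Equations 5 and 6, the following minimal Python sketch evaluates them for the model above (Vr = 11 Mbps, T2 = 1.5 s at most). The packet count and SCR values are illustrative assumptions; the ceiling reflects that N_ecc must be an integer.

import math

ECC_BLOCK_BITS = 16 * 8 * 2048  # one ECC block: 16 sectors x 2048 bytes x 8 bits

def effective_v0(n_packet, scr_first_next, scr_first_current):
    """Equation 6: effective decoder input rate of a variable-rate AV block.
    SCRs are in 1/27M second units (27 MHz clock)."""
    playback_s = (scr_first_next - scr_first_current) / 27e6
    return (n_packet * 2048 * 8) / playback_s

def min_n_ecc(t2_s, v0_bps, vr_bps=11e6):
    """Equation 5: minimum ECC blocks per AV block (no defective sectors)."""
    return math.ceil(t2_s * v0_bps / (ECC_BLOCK_BITS * (1 - v0_bps / vr_bps)))

# Illustrative values: 2656 packets played over 6 s give V0 of about 7.25 Mbps;
# with T2 = 1.5 s this yields the minimum N_ecc for one AV block.
v0 = effective_v0(2656, scr_first_next=162_000_000, scr_first_current=0)
print(round(v0), min_n_ecc(1.5, v0))

Equation 7 below extends the same expression with the dN_ecc skipped blocks and the lost time Ts.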
To guarantee uninterrupted playback of AV data when defective sectors are included, the AV block area needs to include at least the number of ECC blocks given by Equation 7.
Equation 7: N_ecc ≥ dN_ecc + V0 × (T2 + Ts) / ((16 × 8 × 2048) × (1 − V0/Vr)) As described above, the size of the AV block area is calculated from Equation 5 when no defective sectors are present, and from Equation 7 when defective sectors are present. It should be noted here that when the AV data is composed of a plurality of AV blocks, the first and last AV blocks do not need to satisfy Equation 5 or 7. This is because the timing at which decoding is started for the first AV block can be delayed, which is to say, the supply of data to the decoder can be delayed until enough data has accumulated in the buffer, thereby ensuring uninterrupted playback between the first and second AV blocks. The last AV block, meanwhile, is not followed by any further AV data, meaning that playback can simply end with this last AV block.
(3-2) Functional Blocks of the DVD Recorder 70
Figure 41 is a functional block diagram showing the construction of the DVD recorder 70 divided into functions. Each function in Figure 41 is realized by the CPU in the control unit 1 executing a program in the ROM to control the hardware shown in Figure 17. The DVD recorder of Figure 41 includes the disc recording unit 100, the disc reading unit 101, the common file system unit 10, the AV file system unit 11, the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15.
(3-2-1) Disc Recording Unit 100 and Disc Reading Unit 101
The disc recording unit 100 operates as follows. Upon receiving an input of the logical sector number at which recording is to start and the data to be recorded from the common file system unit 10 or the AV file system unit 11, the disc recording unit 100 moves the optical reader to the appropriate logical sector number and causes the optical reader to record the data in ECC block units (16 sectors) into the indicated sectors on the disc. When the amount of data to be recorded is below 16 sectors, the disc recording unit 100 first reads the existing data, subjects it to ECC processing, and records the result onto the disc as an ECC block.
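The handling of writes smaller than one ECC block can be illustrated with the following minimal Python sketch. This is an assumed model of the read-modify-write behaviour just described, not the apparatus's actual implementation; writes of 16 sectors or more are assumed to be ECC-block aligned.

SECTOR_BYTES = 2048
SECTORS_PER_ECC = 16

class Disc:
    def __init__(self, n_sectors):
        self.sectors = [bytes(SECTOR_BYTES)] * n_sectors

    def read_ecc_block(self, lsn):
        base = (lsn // SECTORS_PER_ECC) * SECTORS_PER_ECC
        return self.sectors[base:base + SECTORS_PER_ECC]

    def write_sectors(self, lsn, data):
        """Record `data` (a list of 2048-byte sectors) starting at `lsn`."""
        if len(data) < SECTORS_PER_ECC:
            # Partial block: read the whole ECC block, merge, rewrite it all.
            base = (lsn // SECTORS_PER_ECC) * SECTORS_PER_ECC
            block = self.read_ecc_block(lsn)
            offset = lsn - base
            block[offset:offset + len(data)] = data
            self.sectors[base:base + SECTORS_PER_ECC] = block
        else:
            self.sectors[lsn:lsn + len(data)] = data

disc = Disc(64)
disc.write_sectors(18, [b"\x01" * SECTOR_BYTES] * 3)  # 3 sectors -> read-modify-write
print(disc.sectors[18][:1], disc.sectors[16][:1])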
The disc reading unit 101 operates as follows. Upon receiving an input of the logical sector number from which data is to be read and a number of sectors from the common file system unit 10 or the AV file system unit 11, the disc reading unit 101 moves the optical reader to the appropriate logical sector number and causes the optical reader to read the data in ECC block units from the indicated logical sectors. The disc reading unit 101 performs ECC processing on the read data and transfers only the data of the required sectors to the common file system unit 10. Like the disc recording unit 100, the disc reading unit 101 reads the VOBs in units of 16 sectors (one ECC block), thereby reducing overhead.
(3-2-2) Common File System Unit 10
The common file system unit 10 provides the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15 with the standard functions for accessing data in the format standardized under ISO/IEC 13346. These standard functions provided by the common file system unit 10 control the disc recording unit 100 and the disc reading unit 101 to read or write data from or to the DVD-RAM in directory units and file units. Representative examples of the standard functions provided by the common file system unit 10 are as follows. 1. Controlling the disc recording unit 100 to record a file entry, and transferring the file identification descriptor to the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, or the AV data editing unit 15. 2. Converting a recorded area on the disc that includes a file into an empty area. 3. Controlling the disc reading unit 101 to read the file identification descriptor of a specified file from a DVD-RAM. 4. Controlling the disc recording unit 100 to record the data present in memory onto the disc as a non-AV file. 5. Controlling the disc reading unit 101 to read an extension composing a file recorded on the disc. 6. Controlling the disc reading unit 101 to move the optical reader to a desired position within the extensions composing a file. To use any of the functions (1) to (6), the recording-editing-playback control unit 12 to the AV data editing unit 15 can issue a command to the common file system unit 10, indicating the file to be read or recorded as a parameter. These commands are called common file system-oriented commands.
Several types of common file system-oriented commands are available: "(1) CREATE", "(2) DELETE", "(3) OPEN/CLOSE", "(4) WRITE", "(5) READ", and "(6) SEARCH". These commands are assigned respectively to functions (1) to (6). In the present embodiment, the assignment of commands to the standard functions is as follows. To use function (1), the recording-editing-playback control unit 12 to the AV data editing unit 15 can issue a "CREATE" command to the common file system unit 10. To use function (2), the recording-editing-playback control unit 12 to the AV data editing unit 15 can issue a "DELETE" command to the common file system unit 10. In the same way, to use functions (3), (4), (5), and (6) respectively, the recording-editing-playback control unit 12 to the AV data editing unit 15 can issue an "OPEN/CLOSE", "WRITE", "READ", or "SEARCH" command to the common file system unit 10.
(3-2-3) AV File System Unit 11
The AV file system unit 11 provides the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15 with extended functions that are necessary only when an AV file is recorded or edited. These extended functions cannot be provided by the common file system unit 10. The following are representative examples of these extended functions. (7) Writing a VOB encoded by the MPEG encoder 2 onto a DVD-RAM as an AV file. (8) Cutting an indicated part out of the VOB recorded in an AV file and setting that part as a separate file. (9) Erasing an indicated part of the VOB recorded in an AV file. (10) Linking two AV files present on the DVD-RAM, with VOBUs that have been re-encoded according to the procedures of the first and second embodiments.
To use the extended functions (7) to (10), the recording-editing-playback control unit 12 to the AV data editing unit 15 can issue a command to the AV file system unit 11, indicating the file to be recorded, linked, or cut. These commands are called AV file system-oriented commands. Here, the AV file system-oriented commands "AV-WRITE", "DIVIDE", "SHORTEN", and "APPEND" are available, assigned respectively to functions (7) to (10). In the present embodiment, the assignment of commands to the extended functions is as follows. To use function (7), the AV data recording unit 13 to the AV data editing unit 15 can issue an AV-WRITE command. To use function (8), the AV data recording unit 13 to the AV data editing unit 15 can issue a DIVIDE command. Similarly, to use function (9) or (10), the AV data recording unit 13 to the AV data editing unit 15 can issue a "SHORTEN" or "APPEND" command. With function (10), each extension of the file after linking is as long as or longer than an AV block.
(3-2-4) Recording-Editing-Playback Control Unit 12
The recording-editing-playback control unit 12 issues an OPEN/CLOSE command indicating directory names as parameters to the common file system unit 10, and in doing so causes the common file system unit 10 to read a plurality of file identification descriptors from the DVD-RAM. The recording-editing-playback control unit 12 then analyzes the directory structure of the DVD-RAM from the file identification descriptors and receives an indication from the user of a file or directory to be operated on. Upon receiving the user's indication of the target file or directory, the recording-editing-playback control unit 12 identifies the content of the desired operation based on the user operation identified by the remote control signal receiving unit 8, and issues instructions to have the AV data recording unit 13, the AV data reproduction unit 14, or the AV data editing unit 15 perform the appropriate processing for the file or directory indicated as the operation target. To have the user indicate the operation target, the recording-editing-playback control unit 12 transfers graphic data, which visually represents the directory structure, the total number of AV files, and the sizes of the empty areas on the present disc, to the video signal processing unit 5. The video signal processing unit 5 converts this logical data into an image signal and has it displayed on the TV monitor 72. Figure 42 shows an example of the graphic data displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12. During the display of this graphic data, the display color of any of the files or directories can be changed to show a potential operation target. This change in color is used to focus the user's attention, and is thus called the "focus state". The use of the normal color, meanwhile, is called the "normal state". When the user presses the mark key on the remote control 71, the file or directory that is currently in the focus state returns to the normal state, and a different, newly indicated file or directory is displayed in the focus state. When any of the files or directories is in the focus state, the recording-editing-playback control unit 12 waits for the user to press the "confirm" key on the remote control 71.
When the user presses the confirm key, the recording-editing-playback control unit 12 identifies the file or directory that is currently in the focus state as the operation target. In this way, the recording-editing-playback control unit 12 can identify the file or directory that is the operation target. To identify the operation content, the recording-editing-playback control unit 12 determines what operation content has been assigned to the key code received from the remote control signal receiving unit 8. As shown on the left side of Figure 41, keys with the legends "PLAY", "REWIND", "STOP", "FAST FORWARD", "RECORD", "MARK", "VIRTUAL EDIT", and "REAL EDIT" are present on the remote control 71. The recording-editing-playback control unit 12 thus identifies the operation content indicated by the user according to the key code received from the remote control signal receiving unit 8.
(3-2-4-1) Operation Contents That Can Be Received by the Recording-Editing-Playback Control Unit 12
The operation contents are classified into operation contents that are provided by conventional domestic AV equipment, and operation contents that are provided especially for video editing. As specific examples, "play", "rewind", "stop", "fast forward", and "record" all fall into the former category, while "mark", "virtual edit", and "real edit" all fall into the latter category. A "play" operation causes the DVD recorder 70 to play a VOB recorded in the AV file specified as the operation target. A "rewind" operation causes the DVD recorder 70 to rapidly play the VOB currently being played in reverse.
A "stop" operation causes the VOB recorder apparatus 70 to stop playback of the present VOB. A "fast forward" operation causes the VOB recording apparatus 70 to rapidly reproduce the present VOB in the forward direction. A "record" operation causes the DVD recorder apparatus 70 to generate a new AV file in the directory indicated as the operation target and write the VOB to be recorded in the new AV file. These operations in this prior category are well known to users as functions of conventional, domestic AV equipment, such as video cassette recorders and CD players. The operations in the last category are performed by users when, to use an analogy of editing a conventional film, sections of the film are cut and spliced together to produce a new sequence 'of film. A "mark" operation causes the DVD apparatus 70 to reproduce a VOB included in the AV file indicated as the operation target and mark the desired images of the video images reproduced by the VOB. To use the analogy of editing a movie, this "mark" operation involves marking points where the movie will be cut. A "virtual editing" operation causes the DVD recorder apparatus 70 to select a plurality of pairs of two points indicated by a mark operation such as playback start points and playback end points and then define a logical reproduction path to the assign a reproduction order to these pairs of points. In a virtual edit operation, the section defined by a pair of a playback start point and the end point of playback selected by the user is called a "cell". The reproduction path defined by assigning a playback order to the cells is called a "program string". A real "editing" operation causes the DVD recorder apparatus 70 to cut each section indicated as a cell of an AV file recorded on a DVD-RAM, adjust the cut sections as stop files, and link a plurality of sliced sections of according to the production order shown by a program chain. These editing operations are analogous to the cutting of a film in the marked positions and the splicing of the sections cut together. In these editing operations, the extension of the linked files is equal to or greater than the length of an AV block. The recording-editing-reproducing control unit 12 controls which of the AV data recording unit 13 to the AV data editing unit 15 is used when performing the operation contents described above. In addition to specifying the operation objective and the operation content, the recording-editing-reproduction control unit 12 chooses the appropriate component (s) for the operation content of the data recording unit 13. AV to the AV data editing unit 15 and transfers the instructions informing them of the components of the operation content. The following is a description of example instructions of the record-editing-playback control unit 12 given to an AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15. using combinations of an operation objective and an operation content. In Figure 42, the "DVD_Video" directory is in the focus state, so that if the user presses the "REC" key, the recording-editing-playback control unit 12 identifies the "DVD_Video" directory as the operation objective and "record" as the operation content. 
The recording-editing-playback control unit 12 selects the AV data recording unit 13 as the component capable of performing a record operation, and instructs the AV data recording unit 13 to generate a new AV file in the directory indicated as the operation target. When the file "AV_FILE#1" is in the focus state and the user presses the "PLAY" key on the remote control 71, the recording-editing-playback control unit 12 identifies the file "AV_FILE#1" as the operation target and "play" as the operation content. The recording-editing-playback control unit 12 selects the AV data reproduction unit 14 as the component capable of performing a play operation, and instructs the AV data reproduction unit 14 to play the AV file indicated as the operation target. When the file "AV_FILE#1" is in the focus state and the user presses the "MARK" key on the remote control 71, the recording-editing-playback control unit 12 identifies the file "AV_FILE#1" as the operation target and "mark" as the operation content. The recording-editing-playback control unit 12 selects the AV data editing unit 15 as the component capable of performing a mark operation, and instructs the AV data editing unit 15 to perform a mark operation for the AV file indicated as the operation target.
(3-2-5) AV Data Recording Unit 13
The AV data recording unit 13 controls the encoding operations of the MPEG encoder 2 while issuing common file system-oriented commands and AV file system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. By doing so, the AV data recording unit 13 makes use of functions (1) to (10) and performs the record operations.
(3-2-6) AV Data Reproduction Unit 14
The AV data reproduction unit 14 controls the decoding operations of the MPEG decoder 4 while issuing common file system-oriented commands and AV file system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. By doing so, the AV data reproduction unit 14 makes use of functions (1) to (10) and performs the "play", "rewind", "fast forward", and "stop" operations.
(3-2-7) AV Data Editing Unit 15
The AV data editing unit 15 controls the decoding operations of the MPEG decoder 4 while issuing common file system-oriented commands and AV file system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. By doing so, the AV data editing unit 15 makes use of functions (1) to (10) and performs the "mark", "virtual edit", and "real edit" operations.
In more detail, upon receiving instructions from the recording-editing-playback control unit 12 to mark the AV file indicated as the operation target, the AV data editing unit 15 has the AV data reproduction unit 14 play the indicated AV file and monitors whether the user presses the "MARK" key on the remote control 71. When the user presses the "MARK" key during playback, the AV data editing unit 15 writes information called a "mark point" onto the DVD-RAM as a non-AV file. This mark point information shows the time in seconds from the start of playback of the AV file to the point where the user pressed the "MARK" key. Upon receiving instructions from the recording-editing-playback control unit 12 for a virtual edit operation, the AV data editing unit 15 generates information defining a logical reproduction path according to the user's key operations on the remote control 71. The AV data editing unit 15 then controls the common file system unit 10 so that this information is written onto the DVD-RAM as a non-AV file. Upon receiving instructions from the recording-editing-playback control unit 12 for a real edit operation, the AV data editing unit 15 cuts the sections indicated as cells out of the AV file on the DVD-RAM, sets the cut sections as separate files, and links them to form a sequence of cells. When linking a plurality of files, the AV data editing unit 15 performs processing so that seamless reproduction of the images will be achieved. This means that there will be no interruptions in the image display when a linked AV file is played. The AV data editing unit 15 links the extensions so as to make every extension, except for the last extension to be played, equal to or greater than the AV block length.
(3-2-7-1) Processing for Virtual Edits and Real Edits by the AV Data Editing Unit 15
Figure 43 is a flowchart for the processing of the real edit and virtual edit operations. Figures 44A to 44F show a complementary example of the processing by the AV data editing unit 15 according to the flowchart of Figure 43. The following describes the editing processes of the AV data editing unit 15 with reference to the flowchart of Figure 43 and the example in Figures 44A to 44F. The AV file shown in Figure 44A is already stored on the DVD-RAM. With this AV file indicated as the operation target, the user presses the "PLAY" key on the remote control 71. The recording-editing-playback control unit 12 detects the key operation, and the AV data editing unit 15 has the AV data reproduction unit 14 begin playing the AV file in step S1. After the start of playback, the reproduction proceeds to time t1 in Figure 44B, at which point the user presses the "MARK" key. In response, the AV data editing unit 15 sets mark point #1, which expresses a relative time code for time t1, in the present AV file. The user subsequently presses the "MARK" key a further seven times, at times t2, t3, t4, ... t8. In response, the AV data editing unit 15 sets mark points #2, #3, #4, #5, ... #8, which express the relative time codes for times t2, t3, t4, ... t8, in the present AV file, as shown in Figure 44B. After the execution of step S1, the processing proceeds to step S2, where the AV data editing unit 15 has the user indicate pairs of mark points.
The AV data editing unit 15 then determines the cells to be played within the present AV file according to the selected pairs of mark points.
In Figure 44C, the user indicates that mark points #1 and #2 form pair (1), mark points #3 and #4 form pair (2), mark points #5 and #6 form pair (3), and mark points #7 and #8 form pair (4). The AV data editing unit 15 accordingly sets the AV data within each pair of points as a separate cell, and in the present example sets the four cells Cell#1, Cell#2, Cell#3, and Cell#4. It should be noted that in the present example the AV data editing unit 15 could alternatively set the pair of Mark#2 and Mark#3 as one cell, and the pair of Mark#4 and Mark#5 as another cell. Then, in step S3, the AV data editing unit 15 generates a program chain by assigning a playback order to the cells to be played. In Figure 44D, Cell#1 is first in the reproduction path (shown by the legend "1st" in the drawing), Cell#2 is second in the reproduction path (shown by the legend "2nd"), and Cell#3 and Cell#4 are respectively third and fourth in the reproduction path (shown by the legends "3rd" and "4th"). In doing so, the AV data editing unit 15 treats the plurality of cells as a program chain, based on the chosen playback order. It is noted that Figure 44D shows the simplest playback order for the cells; the setting of other orders, such as Cell#3 → Cell#1 → Cell#2 → Cell#4, is equally possible. In step S4, the AV data editing unit 15 checks whether the user has indicated playback of the program chain. In step S5, the AV data editing unit 15 checks whether the user has indicated an edit operation for the program chain. When the user indicates playback, the AV data editing unit 15 instructs the AV data reproduction unit 14 to play the program chain indicated for playback. On receiving the playback instructions from the AV data editing unit 15, the AV data reproduction unit 14 causes the optical reader to seek to Mark#1, which is the playback start position for Cell#1, as shown in Figure 44E. Once the optical reader has been moved to Mark#1 in the AV file according to the SEARCH command, the AV data editing unit 15 has the section between Mark#1 and Mark#2 read by issuing a READ command to the common file system unit 10. In this way, the VOBUs in Cell#1 are read from the DVD-RAM, before being decoded sequentially by the MPEG decoder 4 and displayed as images on the TV monitor 72. Once the VOBUs up to Mark#2 have been decoded, the AV data editing unit 15 performs the same processing for the remaining cells. In doing so, the AV data editing unit 15 has only the sections indicated as Cells #1, #2, #3, and #4 played. The AV file shown in Figure 44A is a movie that was broadcast on television. Figure 44F shows the image content of the different sections in this AV file. The section between time t0 and time t1 is the credits sequence V1 that shows the cast and the director of the film. The section between time t1 and time t2 is the first broadcast sequence V2 of the film itself. The section between time t2 and time t3 is a commercial sequence V3 that was inserted into the TV broadcast. The section between time t3 and time t4 is the second broadcast sequence V4 of the film. The section between time t5 and time t6 is the third broadcast sequence V5 of the film. Here, times t1, t2, t3, t4, t5, and t6 are set as Mark#1, Mark#2, Mark#3, Mark#4, Mark#5, and Mark#6, and pairs of marks are set as cells.
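Steps S2 and S3 just described can be summarised in the following minimal Python sketch: mark points are paired into cells, and a playback order over the cells forms a program chain. The times are illustrative; playback is reduced to printing the sections to read.

marks = {1: 10.0, 2: 95.0, 3: 120.0, 4: 210.0,
         5: 240.0, 6: 330.0, 7: 350.0, 8: 400.0}

# Step S2: the user selects pairs of mark points; each pair defines a cell.
pairs = [(1, 2), (3, 4), (5, 6), (7, 8)]
cells = [(marks[a], marks[b]) for a, b in pairs]

# Step S3: a playback order over the cells defines the program chain. Any
# order is possible, e.g. [cells[2], cells[0], cells[1], cells[3]].
program_chain = [cells[0], cells[1], cells[2], cells[3]]

for start, end in program_chain:
    # SEARCH to the cell's start, then READ the VOBUs up to its end.
    print(f"SEARCH to {start}s, READ until {end}s")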
The playback order of the cells is set as a program chain. When reading is performed as shown in Figure 44E, the AV data editing unit 15 causes the credits sequence V1 to be skipped, so that playback starts with the first movie sequence V2 between time t1 and time t2.
After this, the AV data editing unit 15 causes the commercial sequence V3 to be skipped, and causes the second movie sequence V4 between time t3 and time t4 to be played. The following is a description of the operation of the AV data editing unit 15 when the user indicates a real edit operation, with reference to Figures 45A to 45E and Figures 46A to 46F. Figures 45A to 45E show a complementary example of the processing of the AV data editing unit 15 in the flowchart of Figure 43. The variables mx and Af in the flowchart of Figure 43 and in Figures 45A to 45E respectively indicate the number of AV files cut out of the source AV file and the position of one of these files. The following explanation deals with the processing of the AV data editing unit 15 for a real edit operation. First, in step S8, the AV data editing unit 15 determines at least two sections to be cut from the present AV file according to the program chain that was generated during a virtual edit operation.
The "source AV file" in Figure 45A has been given the Marcail, Mark # 2, Mark # 3, ... # 8 points. The cells that have been adjusted for this source AV file are defined by pairs of Marcail brand points, # 2, # 3, ... # 8, so that the AV data editing unit 15 handles the mark points in each pair as an edit start point and an edit completion point, respectively. As a result, the AV data editing unit 15 treats the pair of Marks # 1 and # 2 as the edit start point "Entry (1)" and the editing ending point "exit (1)". The AV data editing unit 15 similarly treats the trademark pairs # 3, and # 4 as the edit start point "Entry (2) "and the completion and edition point "Output (2)", the pair of Marks # 5 and # 6 as the start point of the "Input (3)" edit and the "Exit (3)" edition ending point, and the Pair of Marks # 7 and # 8 as the edit start point "Input (4)" and the edit ending point "Output (4)". The period between Marcail and Mark # 2 corresponds to the first movie sequence V2 between time ti and time t2 shown in Figure 44F. Similarly, the period between Mark i3 and Mark # 4 corresponds to the second movie sequence V4 between time t3 and time t4 shown in Figure 44F. and the period between Mark # 5 and Mark # 6 corresponding to the second movie sequence V5 between time t5 and time t6. Therefore, when indicating this operation of real edition, the user obtains an AV file that only includes the film sequence of V2, V4 and V5. Then, in step S9, the AV data editing unit 15 issues a DIVID command to unit 11 of the AV file system to make the determined division region divided into mx AV files (where mx is an integer not less than 2). The AV data editing unit 15 treats each closed area indicated by a pair of an edit start point and an edit completion pair in Figure 45A as an area to be cut, and thus cuts the four AV files shown in Figure 45B.
The AV data editing unit 15 subsequently specifies one of the mx cut AV files using the variable Af, with the cut files numbered AV file Af1, Af2, Af3, ... Afmx. In step S10, the AV data editing unit 15 sets the variable Af to "1" to initialize it. In step S11, the AV data editing unit 15 issues a READ command to the AV file system unit 11 for the VOBUs (hereinafter referred to as the "last part") located at the end of AV file Af and the VOBUs (hereinafter referred to as the "first part") located at the start of AV file Af+1. After issuing this command, in step S12, the AV data editing unit 15 uses the same procedure as the second embodiment to re-encode the last part of AV file Af and the first part of AV file Af+1. After the re-encoding, the AV data editing unit 15 issues a SHORTEN command to the AV file system unit 11 for the last part of AV file Af (Af1) and the first part of AV file Af+1 (Af2).
In Figure 45C, the last part of AV file Af1 and the first part of AV file Af2 are read as a result of the READ command and are re-encoded. As a result of the re-encoding process, the re-encoded data produced by re-encoding the read data accumulates in the memory of the DVD recorder 70. In step S13, the AV data editing unit 15 issues a SHORTEN command, which results in the deletion of the area previously occupied by the last part and first part that were read. It should be noted that deletion performed in this manner results in one of the following two cases. The first case is where, although one of AV file Af and AV file Af+1, whose sections to be re-encoded have been deleted, has a continuous length that is equal to or greater than the length of an AV block, the continuous length of the other AV file is below the data size of an AV block. Since the length of an AV block is set at the length that prevents underflows from occurring, if AV file Af or Af+1 is played in a state where its continuous length is shorter than the length of an AV block, an underflow can occur in the track buffer. The second case is where the data size of the data that has been re-encoded and stored in memory (the in-memory data) is below the data size (length) of an AV block. When the data size of the in-memory data is larger, and will thus occupy a region on the DVD-RAM that is equal to or greater than an AV block, the data can be stored at a different position on the DVD-RAM away from AV files Af and Af+1. However, when the size of the in-memory data is smaller than an AV block, the data cannot be stored at a different position on the DVD-RAM away from AV files Af and Af+1. This is for the following reason.
During a read performed for in-memory data that is smaller than the size of an AV block but stored at a separate position, a sufficient amount of data cannot accumulate in the track buffer. If the jump from the in-memory data to AV file Af+1 takes a relatively long time, an underflow will occur in the track buffer while the jump is taking place. In Figure 45D, dashed lines show that the last part of AV file Af1 and the first part of AV file Af2 have been erased. This results in the length of AV file Af1 being below the length of an AV block, and in the length of the in-memory data being below the length of an AV block. If this AV file Af1 is left as it is, there is a risk that an underflow will occur when jumping from AV file Af1 to AV file Af2. To prevent such underflows, in step S14, the AV data editing unit 15 issues an APPEND command for AV file Af and AV file Af+1. As shown in Figure 45E and Figure 46A, this processing results in the linking of AV file Af1 and the re-encoded VOBUs, so that the continuous length of the recording region of every extension composing AV file Af1 ends up equal to or longer than the length of an AV block. After issuing the APPEND command, the AV data editing unit 15 judges in step S15 whether the variable Af has reached the number of AV files mx−1. If the numbers do not match, the AV data editing unit 15 increments the variable Af in step S16 and returns to step S11. In this way, the AV data editing unit 15 repeats the processing in steps S11 to S14. After the variable Af has been incremented to "2", the AV data editing unit 15 issues a READ command so that the last part of AV file Af2 (after the previous linking) and the first part of AV file Af3 are read, as shown in Figure 46B. Once the VOBUs in this last part and first part have been re-encoded, the resulting re-encoded data is stored in the memory of the DVD recorder 70. The regions on the DVD-RAM that were originally occupied by the first part and last part are erased as a result of the SHORTEN command that the AV data editing unit 15 issues in step S13. As a result, the remaining AV file Af3 has a continuous length that is below the length of an AV block. The AV data editing unit 15 therefore issues an APPEND command to the AV file system unit 11 for AV files Af2 and Af3, as shown in Figures 46D and 46E. This procedure is repeated until the variable Af equals the value mx−1. As a result of the above processing, the extensions in the storage area contain only the movie sequences V2, V4, and V5. These extensions each have a continuous length that is above the length of an AV block, so it is guaranteed that there will be no interruptions to the image display during playback of these AV files. The period between Mark#1 and Mark#2 corresponds to the first movie sequence V2, the period between Mark#3 and Mark#4 corresponds to the second movie sequence V4, and the period between Mark#5 and Mark#6 corresponds to the third movie sequence V5. As a result, by performing a real edit operation, the user can obtain a sequence composed of AV files containing only the movie sequences V2, V4, and V5.
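The loop of steps S10 to S16 described above can be summarised in the following minimal Python sketch. The command functions are hypothetical stand-ins operating on a list of cut AV files, and the re-encoding is reduced to a placeholder; only the control flow follows the text.

def read_boundary(files, af):
    """Step S11: READ the last part of file Af and first part of file Af+1."""
    return files[af - 1]["last_part"], files[af]["first_part"]

def reencode(last_part, first_part):
    """Step S12: re-encode the boundary VOBUs (placeholder)."""
    return f"reencoded({last_part}+{first_part})"

def shorten(files, af):
    """Step S13: SHORTEN - delete the areas the read parts occupied."""
    files[af - 1]["last_part"] = files[af]["first_part"] = None

def append(files, af, in_memory):
    """Step S14: APPEND file Af, the in-memory data, and file Af+1 so that
    every resulting extension is at least one AV block long."""
    files[af - 1]["appended"] = in_memory

mx = 4
files = [{"last_part": f"end{i}", "first_part": f"start{i}"}
         for i in range(1, mx + 1)]
for af in range(1, mx):          # Af runs from 1 to mx-1 (steps S15, S16)
    last, first = read_boundary(files, af)
    data = reencode(last, first)
    shorten(files, af)
    append(files, af, data)
print(files[0]["appended"])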
(3-2-7-1-2) Processing by the AV File System Unit 11 When a DIVIDE Command Is Issued
The following deals with the details of the processing by the AV file system unit 11 when it provides the extended functions in response to a DIVIDE command. Figure 48A shows the operation of the AV file system unit 11 when the extended functions are provided in response to a DIVIDE command. In this flowchart, one of the mx pairs of an edit start point (In point) and an edit end point (Out point) is indicated using the variable h. In step S22, the value "1" is assigned to the variable h so that the first pair of an In point and an Out point is processed. The AV file system unit 11 generates a file entry (h) in step S31, and adds the file identification descriptor (h) for file entry (h) to a directory file of a temporary directory. In step S33, the AV file system unit 11 calculates the first address s of the sequence of u logical blocks (where u ≥ 1) from the logical block corresponding to In point (h) to the logical block corresponding to Out point (h), and the number r of occupied blocks. In step S34, the AV file system unit 11 generates allocation descriptors within file entry (h). In step S35, the AV file system unit 11 records the first address of the sequence of logical blocks and the number of occupied blocks in each of the allocation descriptors. In step S36, the AV file system unit 11 judges whether the variable h has reached the value mx−1. If the variable h has not reached this value, the AV file system unit 11 increments the variable h and returns to step S31. In doing so, the AV file system unit 11 repeats the processing in steps S31 to S35 until the variable h reaches the value mx−1, and thus cuts out the closed sections within each of the pairs of an In point and an Out point as AV files.
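The following is a minimal Python sketch of the DIVIDE processing just described; the structures are hypothetical simplifications. For each pair of an In point and an Out point, a file entry is generated whose allocation descriptor records the first logical block address and the number of occupied blocks of the cut section.

def divide(pairs):
    """pairs: list of (in_lbn, out_lbn) logical block numbers, one per cut."""
    file_entries = []
    for h, (in_lbn, out_lbn) in enumerate(pairs, start=1):
        s = in_lbn                 # step S33: first address of the block run
        r = out_lbn - in_lbn + 1   # step S33: number of occupied blocks
        # Steps S31, S34, S35: a file entry (h) holding one allocation
        # descriptor; a real implementation would also add a file
        # identification descriptor (h) to a temporary directory.
        file_entries.append({"name": f"cut#{h}", "allocation": [(s, r)]})
    return file_entries

print(divide([(1000, 1999), (3000, 3499)]))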
If the variable h has not reached this value, unit 11 of the AV file system increments the variable h and returns to step S31. In doing so, unit 11 of the AV file system repeats the processing in steps S31 to S35 until the variable h reaches the value mx-1, and thus cuts out the sections enclosed by each of the mx-1 pairs of an entry point and an exit point as AV files.

(3-2-7-1-3) Processing of Unit 11 of the AV File System When a SHORTEN Command is Issued

The following explanation deals with the processing of unit 11 of the AV file system when it provides the extended functions in response to a SHORTEN command. Figure 48B is a flowchart showing the content of this processing. In step S38, unit 11 of the AV file system calculates both the first address of the logical block sequence between the erase start address and the erase completion address that specify the area to be erased, and the number c of occupied blocks. In step S45, unit 11 of the AV file system accesses the allocation descriptors of the AV file whose first or last part is to be erased. In step S46, unit 11 of the AV file system judges whether the area to be erased is the first part of an extent of the AV file. If the area to be erased is the first part of an extent ("Yes" in step S46), unit 11 of the AV file system proceeds to step S47 and updates the first storage address p of the extent given in the allocation descriptor to p + c * d. After this, in step S48, unit 11 of the AV file system updates the data size q of the extent given in the allocation descriptor to q - c * d. On the other hand, if in step S46 unit 11 of the AV file system finds that the area to be erased is the last part of an AV file, unit 11 of the AV file system proceeds directly to step S48 and updates the data size q of the extent given in the allocation descriptor to q - c * d.

(3-2-7-1-4) Processing of Unit 11 of the AV File System When an APPEND Command is Issued

The following discussion deals with the processing content of unit 11 of the AV file system when it provides extended functions in response to an APPEND command. The following explanation is intended to clarify the procedure used to process the areas enclosed by the dot-dash lines y3, y4 in Figure 45E and Figure 46D. In response to an APPEND command, unit 11 of the AV file system links the AV files Af and Af+1, which were partially erased as a result of the DIVIDE and SHORTEN commands, and the re-encoded data (the in-memory data), which is present in the memory of the DVD recorder 70 as a result of the re-encoding, on the DVD-RAM in a manner that allows seamless reproduction of the AV file Af, the in-memory data, and the AV file Af+1 in that order. Figure 47A shows an example of AV data processed by unit 11 of the AV file system when it provides extended functions in response to an APPEND command. In Figure 47A, the AV files x and y have been processed according to a DIVIDE command. A virtual edit has defined a playback path by which the AV data is played in the order AV file x → in-memory data → AV file y. Figure 47A shows an example playback path for the AV data in the AV files x and y. In Figure 47A, the horizontal axis represents time, so that the playback path can be seen to set the display order as AV file x → in-memory data → AV file y.
Of the AV data in the AV file x, the data part m located at the end of the AV file x is stored in a consecutive area of the DVD-RAM; this is called the "former extent". Of the AV data in the AV file y, the data part n located at the beginning of the AV file y is also stored in a consecutive area of the DVD-RAM; this is called the "latter extent". As a result of the DIVIDE command, the AV files x and y are obtained with certain sections of the AV data having been cut. However, while the file system manages the areas on the disc that correspond to the cut data as if they were empty, the data of the original AV file is actually left as it was in the logical blocks of the DVD-RAM. It is assumed that when the playback path is set by the user, the user need not consider the manner in which the AV blocks on the DVD-RAM store the cut AV files. As a result, there is no way in which the positions on the DVD-RAM that store the former and latter extents can be identified with certainty. Even if the playback path specifies the order as AV file x → AV file y, there is a possibility that AV data unrelated to the playback path is present on the disc between the former extent and the latter extent. In view of the above consideration, the linking of the AV files cut by the DIVIDE command must not assume that the former extent and the latter extent are recorded at consecutive positions on the DVD-RAM, and must instead assume that the former extent and the latter extent are recorded at completely unrelated positions on the DVD-RAM. Here, it must be assumed that at least one "extent of a different file", which is unrelated to the playback path indicated by the AV files x and y, is present between the storage regions of the former extent and the latter extent. Figure 47B shows a representation of the positional relationship of the storage areas on the DVD-RAM of the former extent and the latter extent, in view of the above consideration. The AV file x that includes the former extent has been partially cut as a result of the DIVIDE command, and thus includes an empty area where the cut data following the former extent was formerly present. This area is called the Output area. As described above, this Output area actually still includes the data cut from the AV file x, although unit 11 of the AV file system treats the area as an empty area since the DIVIDE command has already been issued. The AV file y that includes the latter extent has likewise been partially cut as a result of the DIVIDE command, and thus includes an empty area where the cut data preceding the latter extent was formerly present. This area is called the Input area. As described above, this Input area actually still includes the data cut from the AV file y, even though unit 11 of the AV file system treats the area as an empty area since the DIVIDE command has already been issued. In Figure 47B, the former extent is stored at a position preceding the latter extent, although this illustrates only one example, so that it is perfectly possible for the latter extent to be stored at a position preceding the former extent. In the present example, the extent of the different file is present between the former extent and the latter extent.
While the Input area and the Output area are ideal for recording the in-memory data, the continuous lengths of the Input area and the Output area are restricted due to the presence of the other file's extent between the former extent and the latter extent. In step S62 in the flow diagram of Figure 49, unit 11 of the AV file system calculates the data size of the Output area and the data size of the Input area.
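As a sketch of the geometry that step S62 works with, the following illustrative Python uses hypothetical names; i and j denote the Output area and Input area sizes, as in the figures that follow.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        start: int  # first logical block
        size: int   # occupied logical blocks

    @dataclass
    class Layout:
        former: Extent    # data part m at the end of AV file x
        out_area: Extent  # Output area freed by DIVIDE, following the former extent
        other: Extent     # extent of an unrelated file lying between the two
        in_area: Extent   # Input area freed by DIVIDE, preceding the latter extent
        latter: Extent    # data part n at the start of AV file y

    def usable_areas(layout):
        # Step S62: only these two areas can receive the in-memory data
        # without relocating anything, because the unrelated extent may
        # sit between them on the disc.
        return layout.out_area.size, layout.in_area.size  # (i, j)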
On finding the data sizes of the Input area and the Output area, unit 11 of the AV file system refers to the data size m of the former extent and the data size n of the latter extent and judges whether the former extent may cause an underflow in the track buffer during playback.

(3-2-7-1-4-1) Processing When the Former Extent m is Shorter than the AV Block Length

When the former extent m is shorter than the length of an AV block and the latter extent n is at least equal to the length of an AV block, an underflow may occur for the former extent m. Processing proceeds to step S70 in Figure 50. Figure 50 is a flowchart for when the former extent m is shorter than the AV block length and the latter extent n is at least equal to the AV block length. The processing of unit 11 of the AV file system in Figure 50 is explained with reference to Figures 51, 52 and 53. Figures 51, 52 and 53 show the relationships between the data sizes of the extents m and n, the sizes i and j of the Output area and the Input area, the in-memory data k, and the AV block B, as well as the areas in which each piece of data is recorded and the areas to which data is moved. The former extent is shorter than the AV block length, so an underflow will occur if no corrective action is taken. Accordingly, the flowchart in Figure 50 shows the processing for determining the appropriate storage location for the former extent and the in-memory data. In step S70, it is judged whether the sum of the sizes of the former extent and the in-memory data is equal to or greater than the AV block length. If so, processing proceeds to step S71, where it is judged whether the Output area is at least as large as the in-memory data. When the Output area is at least as large as the in-memory data, the in-memory data is written into the Output area so that the consecutive length of the former extent becomes at least equal to the AV block length. Figure 51A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation i ≥ k, m + k ≥ B. In Figure 51B, when the in-memory data is recorded in the Output area, the consecutive length of the former extent becomes at least equal to the AV block length. On the other hand, when the Output area is smaller than the in-memory data, the data is moved. Figure 52A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation i < k, m + k ≥ B. In Figure 52A, the former extent is first read into memory, and in Figure 52B the former extent is written into an empty area in the same zone as the former extent. After the former extent has been moved, the in-memory data is written immediately after the moved former extent, as shown in Figure 52C. When the sum of the sizes of the former extent and the in-memory data is smaller than the AV block length, processing proceeds to step S72. In step S72, it is judged whether the sum of the sizes of the former extent, the latter extent, and the in-memory data is at least equal to two AV block lengths. If the sum of these sizes were less than one AV block length, the size would remain below the AV block length even if the data were moved. As a result, an underflow would occur.
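The placement decision of steps S70 and S71 just described can be summarized in a short sketch. AV_BLOCK is an assumed constant and all sizes are in logical blocks; this illustrates the branching only, not the actual implementation.

    AV_BLOCK = 1024  # AV block length in logical blocks -- illustrative value only

    def place_when_former_short(m, k, i):
        # Former extent m < AV_BLOCK, latter extent n >= AV_BLOCK;
        # i is the Output area size, k the size of the in-memory data.
        if m + k >= AV_BLOCK:                        # step S70
            if i >= k:                               # step S71, Figure 51
                return "write in-memory data into the Output area"
            # Figure 52: relocate the former extent to an empty area,
            # then write the in-memory data immediately after it
            return "move former extent, then write in-memory data after it"
        return "fall through to step S72 (latter extent must be considered too)"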
When the sum of the sizes is less than two AV block lengths, even if the former extent, the in-memory data, and the latter extent are all written as one logical block sequence, the writing time will not be too long. In the flowchart in Figure 50, when the sum of the sizes of the in-memory data, the former extent, and the latter extent is smaller than two AV block lengths, processing proceeds from step S72 to step S73, and the former extent and the latter extent are moved. Figure 53A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation i < k, m + k < B, B ≤ m + n + k < 2B. In this case, a search is performed for an empty area in the same zone as the former extent and the latter extent. When an empty area is found, the former extent is read into memory and written into the empty area to move the former extent to the empty area, as shown in Figure 53B. After the move, the in-memory data is written immediately after the moved former extent, as shown in Figure 53C. After the in-memory data has been written, the latter extent is read into memory and written immediately after the area occupied by the in-memory data to move the latter extent to the empty area, as shown in Figure 53D. When the sum of the sizes of the in-memory data, the former extent, and the latter extent is at least equal to two AV block lengths, processing proceeds from step S72 to step S74. When the sum of the sizes is equal to or greater than two AV block lengths, it would take a long time to write the data as one logical block sequence. Meanwhile, a simple method whereby the former extent is moved and the in-memory data is written immediately after the moved former extent should not be adopted, in view of the access speed. Here, it should especially be noted that processing proceeds from step S72 to step S74 only when the sum of the sizes of the in-memory data and the former extent is less than the AV block length. The reason the sum of the sizes of the in-memory data and the former extent can be less than the AV block length even though the sum of the sizes of the in-memory data, the former extent, and the latter extent is at least equal to two AV block lengths is that the size of the latter extent is relatively large, with the difference between the size of the latter extent and the AV block length being large. As a result, when the sum of the sizes of the former extent and the in-memory data is less than the AV block length, part of the data of the latter extent can be added to this sum, with no risk that the size of the data remaining in the latter extent becomes insufficient. When the sum of the sizes of the in-memory data, the former extent, and the latter extent is at least equal to two AV block lengths, processing proceeds from step S72 to step S74, and the data is linked in the manner shown in Figures 54A to 54D. Figure 54A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation m + k < B, m + n + k ≥ 2B. In this case, a search is performed for an empty area in the same zone as the former extent and the latter extent. When this empty area is found, the former extent is read into memory and then written into the empty area to move the former extent, as shown in Figure 54B.
Then, the in-memory data is written immediately after the moved former extent, as shown in Figure 54C. When the in-memory data has been written, a data set that is just large enough to make the size of the data in this empty area equal to the AV block size is moved from the beginning of the latter extent to immediately after the in-memory data, as shown in Figure 54D. After the former extent, the in-memory data, and the front of the latter extent have been linked by the procedure described above, the file entries of the AV file Af that includes the former extent and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends.

(3-2-7-1-4-2) Processing When the Latter Extent n is Shorter than the AV Block Length

When the "No" judgment is given in step S63 in the flow diagram of Figure 49, processing proceeds to step S64, where it is judged whether the former extent m is at least equal to the AV block length but the latter extent n is shorter than the AV block length. In other words, it is judged whether an underflow can occur for the latter extent. Figure 55 is a flowchart for when the latter extent is shorter than the AV block length and the former extent is at least equal to the AV block length. The processing of unit 11 of the AV file system in the flowchart in Figure 55 is explained with reference to Figures 56, 57, 58 and 59. Figures 56, 57, 58 and 59 show the relationships between the data sizes of the extents m and n, the sizes i and j of the Output area and the Input area, the in-memory data k, and the AV block B, as well as the areas in which each piece of data is recorded and the areas to which data is moved. In step S75, it is judged whether the sum of the sizes of the latter extent and the in-memory data is at least equal to the AV block length. If so, processing proceeds to step S76, where it is judged whether the Input area is at least as large as the in-memory data. Figure 56A shows an arrangement of the former extent, the latter extent, the Input area, and the Output area on the DVD-RAM under the relation j ≥ k, n + k ≥ B. In Figure 56B, recording the in-memory data in the Input area results in the consecutive length of the latter extent becoming at least equal to the AV block length. On the other hand, when the Input area is smaller than the in-memory data, the data is moved. Figure 57A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation j < k, n + k ≥ B. In this case, a search is performed for an empty area in the same zone as the former extent and the latter extent. When this empty area is found, the in-memory data is written into the empty area, as shown in Figure 57B. The latter extent is then read into memory and written immediately after the area occupied by the in-memory data, as shown in Figure 57C. When the sum of the sizes of the latter extent and the in-memory data is less than the AV block length, processing proceeds from step S75 to step S77. In step S77, it is judged whether the sum of the sizes of the former extent, the latter extent, and the in-memory data is at least equal to two AV block lengths. When the sum of the sizes is less than two AV block lengths, processing proceeds to step S78. Figure 58A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation j < k, n + k < B, m + n + k < 2B.
In step S78, unit 11 of the AV file system searches for an empty area in the same zone as the former extent and the latter extent. When this empty area is found, the former extent is read into memory and written into the empty area to move the former extent to the empty area, as shown in Figure 58B. Then, the in-memory data is written immediately after the moved former extent, as shown in Figure 58C. When the in-memory data has been written, the latter extent is read into memory and written immediately after the area occupied by the in-memory data to move the latter extent to the empty area, as shown in Figure 58D. When the sum of the sizes of the in-memory data, the former extent, and the latter extent is at least equal to two AV block lengths, processing proceeds from step S77 to step S79, and the data is linked in the manner shown in Figures 59A to 59D. Figure 59A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation n + k < B, m + n + k ≥ 2B. In this case, a search is performed for an empty area in the same zone as the former extent and the latter extent. When this empty area is found, data with a size of (AV block length - (n + k)) is moved from the end of the former extent to the empty area, as shown in Figure 59B. As shown in Figure 59C, the in-memory data is written immediately after this data moved from the former extent. When the in-memory data has been written, the latter extent is moved to immediately after the area occupied by the in-memory data, as shown in Figure 59D. When the "No" judgment is given in step S64 in the flowchart in Figure 49, processing proceeds to step S65, where it is judged whether both the former extent m and the latter extent n are shorter than the AV block length. In other words, it is judged whether an underflow can occur for both the former extent m and the latter extent n. Figure 60 is a flowchart for when both the former extent and the latter extent are shorter than the AV block length. The processing by unit 11 of the AV file system in the flowchart in Figure 60 is explained with reference to Figures 61, 62, 63, and 64. Figures 61, 62, 63 and 64 show the relationships between the data sizes of the extents m and n, the sizes i and j of the Output area and the Input area, the in-memory data k, and the AV block B, as well as the areas in which each piece of data is recorded and the areas to which data is moved. In step S80 in this flowchart, it is judged whether the sum of the sizes of the in-memory data, the former extent, and the latter extent is at least equal to the AV block length. If not, processing proceeds to step S81. In this case, the sum of the sizes of the former extent, the in-memory data, and the latter extent is less than the AV block length. As a result, it is judged whether there is an extent that follows the latter extent. When no extent follows the latter extent, the latter extent is the end of the AV file that is created by the linking of the data, so no additional processing is needed. When an extent does follow the latter extent, an underflow may occur, since the sum of the sizes of the former extent, the in-memory data, and the latter extent is less than the AV block length.
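A sketch of this entry test (steps S80 and S81) follows, under the same assumptions as the earlier sketches (AV_BLOCK as before, all sizes in logical blocks).

    AV_BLOCK = 1024  # as in the earlier sketch

    def both_short_entry(m, n, k, extent_follows_latter):
        # Both extents are shorter than one AV block here.
        if m + n + k >= AV_BLOCK:
            return "proceed to step S82"
        if not extent_follows_latter:
            return "latter extent ends the AV file; no further processing"
        shortfall = AV_BLOCK - (m + n + k)  # borrowed from the following extent
        return f"link {shortfall} blocks taken from the following extent (step S81)"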
To avoid this underflow, the extent following the latter extent is linked to the latter extent by the link processing shown in Figures 61A-61D. Figure 61A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation m + n + k < B. In step S81, unit 11 of the AV file system writes the in-memory data into the Input area, as shown in Figure 61B. When the in-memory data has been written into the Input area, unit 11 of the AV file system reads the latter extent into memory and writes the read latter extent immediately after the area occupied by the in-memory data to move the latter extent, as shown in Figure 61C. Then, as shown in Figure 61D, unit 11 of the AV file system takes data whose size is (AV block length - (former extent + in-memory data + latter extent)) from the extent that follows the latter extent. Unit 11 of the AV file system links this data with the former extent, the in-memory data, and the latter extent. When the sum of the sizes of the former extent, the latter extent, and the in-memory data is at least equal to the AV block length, processing proceeds to step S82. In step S82, unit 11 of the AV file system judges whether the data size of the Output area following the former extent is smaller than the sum of the sizes of the latter extent and the in-memory data. If not, processing proceeds to step S83. Figure 62A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation i ≥ n + k, m + n + k ≥ B. In step S83, unit 11 of the AV file system writes the in-memory data into the Output area, as shown in Figure 62B. After writing the in-memory data, unit 11 of the AV file system reads the latter extent into memory and writes the latter extent immediately after the area occupied by the in-memory data to move the latter extent. When the data size of the Output area following the former extent is smaller than the sum of the sizes of the latter extent and the in-memory data, processing proceeds from step S82 to step S84. In step S84, it is judged whether the data size of the Input area preceding the latter extent is smaller than the sum of the sizes of the former extent and the in-memory data. If not, processing proceeds to step S85. Figure 63A shows an arrangement of the former extent, the latter extent, the Input area, and the Output area on the DVD-RAM under the relation i < n + k, m + n + k ≥ B. In step S85, unit 11 of the AV file system writes the in-memory data into the Input area, as shown in Figure 63B. After writing the in-memory data, unit 11 of the AV file system reads the former extent into memory and writes the former extent into the storage area immediately before the area occupied by the in-memory data to move the former extent to the Input area, as shown in Figure 63C. When the "No" judgment is given in step S84, processing proceeds to step S86. Figure 64A shows an arrangement of the former extent, the latter extent, the Input area and the Output area on the DVD-RAM under the relation i < n + k, j < m + k, m + n + k ≥ B. In step S86, it is judged whether the sum of the sizes of the former extent, the latter extent, and the in-memory data is more than two AV block lengths. If not, unit 11 of the AV file system searches for an empty area in the same zone as the former extent.
When an empty area is found, unit 11 of the AV file system reads the former extent into memory and writes the read former extent into the empty area to move the former extent to the empty area, as shown in Figure 64B. After the move, unit 11 of the AV file system writes the in-memory data into the storage area immediately after the moved former extent, as shown in Figure 64C. After writing the in-memory data, unit 11 of the AV file system reads the latter extent into memory and writes the latter extent into the storage area immediately after the area occupied by the in-memory data to move the latter extent to the empty area, as shown in Figure 64D. When the combined size of the former extent, the latter extent, and the in-memory data exceeds two AV block lengths, it is judged which of the Input area and the Output area is the larger. When the Output area is the larger, a part of the in-memory data is recorded in the Output area to make the continuous length equal to the AV block length. The remaining part of the in-memory data is recorded in a different empty area, and the latter extent is moved to a position directly after this remaining part of the in-memory data.
When the Input area is the larger, unit 11 of the AV file system moves the former extent to an empty area and records a first part of the in-memory data after it to make the continuous length equal to the AV block length. After this, the remaining part of the in-memory data is recorded in the Input area. As a result of the above processing to move the extents, the total consecutive length can be kept equal to or below two AV block lengths. After the former extent, the in-memory data, and the front of the latter extent have been linked by the processing described above, the file entries of the AV file Af including the former extent and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends.

(3-2-7-1-4-3) Processing When Both the Former Extent and the Latter Extent are at least Equal to the AV Block Length

When the "No" judgment is given in step S65 in the flow diagram of Figure 49, processing proceeds to step S66, where it is judged whether the in-memory data is at least equal to the AV block length. If so, the in-memory data is recorded in an empty area and the processing ends. When the "No" judgment is given in step S66 in the flowchart of Figure 49, unit 11 of the AV file system judges whether the former extent m is at least equal to the AV block length and the latter extent n is at least equal to the AV block length, but the in-memory data is less than the combined size of the Output area i and the Input area j. Figure 65 is a flowchart for when both extents are at least equal to the AV block length. Figures 66A-66D show a supplementary example showing the processing of unit 11 of the AV file system in Figure 65. In Figure 66A, the former extent and the latter extent are both at least equal to the AV block length. Figures 66B-66D show how the in-memory data and the extents are recorded in the Input area, the Output area, and other empty areas as a result of the steps in Figure 65. In this case, there is no risk of an underflow occurring for either the former or the latter extent. However, it would be ideal if the in-memory data could be recorded in at least one of the Output area that follows the AV file Af and the Input area that precedes the AV file Af+1, without the former or latter extent having to be moved. In step S87 of the flowchart of Figure 65, it is judged whether the size of the Output area exceeds the data size of the in-memory data. If so, the in-memory data is simply recorded in the Output area in step S88, as shown in Figure 66B.
If the size of the Output area is below the size of the in-memory data, processing proceeds to step S89, where it is judged whether the size of the Input area exceeds the data size of the in-memory data. If so, the in-memory data is simply recorded in the Input area in step S90, as shown in Figure 66C. If the in-memory data cannot be recorded in either the Input area or the Output area, processing proceeds to step S91, where the in-memory data is divided into two parts that are respectively recorded in the Input area and the Output area, as shown in Figure 66D. After the former extent, the in-memory data, and the front of the latter extent have been linked by the procedure described above, the file entries of the AV file Af including the former extent and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends.

(3-2-7-1-4-4) Processing When Both the Former Extent and the Latter Extent are at least Equal to the AV Block Length but the In-Memory Data Exceeds the Combined Size of the Input Area and the Output Area

In step S69 in the flow diagram of Figure 49, it is judged whether the former extent m is at least equal to the AV block length and the latter extent n is at least equal to the AV block length, but the size k of the in-memory data exceeds the combined size of the Output area i and the Input area j. Figure 67 is a flowchart showing the processing for when both extents are at least equal to the AV block length but the combined size of the Input area and the Output area is below the data size of the in-memory data. Figures 68A-68E show supplementary examples for the processing of unit 11 of the AV file system in the flowchart of Figure 67. In Figure 68A, both the former extent and the latter extent are at least equal to the AV block length. Figures 68B-68E show how the extents and the in-memory data are recorded in the Input area, the Output area, and other empty areas as a result of the steps in Figure 67. In this case, both the former extent and the latter extent are at least equal to the AV block length, so there is no risk of an underflow occurring, although the recording area of the in-memory data must have a continuous length that is at least equal to the AV block length. In step S92, it is judged whether the total size of the former extent and the in-memory data is at least equal to two AV block lengths. If the total size is at least two AV block lengths, processing proceeds to step S93, where data whose size is (AV block length - size k of the in-memory data) is read from the end of the former extent and moved to an empty area, where the in-memory data is also recorded. This results in the recorded content of this empty area, and of both extents, being at least equal to the AV block length, as shown in Figure 68B.
If the "No" judgment is given in step S92, the processing proceeds to step S94, where it is judged whether the total size of the last extension and the data in memory is at least equal to the two lengths of the AV block. If so, the processing follows the pattern, in step S92, since an excessively long logical block write operation is to be edited and since a relatively large amount of data can be moved from the last extension without any risk of that the last extension ends shorter than the length of the AV block. If the total size of the last extension and the data in memory is at least equal to two AV block lengths, processing proceeds to step S95, where the data whose size is (AV block length - size of data in memory data) k) they are read from the beginning of the last extension and move to an empty area in the same zone as the previous and last extensions, where the data is also recorded in memory. This results in the recording status of this empty area and both extensions are equal to the length of the AV block, as shown in Figure 68C. If the total size of the previous extension in the data in memory is below two AV block lengths, the total size of the last extension and the data in memory is below two AV block lengths, the total data amount which is written in the logical batches will be less than two AV block lengths, so that the movement processing can be performed without interest of the time taken by the included writing processing. Accordingly, when the total size of the previous extension and the data in memory is below two AV block lengths, and the total size of the last extension and the data in memory is below two AV block lengths, the processing proceeds to step S96, where the largest of the previous extension and the last extension is located. In this situation, any of the previous or last extension can be moved, although in the present modality, it is ideal for the smallest of the two that moves; therefore this judgment in step S96. When the former extension is the smaller of the two, in step S97 the previous extension is moved, with the data in memory which is then recorded in a position immediately after the data in memory. When this is done, the continuous length of the data recorded in this empty area will be below two AV block lengths, as shown in Figure 68D. When the last extension is the smaller of the two, in step S98 the last extension is moved, with the data in memory which is then recorded in a position immediately before the data in memory. When this is done, the continuous length of the data recorded in this empty area will be below two AV block lengths, as shown in Figure 68E. After the previous extension, the data in memory and the front of the last extension are linked in the previous procedure, the file entries in the AV Af file that includes the previous extension and the AV Af + 1 file are integrated. An integrated file entry is obtained, and processing ends. The flow diagrams for the processing of "ANNEX" in a - circumstantial variety have been explained, whereby it is possible to limit the data size of the data moved and recorded to two AV block lengths in the worst case scenario. However, this does not mean that there are no cases where the data- that have two AV block lengths need to be written, with the following two cases describing these exceptions where the data exhibiting two AV block lengths need to be written. 
In the first exception, an empty area with a continuous length of two AV block lengths is required, although only separate empty areas of one AV block length are available. In this case, to create an empty area with a continuous length of two AV block lengths, one AV block length of AV data must be moved. In the second exception, in step S81 of Figure 60, the movement of data from the extent following the latter extent results in the remaining part of that extent falling below the AV block length. In this case, an additional move operation becomes necessary, with the total amount of data moved in the complete processing exceeding two AV block lengths. While the above explanation deals only with the linking of two AV files and the in-memory data, an APPEND command can also be executed to link one AV file and the in-memory data. This case is the same as when data is added to the final extent in an AV file, so the total size after this edit needs to be at least equal to the AV block size. As a result, the in-memory data is recorded in the Output area that follows this final extent. When the Output area is too small to record all of the in-memory data, the remaining part of the in-memory data can be recorded in a separate, empty AV block. The above link processing has been explained on the premise of seamless playback within a file, although it can also be used for seamless playback across files. Seamless playback across files refers to a branch in the playback from one AV file to another AV file. In the same way as described above, when two AV files and the in-memory data are linked, the continuous length of each extent must be at least equal to the AV block length, so the same thorough link procedure must be used. This ends the explanation of the linking procedure used by unit 11 of the AV file system.

(3-2-7-1-5) Updating the VOB Information and the PGC Information

The following is an explanation of the updating of the VOB information (the time map table and the seamless link information) and the PGC information (the cell information) when a DIVIDE command or an APPEND command is executed.
First, the procedure for when a DIVIDE command has been executed will be explained. Of the plurality of AV files obtained by the execution of the DIVIDE command, one AV file is assigned the same AV_File_ID as the AV file in which the original VOB was recorded. The AV_File_IDs of the other AV files produced by the division, however, need to be assigned new values. The VOBs that were originally recorded as one AV file will lose several sections due to the execution of a DIVIDE command, so the marks that indicated the lost sections need to be deleted. In the same way, the cell information that gives these marks as start points and end points needs to be deleted from the RTRW administration file. In addition to deleting the mark points, it is necessary to generate new cell information indicating the video presentation start time of the AV file as C_V_S_PTM and the video presentation end time of the AV file as C_V_E_PTM, and to add this new cell information to the RTRW administration file. The VOB information, which includes the seamless link information and the time map table, is divided into a plurality of parts when the corresponding VOB is divided. In more detail, when mx VOBs are produced by the division, the VOB information is divided to give mx time map tables and mx sets of seamless link information. The video presentation start time VOB_V_S_PTM and the video presentation end time VOB_V_E_PTM of a VOB generated by the processing accompanying the execution of the DIVIDE command are respectively set based on the C_V_S_PTM and C_V_E_PTM indicated by the start point and the end point in the cell information used by the DIVIDE command. The Last_SCR and the First_SCR in the seamless link information are also updated. The following is a description of how the information is updated when an APPEND command has been executed. The execution of an APPEND command results in an AV file that has been produced from a plurality of AV files, so the VOBs included in this plurality of AV files will be composed of frame data sets that are not interrelated, meaning that the time stamps across these AV files will not be continuous. Since these are managed as VOBs that differ from the plurality of VOBs that were originally included in the different AV files, separate VOB_IDs are assigned to these VOBs. The other necessary processing is as described in the second embodiment. However, the C_V_E_PTM in the cell information specifying a division area needs to be increased by the number of frames included in the part of the former VOBU that has been re-encoded. Similarly, the C_V_S_PTM in the cell information specifying a division area in a latter AV file needs to be decreased by the number of frames included in the part of the latter VOBU that has been re-encoded.

(3-2-3) The Defragmentation Unit 16

The defragmentation unit 16 is connected to a fixed magnetic disk apparatus. This defragmentation unit 16 reads, from among the extents recorded on the DVD-RAM that have been subjected to link processing or other processing, an extent that has an empty area on either side of its recording area, and writes this extent to the fixed magnetic disk apparatus to generate backup data on the fixed magnetic disk apparatus. After writing all of these extents to the fixed magnetic disk apparatus, the defragmentation unit 16 reads the generated backup data and writes the backup data for each backed-up extent into the empty area adjacent to that extent.
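A sketch of this back-up-then-rewrite sequence follows; dvd and hdd are hypothetical block-device wrappers, and the concrete walk-through is given with Figures 69A-69D below.

    def defragment_extent(dvd, hdd, extent, j):
        # Back the extent up on the fixed magnetic disk first, so that a
        # power failure during the rewrite cannot lose its data.
        data = dvd.read(extent.start, extent.size)
        hdd.write("backup", data)
        # Re-read the backup and rewrite the extent so that it also
        # occupies the empty area j that follows it, leaving a single
        # continuous empty area of i + j in front of it.
        restored = hdd.read("backup")
        dvd.write(extent.start + j, restored)
        return extent.start + j  # new first block of the moved extent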
Here, the extents that have an empty area adjacent to their recording area are extents that have been generated by unit 11 of the AV file system executing a DIVIDE command or a SHORTEN command. These empty areas correspond to the areas that have been cleared and were not used as the recording area for the in-memory data or as the destination area for a moved extent when an APPEND command was executed. Figures 69A-69D show an example illustrating the operation of the defragmentation unit 16. In Figure 69A, extent #x is shown as an extent with empty areas i, j on both sides of its recording area. As shown in Figure 69A, the defragmentation unit 16 detects this extent, reads it via the DVD recorder apparatus 70, and writes it to the fixed magnetic disk apparatus. As a result of this write operation, backup data is generated on the fixed magnetic disk apparatus, as shown in Figure 69B. After this, the defragmentation unit 16 reads the backup data from the fixed magnetic disk apparatus, as shown in Figure 69C, and writes the extent to the DVD-RAM so as to use both the current recording area of extent #x and the empty area j that follows this recording area. This creates a continuous empty area of length i + j before extent #x, as shown in Figure 69D. By performing this procedure for the following extents in turn, the continuous length of the empty area can be increased further. The recording performed by the defragmentation unit 16 is achieved by first storing an extent on the fixed magnetic disk apparatus, so that even if a power failure occurs in the DVD recorder apparatus 70 while the backed-up extent is being written to the DVD-RAM, this write processing can still be re-executed. By generating the backup data before moving the extents to the large free empty areas on the DVD-RAM, there is no risk of data loss in an extent even when there is a power failure in the DVD recorder 70. With the present embodiment described above, the editing of a plurality of AV files can be performed freely by the user. Even if a plurality of fragmentary AV files with short continuous lengths is generated, the DVD recorder 70 will be able to link these short AV files into AV files with continuous lengths that are at least equal to the AV block length. As a result, the problems caused by fragmentation of AV files can be handled, and uninterrupted playback can be performed for the AV data recorded in these AV files. During the link processing, it is judged whether the total size of the data to be written is at least equal to two AV block lengths, and if so, the amount of pre-recorded AV data that is moved is restricted. As a result, it can be guaranteed that the total size of the data to be written is below two AV block lengths, so that the linking can be completed in a short amount of time. Even when it is necessary, as a result of the user's editing operation on a plurality of files, to record re-encoded data with a short continuous length, the DVD recorder 70 will record this re-encoded data at a recording position that allows the re-encoded data to be linked to the AV data that precedes or follows it during playback. This means that fragmented recording of the re-encoded data is prevented from the outset, so that uninterrupted playback will be possible for the AV data recorded in the AV file. It should be noted here that the movement of data can also be performed to deal with excessive separation on the disc of two AV data sets that have been linked together.
In this case, the data produced by linking data sets that are physically separated on the disc is arranged in a way that makes it possible to ensure uninterrupted playback of the two AV data sets. Nevertheless, when performing special playback such as fast forward, excessive separation of the data on the disc will result in jerky reproduction of the data. To ensure smooth reproduction in this case, when two sets of AV data are linked, if one of the data sets has a consecutive length that is several times a predetermined amount and an empty block of the appropriate size is located between the two data sets, the data can be moved to this empty block. By doing so, smooth reproduction can be ensured for both normal playback and special playback. It should be noted here that the time information for the mark points in the cell information can be taken and managed, together with information such as the addresses taken from the time map table, in the form of a table. By doing so, this information can be presented to the user as potential selections on a screen that shows the initial pre-edit state. It is also possible to generate small images (known as "thumbnails") for each mark point and store them as separate files, with pointer information also being produced for each mark point. When the cell information is displayed in the pre-editing stage, these thumbnails can be displayed to show the potential selections that can be made by the user.
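A sketch of such a mark table follows, with one row per mark point, the address resolved through the time map table, and a file name for the thumbnail; all names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class MarkEntry:
        mark_ptm: int        # time information taken at the mark point
        vobu_address: int    # address taken from the time map table
        thumbnail_file: str  # separate file storing the thumbnail image

    def build_mark_table(mark_ptms, address_for_ptm):
        # address_for_ptm is a hypothetical lookup into the time map table
        return [MarkEntry(p, address_for_ptm(p), f"THUMB_{p}.IMG")
                for p in mark_ptms]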
Also, while the present embodiment describes the case of handling video and audio data, this is not a limitation on the techniques of the present invention. For a DVD-ROM, sub-picture data for subtitles that has been encoded using run-length encoding, as well as still images, can be handled in the same way. The processing of unit 11 of the AV file system (Figures 48A, 48B, 49, 50, 55, 60, 65, 67) that was described in this third embodiment using flowcharts can be achieved by a machine-language program. This machine-language program can be distributed and sold recorded on a recording medium. Examples of this recording medium are an IC card, an optical disc, or a flexible disk. The machine-language program recorded on the recording medium can be installed on an ordinary personal computer. By executing the installed machine-language programs, the ordinary personal computer can achieve the functions of the video data editing apparatus of this third embodiment.
Fourth Embodiment

The fourth embodiment of the present invention performs a two-stage editing process composed of virtual edits and real edits using two types of program chain, specifically the user-defined PGCs and the original PGCs. To define the user-defined PGCs and the original PGCs, a new table is added to the RTRW administration file in this fourth embodiment.

(4-1) RTRW Administration File

The following is a description of the construction of the RTRW administration file in this fourth embodiment. In the fourth embodiment, the RTRW administration file is written in the same directory as the AV files (the RTRW directory), and has the content shown in Figure 70A.
Figure 70A shows a detailed expansion of the stored content of the RTRW administration file in the fourth embodiment. That is, the logical format shown on the right side of Figure 70A shows the logical format on the left side in more detail, with the broken lines in Figure 70A showing the correspondence between the left and right sides. In the logical format shown in Figure 70A, the RTRW administration file can be seen to include an original PGC information table, a user-defined PGC information table, and a title search pointer, in addition to the VOB information of the first embodiment.

(4-1-2) Contents of the Original PGC Information

The original PGC information table is composed of a plurality of original PGC information sets. Each original PGC information set is information that indicates VOBs stored in an AV file present in the RTRW directory, or sections within these VOBs, according to the order in which they are arranged in the AV file. Each original PGC information set corresponds to one of the VOBs recorded in an AV file present in the RTRW directory, so that when an AV file is recorded in the RTRW directory, the original PGC information sets are generated by the video data editing apparatus and recorded in the RTRW administration file. Figure 70B shows the data format of an original PGC information set. Each original PGC information set is composed of a plurality of cell information sets, with each cell information set being composed of a cell ID (CELL#1, #2, #3, #4, ... in Figure 70B), which is a unique identifier assigned to the cell information set, an AV file ID (AVF_ID in Figure 70B), a VOB_ID, a C_V_S_PTM, and a C_V_E_PTM. The AV file ID is a column for writing the identifier of the AV file that corresponds to the cell information set. The VOB_ID is a column for writing the identifier of a VOB that is included in the AV file. When a plurality of VOBs is included in the AV file corresponding to the cell information set, this VOB_ID indicates which of the plurality of VOBs corresponds to the present cell information set. The cell start time (abbreviated to C_V_S_PTM in the drawings) shows the start time of the cell indicated by the present cell information, and so has a column for writing the PTS that is assigned to the start time of the first video field of the section, using the format of the PTM descriptor. The cell end time (abbreviated to C_V_E_PTM in the drawings) shows the end time of the cell indicated by the present cell information, and so has a column for writing the end time of the final video field of the section, using the PTM descriptor format.
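A sketch of one cell information set with the fields just listed is given below (illustrative Python only; the field names simply echo the abbreviations used in the text).

    from dataclasses import dataclass

    @dataclass
    class CellInformation:
        cell_id: int     # CELL#1, CELL#2, ... unique within the PGC information set
        avf_id: str      # identifier of the AV file the cell refers to
        vob_id: int      # which VOB inside that AV file
        c_v_s_ptm: int   # PTS of the first video field of the section
        c_v_e_ptm: int   # presentation end time of the final video field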
The time information given as the cell start time C_V_S_PTM and the cell end time C_V_E_PTM shows the start time and the end time of a coding operation by the video encoder, and these correspond to the mark points inserted by the user. The cell end time C_V_E_PTM in each cell information set in an original PGC information set corresponds to the cell start time C_V_S_PTM of the next cell information set in the given order. Since this relationship is established between the cell information sets, an original PGC indicates all the sections in a VOB without omitting any of the sections. As a result, an original PGC is unable to indicate the sections of a VOB in an order where the sections are exchanged.

(4-1-3) Contents of the User-Defined PGC Information

The user-defined PGC information table is composed of a plurality of user-defined PGC information sets. The data format of the user-defined PGC information sets is shown in Figure 70C. As with the original PGC information sets, the user-defined PGC information sets are composed of a plurality of cell information sets, each of which is composed of an AV file ID, a VOB_ID, a C_V_S_PTM, and a C_V_E_PTM. A user-defined PGC information set is composed of a plurality of cell information sets in the same manner as an original PGC information set, although the nature and arrangement of these cell information sets differ from those in an original PGC information set. While an original PGC information set indicates that the sections in a video object are to be reproduced sequentially in the order in which the cell information sets are arranged, a user-defined PGC information set is not restricted to indicating that the sections in a video object are to be played in the order in which they are arranged. The sections indicated by the cell information sets in a user-defined PGC may be the same as the sections indicated by an original PGC information set, or may be a part (partial section) of one of the sections indicated by an original PGC information set. It is noted that it is possible for the section indicated by one cell information set to overlap a section indicated by another cell information set. There may also be gaps between a section indicated by one cell information set and a section indicated by another cell information set. This means that user-defined PGC information sets do not need to indicate every section in a VOB, so that one or more parts of a VOB may be left unindicated. While the original PGCs have strict limitations with regard to their reproduction order, the user-defined PGCs are not subject to these limitations, so that the reproduction order of the cells can be defined freely. As a specific example, the reproduction order of the cells in a user-defined PGC can be the reverse of the order in which the cells are arranged. Also, a user-defined PGC can indicate sections of VOBs that are recorded in different AV files. The original PGCs indicate the partial sections in an AV file or a VOB according to the order in which the AV file or the VOBs are arranged, so that the original PGCs can be said to respect the arrangement of the indicated data. The user-defined PGCs, however, do not have this restriction, and are thus able to indicate the sections in the order desired by the user.
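The structural constraint that separates the two kinds of PGC can be stated compactly. The following sketch, reusing the CellInformation fields from the previous sketch, checks the contiguity rule that only original PGCs must obey.

    def is_valid_original_pgc(cells):
        # In an original PGC, each cell's end time must equal the next
        # cell's start time, so the whole VOB is covered in order with
        # no omissions. User-defined PGCs are exempt from this check and
        # may reorder, overlap, or omit sections.
        return all(a.c_v_e_ptm == b.c_v_s_ptm for a, b in zip(cells, cells[1:]))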
As a result, these user-defined PGCs are ideal for storing the reproduction orders that are provisionally determined by the user to link a plurality of sections in the VOBs during the course of a video data editing operation. The original PGCs are associated with the AV files and the VOBs in the AV files, and the cells in an original PGC only indicate sections in these VOBs. The user-defined PGCs, meanwhile, are not limited to being associated with particular VOBs, so that the cell information sets included in user-defined PGC information can indicate sections in different VOBs. As another difference, an original PGC is generated when an AV file is recorded, whereas a user-defined PGC can be generated at any point after the recording of an AV file.

(4-1-4) Interrelation of the AV Files, the VOB Information, and the PGC Information

The following is an explanation of the interrelation of the AV files, the VOBs, and the PGC information sets. Figure 71 shows the interrelation of the AV files, the VOBs, the time map tables, and the PGC information sets, with the elements that form a unified body being enclosed within frames drawn using thick lines. It is noted that in Figure 71, the term "PGC information" has been abbreviated to "PGC I". In Figure 71, the AV file #1, the VOB information #1, and the original PGC information #1 composed of the cell information sets #1 to #3 have been arranged within the same frame, while the AV file #2, the VOB information #2, and the original PGC information #2 composed of the cell information sets #1 to #3 have been arranged within a different frame. These combinations of an AV file (or VOB), VOB information, and original PGC information that are present in the same frame in Figure 71 are called an "original PGC" under the DVD-RAM standard. A video data editing apparatus that complies with the DVD-RAM standard treats these original PGC units as a management unit called a video title. For the example in Figure 71, the combination of the AV file #1, the VOB information #1, and the original PGC information #1 is called the original PGC #1, while the combination of the AV file #2, the VOB information #2, and the original PGC information #2 is called the original PGC #2. When recording an original PGC, in addition to recording the encoded VOBs on the DVD-RAM, it is necessary to generate the VOB information and the original PGC information for these VOBs. The recording of an original PGC is therefore considered complete when all three of the AV file, the VOB information, and the original PGC information have been recorded on the DVD-RAM. Put another way, the recording of encoded VOBs on a DVD-RAM as an AV file is not in itself considered to end the recording of an original PGC on the DVD-RAM. This is also the case for deletion, so that original PGCs are erased as a whole. Put another way, when any one of an AV file, VOB information, and original PGC information is deleted, the other elements in the same original PGC are also erased. The reproduction of an original PGC is performed by the user indicating the original PGC information. This means that the user does not give direct indications for the reproduction of a certain AV file or VOB. It should be noted here that an original PGC can also be reproduced in part.
This partial reproduction of an original PGC is performed by the user indicating cell information sets that are included in the original PGC, although the reproduction of a section that is smaller than a cell, such as a VOBU, cannot be indicated. The following describes the reproduction of a user-defined PGC. In Figure 71, it can be seen that the user-defined PGC information #3, composed of the cells #1 to #4, is included in a frame separate from the original PGCs #1 and #2 described above. This shows that under the DVD-RAM standard, the user-defined PGC information is not actual AV data, and is instead managed as a separate title. As a result, a video data editing apparatus defines the user-defined PGC information in the RTRW administration file, and in doing so is able to complete the generation of a user-defined PGC. For the user-defined PGCs, there is a relationship whereby the production of a user-defined PGC is equal to the definition of a user-defined PGC information set. When a user-defined PGC is deleted, it is sufficient to delete the user-defined PGC information from the RTRW administration file, with the user-defined PGC being considered as no longer existing thereafter.
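Since a user-defined PGC exists only as an information set in the RTRW administration file, its creation and deletion reduce to pure bookkeeping. A sketch follows, with the administration file modeled as a plain dictionary for illustration.

    def define_user_pgc(rtrw, cell_sets):
        # Creating a user-defined PGC is completed simply by writing its
        # information set into the administration file; no AV data moves.
        rtrw.setdefault("user_defined_pgcs", []).append(list(cell_sets))

    def delete_user_pgc(rtrw, index):
        # Deleting one removes only the information set, after which the
        # PGC is considered not to exist.
        del rtrw["user_defined_pgcs"][index]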
The units for the reproduction of a user-defined PGC are the same as for an original PGC. This means that the reproduction of a user-defined PGC is performed by the user indicating the user-defined PGC information. It is also possible for user-defined PGCs to be reproduced partially. This partial reproduction of a user-defined PGC is achieved by the user indicating the cells that are included in the user-defined PGC. The original PGCs and the user-defined PGCs differ as described above but, from the user's point of view, there is no need to take these differences into account. This is because the complete reproduction or the partial reproduction of both types of PGC is indicated in the same way, by indicating PGC information or cell information respectively. As a result, both kinds of PGC are managed in the same way using a unit called a "video title". A minimal model of these structures is sketched below.
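Since both kinds of PGC are managed through the same "video title" unit, their in-memory form can be modeled with a single pair of record types. The Python sketch below is illustrative only; the field names mirror the fields named in this description (the AV file and VOB identifiers, C_V_S_PTM, C_V_E_PTM, and the title type), not the literal on-disc layout.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CellInfo:
        av_file_id: int    # AV file that holds the referenced VOB
        vob_id: int        # VOB within that AV file
        c_v_s_ptm: int     # cell presentation start time
        c_v_e_ptm: int     # cell presentation end time

    @dataclass
    class PgcInfo:
        pgc_number: int
        title_type: str                               # "00" original, "01" user-defined
        cells: List[CellInfo] = field(default_factory=list)

    # An original PGC and a user-defined PGC differ only in title_type;
    # a user-defined PGC may freely mix cells from different VOBs.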
The following is an explanation of the reproduction of the original PGCs and the user-defined PGCs. The arrows drawn with thick dashed lines in Figure 71 show how certain data sets refer to other data. The arrows y2, y4, y6, and y8 show the relationship between each VOBU in a VOB and the time codes included in the time map table in the VOB information, while y1, y3, y5, and y7 show the relationship between the time codes included in the time map table in the VOB information and the cell information sets. Here, it is assumed that the user has indicated one of the PGCs, so that a video title will be reproduced. When the indicated PGC is the original PGC #1, the cell information set #1 located at the front of the original PGC #1 information is extracted by the reproduction apparatus. Then, the reproduction apparatus refers to the AV file and VOB identifiers included in the extracted cell information set #1, and specifies the AV file #1, the VOB #1, and the time map table #1 for this VOB as the AV file and the VOB corresponding to this cell information. The specified time map table #1 includes the size of each VOBU that composes the VOB and the playback period of each VOBU. To improve data accessibility, the specified time map table #1 also includes the address and the elapsed time relative to the start of the VOB for representative VOBUs that are selected at a constant interval, such as every multiple of 10 seconds. As a result, by referring to the time map table using the cell start time C_V_S_PTM, as shown by the arrow y1, the reproduction apparatus can specify the VOBU in the AV file corresponding to the cell start time C_V_S_PTM included in the cell information set #1, and in this way can specify the first address of this VOBU. By doing so, the reproduction apparatus can determine the first address of the VOBU corresponding to this cell start time C_V_S_PTM, can access the VOBU #1 as shown by the arrow y2, and in this way can start the reading of the VOBU sequence that starts from the VOBU #1. Since the cell information set #1 also includes the cell end time C_V_E_PTM, the reproduction apparatus can access the time map table using this cell end time C_V_E_PTM, as shown by the arrow y3, to specify the VOBU in the AV file corresponding to the cell end time C_V_E_PTM included in the cell information set #1. As a result, the reproduction apparatus can determine the first address of the VOBU corresponding to the cell end time C_V_E_PTM. When the VOBU corresponding to the cell end time C_V_E_PTM is the VOBU #10, for example, the reproduction apparatus will stop reading the VOBU sequence upon reaching VOBU #10, as shown by the arrow y4. By accessing the AV file via the cell information #1 and the VOB #1 information, the reproduction apparatus can read only the section indicated by the cell information #1 out of the data in the VOB #1 included in the AV file #1. If reading is also performed for the cell information #2, #3, and #4, all the VOBUs included in the VOB #1 can be read and reproduced. When reproduction is performed for an original PGC as described above, the sections in the VOB are reproduced in the order in which they are arranged in the VOB. A sketch of this time-map lookup follows.
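The address lookup traced by the arrows y1 to y4 can be expressed compactly. The following sketch is illustrative only and simplifies the representative-entry mechanism to a linear scan over per-VOBU entries; all names are assumptions, not the actual table layout.

    # Sketch: map a presentation time such as C_V_S_PTM to the number and
    # first address of the VOBU that contains it. time_map is a list of
    # (vobu_size_bytes, playback_period) pairs, one per VOBU, in order;
    # ptm is measured from the start of the VOB.
    def find_vobu(time_map, ptm, vob_start_address=0):
        address, elapsed = vob_start_address, 0
        for number, (size, period) in enumerate(time_map, start=1):
            if elapsed <= ptm < elapsed + period:
                return number, address        # first address of the matching VOBU
            address += size
            elapsed += period
        raise ValueError("ptm lies outside this VOB")

Reading one cell then amounts to two such lookups, one with the C_V_S_PTM and one with the C_V_E_PTM, after which the reproduction apparatus reads the VOBU sequence between the two returned addresses.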
The following explains the case when the user indicates the reproduction of a video title indicated by one of the user-defined PGCs. When the indicated PGC is the user-defined PGC #1, the reproduction apparatus extracts the cell information set #1 that is placed at the front of the user-defined PGC #1 information for this user-defined PGC #1. Then, the reproduction apparatus refers to the time map table #1 using the cell start time C_V_S_PTM included in this cell information #1, as shown by the arrow y5, and specifies the VOBU in the VOB #1 corresponding to this cell start time C_V_S_PTM included in the cell information #1. In this case, the reproduction apparatus specifies the VOBU #11 as the VOBU corresponding to the cell start time C_V_S_PTM, accesses the VOBU #11 as shown by the arrow y6, and starts the reading of a VOBU sequence that begins from the VOBU #11. The cell information #1 included in the user-defined PGC #1 also includes the cell end time C_V_E_PTM, so that the reproduction apparatus refers to the time map table using this cell end time C_V_E_PTM, as shown by the arrow y7, and specifies the VOBU in the VOB #1 that corresponds to the cell end time C_V_E_PTM included in the cell information #1. When the VOBU corresponding to the cell end time C_V_E_PTM is found, the reproduction apparatus terminates the reading of the VOBU sequence upon reaching that VOBU, as shown by the arrow y8.
As described above, after accessing the AV file via the cell information #1 of the VOB #1 information, the reproduction apparatus performs the same processing for the cell information #2, #3, and #4 included in the user-defined PGC #1 information. After extracting the cell information #2, which is located in the position following the cell information #1, the reproduction apparatus refers to the AV file identifier included in the extracted cell information #2 and in this way determines that the AV file #2 corresponds to this cell information and that the time map table #2 corresponds to this AV file. The specified time map table #2 includes the size of each VOBU that makes up the VOB and the playback period of each VOBU. To improve data accessibility, the specified time map table #2 also includes the address and the elapsed time relative to the start of the VOB for representative VOBUs that are selected at a constant interval, such as every multiple of 10 seconds. As a result, by referring to the time map table using the cell start time C_V_S_PTM, as shown by the arrow y9, the reproduction apparatus can specify the VOBU in the AV file corresponding to the cell start time C_V_S_PTM included in the cell information set #2, and in this way can specify the first address of this VOBU. By doing so, the reproduction apparatus can determine the first address of the VOBU corresponding to this cell start time C_V_S_PTM, can access the VOBU #2 as shown by the arrow y10, and in this way can start the reading of the VOBU sequence that starts from the VOBU #2. Since the cell information set #2 also includes the cell end time C_V_E_PTM, the reproduction apparatus can access the time map table using this cell end time C_V_E_PTM, as shown by the arrow y11, to specify the VOBU in the AV file that corresponds to the cell end time C_V_E_PTM included in the cell information set #2. As a result, the reproduction apparatus can determine the first address of the VOBU corresponding to the cell end time C_V_E_PTM. When the VOBU corresponding to the cell end time C_V_E_PTM is the VOBU #11, the reproduction apparatus will stop reading the VOBU sequence upon reaching the VOBU #11, as shown by the arrow y12. By reproducing the user-defined PGC information in this way, the desired sections in the VOBs included in the two AV files can be reproduced in the given order. This ends the explanation of the units of AV files, VOB information, and PGC information. The following is a description of the title search pointers shown in Figure 70.

(4-1-5) Contents of the Title Search Pointers
A title search pointer is information for managing the VOB information, the time map tables, the PGC information, and the AV files recorded on a DVD-RAM in the units called video titles described above. Each title search pointer is composed of the PGC number that is assigned to an original PGC information set or a user-defined PGC information set, a title type, and a title recording history. Each title type corresponds to one of the PGC numbers, and is set to the value "00" to show that the title with the corresponding PGC number is an original PGC type, or is set to the value "01" to show that the title with the corresponding PGC number is a user-defined PGC type. The title recording history shows the date and time at which the corresponding PGC information was recorded on the DVD-RAM. When the RTRW directory on a DVD-RAM is indicated, a reproduction apparatus that complies with the DVD-RAM standard reads the title search pointers from the RTRW administration file and in this way can instantly know how many original PGCs and user-defined PGCs are present in each directory on the DVD-RAM and when each of these video titles was recorded in the RTRW administration file. A minimal model of such a pointer is sketched below.
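A title search pointer is thus a small fixed record, and one pass over the pointers is enough to enumerate the titles on the disc. The sketch below is an illustrative model with assumed names, not the recorded byte layout.

    from dataclasses import dataclass

    @dataclass
    class TitleSearchPointer:
        pgc_number: int
        title_type: str          # "00" = original PGC, "01" = user-defined PGC
        recording_history: str   # date and time the PGC information was recorded

    def count_titles(pointers):
        # Counting the two title types from the pointers read out of the
        # RTRW administration file.
        originals = sum(1 for p in pointers if p.title_type == "00")
        user_defined = sum(1 for p in pointers if p.title_type == "01")
        return originals, user_defined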
(4-1-6) Interchangeability of User-Defined PGCs and Original PGCs in a Real Edit

The user-defined PGC information defined in a virtual edit can be used to indicate the linking order for the cells in a real edit, as shown in this fourth embodiment. Also, once a real edit has been made as described in the fourth embodiment, if a set of user-defined PGC information is converted into an original PGC information set, the original PGC information can be easily generated for the VOB obtained by this linking.
This is because the data construction of user-defined PGC information and original PGC information differs only in the value given as the title type, and because the sections of a VOB obtained by a real edit are the sections that were indicated by the user-defined PGC information before the real edit. The following is an explanation of the procedure for a real edit in this fourth embodiment, and of the process for converting user-defined PGC information into original PGC information. Figure 72 shows an example of a user-defined PGC and an original PGC.
In Figure 72, the original PGC #1 information includes only the cell #1, and forms an original PGC together with the VOB #1 and its VOB information. On the other hand, the user-defined PGC #2 information forms a user-defined PGC using only the cell #1, the cell #2, and the cell #3.
In Figure 72, the cell #1 indicates the section from the VOBU #1 to the VOBU #i, as shown by the dashed arrows y51 and y52, while the cell #2 indicates the section from the VOBU #i+1 to the VOBU #j, as shown by the dashed arrows y53 and y54, and the cell #3 indicates the section from the VOBU #j+1 to the VOBU #k+2, as shown by the dashed arrows y55 and y56. In the following example, the cell #2 is deleted from the user-defined PGC, and the user indicates a real edit using the user-defined PGC #2 information composed of the cells #1 and #3. In Figure 73, the area corresponding to the deleted cell is shown using shading. The cell #2 that is deleted here indicates one of the video frames, out of the plurality of picture data sets included in the VOBU #i+1 displayed within the frame w11, using the cell start time C_V_S_PTM. The cell #2 also indicates one of the video frames, out of the plurality of picture data sets included in the VOBU #j+1 displayed within the frame w12, using the cell end time C_V_E_PTM. If a real edit is made using the user-defined PGC #2 information, the VOBUs #i-1, #i, and #i+1 located at the end of the cell #1 and the VOBUs #j, #j+1, and #j+2 located at the start of the cell #3 will be re-encoded. This re-encoding is performed according to the procedure described in the first and second embodiments, and the linking of the extents is then performed according to the procedure described in the third embodiment. Figure 74A shows the ECC blocks on the DVD-RAM that are freed by a real edit made using the user-defined PGC #2 information. As shown on the second level of Figure 74A, the VOBUs #i, #i+1, and #i+2 are recorded in the AV block #m, and the VOBUs #j, #j+1, and #j+2 are recorded in the AV block #n. As shown in Figure 73, the cell #2 indicates the picture data included in the VOBU #i+1 as the C_V_S_PTM, and the picture data included in the VOBU #j+1 as the C_V_E_PTM. As a result, a DIVIDE command and a SHORTEN command of the second embodiment are issued to free the area from the ECC block occupied by the VOBU #i+2 to the ECC block occupied by the VOBU #j, as shown by the frames w13 and w14 of Figure 74A. However, the ECC blocks occupied by the VOBUs #i and #i+1 and the ECC blocks occupied by the VOBUs #j+1 and #j+2 are not freed. Figure 74B shows an example of the VOBs, the VOB information, and the PGC information after the real edit. Since the area corresponding to the cell #2 has been deleted, the VOB #1 is divided into the (new) VOB #1 and the VOB #2. When the DIVIDE command is issued, the VOB information for the former VOB #1 is divided into the VOB #1 information and the VOB #2 information. The time map tables included in this information are also divided into the time map table #1 and the time map table #2. Although not illustrated, the seamless link information is divided likewise. A sketch of this division of the time map table is given below.
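The division performed by the DIVIDE command can be pictured as cutting the per-VOBU tables around the released span. The sketch below is a simplified, illustrative model: it splits only the time map table and ignores the ECC-block bookkeeping and the seamless link information.

    # Sketch: divide one VOB's time map when the VOBUs from index
    # first_freed to last_freed (0-based, inclusive) are released.
    # Each entry describes one VOBU, e.g. a (size_bytes, playback_period) pair.
    def divide_time_map(time_map, first_freed, last_freed):
        former = time_map[:first_freed]       # time map for the new VOB #1
        latter = time_map[last_freed + 1:]    # time map for the new VOB #2
        return former, latter

    # For Figure 74A, releasing VOBU #i+2 .. VOBU #j leaves the VOBUs up to
    # #i+1 in the former map and the VOBUs from #j+1 onward in the latter map.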
The VOBUs in the VOB #1 and the VOB #2 are referred to by a reproduction apparatus via these divided time map tables. The user-defined PGC information and the original PGC information have the same data construction, with only the value of the title type differing. The sections of the VOB obtained after a real edit were originally indicated by the user-defined PGC #2 information before the real edit, so that the user-defined PGC #2 information becomes the original PGC information. Since this user-defined PGC #2 information is used as the definition as it stands, there is no need for a separate process to generate new original PGC data after a real edit.

(4-2) Functional Blocks of the DVD Recorder 70

Figure 75 is a functional block diagram showing the construction of the DVD recorder 70 in this fourth embodiment. Each function shown in Figure 75 is realized by the CPU executing the programs in the ROM and controlling the hardware shown in Figure 17. The DVD recorder shown in Figure 75 is composed of a disc rewriting unit 100, a disc reading unit 101, a common file system unit 10, an AV file system unit 11, and a recording-editing-reproduction control unit 12, in the same manner as the video data editing apparatus described in the third embodiment. The present embodiment differs from the third embodiment, however, in that the AV data recording unit 13 is replaced with the title recording control unit 22, the AV data reproduction unit 14 is replaced with the title reproduction control unit 23, and the AV data editing unit 15 is replaced with the multi-stage editing control unit 26. This DVD recorder also includes a PGC information table work area 21, an RTRW administration file work area 24, and a user-defined PGC information generator 25, in place of the defragmentation unit 16.

(4-2-1) Recording-Editing-Reproduction Control Unit 12

The recording-editing-reproduction control unit 12 of this fourth embodiment receives an indication from the user of a directory in the directory structure on the DVD-RAM as the operation target. Upon receiving the indication of the operation target from the user, the recording-editing-reproduction control unit 12 specifies the operation content according to the user operation that has been reported by the remote control signal receiving unit 8. At the same time, the recording-editing-reproduction control unit 12 instructs the title recording control unit 22, the title reproduction control unit 23, or any of the other components so that the processing corresponding to the operation content is performed for the directory that is the operation target. Figure 77A shows an example of the graphical data that is displayed on the TV monitor 72 under the control of the recording-editing-reproduction control unit 12. When any of the directories is set in the focus state, the recording-editing-reproduction control unit 12 waits for the user to press the enter key. When the user does so, the recording-editing-reproduction control unit 12 specifies the directory that is currently in the focus state as the current directory.

(4-2-2) PGC Information Table Work Area 21

The PGC information table work area 21 is a memory area having a standardized logical format so that PGC information sets can be defined successively. This PGC information table work area 21 has internal regions that are managed as a matrix.
The plurality of PGC information sets that are present in the PGC information table work area 21 are arranged in different rows, while the plurality of cell information sets within each PGC information set are arranged in different columns. In the PGC information table work area 21, any of the cell information in a stored set of PGC information can therefore be accessed using a combination of a row number and a column number. Figure 76 shows examples of original PGC information sets that are stored in the PGC information table work area 21. It should be noted here that at the point when the recording of an AV file is completed, the user-defined PGC information table is still empty (shown as "NULL" in Figure 76). In Figure 76, the original PGC #1 information includes the cell information set #1 that shows the section between the start time t0 and the end time t1, the cell information set #2 that shows the section between the start time t1 and the end time t2, the cell information set #3 that shows the section between the start time t2 and the end time t3, and the cell information set #4 that shows the section between the start time t3 and the end time t4.

(4-2-3) Title Recording Control Unit 22

The title recording control unit 22 records VOBs onto the DVD-RAM in the same manner as the AV data recording unit 13 in the third embodiment, although in doing so the title recording control unit 22 also stores a time map table in the RTRW administration file work area 24, generates the VOB information, and generates the original PGC information that is stored in the PGC information table work area 21. When generating the original PGC information, the title recording control unit 22 follows the procedure described below. First, upon receiving notification from the recording-editing-reproduction control unit 12 that the record key was pressed, the title recording control unit 22 secures a row area in the PGC information table work area 21. Then, after the AV data recording unit 13 has assigned an AV file identifier and a VOB identifier to the VOB to be newly recorded, the title recording control unit 22 receives these identifiers and stores them in the secured row area corresponding to a newly assigned PGC number. Then, when the encoding of the VOB is started, the title recording control unit 22 instructs the MPEG encoder 2 to transfer the PTS of the first video frame. When the encoder control unit 2g has transferred this PTS for the first video frame, the title recording control unit 22 stores this value and waits for the user to perform a marking operation.
Figure 80A shows how data input and output are performed between the components shown in Figure 75 when a marking operation is performed. While viewing the video images displayed on the TV monitor 72, the user presses the mark key on the remote control 71. This marking operation is reported to the title recording control unit 22 along the route shown in Figure 80A. The title recording control unit 22 then obtains the PTS for the point where the user pressed the mark key from the encoder control unit 2g, as also shown in Figure 80A, and stores this as time information. The title recording control unit 22 repeatedly performs the above processing while a VOB is being encoded. If the user presses the stop key during the generation of the VOB, the title recording control unit 22 instructs the encoder control unit 2g to transfer the presentation end time of the last video frame to be encoded. Once the encoder control unit 2g has transferred this presentation end time of the last video frame to be encoded, the title recording control unit 22 stores this as time information. By repeating the above processing until the encoding of a VOB is completed, the title recording control unit 22 ends up storing the AV file identifier, the VOB identifier, the presentation start time of the first video frame, the presentation start time of each video frame corresponding to a point where a marking operation was performed, and the presentation end time of the final video frame. From this stored time information, the title recording control unit 22 sets the start time and the end time of each section, together with the corresponding AV file identifier and VOB identifier, as a cell information set stored in a newly secured row in the PGC information table work area 21. In doing so, the title recording control unit 22 newly generates original PGC information. On completing the above generation, the title recording control unit 22 associates this original PGC information with the assigned PGC number and, in the PGC information table work area 21, generates a title search pointer having the type information showing that this PGC information is original PGC information, and a title recording history showing the date and time at which the recording of this PGC information was completed. It should be noted here that if the title reproduction control unit 23 can detect when there is a large change in the content of the scenes, the user-defined PGC information generator 25 can automatically obtain the PTS for the points in the scenes at which these scene changes occur and automatically set these PTS in cell information sets. The generation of a time map table or of VOB information is not part of the essence of this embodiment, and so is not explained here. A sketch of how the stored marks delimit the cells of the new original PGC follows.
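The bookkeeping performed by the title recording control unit 22 can be summarized as follows: the PTS of the first video frame, the PTS stored at each marking operation, and the presentation end time of the final video frame delimit the cells of the new original PGC. The sketch below is illustrative (assumed names) and assumes the stored times are already in presentation order.

    # Sketch: turn the stored time information into cell information sets.
    # boundaries = [first_frame_pts, mark_1_pts, ..., last_frame_end_pts]
    def build_original_cells(av_file_id, vob_id, boundaries):
        cells = []
        for start, end in zip(boundaries, boundaries[1:]):
            cells.append({
                "av_file_id": av_file_id,
                "vob_id": vob_id,
                "c_v_s_ptm": start,   # section start time
                "c_v_e_ptm": end,     # section end time
            })
        return cells

    # With marks at t1, t2, and t3 between t0 and t4, this yields the four
    # cells t0-t1, t1-t2, t2-t3, and t3-t4 shown for original PGC #1 in Figure 76.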
(4-2-4) Title Reproduction Control Unit 23

The title reproduction control unit 23 performs full or partial reproduction of any of the titles recorded in the current directory indicated by the recording-editing-reproduction control unit 12. This is described in more detail below. When, as shown in Figure 77A, one of the directories is selected as the current directory and the user gives an indication for the reproduction of one of the titles stored in this directory, the title reproduction control unit 23 displays the screen image shown in Figure 77B, reads the original PGC information table and the user-defined PGC information table from the RTRW administration file in that directory, and has the user select the full reproduction or the partial reproduction of one of the original PGCs or the user-defined PGCs in the current directory. Figure 77B shows the PGCs and cells that are displayed as the list of potential operation targets. The PGC information sets and cell information sets representing these PGCs and cells are the same as those shown in the example in Figure 76. The original PGCs that appear on this interactive screen are shown in a simple graph with time on the horizontal axis, with each original PGC displayed together with the date and time at which it was recorded. In Figure 77B, the menu at the bottom right of the screen asks whether full or partial reproduction is to be performed for a video title in the current directory. By pressing the "1" or "2" key on the remote control 71, the user can select the full or the partial reproduction of a video title. If the user selects the full reproduction, the title reproduction control unit 23 has the user select one of the PGCs as the operation target, while if the user selects the partial reproduction, the title reproduction control unit 23 has the user select one of the cells as the operation target. When the full reproduction of a PGC has been selected, the title reproduction control unit 23 extracts the cells of the PGC selected as the operation target and, by referring to a time map table such as that shown in Figure 71, reproduces the sections indicated by the cells one by one. On completing the reproduction of the sections, the title reproduction control unit 23 has the interactive screen shown in Figure 77B displayed once more, and awaits the next selection of cell information.
Figure 78A is a flowchart showing the processing when cell information sets are partially reproduced. First, in step S271, the title reproduction control unit 23 reads the C_V_S_PTM and the C_V_E_PTM of the cell information to be reproduced from the original PGC information or the user-defined PGC information. Then, in step S272, the title reproduction control unit 23 specifies the address of the VOBU(START) that includes the picture data assigned the C_V_S_PTM. In step S273, the title reproduction control unit 23 specifies the address of the VOBU(END) that includes the picture data assigned the C_V_E_PTM, and in step S274, the title reproduction control unit 23 reads the section from the VOBU(START) to the VOBU(END) of the present VOB. In step S275, the title reproduction control unit 23 instructs the MPEG decoder 4 to decode the read VOBUs. In step S276, the title reproduction control unit 23 transfers the cell presentation start time (C_V_S_PTM) and the cell presentation end time (C_V_E_PTM) to the decoder control unit 4k of the MPEG decoder 4 as the valid playback section information, together with a request for decoding processing. The reason the title reproduction control unit 23 transfers the valid playback section information to the MPEG decoder 4 is that the decoder control unit 4k in the MPEG decoder 4 would otherwise decode and output picture data that is not inside the section indicated by the cell. In more detail, the unit of decoding processing of the MPEG decoder 4 is a VOBU, so that the MPEG decoder 4 will decode the entire section from the VOBU(START) to the VOBU(END), and in doing so will handle picture data outside the section indicated by the cell being reproduced. A cell indicates a section in video field units, so that a method is needed to prohibit the output and reproduction of the picture data outside the section. To prohibit the reproduction of this picture data, the title reproduction control unit 23 transfers the valid playback section information to the MPEG decoder 4. Figure 78B shows how only the section between the cell presentation start time (C_V_S_PTM) and the cell presentation end time (C_V_E_PTM), out of the area between the VOBU(START) and the VOBU(END), is reproduced. Upon receiving this valid playback section information, the MPEG decoder 4 can suppress the display transfer of the appropriate number of video fields from the start of the VOBU(START) up to the C_V_S_PTM and the display transfer of the appropriate number of video fields from the C_V_E_PTM to the end of the VOBU(END). With the hardware construction shown in Figure 17, the disc access unit 3 reads the VOBU sequence and transfers it to the MPEG decoder 4 via the logical connection (1). The MPEG decoder 4 decodes this VOBU sequence and prohibits the display transfer of the part preceding the C_V_S_PTM and the part following the C_V_E_PTM. As a result, only the section indicated by the cell information is reproduced. Since a set of original PGC information or user-defined PGC information includes a plurality of cell information sets, the procedure shown in Figure 78A is repeated for each cell information set included in a PGC information set. A sketch of this field-level clipping is given below.
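The effect of the valid playback section information can be modeled as a filter on the decoded video fields. The sketch below is illustrative only: it treats each decoded field as carrying its own presentation time and forwards for display only those inside the section.

    # Sketch: the decoder decodes every field of VOBU(START)..VOBU(END), but
    # only the fields inside [C_V_S_PTM, C_V_E_PTM) are transferred for display.
    def transfer_for_display(decoded_fields, c_v_s_ptm, c_v_e_ptm):
        # decoded_fields: iterable of (ptm, field_data) in output order
        for ptm, field_data in decoded_fields:
            if c_v_s_ptm <= ptm < c_v_e_ptm:
                yield field_data
            # Fields before C_V_S_PTM or from C_V_E_PTM onward are still
            # decoded (they may be needed as reference pictures) but are
            # never transferred for display.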
(4-2-5) RTRW Administration File Work Area 24

The RTRW administration file work area 24 is a work area for arranging the original PGC information table composed of the plurality of original PGC information sets generated in the PGC information table work area 21, the user-defined PGC information table composed of a plurality of user-defined PGC information sets, the title search pointers, and the VOB information sets, according to the logical format shown in Figure 70. The common file system unit 10 writes the data arranged in the RTRW administration file work area 24 into the RTRW directory as a non-AV file, and in doing so stores an RTRW administration file in the RTRW directory. A minimal model of this arrangement is sketched below.
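In other words, the RTRW administration file written back to the RTRW directory is a serialization of four kinds of data. The following is a minimal illustrative model (the actual layout is the one defined by Figure 70):

    from dataclasses import dataclass, field

    @dataclass
    class RtrwAdministrationFile:
        title_search_pointers: list = field(default_factory=list)
        original_pgc_table: list = field(default_factory=list)
        user_defined_pgc_table: list = field(default_factory=list)
        vob_info_table: list = field(default_factory=list)  # VOB information, including the time map tables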
(4-2-6) User-Defined PGC Information Generator 25

The user-defined PGC information generator 25 generates user-defined PGC information based on the sets of PGC information recorded in the RTRW administration file of the current directory. Two types of cell information may be present in user-defined PGC information (called user-defined cell information sets), these being a first type that indicates an area within a section indicated by the cell information in an existing set of PGC information, and a second type that indicates the same section as a set of cell information in an existing set of PGC information. The user-defined PGC information generator 25 generates these two types of cell information using different methods. To generate the first type of user-defined cell information, which indicates an area within a section indicated by existing cell information, the user-defined PGC information generator 25 has the title reproduction control unit 23 perform partial reproduction of the section indicated by the existing cell information. During the partial reproduction of this section, the user-defined PGC information generator 25 watches for marking operations performed by the user, and generates cell information sets with the times of the marking operations as the start point and the end point. In this way, the user-defined PGC information generator 25 generates user-defined PGC information composed of this first type of cell information. Figures 79A and 79B show how the user uses the TV monitor 72 and the remote control 71 when generating user-defined PGC information. Figure 80B shows the data input and transfer between the components shown in Figure 75 when a marking operation is performed. As shown in Figure 79A, the user views the video images displayed on the TV monitor 72 and presses the mark key on the remote control 71 at the start of a desired scene. After this, the desired scene ends, as shown in Figure 79B, and the video images change to content in which the user has no interest. The user therefore presses the mark key again. This marking operation is reported to the user-defined PGC information generator 25 along the route shown in Figure 80B. The user-defined PGC information generator 25 then obtains the PTS of the points at which the user pressed the mark key from the MPEG decoder 4, as also shown in Figure 80B, and stores the PTS as time information. The user-defined PGC information generator 25 then generates a cell information set by attaching the appropriate AV file identifier and VOB identifier to a pair of stored PTS that form the start point and the end point of a section, and stores this cell information in a newly secured row area in the PGC information table work area 21, as also shown in Figure 80B. When user-defined cell information indicating the same section as an existing set of cell information is generated, the user-defined PGC information generator 25 simply copies the existing cell information into a different row area in the PGC information table work area 21. In more detail, the user-defined PGC information generator 25 secures a row area for one row in the PGC information table work area 21, and assigns a new user-defined PGC information identifier to this row area.
Once the cell information to be used in the present user-defined PGC information has been indicated, using a combination of a row number and a column number, from among the cell information sets in the PGC information already stored in the PGC information table work area 21, the user-defined PGC information generator 25 reads the cell information and copies it into the newly secured row area in the PGC information table work area 21.

(4-2-7) Multi-Stage Editing Control Unit 26

The multi-stage editing control unit 26 controls the title reproduction control unit 23, the user-defined PGC information generator 25, and the seamless link unit 20 to perform a multi-stage editing process that includes:

1. virtual edits achieved by defining user-defined PGC information;
2. previews that allow the user to see the video images that would be obtained by a real edit, based on the results of a virtual edit;
3. seamless linking, as described in the first and second embodiments; and
4. real edits performed by linking AV files as described in the third embodiment.

(4-2-7-1) Procedure for Multi-Stage Editing by the Multi-Stage Editing Control Unit 26

The following is a description of the specific procedure for the multi-stage editing performed by the multi-stage editing control unit 26. When the user selects a virtual edit using the remote control 71 in response to the interactive screen shown in Figure 77A, the multi-stage editing control unit 26 accesses the RTRW directory, has the common file system unit 10 read the RTRW administration file from the RTRW directory, and has the RTRW administration file stored in the RTRW administration file work area 24. Then, from the RTRW administration file stored in the RTRW administration file work area 24, the multi-stage editing control unit 26 transfers the original PGC information table, the user-defined PGC information table, and the title search pointers to the PGC information table work area 21, and transfers the time map tables to the time map table work area. Based on the transferred original PGC information table, the multi-stage editing control unit 26 displays the interactive screen shown in Figure 85, and waits for the next user indication. Figure 85 shows an example of the interactive screen displayed on the TV monitor 72 to have the user select the sections for the cells of a user-defined PGC in a virtual edit. This interactive screen displays the original PGCs and the user-defined PGCs as simple graphs, where the horizontal axis represents time. The date and time of recording of each original PGC and user-defined PGC are also displayed. This interactive screen displays the plurality of cells as a horizontal row of rectangles. The user can select any of these rectangles using the cursor keys on the remote control 71. These original PGCs and cells are the same as those shown in Figure 76, and the following describes the updating of the original PGC information table, the user-defined PGC information table, and the title search pointers with Figure 76 as the initial state. Figure 81 is a flowchart showing the processing of the multi-stage editing control unit 26 when a user-defined PGC is defined. In this flowchart, the variable j indicates one of the plurality of original PGCs that are arranged vertically on the interactive screen, and the variable k indicates one of the plurality of cells that are arranged horizontally on the interactive screen.
The variable m is the PGC number that is to be assigned to the user-defined PGC information set being newly defined in the RTRW administration file, and the variable n is the cell number that is to be assigned to the cell information set being newly defined in the RTRW administration file. In step S201, the multi-stage editing control unit 26 substitutes a value given by adding one to the last number of the original PGC information in the RTRW administration file into the variable m, and "1" into the variable n. In step S202, the multi-stage editing control unit 26 adds a space for the m-th user-defined PGC information set to the user-defined PGC information table, and in step S203, the multi-stage editing control unit 26 waits for the user to perform a key operation. Once the user has performed a key operation, in step S204 the multi-stage editing control unit 26 sets the mark for the pressed key, out of the marks corresponding to the keys on the remote control 71, to "1", and in step S205 judges whether the Enter_Mark, which shows whether the enter key has been pressed, is "1". In step S206, the multi-stage editing control unit 26 judges whether the End_Mark, which shows whether the end key has been pressed, is "1". When both of these marks are "0", the multi-stage editing control unit 26 uses the Right_Mark, Left_Mark, Down_Mark, and Up_Mark, which respectively show whether the right, left, down, or up key has been pressed, to perform the following calculations, before substituting the calculation results into the variables k and j.

k <- k + 1*(Right_Mark) - 1*(Left_Mark)
j <- j + 1*(Down_Mark) - 1*(Up_Mark)

When the right key has been pressed, the Right_Mark is set to "1", so that the variable k is incremented by "1". When the down key has been pressed, the Down_Mark is set to "1", so that the variable j is incremented by "1". Conversely, when the left key has been pressed, the Left_Mark is set to "1", so that the variable k is decremented by "1". In the same way, when the up key has been pressed, the Up_Mark is set to "1", so that the variable j is decremented by "1". After updating the values of the variables k and j in this manner, the multi-stage editing control unit 26 has the cell representation in row j and column k displayed in the focus state in step S208, clears all the marks assigned to the keys on the remote control 71 to zero in step S209, and returns to step S203 where it waits once more for a key operation. By repeating the procedure of steps S203 to S209 described above, the focus state can be moved up/down and left/right between the cells according to the key operations performed using the remote control 71. If the user presses the enter key with any of the cells in the focus state during the above processing, the multi-stage editing control unit 26 proceeds to step S251 in Figure 82. In step S251 of Figure 82, the multi-stage editing control unit 26 has the user give an indication as to whether the cell information in row j and column k should be used as is, or whether only an area within the section indicated by this cell information is to be used. When the cell information is to be used as is, the multi-stage editing control unit 26 copies the cell representation in row j and column k to the space given as row m and column n in step S252, and defines Original_PGC#j.CELL#k as User_Defined_PGC#m.CELL#n in step S253.
After this is defined, in step S254 the multi-stage editing control unit 26 increments the variable n and proceeds to step S209 in Figure 81. When only an area within the section indicated by the cell information in row j and column k is to be used, the multi-stage editing control unit 26 proceeds to step S255 to have the title reproduction control unit 23 begin partial reproduction for the cell information in row j and column k.
In step S255, the multi-stage editing control unit 26 determines the circumstances for the reproduction of the cell information in row j and column k. This determination is made because, when the section indicated by this cell information has already been partially reproduced, there is no need to reproduce the section once again from the beginning; in this case it is preferable for the reproduction of the section indicated by the cell information in row j and column k to start at the position where the previous reproduction finished (step S266), this point being called the reproduction end point t. On the other hand, when the cell information in row j and column k has not been reproduced, the section indicated by the cell information in row j and column k is reproduced from the start in step S255, with the processing then proceeding to step S256 and entering the loop formed by steps S256 and S257. Step S256 waits for the cell reproduction to finish, while step S257 waits for the user to press the mark key. When the "Yes" judgment is given in step S257, the processing proceeds to step S258, where the time information for the pressing of the mark key is obtained, and then to step S259. In step S259, the multi-stage editing control unit 26 judges whether two sets of time information have been obtained. If not, the processing returns to step S256; if so, the processing proceeds to step S260 where the two obtained sets of time information are set as the start point and the end point. One of the sets of time information obtained here marks the start of the video scene that was marked by the user during its display on the TV monitor 72, while the other set of time information marks the end of this video scene. These sets of time information are interpreted as marking a section in the original PGC that is especially needed by the user as material for a video edit. Accordingly, user-defined PGC information must be generated from this section, so that the cell information is generated in the PGC information table work area 21. The processing then proceeds to step S261. In step S261, the user-defined PGC information generator 25 obtains the VOB_ID and the AV file ID in Original_PGC#j.CELL#k. In step S262, the user-defined PGC information generator 25 generates User_Defined_PGC#m.CELL#n using the obtained start point and end point, the VOB_ID, and the AV file ID. In step S263, the end point information is stored as the reproduction end point t, and in step S254, the variable n is incremented, before the processing returns to step S209. As a result of the above processing, new user-defined cell information is generated from the cell information in row j and column k. After this, another cell is set in the focus state and another set of user-defined cell information is generated from that cell, so that a set of user-defined PGC information is gradually defined one cell at a time. It should be noted here that if the reproduction based on the cell information in row j and column k in the loop process shown as steps S256 to S257 ends without a marking operation having been performed, the processing returns to step S254. When it is determined that the end key has been pressed, the "Yes" judgment is given in step S206 in Figure 81 and the processing proceeds to step S213. In step S213, a menu is displayed to have the user indicate whether a next user-defined PGC is to be defined.
When the user wishes to define a new user-defined PGC and gives an indication of this, the variable m is incremented, the variable n is initialized, and the processing proceeds to steps S209 and S203. The overall definition loop is sketched below.
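The whole definition loop of Figure 81 condenses into a few lines. The sketch below is a simplified, illustrative model: the key handling is reduced to a stream of key names, and the cell-generation branch of Figure 82 is abstracted into a single callback.

    # Sketch of the Figure 81 loop (assumed names; greatly simplified).
    def define_user_defined_pgc(key_events, make_cell):
        j, k = 1, 1           # focus position: row (PGC) j, column (cell) k
        cells = []            # the user-defined PGC information being built
        for key in key_events:
            if key == "enter":
                cells.append(make_cell(j, k))  # Figure 82: copy the cell or mark a sub-section
            elif key == "end":
                break
            else:              # cursor movement, as in step S207
                k += (key == "right") - (key == "left")
                j += (key == "down") - (key == "up")
        return cells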
(4-2-7-2) Specific Example of Defining User-Defined PGC Information

The following is a description of the operation when user-defined PGC information is defined from the plurality of original PGC information sets that are displayed in the interactive screen image of Figure 85. Figures 86A and 86B show the relationship between the user operations performed on the remote control 71 and the display processing that accompanies the various user operations. Figure 87A through Figure 90 also illustrate examples of these operations, and should be referred to during the following explanation. As shown in Figure 85, once the cell #1 that is in row 1 and column 1 has been set in the focus state, the user presses the enter key, as shown in Figure 86B. As a result, the "Yes" judgment is given in step S205 and the processing proceeds to the flowchart in Figure 82. In steps S251 to S266 of the flowchart in Figure 82, the first cell information CELL#1A in the user-defined PGC #1 is generated based on Original_PGC#1.CELL#1, as shown in Figure 86A. Once this generation is complete, the variable n is incremented in step S254, and the processing returns to step S203 via step S209 with the value of the variable n set to "2". In this example, the user presses the down key once, as shown in Figure 87B, and the right key twice, as shown in Figures 87C and 87D. In step S204, the marks corresponding to the keys that have been pressed are set to "1".
As a result of the pressing of the down key:

k = 1 (= 1 + 1*0 - 1*0)
j = 2 (= 1 + 1*1 - 1*0)

As a result of the first pressing of the right key:

k = 2 (= 1 + 1*1 - 1*0)
j = 2 (= 2 + 1*0 - 1*0)

As a result of the second pressing of the right key:

k = 3 (= 2 + 1*1 - 1*0)
j = 2 (= 2 + 1*0 - 1*0)

As shown in Figure 87A, the cell #7, located in row 2 and column 3, is thus set in the focus state. Once the cell in row 2 and column 3 has been set in the focus state, the user presses the enter key, as shown in Figure 88B, so that the "Yes" judgment is given in step S205 and the processing proceeds to the flowchart in Figure 82. The cell information CELL#7A, which is the second set of cell information in User_Defined_PGC#1, is then generated based on Original_PGC#2.CELL#7, located in row 2 and column 3 of the original PGC information table (see Figure 88A).
After the second set of cell information has been generated, the above processing is repeated. The user presses the enter key as shown in Figure 89B, so that the cell information CELL#11A and the cell information CELL#3A are respectively generated as the third and fourth sets of cell information in User_Defined_PGC#1. The processing returns to step S203 and, in the present example, the user then presses the end key. As a result, the End_Mark corresponding to the end key is set to "1", and the processing proceeds to step S213. Since the end key has been pressed, the multi-stage editing control unit 26 regards the definition of the user-defined PGC #1 information as complete. In step S213, the user is asked to indicate whether he wishes to define another set of user-defined PGC information (the user-defined PGC #2 information) following this user-defined PGC #1 information. If the user wishes to do so, the variable m is incremented, the variable n is initialized, and the processing proceeds to step S209. By repeating the above processing, the user-defined PGC #2 information and the user-defined PGC #3 information are defined. As shown in Figure 91, the user-defined PGC #2 information is composed of the cell #2B, the cell #4B, the cell #10B, and the cell #5B, and the user-defined PGC #3 information is composed of the cell #3C, the cell #6C, the cell #8C, and the cell #9C. Figure 91 shows the contents of the user-defined PGC information table, the original PGC information table, and the title search pointers at the end of the virtual editing process. If the user presses the end key at this point, the interactive screen shown in Figure 90 is displayed in step S215 in Figure 81, and the multi-stage editing control unit 26 waits for the user to select a set of user-defined PGC information using the up and down keys. Here, the user can select a preview by pressing the play key, and can select a real edit by pressing the real edit key, even though the user-defined PGC information table has not yet been recorded. If the user gives an indication for an operation that records a user-defined PGC, the user-defined PGC information table that includes the new user-defined PGC generated in the PGC information table work area 21 is transferred to the RTRW administration file work area 24, where it is written into the part of the RTRW administration file held in the RTRW administration file work area 24 that corresponds to the user-defined PGC information table. At the same time, file system commands are issued so that a newly generated title search pointer for the user-defined PGC information is added to the title search pointers that are already present in the RTRW administration file transferred to the RTRW administration file work area 24. Figure 83 is a flowchart showing the processing during a preview or a real edit. The following is a description of the processing when a preview of a VOB linking operation is performed, with reference to this flowchart in Figure 83. Figures 92A-92B and 93A-93C show the relationship between the operations performed using the remote control 71 and the display processing that accompanies these operations. In step S220 of the flowchart of Figure 83, the first number in the user-defined PGC information table is substituted into the variable j, and in step S221, a key operation is awaited.
In step S223, it is judged whether the Play_Mark, which shows whether the play key has been pressed, is "1", and in step S224, it is judged whether the Real_Edit_Mark, which shows whether the real edit key has been pressed, is "1". When both of these marks are "0", the processing proceeds to step S225 where the following calculation is performed using the values of the Up_Mark and the Down_Mark, which respectively show whether the up and down keys have been pressed. The result of this calculation is substituted into the variable j.

j <- j + 1*(Down_Mark) - 1*(Up_Mark)

When the user has pressed the up key, the Up_Mark will be set to "1", meaning that the variable j is decremented. Conversely, when the user has pressed the down key, the Down_Mark will be set to "1", meaning that the variable j is incremented.
Once the variable j has been updated in this manner, in step S226, the image in the display corresponding to the PGC information placed in row j is set in the focus state. In step S227, all the marks corresponding to the keys of the remote control 71 are cleared to zero and the processing returns to step S221 where another key operation is awaited. This processing in steps S221 to S227 is repeated, with the focus state moving to a different set of PGC information according to the user's operations of the up and down keys on the remote control 71. If the user presses the play key during the above repeated processing, with one of the sets of PGC information in the focus state, the Play_Mark is set to "1", the "Yes" judgment is given in step S223, and the processing proceeds to step S228. In step S228, the multi-stage editing control unit 26 instructs the title reproduction control unit 23 to reproduce the VOBs according to the PGC, out of the user-defined PGCs, that has been indicated by the user. When the PGC indicated by the user is a user-defined PGC, the cells included in the user-defined PGC will indicate sections out of the plurality of sections in one or more VOBs in a user-defined order. This reproduction will not satisfy the conditions necessary for seamless reproduction that are described in the first and second embodiments, so that the image display and transfer will stop at a cell boundary during reproduction before advancing to the next cell. Since the conditions necessary for the seamless reproduction of the cells are not satisfied, the display of the images or the audio output will be interrupted. However, the object of this operation is only to give the user a preview of the result of linking a plurality of scenes, so that this object is achieved in spite of the interruptions.

(4-2-7-3) Processing for Previews and Real Edits in Multi-Stage Editing

The operation for linking the VOBs in a real edit is described below. Figures 94A to 94C show the relationship between the user operations of the remote control 71 and the display processing that accompanies these key operations. The user presses the up key, as shown in Figure 94B, to set the desired user-defined PGC information in the focus state, and this is reflected in the display screen shown on the TV monitor 72 as shown in Figure 94A. If the user then presses the real edit key, as shown in Figure 94C, the "Yes" judgment is given in step S224 in Figure 83, and the processing of step S8 to step S16 in the flowchart of Figure 43 described in the third embodiment is performed. After this processing of the third embodiment is finished, the processing advances to step S237 in Figure 84. After the variable n is set to "1" in step S237, a search is performed in step S238 for the Original_PGC#j.CELL#k that was used when User_Defined_PGC#m.CELL#n was generated, and in step S239 it is judged whether this Original_PGC#j exists. If so, this Original_PGC#j is deleted in step S240; a search is then performed for any User_Defined_PGC#q that was generated from this Original_PGC#j. In step S242, it is judged whether at least one such User_Defined_PGC#q exists, and if so, all such User_Defined_PGC#q are deleted in step S243.
In step S244 it is judged whether the value of the variable n corresponds to the last number of the cell information and, if not, the processing proceeds to step S245 where the variable n is incremented so as to indicate the next set of cell information in the user-defined PGC information, before the processing returns to step S238. The loop process of step S237 to step S245 is repeated until the variable n reaches the last number of the cell information in that PGC information.
The sections indicated by the user-defined PGC #1 information cover all of the VOBs #1, #2, and #3, so that all of these are subjected to the real edit. The original PGC information sets that were used to generate the cell information included in the user-defined PGC #1 information indicate the VOBs that undergo the real edit, so that all of these original PGC information sets are deleted. The user-defined PGC information sets that were generated from these original PGC information sets also indicate the VOBs that are subjected to the real edit, so that all of these user-defined PGC information sets are also deleted. The "Yes" judgment is then given in step S244, so that the processing proceeds to step S246 and, out of the free PGC numbers obtained by deleting the original PGC information sets, the lowest number is obtained as the new PGC number. Then, in step S247, the cell information is updated using the AV file ID and the VOB_ID assigned to the AV file obtained by the linking, and in step S248 the PGC number of the user-defined PGC is updated to the new PGC number. Meanwhile, in the title search pointers, the type information is updated to the original type. Figure 95 shows an example of the PGC information table and the title search pointers after the deletion of the original PGC information sets and the user-defined PGC information sets that accompanies a real edit. Since the VOBs #1, #2, and #3 indicated by the sections in the user-defined PGC #1 information are subjected to the real edit, the original PGC #1 information, the original PGC #2 information, the original PGC #3 information, the user-defined PGC #2 information, and the user-defined PGC #3 information have all been deleted. Conversely, what was previously the user-defined PGC #1 information has been redefined as the original PGC #1 information. A sketch of this promotion is given below.
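The update of steps S246 to S248 amounts to relabeling the surviving user-defined PGC information. The sketch below is illustrative (assumed names) and assumes, for simplicity, that the real edit produced a single linked VOB in a single AV file.

    # Sketch: promote the user-defined PGC that drove the real edit to an
    # original PGC (steps S246 to S248, simplified).
    def promote_to_original(pgc_info, title_pointer, free_pgc_numbers,
                            new_av_file_id, new_vob_id):
        pgc_info["pgc_number"] = min(free_pgc_numbers)  # lowest freed PGC number
        for cell in pgc_info["cells"]:
            cell["av_file_id"] = new_av_file_id  # cells now point at the linked VOB
            cell["vob_id"] = new_vob_id
        title_pointer["title_type"] = "00"       # original PGC type
        return pgc_info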
Once the PGC information has been updated in the PGC information table work area 21 as described above, the new original PGC information is transferred to the RTRW administration file work area 24, where it is used to overwrite the corresponding part of the RTRW administration file currently stored in the RTRW administration file work area 24. At the same time, the title search pointer for this newly generated original PGC information is transferred to the RTRW administration file work area 24, where it is used to overwrite the title search pointers already present in the RTRW administration file. Once the PGC information tables and the title search pointers have been written, file system commands are issued so that the RTRW administration file stored in the RTRW administration file work area 24 is written into the RTRW directory.
With this embodiment, the sections that are to be used as materials for a real edit are indicated by user-defined cell information, and these can be freely arranged to provisionally decide the reproduction route. When the user wishes to adjust a reproduction route of the editing materials, this can be achieved without having to temporarily produce a VOB, so that the editing of the VOB materials can be performed in a short time using a simple method. This also means that there is no need to use part of the storage capacity of the DVD-RAM to hold a temporarily produced VOB. Since the provisional determination of a reproduction route can be achieved by defining only a set of user-defined PGC information, the user can produce many variations of the reproduction route in a short time. The user-defined cell information sets are indicated using the time information for the sections in the VOBs, so that the indicated VOBs can be kept in the state in which they were originally recorded. The user can generate a plurality of sets of user-defined PGC information for different reproduction routes and then view previews of these routes to find the most suitable of these reproduction routes. The user can then indicate a real edit for his preferred reproduction route, and in this way process the VOBs according to the information selected by the user. This means that the user can perform a decisive editing process that directly rewrites the VOBs that are already stored on an optical disc. While the original VOBs will effectively be erased from the disc, the user is able to verify the result beforehand by means of a preview before giving the real edit indication, so that this is not a particular problem for the present invention. Once a real edit has been performed, the title type in the title search pointer of the user-defined PGC information used for the real edit will be set to "original PGC type", so that it can be used as the basis for following video editing operations. As described above, a single video data editing apparatus using only an optical disc can perform advanced video editing, whereby a user can select from a plurality of freely chosen potential arrangements of the source material. As a result, by using the present video data editing apparatus, a large number of video enthusiasts will be able to perform advanced editing operations that were previously considered out of reach of conventional home video equipment. It should be noted that the time information can be taken from the mark points in the cell information and managed together with information such as the addresses taken from the time map table in the form of a table. By doing so, this information can be presented to the user as potential selections on a screen that shows the pre-editing state. Reduced images (known as "thumbnails") can also be generated for each mark point and stored as separate files, with pointer information also being provided for each thumbnail. When the cell information is displayed in the pre-editing stage, these thumbnails can be displayed to show the potential selections that can be made by the user. The processing of the components such as the title reproduction control unit 23 (see Figure 78) and the processing of the multi-stage editing control unit 26 (Figures 81 to 84) that were described in this fourth embodiment using the flowcharts can be achieved by a machine language program.
The machine language program mentioned above can be distributed and sold recorded on a recording medium. Examples of such a recording medium are an IC card, an optical disc, or a floppy disk. The machine language program recorded on the recording medium can then be installed on a standard personal computer. By executing the installed machine language program, the standard personal computer can achieve the functions of the video data editing apparatus of this fourth embodiment. As a final note regarding the relationship between the VOBs and the original PGC information, it is preferred that one original PGC information set be provided for each VOB. Although the present invention has been fully described by way of examples with reference to the accompanying drawings, it is to be noted that various changes and modifications will be apparent to those skilled in the art. Therefore, unless such changes and modifications depart from the scope of the present invention, they should be construed as being included therein.
INDUSTRIAL APPLICABILITY

The video data editing apparatus, the optical disc, and the computer-readable recording medium storing an editing program of the present invention enable the editing of video images stored on an optical disc to be performed easily and in a short time. This makes them highly suitable for home video equipment, and creates a new market for home video editing apparatuses.
It is noted that, as of this date, the best method known to the applicant for carrying out the present invention is the conventional one for the manufacture of the objects to which it refers.
Having described the invention as above, the contents of the following claims are claimed as property:

Claims (31)

1. A video data editing apparatus that performs editing to allow seamless reproduction of at least two video objects recorded on an optical disc, each video object including a plurality of video object units, and each video object unit including image data sets, the video data editing apparatus characterized in that it comprises: a reading means for reading at least one of a sequence of former video object units and a sequence of latter video object units of a video object recorded on the optical disc, the sequence of former video object units being composed of a predetermined number of video object units located at the end of a former video object that is to be reproduced first, and the sequence of latter video object units being composed of a predetermined number of video object units located at the start of a latter video object that is to be reproduced second; an encoding means for re-encoding the image data sets included in at least one of the sequence of former video object units and the sequence of latter video object units to allow the former video object and the latter video object to be reproduced seamlessly; and a writing means for rewriting at least one of the former video object and the latter video object on the optical disc after the encoding by the encoding means.
2. The video data editing apparatus according to claim 1, characterized in that the encoding means re-encodes at least one of the image data sets included in the sequence of former video object units and the image data sets included in the sequence of latter video object units using a target code amount, the target code amount being an amount for which no overflow will occur in a video buffer of a video decoder, even when the image data sets included in the sequence of former video object units are present in the video buffer at the same time as the image data sets included in the sequence of latter video object units.
3. The video data editing apparatus according to claim 2, wherein the plurality of image data sets are stored in a plurality of video packets, each video packet being assigned an input time stamp showing an input time at which it enters the video buffer, and certain video packets being assigned a decoding time stamp showing at what time one of the image data sets is to be taken from the video buffer, the video data editing apparatus characterized in that it further comprises: an analysis means for calculating an amount of data that will be stored in the video buffer for each of a plurality of video frames in an analyzed period between the input time of a final video packet in the sequence of former video object units and a decoding time of a set of image data in the sequence of former video object units that is to be decoded last, the analysis means calculating the amount of data by referring to the input time stamps and the decoding time stamps assigned to the video packets in the sequence of former video object units and the sequence of latter video object units and by totaling a data size of each video packet corresponding to the analyzed period, and the analysis means calculating the target code amount based on the calculated amount of data for each video frame and a buffer capacity of the video buffer, the encoding means re-encoding at least one of the image data sets included at the end of the sequence of former video object units and the image data sets included at the start of the sequence of latter video object units using the target code amount.
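By way of illustration only, the buffer analysis of claim 3 can be sketched as follows; the flat pack representation and all names here are assumptions of this sketch, not the actual pack format:

```python
from typing import List, Tuple

# Each pack is represented as (input_time, decode_time, size_bytes);
# this flat tuple form is an assumption made for the sketch.
Pack = Tuple[float, float, int]

def occupancy_per_frame(packs: List[Pack],
                        frame_times: List[float]) -> List[int]:
    # For each analyzed video frame time, total the bytes that have
    # entered the video buffer but have not yet been decoded.
    return [sum(size for t_in, t_dec, size in packs if t_in <= t < t_dec)
            for t in frame_times]

def target_code_amount(packs: List[Pack], frame_times: List[float],
                       buffer_capacity: int) -> int:
    # The headroom at the fullest analyzed frame bounds how much code
    # the re-encoded pictures at the seam may occupy without
    # overflowing the decoder's video buffer.
    return buffer_capacity - max(occupancy_per_frame(packs, frame_times))
```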
4. The video data editing apparatus according to claim 3, characterized in that it further comprises: an information generating means for generating seamless link information that includes a time stamp assigned to a final packet in the sequence of former video object units, a time stamp assigned to a first packet in the sequence of latter video object units, and a seamless flag showing whether reproduction is to be performed seamlessly for the sequence of former video object units and the sequence of latter video object units, an input time at which the first packet in the sequence of latter video object units is input into a buffer being found by adding a certain offset to the time stamp assigned to the first packet in the sequence of latter video object units; the writing means writing the generated seamless link information onto the optical disc.
5. The video data editing apparatus according to claim 4, characterized in that each image data set includes data to be decoded for one video frame, the information generating means additionally adding, to the seamless link information, a presentation end time at which the reproduction of the image data sets in the sequence of former video object units ends and a presentation start time at which the reproduction of the image data sets in the sequence of latter video object units starts, the certain offset being found by subtracting the presentation start time of the sequence of latter video object units from the presentation end time of the sequence of former video object units.
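By way of illustration only, the time-stamp arithmetic of claims 4 and 5 reduces to the following sketch; the function and variable names are assumptions of this sketch, not terms from the specification:

```python
def seamless_offset(former_presentation_end: float,
                    latter_presentation_start: float) -> float:
    # The offset re-bases the latter sequence's time stamps so that its
    # first picture is presented exactly when the former sequence ends.
    return former_presentation_end - latter_presentation_start

def first_pack_input_time(first_pack_timestamp: float,
                          offset: float) -> float:
    # Per claim 4: the input time of the latter sequence's first pack
    # is its own time stamp plus the offset.
    return first_pack_timestamp + offset

# Illustrative values on a seconds clock: a former sequence ending at
# 10.0 s joined to a latter one starting at 2.0 s gives an 8.0 s offset.
assert seamless_offset(10.0, 2.0) == 8.0
```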
6. The video data editing apparatus according to claim 2, wherein each video object unit includes a plurality of image data sets and a plurality of audio data sets, the video data editing apparatus characterized in that it further comprises: a separating means for separating image data sets and audio data sets from the sequence of former video object units and the sequence of latter video object units read by the reading means; and a multiplexing means for multiplexing the image data sets, being either the original image data or the re-encoded image data, separated from the sequence of former video object units with the audio data sets separated from the sequence of former video object units, and for multiplexing the image data sets, being either the original image data or the re-encoded image data, separated from the sequence of latter video object units with the audio data sets separated from the sequence of latter video object units, the writing means writing the data output by the multiplexing means onto the optical disc.
7. The video data editing apparatus according to claim 6, wherein the plurality of audio data sets in the sequence of former video object units and the sequence of latter video object units are reproduced for a plurality of audio frames, the video data editing apparatus characterized in that it further comprises: an analysis means for specifying a period between a first audio frame and a second audio frame, out of the plurality of audio frames in the sequence of former video object units, for extracting a first audio data sequence that is to be reproduced during the specified period from the sequence of former video object units, and for extracting a second audio data sequence that is to be reproduced starting from a third audio frame, out of the plurality of audio frames in the sequence of latter video object units; the first audio frame being the second audio frame after an audio frame corresponding to a time at which a first packet in the sequence of latter video object units is input, the second audio frame being located immediately before an audio frame in the sequence of former video object units corresponding to a presentation start time of the reproduction of a first set of image data in the sequence of latter video object units, and the third audio frame being located immediately after the audio frame in the sequence of latter video object units corresponding to a presentation end time of the reproduction of the second audio frame, the multiplexing means multiplexing the image data sets and the audio data sets so that the first audio data sequence is located at a position before the second audio data sequence.
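The three audio frame boundaries specified in claim 7 can be located with simple arithmetic once all times are expressed on one common, re-based clock; the sketch below assumes, purely for illustration, a uniform audio frame duration of 32 ms:

```python
AUDIO_FRAME_SEC = 0.032  # assumed uniform audio frame duration

def frame_index(t: float) -> int:
    # Index of the audio frame whose presentation period contains
    # time t; all times are assumed to be on one re-based clock.
    return int(t // AUDIO_FRAME_SEC)

def seam_audio_frames(first_pack_input_time: float,
                      latter_video_start_time: float):
    # First audio frame: the second frame after the frame in which the
    # first pack of the latter sequence is input.
    first = frame_index(first_pack_input_time) + 2
    # Second audio frame: immediately before the frame corresponding to
    # the presentation start of the latter sequence's first picture.
    second = frame_index(latter_video_start_time) - 1
    # Third audio frame: immediately after the frame corresponding to
    # the presentation end of the second audio frame; with uniform
    # frames on one clock this is simply second + 2.
    third = second + 2
    return first, second, third
```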
8. The video data editing apparatus according to claim 7, characterized in that it further comprises: a generating means for specifying the presentation end time of the second audio frame as a decoding processing stop time for an audio decoder, and for generating stop control information indicating the decoding processing stop time and indicating a period from the presentation end time of the second audio frame to the presentation start time of the third audio frame as a stop period for processing by the audio decoder, the writing means writing the generated stop control information onto the optical disc.
9. The video data editing apparatus according to claim 8, characterized in that a plurality of audio data sets to be reproduced for a plurality of audio frames from the first audio frame to the second audio frame are stored as a first audio packet group, wherein if a data size of the first audio packet group is not an integer multiple of 2 kilobytes (KB), one of padding data and a padding packet is used to make the data size of the first audio packet group an integer multiple of 2 KB, and wherein a plurality of audio data sets to be reproduced for a plurality of audio frames starting from the third audio frame are stored as a second audio packet group, the multiplexing means multiplexing the image data sets and the audio data sets so that the first audio packet group is located before the second audio packet group.
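Rounding the first audio packet group up to an integer multiple of 2 KB, as claim 9 requires, reduces to the following sketch; the zero-byte padding stands in for the padding data or padding packet of the claim:

```python
PACK_SIZE = 2048  # 2 KB, the DVD pack size

def pad_to_pack_boundary(audio_packet_group: bytes) -> bytes:
    # Round the audio packet group up to a whole number of 2 KB packs.
    # Bare zero bytes are used here purely as an illustration; the
    # real stream would carry proper padding packets instead.
    remainder = len(audio_packet_group) % PACK_SIZE
    if remainder == 0:
        return audio_packet_group
    return audio_packet_group + bytes(PACK_SIZE - remainder)
```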
10. The video data editing apparatus according to claim 9, characterized in that the analysis means generates location information showing which video object unit, out of the video object units in the sequence of latter video object units, includes a final packet in the first audio packet group, the writing means writing the generated location information onto the optical disc.
11. A video data editing apparatus that performs editing to allow seamless reproduction of a former section and a latter section, the former section and the latter section being located in at least one video object recorded on an optical disc, each video object including a plurality of video object units and each video object unit including image data sets, the video data editing apparatus characterized in that it comprises: a reading means for reading a sequence of former video object units and a sequence of latter video object units of a video object recorded on the optical disc, the sequence of former video object units being composed of video object units located at the end of the former section that is to be reproduced first, and the sequence of latter video object units being composed of video object units located at the start of the latter section that is to be reproduced second; an encoding means for re-encoding the image data sets included in at least one of the sequence of former video object units and the sequence of latter video object units to allow the former section and the latter section to be reproduced seamlessly; and a writing means for rewriting at least one of the former section and the latter section on the optical disc after the encoding by the encoding means.
12. The video data editing apparatus according to claim 11, characterized in that the encoding means re-encodes at least one of the image data sets included in the sequence of former video object units and the image data sets included in the sequence of latter video object units using a target code amount, the target code amount being an amount for which no overflow will occur in a video buffer of a video decoder, even when the image data sets included in the sequence of former video object units are present in the video buffer at the same time as the image data sets included in the sequence of latter video object units.
13. The video data editing apparatus according to claim 12, characterized in that when a picture type of a final set of image data in a display order of the former section is a Bidirectionally Predictive Picture (B picture), the encoding means performs the re-encoding so as to convert the final set of image data into a Predictive Picture (P picture) whose information components depend only on image data sets that are reproduced earlier than the final set of image data.
14. The video data editing apparatus according to claim 13, characterized in that it further comprises: an analysis means for analyzing, when the picture type of a final set of image data in the display order of the former section is a B picture, an increase in data size that accompanies the conversion of the B picture into a P picture by the encoding means, based on a data size of the image data sets to be reproduced after the final set of image data in the display order, the encoding means re-encoding the image data sets at the end of the sequence of former video object units using a target code amount that ensures that no underflow will occur in a video buffer even when the image data with the analyzed increase in data size is accumulated in the video buffer.
15. The video data editing apparatus according to claim 12, characterized in that when a picture type of a first set of image data in a coding order of the latter section is a P picture, the encoding means performs the re-encoding so as to convert the first set of image data into an Intra Picture (I picture) whose information components do not depend on other image data sets.
16. The video data editing apparatus according to claim 15, characterized in that it further comprises: an analysis means for analyzing, when the picture type of a first set of image data in the coding order of the latter section is a P picture, an increase in data size that accompanies the conversion of the P picture into an I picture by the encoding means, based on a data size of the image data sets to be reproduced before the first set of image data in the display order, the encoding means re-encoding the image data sets at the start of the sequence of latter video object units using a target code amount that ensures that no underflow will occur in a video buffer even when the image data with the analyzed increase in data size is accumulated in the video buffer.
17. The video data editing apparatus according to claim 12, characterized in that when a picture type of a first set of image data in a display order of the latter section is a B picture, the encoding means performs the re-encoding so as to convert the first set of image data into a forward predictive picture whose information components depend only on image data sets that are reproduced after the first set of image data.
18. The video data editing apparatus according to claim 17, characterized in that it further comprises: an analysis means for analyzing, when the picture type of a first set of image data in the display order of the latter section is a B picture, an increase in data size that accompanies the conversion of the B picture into a forward predictive picture by the encoding means, based on a data size of the image data sets to be reproduced after the first set of image data, the encoding means re-encoding the image data sets at the start of the sequence of latter video object units using a target code amount that ensures that no underflow will occur in a video buffer even when the image data with the analyzed increase in data size is accumulated in the video buffer.
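Taken together, claims 13 to 18 amount to a small set of picture-type conversion rules at the seam; the sketch below illustrates those rules only and is not an MPEG encoder:

```python
def convert_picture_type_at_seam(position: str, picture_type: str) -> str:
    """Summary of the conversions in claims 13 to 18.  Each conversion
    tends to enlarge the picture, so the target code amount must be
    re-checked afterwards (claims 14, 16 and 18)."""
    if position == "end_of_former" and picture_type == "B":
        # A final B picture may reference pictures removed by the cut,
        # so it becomes a P picture depending only on earlier pictures.
        return "P"
    if position == "start_of_latter" and picture_type == "P":
        # A leading P picture references pictures before the cut point,
        # so it becomes a self-contained I picture.
        return "I"
    if position == "start_of_latter" and picture_type == "B":
        # A leading B picture keeps only dependencies on pictures
        # reproduced after it (a forward predictive picture).
        return "B_forward_only"
    return picture_type
```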
19. The video data editing apparatus according to claim 18, characterized in that it further comprises: a separating means for separating image data sets and audio data sets from the sequence of former video object units and the sequence of latter video object units read by the reading means; and a multiplexing means for multiplexing the image data sets, being either the original image data or the re-encoded image data, separated from the sequence of former video object units with the audio data sets separated from the sequence of former video object units, and for multiplexing the image data sets, being either the original image data or the re-encoded image data, separated from the sequence of latter video object units with the audio data sets separated from the sequence of latter video object units, the writing means writing the data output by the multiplexing means onto the optical disc.
20. The video data editing apparatus according to claim 19, wherein the plurality of audio data sets in the sequence of former video object units and the sequence of latter video object units are reproduced for a plurality of audio frames, the video data editing apparatus characterized in that it further comprises: an analysis means for specifying a period between a first audio frame and a second audio frame, out of the plurality of audio frames in the sequence of former video object units, for extracting a first audio data sequence that is to be reproduced during the specified period from the sequence of former video object units, and for extracting a second audio data sequence that is to be reproduced starting from a third audio frame, out of the plurality of audio frames in the sequence of latter video object units; the first audio frame being the second audio frame after an audio frame corresponding to a time at which a first packet in the sequence of latter video object units is input, the second audio frame being located immediately before an audio frame in the sequence of former video object units corresponding to a presentation start time of the reproduction of a first set of image data in the sequence of latter video object units, and the third audio frame being located immediately after the audio frame in the sequence of latter video object units corresponding to a presentation end time of the reproduction of the second audio frame, the multiplexing means multiplexing the image data sets and the audio data sets so that the first audio data sequence is located at a position before the second audio data sequence.
21. The video data editing apparatus according to claim 20, characterized in that it further comprises: a generating means for specifying the presentation end time of the second audio frame as a decoding processing stop time for an audio decoder, and for generating stop control information indicating the decoding processing stop time and indicating a period from the presentation end time of the second audio frame to the presentation start time of the third audio frame as a stop period for processing by the audio decoder, the writing means writing the generated stop control information onto the optical disc.
22. The video data editing apparatus according to claim 21, characterized in that a plurality of audio data sets to be reproduced for a plurality of audio frames from the first audio frame to the second audio frame are stored as a first audio packet group, wherein if a data size of the first audio packet group is not an integer multiple of 2 kilobytes (KB), one of padding data and a padding packet is used to make the data size of the first audio packet group an integer multiple of 2 KB, and wherein a plurality of audio data sets to be reproduced for a plurality of audio frames starting from the third audio frame are stored as a second audio packet group, the multiplexing means multiplexing the image data sets and the audio data sets so that the first audio packet group is located before the second audio packet group.
23. The video data editing apparatus according to claim 22, characterized in that the analysis means generates location information showing which video object unit, out of the video object units in the sequence of latter video object units, includes a final packet in the first audio packet group, the writing means writing the generated location information onto the optical disc.
24. An optical disc, characterized in that it comprises: a data area that records a plurality of video objects each including a plurality of video object units, each video object unit including a plurality of image data sets and a plurality of audio data sets, the plurality of video objects having a display order, one of the video object units in a next video object, the next video object being a video object of the plurality of video objects that is to be reproduced after a preceding video object in the display order, including a first audio data sequence and a second audio data sequence, the first audio data sequence being a plurality of audio data sets to be reproduced for a specified period between a first audio frame and a second audio frame, out of a plurality of audio frames in the preceding video object, the second audio data sequence being a plurality of audio data sets to be reproduced from a third audio frame onward, out of a plurality of audio frames in the next video object, the first audio frame being the second audio frame after an audio frame corresponding to a time at which a first packet in the next video object is input, the second audio frame being located immediately before an audio frame in the preceding video object corresponding to a presentation start time of a first set of image data in the next video object, and the third audio frame being located immediately after the audio frame in the next video object corresponding to a presentation end time of the second audio frame; and an index area that stores a set of seamless link information for each video object in the data area, the seamless link information allowing seamless reproduction of a combination of two of the plurality of video objects recorded in the data area, each set of seamless link information including: audio gap start time information indicating the presentation end time of the second audio frame as a decoding processing stop time of an audio decoder; audio gap period information indicating a period between the presentation end time of the second audio frame and a presentation start time of the third audio frame as a decoding stop period of the audio decoder; and location information indicating which video object unit, out of the video object units in the next video object, includes the first audio data sequence.
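A schematic rendering of the per-video-object seamless link information enumerated in claim 24; the field names are assumptions of this sketch rather than terms of the recording format:

```python
from dataclasses import dataclass

@dataclass
class SeamlessLinkInformation:
    # Presentation end time of the second audio frame: the moment at
    # which the audio decoder must stop decoding (start of the gap).
    audio_gap_start_time: float
    # Period from the second frame's end to the third frame's start,
    # during which the audio decoder stays halted.
    audio_gap_period: float
    # Which video object unit of the next video object carries the
    # first audio data sequence moved across the seam.
    audio_location_vobu: int
```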
25. The optical disc according to claim 24, characterized in that the first audio data sequence is stored in a first audio packet group, the first audio packet group being located before a second audio packet group in the next video object, the second audio packet group being a sequence of audio data that is reproduced for a plurality of audio frames in the next video object, the location information indicating the video object unit that includes a final packet in the first audio packet group.
26. A video data editing apparatus for an optical disc, the optical disc characterized in that it comprises: a data area that records a plurality of video objects each including a plurality of video object units, each video object unit including a plurality of image data sets and a plurality of audio data sets, the plurality of video objects having a display order, one of the video object units in a next video object, the next video object being a video object of the plurality of video objects that is to be reproduced after a preceding video object in the display order, including a first audio data sequence and a second audio data sequence, the first audio data sequence being a plurality of audio data sets to be reproduced for a specified period between a first audio frame and a second audio frame, out of a plurality of audio frames in the preceding video object, the second audio data sequence being a plurality of audio data sets to be reproduced from a third audio frame onward, out of a plurality of audio frames in the next video object, the first audio frame being the second audio frame after an audio frame corresponding to a time at which a first packet in the next video object is input, the second audio frame being located immediately before an audio frame in the preceding video object corresponding to a presentation start time of a first set of image data in the next video object, and the third audio frame being located immediately after the audio frame in the next video object corresponding to a presentation end time of the second audio frame; and an index area that stores a set of seamless link information for each video object in the data area, the seamless link information allowing seamless reproduction of a combination of two of the plurality of video objects recorded in the data area, each set of seamless link information including: audio gap start time information indicating the presentation end time of the second audio frame as a decoding processing stop time of an audio decoder; audio gap period information indicating a period between the presentation end time of the second audio frame and a presentation start time of the third audio frame as a decoding stop period of the audio decoder; and location information indicating which video object unit, out of the video object units in the next video object, includes the first audio data sequence, the video data editing apparatus comprising: a receiving means for receiving an indication of a part to be erased, out of a plurality of video object units located in front of the next video object; a reading means for referring to the location information in the seamless link information and reading the video object unit, out of the plurality of video object units in the next video object, in which the first audio data sequence is located; and an erasing means for erasing the plurality of video object units corresponding to the part to be erased, together with the audio gap.
27. The video data editing apparatus according to claim 26, characterized in that it further comprises: an extraction means for extracting, from the first audio data sequence and the second audio data sequence, the audio data sequences that are to be re-fixed into the next video object, based on a presentation start time of an image data set to be reproduced first in the next video object from which the part has been erased; and an arrangement means for storing the audio data sequence extracted from the first audio data sequence in a first audio packet group and the audio data sequence extracted from the second audio data sequence in a second audio packet group, and for fixing the first audio packet group and the second audio packet group into the video object unit in the next video object.
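A high-level sketch of the erase-and-refit flow of claims 26 and 27; every method on the hypothetical disc object is assumed for illustration, not taken from the specification:

```python
def erase_front_part(disc, next_vob_id, link_info, erase_range,
                     first_audio_seq, second_audio_seq):
    # Locate, via the location information, the video object unit that
    # holds the first audio data sequence before changing anything.
    vobu = disc.read_vobu(next_vob_id, link_info.audio_location_vobu)

    # Erase the video object units covering the indicated part.
    disc.erase_vobus(erase_range)

    # From the presentation start time of the first picture that
    # survives the erasure, decide which audio must be re-fixed.
    start = disc.first_picture_start_time(next_vob_id)
    kept = [a for a in first_audio_seq + second_audio_seq
            if a.presentation_time >= start]

    # Re-fix the kept audio as new packet groups in the located VOBU,
    # then bring the seamless link information up to date (claim 28).
    disc.rewrite_vobu_audio(vobu, kept)
    disc.update_seamless_link_information(next_vob_id, kept)
```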
28. The video data editing apparatus according to claim 27, characterized in that it further comprises: an updating means for updating the audio gap start time information and the audio gap period information, based on the audio data sequences extracted by the extraction means, and for updating the location information, based on a location of the first audio data sequence used by the arrangement means.
29. A computer-readable recording medium storing an editing program that allows seamless reproduction of two video objects on an optical disc, characterized in that each video object includes a plurality of video object units, and each video object unit includes image data sets, the editing program comprising the following steps: a reading step for reading at least one of a sequence of former video object units and a sequence of latter video object units of a video object recorded on the optical disc, the sequence of former video object units being composed of a predetermined number of video object units located at the end of a former video object that is to be reproduced first, and the sequence of latter video object units being composed of a predetermined number of video object units located at the start of a latter video object that is to be reproduced second; an encoding step for re-encoding the image data sets included in at least one of the sequence of former video object units and the sequence of latter video object units to allow the former video object and the latter video object to be reproduced seamlessly; and a writing step for rewriting at least one of the former video object and the latter video object on the optical disc after the encoding in the encoding step.
30. A computer-readable recording medium storing an editing program that edits parts of video objects recorded on an optical disc to allow seamless reproduction of the parts, characterized in that each video object includes a plurality of video object units and each video object unit includes image data sets for a given reproduction period that includes a plurality of audio frames, each image data set being reproduced together with an audio frame, and each part being a section between one audio frame and another audio frame, the editing program comprising the following steps: a reading step for reading at least one of a sequence of former video object units and a sequence of latter video object units of a video object recorded on the optical disc, the sequence of former video object units being composed of a predetermined number of video object units located at the end of a former video object that is to be reproduced first, and the sequence of latter video object units being composed of a predetermined number of video object units located at the start of a latter video object that is to be reproduced second; an encoding step for re-encoding the image data sets included in at least one of the sequence of former video object units and the sequence of latter video object units to allow the former video object and the latter video object to be reproduced seamlessly; and a writing step for rewriting at least one of the former video object and the latter video object on the optical disc after the encoding in the encoding step.
31. A computer-readable recording medium storing an editing program that edits an optical disc, characterized in that the optical disc comprises: a data area that records a plurality of video objects each including a plurality of video object units, each video object unit including a plurality of image data sets and a plurality of audio data sets, the plurality of video objects having a display order, one of the video object units in a next video object, the next video object being a video object of the plurality of video objects that is to be reproduced after a preceding video object in the display order, including a first audio data sequence and a second audio data sequence, the first audio data sequence being a plurality of audio data sets to be reproduced for a specified period between a first audio frame and a second audio frame, out of a plurality of audio frames in the preceding video object, the second audio data sequence being a plurality of audio data sets to be reproduced from a third audio frame onward, out of a plurality of audio frames in the next video object, the first audio frame being the second audio frame after an audio frame corresponding to a time at which a first packet in the next video object is input, the second audio frame being located immediately before an audio frame in the preceding video object corresponding to a presentation start time of a first set of image data in the next video object, and the third audio frame being located immediately after the audio frame in the next video object corresponding to a presentation end time of the second audio frame; and an index area that stores a set of seamless link information for each video object in the data area, the seamless link information allowing seamless reproduction of a combination of two of the plurality of video objects recorded in the data area, each set of seamless link information including: audio gap start time information indicating the presentation end time of the second audio frame as a decoding processing stop time of an audio decoder; audio gap period information indicating a period between the presentation end time of the second audio frame and a presentation start time of the third audio frame as a decoding stop period of the audio decoder; and location information indicating which video object unit, out of the video object units in the next video object, includes the first audio data sequence, the editing program comprising the following steps: a receiving step for receiving an indication of a part to be erased, out of a plurality of video object units located in front of the next video object; a reading step for referring to the location information in the seamless link information and reading the video object unit, out of the plurality of video object units in the next video object, in which the first audio data sequence is located; and an erasing step for erasing the plurality of video object units corresponding to the part to be erased, together with the audio gap.
MXPA/A/1999/004448A 1997-09-17 1999-05-13 Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer-readable recording medium storing an editing program MXPA99004448A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP9/251995 1997-09-17

Publications (1)

Publication Number Publication Date
MXPA99004448A 2000-01-01
