CN101401130B - Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program - Google Patents


Info

Publication number
CN101401130B
CN101401130B (application CN200780008655.1A)
Authority
CN
China
Prior art keywords
video, model, scene, sequence, frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200780008655.1A
Other languages
Chinese (zh)
Other versions
CN101401130A (en)
Inventor
迪尔克·罗斯
托尔斯滕·布莱克
奥利弗·施奈德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nero AG
Original Assignee
Nero AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nero AG filed Critical Nero AG
Priority claimed from PCT/EP2007/000024 (WO2007104372A1)
Publication of CN101401130A
Application granted
Publication of CN101401130B


Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2562 DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs

Abstract

An apparatus for providing a sequence of video frames on the basis of a scene model defining a scene comprises a video frame generator adapted to provide a sequence of a plurality of video frames on the basis of the scene model. The video frame generator is adapted to identify within the scene model a scene model object having a predetermined object name or a predetermined object property, to obtain an identified scene model object. The video frame generator is further adapted to generate a sequence of video frames such that user-provided content is displayed on a surface of the identified scene model object or as a replacement for the identified scene model object. An apparatus for creating a menu structure of a video medium comprises an apparatus for providing a sequence of video frames. The apparatus for providing a sequence of video frames is adapted to generate the sequence of video frames being part of the menu structure of the video medium on the basis of a scene model, on the basis of additional information, and on the basis of a menu structure-related characteristic. This concept allows the user-friendly generation of video transitions and menu structures.

Description

Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure, and computer program
Technical field
The present invention generally relates to an apparatus and a method for providing a sequence of video frames, an apparatus and a method for providing a scene model, a scene model, an apparatus and a method for creating a menu structure, and a computer program. In particular, the present invention relates to a concept for automatically generating animated scenes for the creation of interactive menus and video scenes.
In the past several years, the performance of home entertainment devices has steadily improved. Meanwhile, consumers can even produce their own digital videos and save them to a storage medium. However, it has so far not been possible to easily create sophisticated transitions between video scenes, between menu pages, or between a menu page and a video scene without an in-depth understanding of a programming language.
In addition, since it is typically necessary to provide separate code for any algorithm used to produce a transition, software companies attempting to offer consumers a solution for creating sophisticated video transitions have had to expend very large effort on this task.
Summary of the invention
In view of the above, it is an object of the present invention to provide a concept for providing a sequence of video frames which allows a flexible generation of customized sequences of video frames. It is a further object to provide a user-friendly concept for creating a menu structure of a video medium.
This object is achieved by an apparatus according to claim 1, an apparatus according to claim 16, an apparatus according to claim 18, a method according to claim 23 or 24, an apparatus for creating a menu structure of a video medium according to claim 25, a method for creating a menu structure of a video medium according to claim 30, and a computer program according to claim 31.
According to claim 1, the present invention provides an apparatus for providing a sequence of video frames on the basis of a scene model defining a scene.
A key idea of the present invention is that a sequence of video frames can be produced efficiently and flexibly by displaying user-provided content on a surface of an identified scene model object of the scene model, or by displaying the user-provided content as a replacement for the identified scene model object of the scene model.
It has been found that, within a scene model, a scene model object or a surface of a scene model object can be identified by means of a predetermined object name, surface name, object property or surface property. Once the object or its surface has been identified, a video frame generator adapted to produce a sequence of video frames on the basis of the scene model comprising the identified object or surface can display the user-provided content (for example a user-provided image, a user-provided video frame or a user-provided video sequence) on the identified surface, or display it as a replacement for the identified object.
Thus, two-dimensional user-defined content can be introduced into a predefined scene model, wherein a surface or face of an object of the predefined scene model serves as a placeholder surface.
Alternatively, by replacing the identified placeholder in the scene model with a user-provided three-dimensional object, a three-dimensional user-provided object (or user-provided content) can be introduced into the sequence of video frames rendered on the basis of the scene model.
In other words, it has been found that surfaces and objects of a scene model can act as placeholders for user-provided content (for example in the form of an image, a video frame, a sequence of video frames or a three-dimensional object).
A placeholder object can be identified by means of a predetermined name or a predetermined object property. Thus, a video frame generator adapted to generate a sequence of a plurality of video frames on the basis of the scene model and the user-provided content can introduce the user-provided content into the scene model.
According to claim 16, the present invention further provides an apparatus for providing a scene model defining a three-dimensional video scene. The apparatus comprises an interface for receiving a description of the video scene, and a placeholder inserter. According to a key idea of the present invention, the placeholder inserter is adapted to insert a placeholder name or a placeholder attribute into the scene model, such that the placeholder name or placeholder attribute designates an object or a surface to be associated with user-provided content. In other words, the apparatus for providing a scene model creates a scene model for use by the inventive apparatus for providing a sequence of video frames. For this purpose, the apparatus for providing a scene model introduces a placeholder surface or a placeholder object into the scene model, wherein the placeholder surface or placeholder object can be identified by the apparatus for providing a sequence of video frames and can be used to display the user-provided content.
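As a concrete illustration, the placeholder inserter described above can be pictured as a small routine that tags an object or a surface of an existing scene description with a placeholder name. The following Python sketch is purely illustrative; the function name, the dictionary layout and the names "cube 1", "surface 1" and "video 1" are assumptions, not the patent's actual implementation.

```python
# Illustrative sketch of a placeholder inserter (all names are assumptions):
# it receives a scene description and marks a chosen surface with a
# placeholder name, so that a video frame generator can later identify the
# surface and associate user-provided content with it.

def insert_placeholder(scene, object_name, surface_name, placeholder_name):
    """Tag `surface_name` of the object `object_name` as a placeholder."""
    for obj in scene["objects"]:
        if obj["name"] == object_name:
            obj.setdefault("surfaces", {})[surface_name] = {"texture": placeholder_name}
            return scene
    raise KeyError(f"no object named {object_name!r} in the scene model")

scene_model = {"objects": [{"name": "cube 1"}], "viewpoint": (0.0, 0.0, -5.0)}
insert_placeholder(scene_model, "cube 1", "surface 1", "video 1")
print(scene_model["objects"][0]["surfaces"])  # {'surface 1': {'texture': 'video 1'}}
```

A generator that later walks this scene description only needs to look for the agreed placeholder name ("video 1" here) to know where user content belongs.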
According to claim 18, the present invention further provides a scene model having at least one placeholder object, or having at least one placeholder name or placeholder attribute associating a placeholder object or a placeholder surface with user-provided content. Thus, the inventive scene model is suited for use by the apparatus for providing a sequence of video frames.
The present invention further provides methods according to claims 23 and 24.
According to claim 25, the present invention provides an apparatus for creating a menu structure of a video medium.
The inventive concept for creating a menu structure of a video medium brings the advantage that, by combining menu-structure-related information with the scene model, the video structure is automatically adapted to the menu-structure-related information. Thus, the video frames produced by the apparatus for creating a menu structure are adjusted using the menu-structure-related information.
In other words, the scene described by the scene model is modified according to the menu-structure-related information. Thus, while still being based on the scene model, the sequence of video frames is adapted to the user's needs. In addition, the sequence of video frames is customized by introducing the user-provided content into it. Nevertheless, the overall scene is still described by the scene model, which serves as a template for a predefined scene.
The present invention further provides a method for creating a menu structure of a video medium according to claim 30, and a computer program according to claim 31.
Further advantageous embodiments of the present invention are defined by the dependent claims.
Description of drawings
Preferred embodiments of the present invention will subsequently be described with reference to the accompanying drawings, in which:
Fig. 1 shows a block diagram of an inventive apparatus for providing a sequence of video frames on the basis of a scene model defining a scene and on the basis of user-provided content;
Fig. 2 shows a graphical representation of a scene model representing a cube;
Fig. 3 shows a listing describing the scene model shown in Fig. 2;
Fig. 4 shows a graphical representation of a transition between a first sequence of video frames and a second sequence of video frames, defined by a time-varying scene model and two user-defined sequences of video frames;
Fig. 5 shows a flow chart of a method for rendering a frame on the basis of a scene model and user-provided content;
Fig. 6 shows a flow chart of a method for generating a scene-specific video frame using user-provided content and scene geometry;
Fig. 7 shows a graphical representation of the use of frames of a first sequence of video frames and of a second sequence of video frames in the production of the produced sequence of video frames;
Fig. 8 shows a graphical representation of the replacement of a placeholder object by a three-dimensional text object;
Fig. 9 shows a graphical representation of a sequence between two menu pages;
Fig. 10 shows a graphical representation of a schematic overview of the run of an intro movie;
Fig. 11 shows a graphical representation of a schematic overview of the animation of the intermediate sequence "chapter selection menu → movie start";
Fig. 12 shows a graphical representation of a sequence between a main menu and a submenu;
Fig. 13 shows a graphical representation of a smart 3D scene graph with 6 chapter buttons;
Fig. 14 shows a graphical representation of an example of a menu with 4 chapters;
Fig. 15 shows a graphical representation of an example of a menu with 8 main chapters, wherein the user can navigate to the next and the previous menu page;
Fig. 16 shows a graphical representation of an example of a menu with 8 main chapters, wherein the first main chapter has 4 further sub-chapters, and wherein the user can navigate back to the main menu by selecting an "up" button;
Fig. 17 shows a graphical representation of an example of a template for the main menu, presented in a smart 3D internal representation, on which the above examples are based;
Fig. 18 shows a flow chart of an inventive method for producing a sequence of video frames;
Fig. 19 shows a graphical representation of a user interface for selecting video titles;
Fig. 20 shows a graphical representation of a user interface for selecting a predefined smart 3D template;
Fig. 21 shows a graphical representation of a user interface for adapting a smart 3D template to the user's needs;
Fig. 22 shows a graphical representation of a user interface presenting a user-defined menu created by the smart 3D engine;
Fig. 23 shows a graphical representation of the highlight masks of a "monitor" menu, comprising 6 buttons and 3 navigation keys (arrows); and
Fig. 24 shows a graphical representation of the general workflow of the Nero smart 3D environment.
Detailed description of the embodiments
Fig. 1 shows a block diagram of an inventive apparatus for providing a sequence of video frames on the basis of a scene model defining a scene. The apparatus of Fig. 1 is designated by 100 in its entirety. The apparatus 100 comprises a video frame generator 110. The video frame generator 110 is adapted to receive a scene model 112 and user-provided content 114. Furthermore, the video frame generator is adapted to provide a sequence of video frames 116.
It should be noted that the scene model 112 received by the video frame generator comprises at least one scene model object having an object name or an object property. For example, the scene model may comprise a description of a plurality of objects arranged in a two-dimensional or, preferably, a three-dimensional space. At least one of the objects has at least an object name or an object property associated with it.
Furthermore, the user-provided content 114 may comprise, for example, an image, a video frame, a sequence of video frames, or a description of at least one two-dimensional or three-dimensional object.
The video frame generator 110 is adapted to generate the sequence 116 of a plurality of video frames on the basis of the scene model and the user-provided content. The frame generator 110 is adapted to identify, within the scene model 112, a scene model object having a predetermined object name or a predetermined object property, to obtain an identified scene model object. Identifying a scene model object having a predetermined object name or a predetermined object property may comprise identifying a particular surface of the identified scene model object.
Furthermore, the video frame generator 110 is adapted to produce the sequence of video frames such that the user-provided content 114 is displayed on a surface of the identified scene model object. Alternatively, the video frame generator 110 may be adapted to display the user-provided content 114 as a replacement for the identified scene model object.
It should be noted here that if the user-provided content 114 is an image, a video frame or a sequence of video frames, the user-provided content is preferably displayed on a surface of the identified scene model object. If, on the other hand, the user-provided content 114 is a description of a two-dimensional or three-dimensional replacement scene model object, the identified scene model object is preferably replaced by the user-provided content.
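The case distinction described above can be sketched as follows. This is a minimal, hedged illustration, not the patent's actual implementation; the content "kind" labels, the dictionary layout and the fixed surface name are assumptions made for the sake of the example.

```python
# Sketch of the rule above: 2-D content (image, video frame or video
# sequence) is displayed on a surface of the identified placeholder object,
# while a replacement-object description replaces the object itself.

def apply_user_content(scene, placeholder_name, content):
    objects = scene["objects"]
    for i, obj in enumerate(objects):
        if obj.get("name") == placeholder_name:
            if content["kind"] in ("image", "video_frame", "video_sequence"):
                # 2-D content: display it on a surface of the identified object
                obj.setdefault("surfaces", {})["surface 1"] = {"texture": content["data"]}
            else:
                # replacement object (2-D or 3-D): substitute it for the placeholder
                objects[i] = content["data"]
            return scene
    raise KeyError(f"no placeholder {placeholder_name!r} in the scene model")

scene = {"objects": [{"name": "placeholder 1"}]}
apply_user_content(scene, "placeholder 1",
                   {"kind": "object_3d", "data": {"name": "3d text", "mesh": "title"}})
print(scene["objects"][0]["name"])  # 3d text
```

With `kind="image"` the same routine would instead leave the placeholder object in place and set the user image as the texture of its surface.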
Thus, the video frame generator 110 provides a sequence of video frames 116 in which the user-provided content is shown in a form controlled by the scene model 112. The scene model 112 can therefore be regarded as a template describing the scene to be shown in the sequence of video frames 116, wherein the scene shown is supplemented by the user-provided content.
In the following, further details regarding the scene model 112, the user-provided content 114 and the generation of the sequence of video frames 116 will be described.
Fig. 2 shows a graphical representation of an exemplary scene model for use with the present invention. The scene model is designated by 200 in its entirety. The scene model 200 comprises a cube 210 and a viewpoint 212. The cube 210 and the viewpoint 212 are arranged in a three-dimensional space, wherein the position and orientation of the cube 210 and of the viewpoint 212 can be described with reference to a coordinate system 220. Only one of a plurality of possible coordinate systems (having directions x, y, z) is shown; however, any arbitrary coordinate system may be used.
It should be noted here that the cube 210, also designated "cube 1", comprises a total of 6 surfaces, three of which are shown here. For example, the cube 210 comprises a first surface 230, a second surface 232 and a third surface 234. Furthermore, it should be noted that a preferred point inside the cube and a preferred direction of the cube may be defined in order to describe the position and orientation of the cube. For example, the position and orientation of the cube may be described in terms of the position of the center (or center of gravity) of the cube 210 and a preferred direction of the cube. The preferred direction may, for example, be a direction perpendicular to the first surface 230, pointing outward from the first surface 230. Thus, the position of the cube 210 with respect to the origin 222 of the coordinate system 220 can be described by three scalar coordinates (for example coordinates x, y, z). In addition, two further coordinates (for example two angular coordinates φ, θ) may be used to define the preferred direction, or orientation, of the cube 210.
Moreover, the scene model 200 comprises the viewpoint 212, whose position can be described, for example, by three coordinates with respect to the origin 222 of the coordinate system 220. In addition, a viewing direction or a viewing sector may optionally be defined for the viewpoint 212. In other words, it may be defined in which direction an observer assumed to be located at the viewpoint 212 is looking, and/or which region of the scene model is visible to the observer. The viewing direction may, for example, be described by two coordinates specifying a direction. Furthermore, with respect to the viewpoint 212, a horizontal viewing angle and/or a vertical viewing angle may be defined, indicating which part of the scene model 200 an observer located at the viewpoint 212 can see.
In general, the scene model 200 comprises a definition of which part of the scene model 200 (for example, in terms of a viewing angle) is visible to an observer located at the viewpoint 212.
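The visibility definition above (viewpoint, viewing direction, viewing angle) can be made concrete with a small cone test: a scene point is visible when the angle between the viewing direction and the direction from the viewpoint to the point does not exceed the viewing half-angle. The following Python sketch is an illustration under that assumption, not part of the patent.

```python
import math

# Hedged sketch: decide whether a scene point lies inside the observer's
# viewing cone, given viewpoint, unit viewing direction and half-angle.
def visible(viewpoint, view_dir, half_angle_deg, point):
    vx, vy, vz = (p - q for p, q in zip(point, viewpoint))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    dx, dy, dz = view_dir  # assumed to be of unit length
    cos_angle = (vx * dx + vy * dy + vz * dz) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

print(visible((0, 0, 0), (0, 0, 1), 30, (0, 0, 5)))  # True: straight ahead
print(visible((0, 0, 0), (0, 0, 1), 30, (5, 0, 0)))  # False: off to the side
```

A real generator would use separate horizontal and vertical viewing angles, as the text above allows, but the principle is the same.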
In other words, the scene model 200 comprises at least one object (namely the cube 210), at least one property of the object (for example a name or an attribute) and, optionally, an observer-related characteristic defining which part of the scene model 200 is visible to an observer located at the viewpoint 212.
Fig. 3 shows an example listing of a scene model description for the scene model of Fig. 2. The listing of Fig. 3 is designated by 300 in its entirety.
It should be noted that the scene model listing may, for example, be defined in a structured description language (for example the XML description language, or a proprietary description language), and that the scene model listing may take any possible description format. It should also be noted that all features outlined in the following example are to be considered optional; each feature outlined in the following example may be replaced by another feature, or may be omitted entirely.
With reference to Fig. 3, the listing 300 indicates that the scene model 200 comprises the cube 210. In the listing 300, the identifier "cube 1" is used to designate the cube 210. The listing 300 comprises a number of characteristics of the cube 210. For example, the characteristics may comprise a name of the cube 210 (attribute "name") and a position of the cube 210 (attribute "position"), for example the position of the cube 210 in a Cartesian coordinate system (x, y, z). The listing 300 defining the scene model may further comprise parameters defining a rotation of the cube 210 (described, for example, in terms of two angular dimensions φ, θ).
Moreover, the description 300 of the scene model 200 may comprise further details about the surfaces of the cube 210. For example, the description of the first surface 230 (indicated by the attribute "surface 1") may comprise: information related to the texture of the first surface 230 (attribute "texture"), information related to the material of the first surface 230 (attribute "material") and/or additional information about the first surface 230 (attribute "properties").
In the example given, the scene model description 300 of the scene model 200 defines that the first surface 230 has a texture "video 1", the texture "video 1" indicating that a first user-provided video content is to be displayed on the first surface 230 of the cube 210.
Further attributes may be provided for the second surface (designated "surface 2" in the listing, or scene model description, 300). For example, the second surface 232 ("surface 2") is defined to have a texture named "video 2", the texture "video 2" indicating that a second user-provided video content is to be displayed on the second surface 232. Similar characteristics or attributes may be provided for the other surfaces of the cube 210.
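Since the listing 300 may be expressed in XML, a hypothetical XML rendering of the cube description can illustrate how the placeholder surfaces are found. The tag and attribute names below are guesses at such a schema, not the patent's actual format; the lookup simply treats texture names beginning with "video" as placeholders.

```python
import xml.etree.ElementTree as ET

# A guessed XML form of a listing like listing 300 (tag and attribute
# names are assumptions, not the patent's actual schema).
LISTING = """
<scene>
  <object name="cube 1" position="0 0 0" rotation="0 0">
    <surface name="surface 1" texture="video 1" material="default"/>
    <surface name="surface 2" texture="video 2" material="default"/>
  </object>
  <viewpoint position="0 0 -5" direction="0 0 1" angle="45"/>
</scene>
"""

root = ET.fromstring(LISTING)
# Collect the placeholder surfaces: textures named "video N" mark where
# user-provided content is to be displayed.
placeholders = [(s.get("name"), s.get("texture"))
                for s in root.iter("surface")
                if s.get("texture", "").startswith("video")]
print(placeholders)  # [('surface 1', 'video 1'), ('surface 2', 'video 2')]
```

Any other structured description language would do equally well, as the text above notes; only the convention linking texture names to user content matters.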
The scene model description of the listing 300 also comprises information related to the viewpoint 212. For example, the position of the viewpoint 212 may be given in Cartesian coordinates (x, y, z) (cf. attribute "position"). In addition, a viewing direction (i.e. the direction in which an observer located at the viewpoint 212 is looking) may be defined for the viewpoint in terms of respective parameters (attribute "viewing direction"). Furthermore, a viewing angle may optionally be defined for an observer located at the viewpoint 212 (attribute "viewing angle"). The viewing angle defines which part of the scene model is visible to an observer located at the viewpoint 212.
Furthermore, the scene model description of the listing 300 may optionally describe a motion of any object within the scene model. For example, it may describe how the cube 210 moves over time, wherein the description may be given in terms of a sequence of positions and/or positional parameters of the cube 210. Alternatively, the scene model description of the listing 300 may describe a direction of motion and/or a speed of motion of the cube 210. It should be noted here that the scene model description of the listing 300 may comprise a description of the evolution over time of both the position and the orientation of the cube 210.
Moreover, alternatively or additionally, the scene model description of the listing 300 may comprise a description of a change over time of the position of the viewpoint and/or of the observer's viewing direction and/or of the observer's viewing angle.
In other words, the scene model description may comprise a description of the scene model at a given time instance, as well as a description of a temporal evolution of the scene model.
In a preferred embodiment, the video frame generator 110 is adapted to evaluate a scene model description (for example the one provided by the listing 300), and to generate the sequence of video frames 116 on the basis of this scene model description. For example, the video frame generator 110 may evaluate the scene model description valid at a first time instance to obtain a first video frame. The video frame generator 110 may further evaluate the scene model description valid at a second time instance to obtain a second video frame for the second time instance. The scene model description for the second time instance may be provided as a separate, self-contained scene model description valid for the second time instance, or it may be determined from the scene model description for the first time instance together with a temporal-evolution or motion description (describing the change of the scene model between the first time instance and the second time instance).
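The per-time-instance evaluation described above can be sketched as a small loop: the scene model for time t is derived from the model at t = 0 plus a motion description, and one frame is rendered per instance. The following Python sketch is an assumed, minimal illustration; `render()` is a stand-in for an actual rasterizer, and all names are invented for the example.

```python
# Minimal sketch of per-time-instance evaluation: derive the scene model
# for time t from a base model plus a linear motion description, then
# render one frame per instance using one user-provided frame per instance.

def scene_at(base_scene, motion, t):
    """Apply a linear motion description to the base scene model."""
    x, y, z = base_scene["viewpoint"]
    vx, vy, vz = motion["viewpoint_velocity"]
    return {**base_scene, "viewpoint": (x + vx * t, y + vy * t, z + vz * t)}

def render(scene, user_frame):
    # Stand-in: a real generator would rasterize the scene, using
    # `user_frame` as the texture of the placeholder surface.
    return {"viewpoint": scene["viewpoint"], "texture": user_frame}

base = {"viewpoint": (0.0, 0.0, -1.0)}
motion = {"viewpoint_velocity": (0.0, 0.0, -0.5)}
user_video = ["frame0", "frame1", "frame2"]

sequence = [render(scene_at(base, motion, t), user_video[t]) for t in range(3)]
print(sequence[2]["viewpoint"])  # (0.0, 0.0, -2.0)
```

The alternative mentioned above, a separate self-contained description per time instance, would simply replace `scene_at()` with a lookup.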
Fig. 4 shows the figured example that the content 114 of using frame of video generator 110 to provide according to model of place 112 and user produces sequence of frames of video.Employing 400 is the diagram of index map 4 integrally.The left column 410 of diagram 400 shows the top view at the model of place of different time instance.Another row 420 show the frame of video of the sequence of frames of video 116 that is produced for the different time instance.First row 430 shows corresponding frame of video in the top view of very first time instance place model of place and sequence of frames of video 116.Show the top view of cube 432 with first surface 434 and second surface 436 to the model of place of very first time instance.Here it should be noted that cube 432 is equal to the cube 210 of Fig. 2.The first surface 434 of cube 432 is equal to the first surface 230 of cube 210, and the second surface 436 of cube 432 is equal to the second surface 232 of cube 210.The first surface 434 of cube 432 has the content associated attribute (for example, title, material designator, texture designator or characteristic) of indicating the first surface 432 and first user to provide.In the example of Fig. 4, suppose that the sequence of frames of video that image that the first surface 434 and first user provide, frame of video that first user provides or first user provide is associated.In addition, suppose that the sequence of frames of video that image that the second surface 136 (through attribute is carried out corresponding setting) and second user provide, frame of video that second user provides or second user provide is related.At very first time instance place, model of place also comprises the description to observation station 438 and viewing angle 439.The full-screen image of selecting viewing angle 439 to make that observer at observation station 438 places sees first surface 434.
The observer at observation point 438 views the scene with a viewing angle 439. From the scene model for the first time instance, the video frame generator 110 produces a video frame showing a view of the scene described by the scene model. The generated video frame 440 therefore shows the region of the scene model that is visible to an observer at observation point 438. As defined above, the scene model is defined such that an observer at observation point 438 perceives a full-screen image of the first surface 434, so that the frame 440 shows a full-screen image of the surface 434. As defined in the scene model, the first user-provided image, the first user-provided video frame, or the first user-provided video sequence is associated with the first surface 434, and the video frame 440 generated for the first time instance shows a full-screen image of the first user-provided image, of the first user-provided video frame, or of a frame of the first user-provided video sequence.
The second row 444 shows the scene model at a second time instance and the corresponding generated video frame. The scene model 446 at the second time instance is similar to the scene model 431 at the first time instance. Note, however, that the observation point 438 has moved away from the cube 432 between the first time instance and the second time instance. The new observation point 448 at the second time instance is therefore farther from the cube 432 than the previous observation point. For simplicity, the viewing angle 449 at the second time instance is assumed to be equal to the viewing angle 439 at the first time instance (although the viewing angle 449 could differ from the viewing angle 439). Consequently, an observer at observation point 448 at the second time instance sees a larger part of the scene than at the first time instance. In other words, at the second time instance the observer at observation point 448 sees not only the first surface 434 of the cube 432, but also part of the cube's surroundings (and possibly the cube's top face).
Accordingly, from the scene model 446 for the second time instance, the video frame generator 110 produces a second video frame 450, the second video frame 450 showing an image (for example, a three-dimensional view) of the cube 432. Since the first surface 434 of the cube is visible in the second frame 450, and since the first user-provided image, the first user-provided video frame, or the first user-provided video frame sequence (in the following, any of these three alternatives is referred to as the first user-provided content) is associated with the first surface 434, the first user-provided content is displayed on the first surface 434 of the cube 432 in the second video frame 450. To achieve this, the video frame generator 110 may, for example, use the first user-provided content as a texture for the first surface 434 of the cube 432 when producing the second generated video frame 450.
It should be noted here that the first user-provided content at the first time instance may differ from the first user-provided content at the second time instance. For example, the video frame generator 110 may use a first video frame (of a user-provided video frame sequence, for example) at the first time instance and a second video frame (of the user-provided video frame sequence, for example) at the second time instance.
Note also that at the second time instance the first user-provided content is no longer shown as a full-screen image in the second generated video frame, but rather as a texture filling the first surface 434 of the cube 432. The first user-provided content therefore fills only part of the second generated video frame 450.
The third row 454 shows a scene model 456 and the resulting third generated video frame 460. Note that in the scene model 456 at the third time instance shown in Fig. 4, the only difference from the scene model 446 at the second time instance is that the cube 432 has been rotated about a vertical axis (the vertical axis being perpendicular to the drawing plane).
The observer at observation point 448 can therefore see both the first surface 434 and the second surface 436 of the cube 432. The resulting third generated video frame 460 is also shown. Note that second user-provided content (for example, a second user-provided image, a second user-provided video frame, or a second user-provided video frame sequence) is associated with the second surface 436 of the cube 432. In the third generated video frame 460, the second user-provided content is therefore displayed on the second surface 436 of the cube 432. In other words, when the video frame generator 110 produces the video frame 460 from the scene model 456 and the second user-provided content, the second user-provided content is used as a texture for the second surface 436 of the cube 432. Likewise, when producing the third generated video frame 460, the video frame generator 110 uses the first user-provided content as a texture for the first surface 434 of the cube 432. Additionally, note that the third generated video frame 460 shows the first user-provided content and the second user-provided content simultaneously, the two user-provided contents being displayed on two different surfaces of the cube 432.
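The surface/content association just described can be sketched as follows. This is a minimal illustration, not the patent's implementation; names such as `SceneObject` and `render_visible_content` are assumptions introduced for clarity.

```python
# Minimal sketch: mapping user-provided content onto surfaces of a
# scene-model object, as in the cube 432 example above.

from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    # surface name -> user-provided content used as its texture
    surface_textures: dict = field(default_factory=dict)

def render_visible_content(obj, visible_surfaces):
    """Return the user-provided contents shown in a generated frame:
    only textures on surfaces visible from the observation point."""
    return {s: obj.surface_textures[s]
            for s in visible_surfaces if s in obj.surface_textures}

cube = SceneObject("cube_432", {
    "surface_434": "first_user_content",   # e.g. a user-provided video frame
    "surface_436": "second_user_content",
})

# Third generated frame 460: both surfaces 434 and 436 are visible,
# so both user contents appear simultaneously on different surfaces.
frame_460 = render_visible_content(cube, ["surface_434", "surface_436"])
```

In the first generated frame 440, only surface 434 would be passed as visible, so only the first user-provided content would appear.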
More generally, the invention provides a way to display first user-provided content and second user-provided content simultaneously on different surfaces, where the surfaces on which the first user-provided content and the second user-provided content are shown may belong to a single (typically three-dimensional) object or to different (typically three-dimensional) objects.
The fourth row 464 shows the scene model 466 at a fourth time instance and the corresponding generated video frame 470. As can be seen from the scene model 466, it differs from the scene model 456 only in that the cube 432 has been rotated further, so that the second surface 436 of the cube 432 faces the observation point 448. The video frame generator 110 produces the fourth generated video frame 470 from the scene model 466. The fourth generated video frame 470 is similar to the second generated video frame 450, with the second user-provided content shown as a texture on the second surface 436 of the cube 432, the second surface 436 now facing the observation point.
The fifth row 474 shows a scene model 476 and the fifth generated video frame 480. The fifth scene model 476 differs from the fourth scene model 466 in that the observation point 482 in the fifth scene model 476 is closer to the cube 432 than the observation point 448 in the fourth scene model 466. Preferably, the observation point 482 and the cube 432 are arranged in the scene model 476 such that an observer at observation point 482 sees (or perceives) the second surface 436 as a full-screen image. The fifth generated video frame therefore shows the second user-provided content as a full-screen image.
In summary, the sequence of the five generated video frames 440, 450, 460, 470, 480 shows a transition between the first user-provided content and the second user-provided content, where the first generated video frame 440 shows a full-screen image of the first user-provided content and the fifth generated video frame shows a full-screen image of the second user-provided content.
In an alternative embodiment, the scene models 431, 446, 456, 466, 476 may represent a different transition between two scenes. For example, the scene models 431, 446, 456, 466, 476 may describe a transition between a menu page showing a plurality of menu items and user-provided content. The first scene model 431 may, for example, describe a full-screen image of the menu page, and the last scene model 476 may describe a full-screen image of the user-provided content. The intermediate scene models 446, 456, 466 then describe intermediate steps of a preferably smooth transition between the first scene model 431 and the last scene model 476.
In another alternative, the scene models 431, 446, 456, 466, 476 may describe a transition between a menu page showing a first plurality of menu items and a menu page showing a second plurality of menu items. In this case, the first scene model describes a full-screen image of the first menu page, and the last scene model 476 describes a full-screen image of the second menu page. The intermediate scene models 446, 456, 466 describe intermediate steps of the transition between the first scene model 431 and the last scene model 476.
In a further alternative, the scene models 431, 446, 456, 466, 476 may describe a transition between user-provided content and a menu page. In this case, the first scene model 431 may preferably describe an image of the user-provided content, and the last scene model 476 may describe an image of the menu page. The menu is an image of the 3D scene at the first time instance (for example, at time t=0 for a normalized time parameter) or at the second time instance (for example, at time t=1 for a normalized time parameter). The intermediate scene models 446, 456, 466 describe intermediate steps of the (preferably smooth) transition between the first scene model 431 and the last scene model 476.
Another possible application is that the first row 430 represents a presentation of the user-provided content, the user-provided content being shown in the video frame 440. The third row 454 then shows the presentation of a menu with three buttons (more generally, the number of buttons may differ). As shown in the third row 454, the three visible surfaces of the cube (as shown in the video frame 460) may serve as buttons in the scene.
Fig. 5 shows a block diagram of a method for rendering video frames, suitable for use in the video frame generator 110. The method of Fig. 5 is designated 500 in its entirety. Note that the method 500 of Fig. 5 can be executed repeatedly for a plurality of frames in order to produce a video frame sequence.
The method 500 comprises, in a first step 510, obtaining user content for a video frame, the video frame having an index f used in the following description.
The method 500 further comprises, in a second step 520, obtaining a scene geometry for the video frame f.
The method 500 further comprises, in a third step 530, producing the video frame f using the user-provided content (for video frame f) and the scene geometry (for video frame f).
The method 500 further comprises, in a fourth step 540, providing the rendered video frame f.
If it is found in a decision step 550 that more frames remain to be rendered, the steps 510, 520, 530, 540 are repeated.
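The loop of steps 510 through 550 can be sketched as follows. The helper callables are assumptions; only the control flow follows the method 500 as described above.

```python
# Hedged sketch of the per-frame loop of method 500 (steps 510-550).

def render_sequence(num_frames, get_user_content, get_scene_geometry, render):
    frames = []
    for f in range(num_frames):            # decision step 550: more frames?
        content = get_user_content(f)      # step 510: obtain user content
        geometry = get_scene_geometry(f)   # step 520: obtain scene geometry
        frame = render(content, geometry)  # step 530: produce frame f
        frames.append(frame)               # step 540: provide rendered frame
    return frames

# Toy usage: "rendering" just pairs content with geometry per frame.
frames = render_sequence(
    3,
    get_user_content=lambda f: f"user_frame_{f}",
    get_scene_geometry=lambda f: f"geometry_{f}",
    render=lambda c, g: (c, g),
)
```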
The first step 510 of obtaining the user content for frame f comprises determining which user content is to be used for the video frame f. For example, if it is found that all frames of the video frame sequence to be rendered use the same (static) user-provided content, the user-provided content obtained for a previously processed video frame can be reused. If, however, different user-provided contents are to be used for producing different frames of the produced (or rendered) video sequence, the associated user-provided content is obtained.
For example, if the user-provided content is a video frame sequence, different frames of the user-provided video frame sequence may be associated with different frames of the produced (or rendered) video frame sequence. In that case, step 510 identifies which frame of the user-provided video frame sequence is used to produce the video frame currently being rendered.
It should be noted here that one or more user-provided video frames may be used for the production of a single produced (or rendered) video frame. For example, a single produced (or rendered) video frame may contain both a corresponding video frame of a first user-provided video frame sequence and a corresponding video frame of a second user-provided video frame sequence. An example of such a use of video frames is shown with reference to Fig. 7.
In the second step 520, the scene geometry for the currently processed frame f is obtained. The scene geometry may, for example, be provided in the form of a descriptive language describing the properties of the geometric objects present in each frame. For example, the scene geometry for frame f may be described in a descriptive language similar to the listing 300 of Fig. 3. In other words, the scene description may comprise a list of the geometric shapes or elements to be shown in each frame, along with a plurality of properties or attributes associated with the geometric objects or shapes. Such properties may include, for example: the position and/or orientation of an object, the size of the object, the name of the object, the material of the object, the texture associated with the object or with individual faces of the object, the transparency of the object, and so on. Note that any attribute known from the description of virtual-reality worlds may be used for the geometric objects or shapes.
In addition, the scene geometry may comprise information about the observer or observation point, defining the point from which the image of the scene described by the scene geometry is observed. The description of the observation point and/or observer may comprise: the position of the observation point, the viewing direction, and the viewing angle.
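A scene description of the kind outlined above might be represented as follows. The attribute names and structure are illustrative assumptions, not the patent's actual format.

```python
# Sketch of a scene description: a list of geometric objects with
# attributes, plus observer/observation-point information.

scene_description = {
    "objects": [
        {
            "name": "NSG_Mov",          # predetermined name: show video here
            "shape": "cube",
            "position": (0.0, 0.0, 5.0),
            "size": 2.0,
            "material": "matte",
            "transparency": 0.0,
        },
    ],
    "observer": {
        "position": (0.0, 0.0, 0.0),
        "direction": (0.0, 0.0, 1.0),   # viewing direction
        "viewing_angle": 60.0,          # degrees
    },
}

# A simple consistency check a renderer might perform before rendering.
def has_observer(desc):
    return {"position", "direction", "viewing_angle"} <= set(desc["observer"])
```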
Note that the scene geometry for frame f may be obtained directly from a scene model available for frame f. Alternatively, the scene geometry for frame f may be obtained using a scene model for a frame e (shown before frame f) together with information about the movement of objects between the times of frame e and frame f. Information about movement of the observation point, the viewing direction, or the viewing angle may also be evaluated in order to obtain the scene geometry for frame f. The scene geometry for frame f is thus a description of the geometric objects and/or shapes to be shown in frame f.
In the third step 530, the video frame f is produced using the user-provided content and the scene geometry obtained in the second step 520. Details of producing the video frame f are described below with reference to Fig. 6. In the third step 530, the video frame is rendered from the user content obtained for video frame f and the scene geometry for video frame f.
In the fourth step 540, the rendered frame f is then provided for further processing (for example, in order to form a frame sequence, or for further encoding of the raw material of a frame or frame sequence).
Fig. 6 shows a block diagram describing the production of the video frame f using the user-provided content and the scene description. The method of Fig. 6 is designated 600 in its entirety.
The production of the video frame f comprises a first step 610 of identifying, in the scene model for video frame f, an object having a predetermined name or predetermined object attributes. If such an object can be identified in the first step 610, the identified object is replaced in a second step 620 by an object provided by the user. In a third step 630, an object having a surface with a predetermined surface property is identified in the scene model. The predetermined surface property may be, for example, a surface-texture attribute, a surface-material attribute, or a surface-name attribute. Note also that if an object having a predetermined name appears in the scene model, it may automatically be assumed that at least one particular surface of the object has the predetermined surface property. For example, it may be defined that if the scene model comprises a cube having a predetermined name (for example video_object or NSG_Mov, where Mov stands for movie), each surface of the cube has a predetermined surface property suitable for showing a video on it.
In other words, the key purpose of the third step 630 is to identify at least one surface suitable for displaying the user-provided content on it, or to identify at least one object having an attribute indicating that the user-provided content is intended to be displayed on a surface of said object.
If a surface intended for displaying the user-provided content has been identified, the user-provided content is displayed on each such surface. To achieve this, the video frame generator may use the user-provided content as the texture of the surface that has been recognized as intended for displaying the user-provided content.
For example, the video frame generator may parse the scene description or scene model for frame f in order to identify at least one surface intended to display the user-provided content. The video frame generator may, for example, insert a reference (e.g., a link) into the scene model, the reference indicating that the user-provided content is to be used as the texture of a particular surface. In other words, the video frame generator may parse the scene model or scene description for characteristic names or characteristic attributes in order to identify an object or surface, and may set the texture property of the identified object or surface to designate the user-provided content as the texture to be used.
For the parsing, the video frame generator may, for example, follow predetermined parsing rules, defining for instance that a surface having a predetermined surface name or surface property should be filled with a texture derived from the user-provided content.
Alternatively, the parsing rules may indicate that the i-th predetermined surface of an object having a predetermined name should be textured according to the user-provided content.
If a surface intended to be textured according to the user-provided content has been identified in the scene model or scene description, the video frame generator 110 subsequently displays the user-provided content on the identified surface. For this purpose, a graphical representation of the scene described by the scene model or scene description is generated. Taking into account the positions of the objects relative to each other and relative to the observation point, the objects described in the scene model or scene description by their attributes (such as position, size, orientation, color, material, texture, transparency) are converted into a graphical representation of the objects. In other words, the arrangement of objects described by the scene model or scene description is converted into a graphical representation as seen from the observation point. The generation of the graphical representation takes into account the replacement of objects in the second step 620, as well as the fact that the user-provided content is intended as the texture of the identified surfaces.
It should be noted that the generation of a graphical representation of a scene described by a scene model or scene description is known to artists/designers.
Note also that it is not necessary to perform all of the steps 610, 620, 630, 640. On the contrary, in one embodiment, performing step 610 and step 620 is sufficient (if step 610 succeeds). In this case, the video frame generator 110 produces a video frame showing the scene described by the scene model, with the identified object replaced by the user-provided object according to the second step 620. Finally, step 640 is performed to produce the graphical representation.
Alternatively, for example, if no object needs to be replaced, the first step 610 and the second step 620 need not be performed. In this case, it is sufficient to perform the step 630 of identifying, in the scene model, a surface on which the user-provided content is displayed (e.g., as a texture). After step 630, the fourth step 640 is performed. In step 640, the video frame generator 110 produces a video frame in which the user-provided content is displayed on the identified surface.
In other words, it is possible to: perform only the replacement of an identified object by a user-provided object (steps 610 and 620); perform only the replacement of a surface texture by user-defined content (step 630); or perform both the replacement of an identified object by a user-provided object (steps 610 and 620) and the replacement of a surface texture by user-provided content (step 630).
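The identification and replacement logic of steps 610 through 630 can be sketched as follows. The placeholder prefix and the `"user_texture"` marker are assumed parsing rules introduced for illustration, not the patent's actual conventions.

```python
# Sketch of steps 610-630: replace placeholder objects by user objects,
# and replace marked surface textures by user-provided content.

PLACEHOLDER_PREFIX = "NSG_"   # assumed predetermined name convention

def prepare_scene(scene, user_object, user_texture):
    prepared = []
    for obj in scene:
        if obj["name"].startswith(PLACEHOLDER_PREFIX):   # step 610: identify
            prepared.append(dict(user_object))           # step 620: replace
            continue
        new_obj = dict(obj)
        # step 630: surfaces marked for user-provided textures
        new_obj["surfaces"] = {
            s: (user_texture if t == "user_texture" else t)
            for s, t in obj.get("surfaces", {}).items()
        }
        prepared.append(new_obj)
    return prepared

scene = [
    {"name": "NSG_Button1", "surfaces": {}},
    {"name": "plane", "surfaces": {"front": "user_texture", "back": "wood"}},
]
out = prepare_scene(scene,
                    {"name": "user_movie_object", "surfaces": {}},
                    "frame_42")
```

As described above, either replacement may also be performed on its own: an empty scene passes through unchanged, and a scene without placeholders undergoes only the texture substitution.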
Fig. 7 shows a graphical representation of the video frames of a frame sequence produced from two user-provided frame sequences, forming a transition between a first user-provided video frame sequence and a second user-provided video frame sequence. It is assumed here that the transition comprises a time interval during which content from both the first user-provided video frame sequence and the second user-provided video frame sequence is shown in the produced video frame sequence 16.
For this purpose, the user may define an overlap region. The overlap region may, for example, comprise F frames (corresponding to a particular duration). The last F frames of the first user-provided video frame sequence are then used in the transition. A first graphical representation 710 of Fig. 7 shows the frames of the first user-provided video frame sequence, where the last F frames of the first user's video frame sequence have indices (n-F+1) to n. It is assumed here that the last F frames of the first user-provided video frame sequence are used for the transition. However, the last F frames need not necessarily be used; instead, any F frames located within the first user-provided video frame sequence may be used.
Furthermore, it is assumed that the first F frames of the second user-provided video frame sequence are used for the production of the produced video frame sequence.
It is also assumed that the produced video frame sequence comprises F video frames with indices 1 to F. Thus, the frame with index n-F+1 of the first user-provided video frame sequence and the frame with index 1 of the second user-provided video frame sequence are associated with the first frame of the produced video frame sequence. The associated video frames are therefore used to produce the first frame of the produced video frame sequence. In other words, to compute the first frame of the produced video frame sequence, the (n-F+1)-th frame of the first user-provided video frame sequence and the first frame of the second user-provided video frame sequence are used.
Correspondingly, the n-th frame of the first user-provided video frame sequence and the F-th frame of the second user-provided video frame sequence are associated with the F-th frame of the produced video sequence.
Note that the association between frames of the user-provided video frame sequences and frames of the produced video frame sequence does not automatically imply that the associated frames are needed in order to compute a particular frame of the produced sequence. However, if during the rendering of the f-th frame of the produced video frame sequence it is found that a frame of the first user-provided video frame sequence and/or of the second user-provided video frame sequence is needed, the associated frames are used.
In other words, the association described above between the first user-provided video frame sequence, the second user-provided video frame sequence, and the produced video frame sequence allows the produced video frame sequence to be computed efficiently, with variable (or moving) user-provided content embedded in it.
In other words, the frames of the first user-provided video frame sequence constitute a frame-variant texture for surfaces that are intended (or identified) to show the first user-provided video frame sequence.
The frames of the second user-provided video frame sequence constitute a frame-variant texture for surfaces that are intended (or identified) to show the second user-provided video frame sequence.
The produced video sequence is thus provided using frame-variant textures.
Note also that, for the computation of the produced video frame sequence, the first user-provided video frame sequence and/or the second user-provided video frame sequence may be shifted with respect to the produced video frame sequence. In addition, the first user-provided video frame sequence may be stretched or compressed in time; the same applies to the second user-provided video frame sequence. All that is required is that one frame of the first user-provided video frame sequence and one frame of the second user-provided video frame sequence be associated with each frame of the produced video frame sequence (in which those user-provided contents are used).
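The frame association of Fig. 7 can be written down directly. The indexing follows the text above (produced frames 1..F, first user sequence of length n); the function name is an illustrative assumption.

```python
# Sketch of the Fig. 7 overlap association: for produced frame f
# (1 <= f <= F), the (n-F+f)-th frame of the first user sequence and
# the f-th frame of the second user sequence are associated.

def associated_frames(f, n, F):
    """Return (index into first user sequence, index into second user
    sequence) for produced frame f of the F-frame overlap region."""
    assert 1 <= f <= F, "f must lie within the overlap region"
    return n - F + f, f
```

For example, with n = 100 and F = 10, the first produced frame uses frames 91 and 1, and the last produced frame uses frames 100 and 10, matching the (n-F+1)/1 and n/F pairs described above.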
Fig. 8 shows an illustration of the replacement of a text placeholder object by a text.
The graphical representation of Fig. 8 is designated 800 in its entirety. As can be seen from the graphical representation 800, a scene description 810 (shown here in the form of a video frame) may comprise a text placeholder object. For example, the scene description 810 may describe a cube or cuboid having a name or attribute indicating that the cube or cuboid is a text placeholder object. Thus, if the video frame sequence generator 110 recognizes that the scene model 112 comprises a scene model object having a predetermined name or predetermined object attributes indicating that the scene model object is a text placeholder object, the video frame generator replaces the text placeholder object by a representation of a text. For example, the video frame generator 110 may replace the text placeholder object by one or more objects representing user-provided text. In other words, the video frame generator may introduce into the scene model a description of objects representing the user-provided text. The scene model generator may, for example, be adapted to receive text in the form of a string input, and to produce objects representing the text of the string input. Alternatively, the video frame generator may receive a description of the user-provided text in the form of one or more objects whose shapes represent the text. In this case, the video frame generator may, for example, be adapted to include the user-provided description of the text (in the form of a description of a plurality of objects) in the scene model, and to produce a video frame from the scene model comprising the description of the objects representing the text.
As can be seen from Fig. 8, the video frame generator 110 produces a video frame 820 comprising a representation of the user-provided text. Note that in a preferred embodiment, the size of the representation of the user-provided content is adapted to the size of the text placeholder object 812. The text placeholder object may, for example, serve as an outer boundary for the user-provided text. In addition, attributes associated with the text placeholder object 812 (for example, a color attribute or a transparency attribute) may be applied to the user-provided text, independently of whether the user-provided text is provided as a string or as a plurality of objects.
The scene model 112 thus serves as a template defining the appearance of the user-provided text in the video frame sequence 16.
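The size adaptation of the text to the placeholder bounds can be sketched as follows. The sizing model (one character occupying one unit of width at scale 1.0) is an assumption for illustration; the patent only specifies that the drawn text conforms to the placeholder's size.

```python
# Sketch: fit user-provided text inside the bounds of a text
# placeholder object (cf. object 812) by uniform scaling.

def fit_text_scale(text, placeholder_w, placeholder_h, char_h=1.0):
    """Largest uniform scale keeping the text inside the placeholder,
    assuming each character is 1.0 unit wide at scale 1.0."""
    text_w = len(text) * 1.0          # nominal width at scale 1.0
    return min(placeholder_w / text_w, placeholder_h / char_h)
```

A long string in a narrow placeholder is shrunk by width; a short string in a flat placeholder is limited by height, so the placeholder always acts as the outer boundary described above.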
In the following, the invention will be described further. In particular, the creation of a menu structure for a video data carrier using the invention will be described, as well as how transitions between different video contents can be established in accordance with the inventive concept, and how video effects and text effects can be produced.
First, some general information about DVD menus, video transitions, video effects, and text effects will be given, beginning with video transitions, video effects, and text effects.
Although a key application of the invention is the creation of three-dimensional (3D) DVD menus, three-dimensional video transitions, three-dimensional video effects, and three-dimensional text effects will be described first. Three-dimensional video transitions, three-dimensional video effects, and three-dimensional text effects can be regarded as simpler variants of the more complex DVD authoring.
Typically, when combining or linking two video sequences (or video films), a video transition is inserted to avoid an abrupt change (abrupt transition). A very simple two-dimensional (2D) video transition would, for example, fade the first video to black and then, conversely, fade in the second video. In general, a video transition is a video frame sequence (or film sequence) that begins with a frame identical to the first video and ends with a frame identical to the second video. This (video frame) sequence is then cut (or inserted) between the two videos, allowing a continuous (or smooth) transition between them.
For a three-dimensional video transition, the video frame sequence (or film sequence) is the product of rendering the 3D video transition. Moreover, in the case of a 3D video transition, the first frame of the sequence is preferably identical to a frame of the first video, and the last frame of the sequence is preferably identical to a frame of the second video. In addition to the 3D scene and its animation, the rendering engine receives synchronized frames of the first video and the second video as input. The process (of producing the transition) can be imagined by assuming that the two videos are laid over each other in an overlapping manner, that the length of the overlap region defines the length of the video transition, and that the overlapped region is replaced by the rendered scene. A simple example of a 3D video transition is a plane on whose front the first video is visible and on whose back the second video is visible. The plane then needs to move in such a way that at the beginning of the animation (or transition) the front is visible full-screen, and at the end of the animation the back is visible full-screen. For example, the plane may move away from the camera (or observer or observation point), perform a half rotation about its horizontal axis of symmetry, and move toward the camera again.
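The plane transition just described can be parameterized over a normalized time t in [0, 1]. The half rotation follows the text; the exact camera-distance curve is an assumption chosen only to recede and return smoothly.

```python
# Sketch of the plane transition: half rotation about the horizontal
# axis of symmetry while receding from and re-approaching the camera.

import math

def plane_pose(t):
    """Return (rotation in degrees, distance from camera) at normalized
    time t in [0, 1]."""
    rotation = 180.0 * t                      # half rotation over the transition
    distance = 1.0 + math.sin(math.pi * t)    # recede, then return (assumed curve)
    return rotation, distance

def visible_side(t):
    """Front of the plane (first video) faces the camera while the
    rotation is below 90 degrees; afterwards the back (second video)."""
    return "front" if plane_pose(t)[0] < 90.0 else "back"
```

At t = 0 the front is full-screen (first video), and at t = 1 the back is full-screen (second video), matching the required start and end frames of the transition.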
Three-dimensional video effects and three-dimensional text effects are generally three-dimensional objects added to a video film (or video frame sequence). In this case, the 3D scene and its animation, together with the frames of the original video (or initial video), are the input to the renderer.
For a text effect, a text string must be determined (or set). An example of a three-dimensional text effect can be imagined as a sequence (for example a video frame sequence) in which a string is built up, rendered character by character as three-dimensional text characters, and then disappears again, while the original video (or initial video) continues to run in the background.
A 3D video effect may, for example, be a three-dimensional object that bursts into the frame and then suddenly disappears again (for example, a pacifier in a children's film, or a football in a football World Cup film).
The cases of 3D video transitions, 3D video effects, and 3D text effects can, for example, be combined. The rendering engine receives as input the 3D scene, synchronized frames from one or more videos, and (optionally) one or more text strings. The rendering engine then produces a short film, frame by frame, which is subsequently processed further by an external unit (for example, combined or cut with other audiovisual material).
Three-dimensional scenic goes for proprietary data form or universal data format; Maybe can adopt proprietary data form or universal data format to provide three-dimensional scenic, wherein common said proprietary data form or universal data format can be the standard output data forms of any 3D modeling software.In principle, the input of 3D data layout (just describing the data layout of three-dimensional scenic) arbitrarily can be arranged.The detailed structure of document format data and the present invention are irrelevant.
In addition, it is preferred that geometric objects can be grouped, and that names can be provided for groups, objects and/or surface definitions (wherein, for example, a material corresponds to a color and a texture: material = color + texture). In this way, for the 3D video transition of the above example, the rendering engine can be notified, by means of a specific (that is, characteristic or predetermined) name for the material on the front side of the plane, that a frame of the first video is to be placed (or shown) on that surface. In other words, the material of the front side of the plane is given a specific name (for example NSG_Mov). This specific name (NSG_Mov) indicates to the rendering engine that frames of the first video are to be shown on the particular surface, namely the front side of the plane. In the same way, the rendering engine is instructed, by means of a certain material name, to display the frames of the second video on the back side of the plane.
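The material-name convention can be illustrated with a small sketch. Only the idea that a characteristic name (such as NSG_Mov) marks a surface as a video placeholder is taken from the text; the dictionary-based data structures and the function name are assumptions for illustration.

```python
def bind_video_frames(materials, frame_sources):
    """Attach a video-frame source to every material whose specific name
    marks it as a video placeholder; all other materials are left
    untouched."""
    bound = {}
    for name, material in materials.items():
        if name in frame_sources:
            # copy the material and set the current video frame as texture
            material = dict(material, texture=frame_sources[name])
        bound[name] = material
    return bound
```

Before each rendered frame, the engine would call this with the current synchronized frames, so the placeholder surfaces always show up-to-date video content.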
In order to insert user-editable text into the three-dimensional scene, three-dimensional objects such as cuboids are used, which are marked, by means of specific (or characteristic) names, as placeholders for three-dimensional text objects. The rendering engine can then remove these objects in advance (for example before producing the graphical representation of the three-dimensional scene) and render the text defined by the end user at the position of these objects. The size of the drawn three-dimensional text conforms to (or depends on) the size of the placeholder object.
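The placeholder replacement just described can be sketched as follows. The object layout (dictionaries with `name`, `position` and `size` fields) and the placeholder name `NSG_Hdr` used in the test are assumptions for illustration; only the replace-by-name-and-inherit-size behavior is from the text.

```python
def replace_text_placeholders(objects, user_texts):
    """Replace each placeholder object (identified by its characteristic
    name) with a 3D text object carrying the user-defined string, sized
    and positioned like the placeholder it replaces."""
    result = []
    for obj in objects:
        text = user_texts.get(obj["name"])
        if text is not None:
            result.append({"type": "text3d", "string": text,
                           "position": obj["position"], "size": obj["size"]})
        else:
            result.append(obj)
    return result
```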
In this way, a 3D modeler can create three-dimensional scenes using commercial tools (for example, any program capable of outputting data in a 3D description format), said scenes being interpreted as video transitions, text effects or video effects by the intelligent 3D engine by virtue of the grouping and the provided names. The 3D modeler does not need any programming knowledge. While, in the case of (video) transitions and (video) effects, only a few rules in the form of object names exist, the creation of functional DVD menus is more complex. The basic procedure, however, remains the same.
In the following, the generation of DVD menus will be described. It should be noted here that, apart from the main film, most commercial DVDs contain additional video material, such as outtakes with the actors or interviews. In addition, the main film is usually divided into chapters. To allow the end user of the DVD to navigate through the DVD, the DVD contains, in addition to the audiovisual material described above, video sequences which are interpreted as a menu structure by the DVD player. The data format of a video DVD (or the details of the data format) is defined in a standard, and DVDs produced using the intelligent 3D design do not depart from this standard.
A DVD menu may comprise a plurality of menu pages. The user can change between the pages by actions such as selecting buttons. In addition, the user can start a specific video, or a specific chapter of a video, by means of an action.
Between the display of two menu pages, between a menu page and the playback of a video, or between the blank screen directly after inserting the DVD and the main menu page, small video sequences similar to video transitions can be defined which avoid abrupt changes. Figs. 9, 10, 11, 12, 13, 14, 15, 16 and 17 show schematic arrangements (or structures) of DVD menus with inter-menu sequences. The inventive concept (also referred to as intelligent 3D) provides the possibility of defining menu pages and inter-menu sequences by means of three-dimensional models (also referred to as scene models).
A DVD menu page itself is also a short video sequence, so that even during the phase in which the DVD user (that is, the person using the DVD) can make a selection, a completely static image need not be shown. On the contrary, during the phase in which the DVD user can make a selection, one or more animations may run. These film sequences (that is, small animations) are rendered by the DVD authoring program using intelligent 3D.
The production of a sequence (for example a video frame sequence) from a three-dimensional scene (or in accordance with a three-dimensional scene) is therefore performed on the computer of the user of the authoring program or authoring software. The DVD player merely plays the videos (contained on the DVD produced by the DVD authoring program) in a fixed order or in accordance with the actions of the DVD user.
Typical transitions occurring on a DVD medium will be described below with reference to Figs. 9, 10, 11 and 12. Fig. 9 shows a graphical representation of a sequence (for example a video frame sequence) between two menu pages. The graphical representation of Fig. 9 is designated 900 in its entirety. Fig. 9 shows a first menu page 910. The first menu page 910 comprises buttons 912, 914, 916, 918, 920, 922 which may be used for selecting specific chapters of the DVD content contained on the video DVD medium. The buttons 912, 914, 916, 918, 920, 922 may be represented by one or more graphical objects. In addition, the buttons 912, 914, 916, 918, 920, 922 may comprise selectable regions and/or highlight regions, so that a button onto which a pointer is moved can be highlighted for selection. It should also be noted that the graphical representation of the buttons 912, 914, 916, 918, 920, 922 may comprise user-provided content, such as a user-provided image, a user-provided video frame or a user-provided video frame sequence. In other words, the graphical representation of a button may comprise static or dynamic, that is, changeable, graphical content.
It should also be noted that the menu page 910 is preferably described in terms of a scene model produced by a 3D modeler. The elements of the menu page 910 (for example geometric objects) are therefore described in the form of a scene description language. In addition, the scene model of the menu page 910 may comprise placeholder objects or placeholder surfaces, making it possible to replace a placeholder object by a user-provided object (that is, user-provided content), and making it possible for a placeholder surface to display user-provided content (for example as a texture), such as a user-provided image, a user-provided video frame or a user-provided video frame sequence.
Fig. 9 shows a second menu page 930. The second menu page 930 comprises a plurality of buttons 932, 934, 936, 938, 940, 942. The buttons 932, 934, 936, 938, 940, 942 may have an appearance and functionality similar to those of the buttons 912, 914, 916, 918, 920, 922.
Fig. 9 also shows an inter-menu sequence, or menu-to-menu sequence, 950 which is played by the DVD player when a transition between the first menu page 910 and the second menu page 930 is performed. Preferably, the inter-menu sequence 950 between the first menu page 910 and the second menu page 930 (typically an animated scene or animation) deals with the old, previous (or previously shown) menu content disappearing, and the scene (or content) of the new (subsequent or subsequently shown) menu being built up. Depending on the structure of the menu, some navigation arrows (for example green arrows) are preferably shown. It should be noted here that the menu structure described with reference to Fig. 9 is not an essential part of the invention and should be regarded as an example. In other words, the invention is not restricted to a specific menu structure. The graphical representation of the illustrated menu is merely intended to explain the problem of dynamic menu creation. In this context, "dynamic" means that at the point in time when the menu is designed (that is, for example, at the point in time when the menu template is created), the final appearance of the menu is unknown. For example, at the point in time when the menu is designed, the occupancy (or assignment) and use of the individual buttons (or active switch regions) and of optional additional (three-dimensional) objects is unknown.
Fig. 10 shows a schematic overview of the course of an introductory film. The graphical representation of Fig. 10 is designated 1000 in its entirety. The graphical representation 1000 shows a first menu page 1010 with a plurality of buttons 1012, 1014, 1016, 1018, 1020, 1022. The first menu page 1010 may, for example, be identical to the menu page 910. The graphical representation 1000 also shows an introductory menu sequence 1030 (also referred to as an "intro"). The introductory film ("intro"), or lead-in, is played once when the DVD is inserted into the DVD player. The introductory film, or lead-in, ends at the first main menu of the DVD.
In other words, the introductory menu sequence 1030 is a video frame sequence which starts with a blank screen and ends with the first main menu. In addition, it should be noted that the introductory menu sequence 1030 is preferably described in terms of a scene model, as outlined above.
Fig. 11 shows a graphical representation schematically outlining the animation "chapter selection menu → start of film" of an intermediate sequence. The graphical representation of Fig. 11 is designated 1100 in its entirety and shows a menu page 1110. The menu page 1110 may, for example, be identical to the menu page 910 of Fig. 9, the menu page 930 of Fig. 9 or the menu page 1010 of Fig. 10. The graphical representation of Fig. 11 also shows a first frame 1120 of a film (that is, of a video frame sequence). The graphical representation 1100 further shows a menu intermediate sequence, or menu-to-title sequence, 1130.
Preferably, the menu intermediate sequence 1130 starts with a video frame showing the menu page 1110 and ends with a video frame identical to the first of the user-provided video frames 1120. It should be noted here that the menu intermediate sequence 1130 may, for example, be described in terms of a scene model, as outlined above.
In an alternative, the menu intermediate sequence may be incorporated into a reverse menu transition. The menu intermediate sequence 1130 may thus be played when the video (a frame of which is shown as frame 1120) ends and a backward transition to the main menu is performed. In other words, a menu intermediate sequence for the title-to-menu transition may be provided. The corresponding transition may start with a frame (the last frame) of the video frame sequence and may end with the menu page 1110.
Fig. 12 shows a graphical representation of a sequence between a main menu and a submenu. The graphical representation of Fig. 12 is designated 1200 in its entirety. The graphical representation 1200 shows a main menu 1212 and a submenu 1220. The main menu 1212 may, for example, be identical to the first menu page 910 or the second menu page 930 of Fig. 9, the menu page 1010 of Fig. 10 or the menu page 1110 of Fig. 11. The submenu page 1220 may have a structure similar or identical to that of the main menu page 1212. However, the submenu page 1220 may, for example, comprise buttons allowing access to sub-chapters on the DVD. The submenu page 1220 may therefore comprise a plurality of buttons 1222, 1224, 1226, 1228, 1230, 1232. The graphical representation 1200 also shows a menu intermediate sequence, or menu-to-submenu sequence, 1240.
In the situation shown in Fig. 12, up to n = 6 chapters per menu may occur (according to the example embodiment). For the template of an exemplary menu intermediate sequence, n*4+10 designated objects are preferably provided by the designer (for example by the 3D modeler). Thus, if it is assumed that a maximum number of n = 6 chapters per menu page may occur, the designer should provide 34 appropriately designated objects. Specifically, the following objects should be provided, for example, for a menu-to-menu animation sequence:
n "old" chapter images;
n "old" chapter texts;
3 "old" navigation arrows;
1 "old" header;
1 "old" footer;
n "new" chapter images;
n "new" chapter texts;
3 "new" navigation arrows;
1 "new" header;
1 "new" footer.
Closely linked with the above-mentioned objects, n "old" groups and a corresponding set of n "new" groups must be arranged in the three-dimensional scene. The "old" and "new" groups define which objects belong to which menu button. In the example "monitor" described in detail below, the entire mechanism of the first chapter image, the first chapter text and the first monitor is summarized as the first group.
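The object count stated above (n*4+10) follows from the list: 2 states ("old"/"new") times (n images + n texts + 3 arrows + header + footer). A small enumeration sketch, with purely illustrative object names:

```python
def required_objects(n):
    """Enumerate the designated objects a designer has to provide for a
    menu-to-menu sequence with up to n chapters per page: n chapter
    images, n chapter texts, 3 navigation arrows, 1 header and 1 footer,
    each in an 'old' and a 'new' version (4*n + 10 objects in total)."""
    names = []
    for state in ("old", "new"):
        names += ["%s chapter image %d" % (state, i + 1) for i in range(n)]
        names += ["%s chapter text %d" % (state, i + 1) for i in range(n)]
        names += ["%s arrow %d" % (state, i + 1) for i in range(3)]
        names += ["%s header" % state, "%s footer" % state]
    return names
```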
The 3D modeler can therefore create a 3D menu by creating a series of animations using commercial software, such that the animations obey the above-mentioned rules. The 3D modeler does not need any programming knowledge. Likewise, the user of the authoring program does not need any knowledge of 3D modeling. The intelligent 3D engine reads the 3D scene (created by the 3D modeler) and creates short film sequences from the 3D scene and the information obtained from the user of the DVD authoring program. Together with the information about the menu structure, the film sequences form the dynamic DVD menu on a standard-compliant DVD.
In the following, it will be described how the intelligent 3D engine processes the 3D scene together with the information from the authoring program in order to produce the menu intermediate sequences.
Various pieces of information are passed from the authoring program to the intelligent 3D engine. The user may want to integrate different numbers of (main) videos into the DVD. The user can determine the video frames or video frame sequences for the button images in the 3D scene, can provide texts for the header, the footer or the labels of the buttons, and can select the color and transparency of the highlight mask. However, other information is also possible, such as material colors in the three-dimensional scene or a background image. In order to be able to adjust the 3D scene accordingly, the 3D scene is first converted into a separate data structure, a so-called scene graph.
Fig. 13 shows a graphical representation of a scene graph. During the rendering process, the scene graph is traversed, and the geometric objects (rectangular nodes) are drawn in accordance with the transformations and materials located above them (that is, in accordance with the materials and transformations located at higher levels of the scene graph). In the scene tree (or scene graph), nodes designated "group" serve to group objects. Generators serve to animate the objects located below them.
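The traversal rule just described — geometry nodes are drawn with the transformations and the innermost material accumulated on the path from the root — can be sketched as a depth-first walk. The node layout (dictionaries with a `kind` field) is an assumption for illustration, not the engine's actual internal format.

```python
def render_order(node, transform=(), material=None, out=None):
    """Depth-first traversal of a simple scene graph: geometry nodes are
    emitted together with the chain of transformations and the innermost
    material accumulated above them, mirroring how the renderer draws the
    rectangular (geometry) nodes of Fig. 13."""
    if out is None:
        out = []
    kind = node.get("kind")
    if kind == "transform":
        transform = transform + (node["matrix"],)   # inherit downwards
    elif kind == "material":
        material = node["name"]                     # innermost material wins
    elif kind == "geometry":
        out.append((node["name"], transform, material))
    for child in node.get("children", []):
        render_order(child, transform, material, out)
    return out
```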
When the 3D scene data is read in and converted into the internal data format, the placeholder objects for text are converted on the fly into dynamic 3D text objects. A 3D text object, designated "text" in the scene tree, expects a text string as an input value and produces three-dimensional text in the rendered three-dimensional scene.
Before the actual rendering process, the data structure present in memory can be adjusted in accordance with the preferences of the user of the authoring software.
For example, if the user integrates (or links) only 4 videos rather than 6, only 4 video buttons are necessary. If, for example, 6 three-dimensional objects have been provided for buttons, two buttons must then be masked out or omitted. This is readily possible because the buttons can be labeled with specific (or characteristic) names. During the rendering process, the intelligent 3D engine therefore merely needs to omit the respective branches in the scene tree. For the above example (4 video buttons), the intelligent 3D engine can omit the branches designated 5 and 6 in the scene graph of Fig. 13.
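The pruning of surplus button branches can be sketched as follows. The naming scheme NSG_BS01..NSG_BS06 is from the text; the list-based "scene tree" and the function itself are a simplified illustration, not the engine's implementation.

```python
def prune_unused_buttons(group_names, num_videos, max_buttons=6):
    """Drop the scene-tree branches of buttons for which no video was
    linked: with 4 linked videos out of 6 modeled buttons, the branches
    for buttons 5 and 6 are omitted; non-button groups are kept."""
    keep = ["NSG_BS%02d" % (i + 1) for i in range(min(num_videos, max_buttons))]
    return [g for g in group_names
            if not g.startswith("NSG_BS") or g in keep]
```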
Before each frame of a menu intermediate sequence is rendered, the frames of the audiovisual material (for example the user-provided content) which are to be attached to, or shown on, the three-dimensional buttons can be introduced into (or identified with, or linked with) the respective materials. For example, the image to be shown on the first button (button 1) of the menu described by the scene graph of Fig. 13 is designated "chapter image 1".
The user of a DVD produced with intelligent 3D therefore navigates on the DVD through a 3D menu. The intermediate sequences are, for example, short video films recorded immutably on the DVD. This user does not need any personal computer knowledge. The user of the DVD authoring program has determined the appearance of the DVD menu in advance by entering the header string, by selecting the video films to be integrated or by setting chapters. The intelligent 3D engine produces the video intermediate sequences from these entries or pieces of information (the entered title string; the selection of the video films; the selection of the chapters; the selection of the images or video frame sequences to be shown on the buttons) with the aid of the animated three-dimensional scene. The user of the authoring software does not need any 3D knowledge or programming knowledge.
The 3D scenes can be produced by a 3D modeler using standard software, wherein only a few rules need to be observed. The 3D modeler does not need any programming knowledge. Any number of 3D menus, three-dimensional transitions and 3D effects can be added without any change to the source code.
It should be noted here that Figs. 14, 15 and 16 show screenshots of an existing three-dimensional DVD menu in use. Fig. 17 shows the model of the 3D menu as defined by the 3D modeler.
An inserted chapter object comprises: an image area for the chapter image, that is, a video frame (or video image), the chapter text, and optional additional model objects (for example, the movement mechanism of a monitor in the example called "monitor" described below).
If a selectable region (or highlight region) comprises a plurality of objects, the objects can be summarized in a correspondingly named group. The region on the screen which can effectively be selected by the mouse (or pointer) is automatically defined by the bounding box of the area occupied by the objects of the group.
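The bounding-box rule can be sketched directly: the selectable region of a button group is the smallest axis-aligned rectangle enclosing the screen areas of all its objects. The rectangle representation is an assumption for illustration.

```python
def selectable_region(object_rects):
    """Derive the mouse-selectable screen region of a button group as the
    bounding box of the areas occupied by all objects in the group.
    Rectangles are (x0, y0, x1, y1) in screen coordinates."""
    x0 = min(r[0] for r in object_rects)
    y0 = min(r[1] for r in object_rects)
    x1 = max(r[2] for r in object_rects)
    y1 = max(r[3] for r in object_rects)
    return (x0, y0, x1, y1)
```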
In the following, it will be described how the transitions between menu pages are created. It is assumed here that the 3D modeler produces a scene model (or scene description) of the scene. The scene model is, for example, described in terms of a three-dimensional modeling language, is then supplemented with the user-provided content, and is subsequently converted into a video frame sequence. In other words, the scene model comprises a description of the scene in terms of objects and object properties, a description of the temporal evolution of the scene model (for example the motion of objects and/or of the observer or viewpoint), and a description of the placeholder objects or placeholder surfaces for embedding the user-provided content.
In the following, it is assumed that a modeler is a person or a device creating the scene model of a (preferably three-dimensional) scene.
In order to create a 3D (three-dimensional) scene which can be used in a DVD menu, the modeler needs to obey a set of rules. Some of these rules result from the logical structure or logical constitution of a DVD menu. Other rules are needed to notify the intelligent 3D engine of additional properties of the three-dimensional objects (such as, for example, the property of becoming a button, or the property of being used for the highlight mask calculation). When a menu page is displayed, the highlight mask is visible during the selection phase and identifies the selected button by covering it with a color defined by the user of the authoring program. For the definition of the rules, it is necessary to describe in more detail the menu structures supported by the intelligent 3D design, as shown with reference to Figs. 9, 10, 11 and 12.
An intelligent 3D menu can be built up from a main menu and a plurality of submenus. Up to 6 buttons can be placed on a main menu page. Preferably, the buttons are arranged by the 3D modeler and provided with specific (or characteristic) names. For example, the 6 buttons may be given the names "NSG_BS01" to "NSG_BS06". If, for example, more buttons are needed because 10 videos are to be burned onto the DVD during the DVD creation process, additional main menu pages can be added, between which navigation in the horizontal direction is possible by means of left/right arrow buttons. If chapter marks are additionally inserted into a video during the DVD creation process, one or more menu pages of a submenu are added. By means of an up button, the next higher (superordinate) menu page can be reached again. Preferably, the arrow buttons are also placed in the 3D scene and identified by names (for example: NSG_Up, NSG_Nxt, NSG_Pre).
In addition to the above-mentioned elements, labels for the buttons, header text and footer text are supported in embodiments of the invention. For this purpose, the 3D modeler adds placeholder objects with designated names (like the names used for the text effects) to the 3D scene. For practical reasons, cuboids are preferred (names, for example: NSG_Hdr, NSG_Ftr).
The grouping and naming of the three-dimensional objects is further taken into account in determining which objects are to be masked for the highlight calculation. The highlight mask calculation then provides the outlines of these objects as a black-and-white image. Fig. 23 shows an example of a highlight mask for 6 menu buttons and 3 navigation arrows.
The respective grouping also allows the precise addition (or definition) of highlight regions, for example the definition of the objects to be highlighted with a color in response to a user-defined selection of a chapter. Typically, this region (that is, the highlight region) is identical to the area in which the respective chapter image is located.
In the following, the highlight mask calculation will be briefly discussed. For this purpose, Fig. 23 shows a graphical representation of a highlight mask for the menu structure shown in Fig. 17.
The highlight mask is generated as follows: only the (highlight mask) objects having specific names (or belonging to a specific group of objects) are drawn in full-bright white in front of a black background.
This produces the outlines of the objects to be highlighted; during playback, these outlines are superimposed on the rendered main menu video in order to highlight specific objects (for example buttons).
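The mask generation can be sketched as follows: objects belonging to the highlight set are rasterized full-bright white on a black background. Representing objects as axis-aligned screen rectangles is purely illustrative; a real engine would rasterize the actual object silhouettes.

```python
def highlight_mask(width, height, objects, mask_names):
    """Render a black-and-white highlight mask: pixels covered by objects
    whose name belongs to the mask set are painted full-bright white (255)
    on a black (0) background; all other objects are ignored."""
    mask = [[0] * width for _ in range(height)]
    for obj in objects:
        if obj["name"] not in mask_names:
            continue
        x0, y0, x1, y1 = obj["rect"]
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = 255  # full-bright white
    return mask
```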
In addition to the label of a button, an image (or video frame) attached to, or displayed somewhere on, the button makes the association between the button and the video easy for the DVD user. Typically, the image is a frame, or a short film sequence (video frame sequence), from the associated video or video chapter. The 3D modeler determines, by means of placeholder textures, how and where the image is attached (or shown) in the three-dimensional scene. For this purpose, the 3D modeler provides identifying names (for example NSG_BS01 to NSG_BS06) for the respective materials.
Further boundary conditions for the 3D modeler result from the logical structure of the 3D model. Thus, an introductory animation preferably starts with a black image and ends at a menu page (as illustrated, for example, with reference to Fig. 19). Menu-to-menu animations (or menu-to-menu transitions) and menu-to-submenu or submenu-to-menu animations start with a menu page (or submenu page) and end with a menu page (or submenu page). A menu-to-video animation starts with a menu page and ends with the respective video at full frame size. The animations shown during the selection phase (that is, while the menu page is shown and the user can make a selection) may introduce only small movements into the menu, since the DVD user may select a button at any point in time and would otherwise perceive a step (or discontinuity) at the start of the menu-to-video transition. In an animation leading from a first menu page to a second menu page, the buttons, labels and arrows must change; all objects (or at least the objects associated with the buttons, labels and arrows) must therefore be provided twice by the 3D modeler (for example, NSG_BS01I to NSG_BS06I, NSG_UpI, etc., the suffix "I" indicating "input").
In the following, an example of a DVD menu will be described with reference to Figs. 14 to 17. The example of Figs. 14 to 17 is based on a three-dimensional template in which monitors modeled after a system of connecting rods and pistons are described (or shown). The template of the example is called the "monitor template".
Fig. 14 shows a graphical representation of an example of a menu with 4 chapters. The graphical representation of Fig. 14 is designated 1400 in its entirety.
Fig. 15 shows a graphical representation of an example of a menu with 8 main chapters, in which the user can navigate to the next and the previous menu page (or the first and second menu pages). The graphical representation of Fig. 15 is designated 1500 in its entirety.
The graphical representation 1400 shows 4 monitor screens 1410, 1412, 1414, 1416. Each of the monitor screens represents a menu item, or menu button, for selecting a chapter of the video content on the DVD. It should be noted that the menu scene shown in Fig. 14 is produced from a three-dimensional scene model, or three-dimensional scene model template, describing a total of 6 monitors. A menu page with 6 monitors can be seen, for example, in the left menu page 1510 of the graphical representation 1500 of Fig. 15. It can therefore be seen that the last two monitors (that is, the middle monitor of the lower row and the right monitor of the lower row) and the corresponding chapter labels have been removed from the three-dimensional scene of the graphical representation 1400. In addition, when comparing the menu scene of Fig. 14 with the menu scene of Fig. 15, it can be seen that the menu scene of Fig. 14 does not comprise any arrows. This is due to the fact that the menu represented by the menu scene of Fig. 14 has no further main menu pages, so that no arrows are needed.
It should be noted, with respect to the graphical representation 1500 of Fig. 15, that the menu described by the menu scene of Fig. 15 comprises two menu pages. A first menu page, designated 1510, comprises 6 menu entries, and a second menu page, designated 1520, comprises 2 menu entries. In other words, given that the template defining the menu scene comprises 6 menu entries, the first main menu page 1510 is completely filled. The first menu page 1510 also comprises a navigation arrow 1530. The navigation arrow 1530 serves as a navigation element and may be referred to as a "next" arrow.
On the second menu page 1520 (also referred to as main menu page 2), only 2 of the total of 8 videos remain and, correspondingly, a "back" arrow (or "previous" arrow) is superimposed (or shown). The "back" arrow 1540 allows navigating back to the previous page, that is, back to the first menu page 1510.
Fig. 16 shows a graphical representation of an example of a menu with 8 main chapters. The graphical representation of Fig. 16 is designated 1600 in its entirety. It should be noted here that the main menu of the example of Fig. 16 may be identical to the main menu of the example of Fig. 15. In other words, the graphical representation 1600 shows a first main menu page 1610 identical to the first main menu page 1510 of Fig. 15. The graphical representation 1600 also shows a submenu 1620. It should be noted here that the first main chapter has further sub-chapters. In other words, the submenu 1620 can be shown by selecting and activating the first monitor (or button) 1630 of the first menu page 1610. Since the first monitor, or first button, 1630 represents the first main chapter, four sub-chapters of the first main chapter can be accessed on the menu page 1620. It should also be noted that, by selecting the "up" button 1640 of the submenu page 1620, the user can navigate (from the submenu page 1620) back to the main menu (or main menu page 1610). In addition, the menu page 1610 comprises a "next" button 1650 for accessing the next main menu page (for example, identical to the menu page 1520).
In other words, in the example of Fig. 16, a submenu is set up which can be addressed via (or through) the first button 1630. After a short intermediate sequence, the user sees the submenu (or submenu page 1620), wherein (optionally) both menus (that is, the main menu page 1610 and the submenu page 1620) are visible during the animation. In the example embodiment, the 6 monitors of the main menu page 1610 move upward out of the image (or out of the visible screen), and the new monitors (for example the 4 monitors of the submenu page 1620) move up from below. In the given example, the submenu (or submenu page 1620) comprises 4 videos and a corresponding navigation arrow 1660 allowing navigation upward, back to the main menu or main menu page 1610.
Fig. 17 shows a graphical representation of the template of the main menu, as present in the internal representation of intelligent 3D, on which the example described above is based.
In the template, the designer provides the maximum available number of 6 monitors 1710, 1712, 1714, 1716, 1718, 1720. In addition, the three navigation elements 1730 "arrow back", "arrow next" and "arrow up" need to be present. The header 1740, the footer 1750 and the chapter titles must obey the predetermined naming convention. In addition, the image regions for the chapter images (or chapter video frames) must have materials with predetermined names (NSG_BS01, NSG_BS02, NSG_BS03, NSG_BS04, NSG_BS05, NSG_BS06).
The individual monitors must each be summarized in groups defined by respective names (that is, one group per monitor, so that all elements and/or objects belonging to a particular monitor are contained in the group belonging to that monitor). As can be seen from the above example, if these conditions are met, the intelligent 3D engine can dynamically adapt the scene to the menu content.
It should be noted here that the graphical representation of Fig. 17 is designated 1700 in its entirety. It should be noted that the template 1700 comprises a plurality of menu items. In a typical embodiment, a corresponding plurality of geometric objects is associated with each menu item. The geometric objects associated with a certain menu item are grouped, that is, contained in a group of geometric objects. Thus, by identifying a group of geometric objects, the geometric objects belonging to a menu item can be identified. Assuming that the scene model or scene template describes n menu items, the template comprises n groups, each of the n groups summarizing the objects belonging to a specific menu item. For example, the objects belonging to a specific menu item may comprise:
- a surface having a predetermined name or attribute, said predetermined name or attribute indicating that the surface is intended to display the user-provided content associated with the menu item, without specifying particular user-provided content. In other words, such a surface is a placeholder surface, designated by a characteristic name or attribute, for user-provided content.
- a placeholder object having a predetermined name, said predetermined name identifying a text placeholder object intended to be replaced by user-provided text. For example, the text placeholder may be intended to provide a "title" and/or information relating to the video sequence associated with the menu item.
Thus, the video frame generator 110 may be adapted to identify, on the basis of the menu scene model, how many menu entries are to be presented in the menu scene (or menu page). The video frame generator may further be adapted to determine how many groups defining individual (or separate) menu entries are present in the menu template. On the basis of this information, if the menu scene model or menu template comprises more menu entries than are actually needed, the video frame generator 110 can deselect or remove the objects belonging to the surplus menu entries. It can thus be ensured that a template comprising a certain number of video entries can be used even if fewer menu entries than contained in the template are needed.
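The pruning step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the group naming scheme ("NSG_BS01" …) follows the convention mentioned later in the text, while the dictionary representation of the scene groups is an assumption made for illustration.

```python
# Hypothetical sketch: remove surplus menu-entry groups from a template.

def prune_menu_groups(scene_groups, needed_entries):
    """Keep only the groups for the menu entries actually needed."""
    kept = {}
    for name, objects in scene_groups.items():
        # Group names are assumed to end in a two-digit entry index, e.g. "NSG_BS04".
        index = int(name[-2:])
        if index <= needed_entries:
            kept[name] = objects
    return kept

template = {
    "NSG_BS01": ["monitor_1", "label_1"],
    "NSG_BS02": ["monitor_2", "label_2"],
    "NSG_BS03": ["monitor_3", "label_3"],
    "NSG_BS04": ["monitor_4", "label_4"],
}
# Only three video clips were supplied, so the fourth group is removed.
print(sorted(prune_menu_groups(template, 3)))
```

A template with four button groups can thus serve a disc with only three videos, as the paragraph above states.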
Figure 18 shows a flow chart of the inventive method for producing a video frame sequence. The method of Figure 18 is designated 1800 in its entirety. In a first step 1810, a scene model defining a scene is received. Preferably, the scene model comprises at least one scene model object having an object name and object attributes.
The method 1800 further comprises a second step 1820, in which user-provided content is received.
In a third step 1830, a scene model object having a predetermined object name and predetermined object attributes is identified in the scene model, whereby an identified scene model object is obtained. In a fourth step 1840, the video frame sequence is produced such that the user-provided content is displayed on a surface of the identified scene model object, or is displayed as a replacement for the identified scene model object.
It should be noted here that the method 1800 of Figure 18 can be supplemented by any of the steps described above (for example, by any of the steps performed by the inventive video frame generator).
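The four steps of method 1800 can be condensed into a short sketch. All names below (the `NSG_` marker, the dictionary scene representation, the symbolic "frames") are illustrative assumptions; the patent does not prescribe an API.

```python
# Sketch of method 1800: receive scene model and user content,
# identify placeholder objects by name, then "render" with replacements.

PLACEHOLDER_PREFIX = "NSG_"  # assumed marker for placeholder objects

def method_1800(scene_model, user_content):
    # Steps 1810/1820: scene model and user content arrive as inputs.
    # Step 1830: identify objects whose names mark them as placeholders.
    identified = [obj for obj in scene_model
                  if obj["name"].startswith(PLACEHOLDER_PREFIX)]
    # Step 1840: produce frames in which user content replaces the
    # placeholders (reduced here to a symbolic "rendered" record).
    frames = []
    for obj, content in zip(identified, user_content):
        frames.append({"object": obj["name"], "shows": content})
    return frames

scene = [{"name": "NSG_BS01"}, {"name": "room_wall"}, {"name": "NSG_BS02"}]
print(method_1800(scene, ["clip_a.mpg", "clip_b.mpg"]))
```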
In the following, an exemplary embodiment of the inventive apparatus and method for creating a menu structure for a DVD (or, more generally, a video medium) will be described. For this purpose, Figure 19 shows a graphical representation of a user interface for selecting or entering video sequences. The graphical representation of Figure 19 is designated 1900 in its entirety. According to an embodiment of the invention, in a first step the user enters the video titles that he wants to be presented on the DVD (or on any other video medium, such as an HD DVD, a Blu-ray disc or any other video medium). Optionally, chapter marks can be provided for each video. If chapter marks have been defined for a video, one or more submenus are created for this video title. Each button in a submenu indicates a chapter position, so that the video title can be started at a defined chapter position.
Figure 20 shows a graphical representation of a user interface page for selecting a template or scene model. In other words, in an embodiment of the invention, the user selects a predefined or predetermined smart 3D template (that is, a pre-created scene model) in a second step. Figure 21 shows a graphical representation of a screenshot of a user interface for selecting the attributes of the DVD menu structure.
In other words, according to an embodiment of the invention, the user can adjust settings in a third step in order to adapt the 3D template to his needs. This allows button texts, header texts, footer texts and/or the background music to be changed. For example, the user can enter settings or adjustments with respect to the chapter titles that are to be shown in the scene model or scene template as replacements for the placeholder objects. Similarly, header texts and footer texts can be defined as replacements for text placeholder objects in the template.
In addition, the user can define which menu transitions are used (from the following list of possible menu transitions):
- an introductory animation;
- a transition animation between two menus;
- a transition animation between a menu and a chapter menu;
- a transition animation between a menu and a video title; and
- a transition animation between a video title and a menu.
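The five user-selectable transition types above could be represented as a simple enumeration. The member names and the set-based selection API are illustrative assumptions; only the five transition kinds themselves come from the list in the text.

```python
# Sketch of the user-selectable transition types as an enumeration.
from enum import Enum

class Transition(Enum):
    INTRO = "introductory animation"
    MENU_TO_MENU = "between two menus"
    MENU_TO_CHAPTER_MENU = "between a menu and a chapter menu"
    MENU_TO_TITLE = "between a menu and a video title"
    TITLE_TO_MENU = "between a video title and a menu"

# The user enables a subset of the available transitions:
selected = {Transition.INTRO, Transition.MENU_TO_TITLE}
print(Transition.MENU_TO_TITLE in selected)
```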
According to an embodiment of the invention, in a fourth step the menu structure created by the smart 3D engine can be viewed in a preview, using a virtual remote control. Optionally, the menu transitions can be calculated in real time by the smart 3D engine. Accordingly, Figure 22 shows a graphical representation of a screenshot of a user interface that allows the user to preview the menu transitions.
According to an embodiment of the invention, the DVD (or Blu-ray medium, HD DVD or other video medium) is burned or prepared in a fifth (optional) step.
It should be noted here that Figures 19 to 22 show the process of creating a smart 3D menu from the user's point of view. It should also be noted that the user entries described with reference to Figures 19 to 22 (or a selection thereof) can be input to the video frame generator in order to control how placeholder objects are replaced by user-provided content, or how user-provided content is displayed on placeholder surfaces.
Thus, the user input controls the production of the video frame sequence on the basis of the scene model (also designated as a scene template, or simply as a "template") and on the basis of the user-provided content.
In the following, a summary of the menu creation concept according to an embodiment of the invention will be given.
It should be noted that a DVD typically comprises a certain number of videos. These videos are accessed via one or more menu pages, wherein each video, video chapter mark or further menu is represented by a selection button (for example a button in a menu page). The content of the DVD can be navigated by linking buttons with menu pages or videos. Conventionally, the different menu pages are represented by different fixed short video sequences or still images.
The inventive concept (also designated as smart 3D technology) allows the menu pages to be generated automatically in accordance with the user-defined number of videos, as mentioned above. In addition, transition videos are calculated between two menu pages, or between a menu page (or at least one menu page) and a user-defined video title. This gives the user the illusion of seamless, interleaved and interactive video scenes. The individual menu pages and videos are no longer hard cuts placed one after the other, but appear to melt into one another in a virtual three-dimensional world.
The creation of the animated menu structure is carried out automatically by the smart 3D engine. The user simply specifies which content (one or more video titles) he wants to appear on the disc and selects a predefined smart 3D template (for example, from a list of predetermined templates). The smart 3D engine then calculates the necessary number of menus, of buttons per menu, and of transition videos between two menus or between a menu and a video title.
The individual predetermined smart 3D templates represent three-dimensional video scenes (or at least one three-dimensional video scene). For example, the individual menu pages can be interpreted as different sides of a room in the template. If the user navigates through the different menus, the smart 3D engine creates a video sequence that is played as a transition. This transition shows a video transition scene that fits seamlessly to the two menu scenes. Between a menu page and a video title, a seamlessly fitting video transition scene is likewise created.
As the smart 3D engine is integrated between the authoring application and the authoring engine, the same animated menu structures can be created for DVD video as well as for Blu-ray media and HD DVD media.
In the following, some features of embodiments of the invention will be described, together with requirements and remarks regarding the general setup.
In order to summarize some aspects of embodiments of the invention, the following statements can be made:
- By concatenation, any number of film sequences can be merged via smooth 3D transitions.
- The linked (or merged, or concatenated) film sequences can be assembled into a common menu structure.
- The menu comprises an introductory sequence and one or more main menu pages. Optionally, the menu structure can provide submenu pages for addressing the individual chapters of the movie streams. The menu pages are linked by smooth transitions, wherein the seamless transitions include a transition to the first frame of each film (or at least a transition to the first frame of one film).
- The menu scene dynamically adapts to the content. The presence of menu buttons (or navigation buttons, respectively) depends on the number of menus and/or chapters present. The smart 3D engine takes care of the dynamic adaptation of the menu scene.
- The smart 3D engine combines high-level content (user input) with low-level content (generic models of the menu scenes carrying special tags to enable dynamic interpretation) and metadata (generic menu sequence information; timestamps) in order to produce video output in the form of individually rendered video frames. In addition, the smart 3D engine provides the information on the highlight masks and selection regions used for menu navigation.
- The data described above are produced automatically by the smart 3D engine using special tags (for example names or attributes) in the 3D models of the menu scenes.
- Each menu can have several lines of three-dimensional text, for example a header, a footer or chapter titles. The texts are editable, i.e. the 3D meshes of the font characters are preferably produced in real time.
- The rendering of the transitions, 3D effects and menus is interactive. The high-performance visual development of the three-dimensional scenes is hardware-accelerated by modern graphics cards.
In the following, some implementation details will be described.
According to an embodiment of the invention, the idea behind the smart 3D design is to separate the three-dimensional data (3D data), which carry structural information, from the engine that interprets the structure and renders the dynamic 3D model. A generic container for 3D data is used for the organization of the data.
In a preferred embodiment, all elements are given names, and a data element exists that allows other elements to be grouped. Names can assign special functions (for example, the above-described function of a button) to 3D objects or groups.
In the smart 3D implementation, the engine reads a generic 3D data format in which a metadata block defines the function of the 3D model. For a DVD menu, for example, this metadata can designate a 3D scene as a menu-to-video transition, which is played when the end user selects a video button in the DVD menu and before the selected video is shown. Further information contained in the metadata block can determine the number of buttons or the name of the DVD menu to which this transition belongs.
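The kind of metadata block described above might look like the following. The field names and values are purely illustrative assumptions; the text only states that the metadata identifies the scene's role (e.g. menu-to-video transition), the number of buttons and the name of the owning menu.

```python
# Illustrative sketch of a metadata block accompanying a 3D scene file.
scene_metadata = {
    "role": "menu_to_video_transition",  # played after a video button is selected
    "parent_menu": "MainMenu",           # DVD menu this transition belongs to
    "button_count": 6,                   # buttons available in that menu
    "audio_track": "transition_theme.ogg",
}

def is_transition(meta):
    # An engine could dispatch on the scene's declared role.
    return meta["role"].endswith("_transition")

print(is_transition(scene_metadata))
```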
A complete set of 3D data for creating video content then comprises files containing the 3D and structural data (possibly individual files for each menu part or video effect). To open this content creation approach to other tools, file formats other than the generic format can be imported. As further parts, there may be audio files specifying the music or noises to be played within a certain menu part (or during a video effect).
To allow the smart 3D engine to react flexibly to the user's needs, there are naming conventions for the 3D objects or element groups in the 3D model. For example, the special name "NSG_BS04" can designate an object as the fourth button in the DVD menu. With this name, the engine will remove the object if a fourth button is not needed, for example because the user has only inserted three video clips. Another name, such as "NSG_NxtH" (note the trailing "H" of the name, standing for "highlight"), can designate an object or group as defining the highlight region of the "next" button in the DVD menu. By means of grouping, there can be geometry that will be removed by the smart 3D engine (if not needed), and smaller geometry that will be considered when calculating the highlight regions. Figure 23 shows an example of the highlight masks of a "monitor" menu with six menu buttons and three navigation arrows.
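A parser for this naming convention could be sketched as follows. Only the two example names "NSG_BS04" and "NSG_NxtH" appear in the text; the concrete parsing rules below (prefix "NSG_", "BS" plus a two-digit index for buttons, trailing "H" for highlight geometry) are an illustrative reconstruction, not the patented format definition.

```python
# Hedged sketch: classify scene objects by their convention-based names.
import re

def classify(name):
    m = re.fullmatch(r"NSG_BS(\d\d)", name)
    if m:
        return ("button", int(m.group(1)))
    if name.startswith("NSG_") and name.endswith("H"):
        return ("highlight", name[4:-1])  # e.g. "Nxt" -> next-button highlight
    return ("plain", name)

print(classify("NSG_BS04"))   # fourth button in the DVD menu
print(classify("NSG_NxtH"))   # highlight region of the "next" button
print(classify("room_wall"))  # ordinary scene geometry
```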
In the external data file, texts are interpreted as ordinary geometric objects. An object of this kind therefore loses its meaning as a set of readable characters, and it cannot be reinterpreted in order to change the text. However, giving the user the possibility of inserting his own texts (which will later be part of the DVD menu or video content) into the 3D scene is essential.
For this purpose, a method was established by which editable 3D texts replace objects having special names such as "header", the editable text in this example representing the heading of a DVD menu part.
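The replacement of named text placeholders can be sketched as follows. Only the placeholder name "header" comes from the text; "footer" as a second name, the scene representation and the `make_text_mesh` stand-in are assumptions for illustration.

```python
# Minimal sketch: replace named placeholder objects with user-editable 3D text.
TEXT_PLACEHOLDERS = {"header", "footer"}  # "footer" is an assumed extra name

def make_text_mesh(text):
    # Stand-in for real 3D mesh generation of the font characters.
    return {"type": "text_mesh", "text": text}

def apply_user_text(scene, user_texts):
    out = []
    for obj in scene:
        if obj["name"] in TEXT_PLACEHOLDERS and obj["name"] in user_texts:
            out.append(make_text_mesh(user_texts[obj["name"]]))
        else:
            out.append(obj)
    return out

scene = [{"name": "header"}, {"name": "monitor_1"}]
print(apply_user_text(scene, {"header": "My Holiday 2006"}))
```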
With this setup, the smart 3D implementation allows independent modelers to create any number of authoring and video contents without having to learn software development. The smart 3D engine can interpret the structure and the metadata of the 3D models and thus knows the function of every part of the 3D scene.
Generally speaking, the present application comprises methods, apparatuses and computer programs for producing animated scenes in order to create interactive menus and video scenes.
In the following, further implementation details will be described with reference to Figure 24. Figure 24 is a graphical representation of a hierarchy of modules for creating the content of a video medium. The graphical representation of Figure 24 is designated 2400 in its entirety. The authoring process for the content of the video medium is controlled by a video editing and authoring application 2410. The video editing and authoring application 2410 receives one or more user video segments 2420. The video editing and authoring application also receives user input, not shown in the graphical representation of Figure 24. For example, the user input to the video editing and authoring application 2410 can comprise information on how many user video segments 2420 are to be included on the video medium. The user information can further comprise information on the titles of the video clips (or video frame sequences) to be included on the video medium. The user input can also comprise user selections concerning the details of the menu structure. For example, the user input can comprise a definition of which of a plurality of available menu templates (or scene models) is to be used for producing the menu structure of the video medium. The user information can also comprise additional settings, such as color settings, the selection of a background image, the selection of a music title, and so on.
The rendering of the video sequences to be stored on the video medium is carried out by a so-called smart 3D engine 2430, which is equivalent to the video frame generator 110. The smart 3D engine 2430 receives one or more template definitions for scenes and video effects. A template definition 2440 is equivalent to the scene model 112 and describes a scene in terms of objects, grouping information and attribute information.
The smart 3D engine also receives one or more video streams and one or more attribute settings, designated 2450, from the video editing and authoring application 2410. It should be noted here that the video streams are equivalent to the user video segments 2420, or are authored from the user video segments by means of the video editing and authoring application 2410. The smart 3D engine is adapted to create one or more video streams 2460 and to send the one or more video streams 2460 back to the video editing and authoring application 2410. It should be noted that a video stream 2460 is equivalent to the video frame sequence 116.
The video editing and authoring application 2410 is adapted to construct the menu and content structure of the video medium from the video streams 2460 provided by the smart 3D engine 2430. For this purpose, the video editing and authoring application is adapted to mark (on the basis of certain meta-information) which type of video content a video stream 2460 represents. For example, the video editing and authoring application 2410 can be adapted to recognize whether a specific video stream 2460 represents a menu-to-menu transition, a menu-to-video-frame-sequence transition, a video-frame-sequence-to-menu transition, an introductory transition (between a blank screen and a menu), or a video-frame-sequence-to-video-frame-sequence transition. On the basis of the information on the type of video stream, the video editing and authoring application 2410 places the video streams at the correct positions within the data structure of the video medium.
For example, if the video editing and authoring application 2410 recognizes that a particular video stream 2460 is a menu-to-video transition, the video editing and authoring application 2410 sets up the structure of the video medium such that, if the user selects a specific film for playing in a certain menu, the menu-to-video transition is played between the specific corresponding menu and the specific corresponding video (or film).
In another example, if the user changes from a first menu page to a second menu page, for example by selecting a specific button (a "next" button) on the first menu page, a menu-to-menu transition between the first menu page and the second menu page should be shown to the user. Accordingly, the video editing and authoring application 2410 arranges the corresponding menu-to-menu transition on the video medium such that the menu-to-menu transition is played when the user selects the above-mentioned button on the first menu page.
Once the video editing and authoring application 2410 has created the structure (in particular, the menu structure) of the video medium, the video editing and authoring application transfers the information to be stored on the video medium to an authoring engine 2470. The authoring engine 2470 is adapted to format the data provided by the video editing and authoring application 2410 such that the data comply with the standard of the corresponding video medium (for example a DVD medium, a Blu-ray disc, an HD DVD or any other video medium). The authoring engine 2470 is further adapted to write the data provided by the video editing and authoring application 2410 to the video medium.
In summary, it can be stated that Figure 24 shows the general workflow of the smart 3D engine.
In the following, specific details regarding the invention described above will be given.
First, some additional details regarding the calculation of the transition videos will be described. It should be noted that, for the calculation of a transition video, the video frame generator receives two video images or video frames, one video frame being taken from the disappearing video and one video frame being taken from the appearing video.
Both images or video frames correspond to the same point in time of the final video stream (or final video frame sequence 116). The temporal positions of the two images or video frames within their input video streams depend on the lengths of the individual input video streams (or input videos) and on the duration of the overlap or transition. In a preferred embodiment, however, the 3D engine does not consider absolute time information.
From the two input images or input video frames, a single output image or output video frame is produced. In the generation of the output video frame, the textures of correspondingly named materials in the three-dimensional scene (described by the scene model) are replaced by the input video frames. The output image or output video frame is therefore an image of the three-dimensional scene in which the texture of one object has been replaced by the first input video frame and the texture of another object has been replaced by the second input video frame.
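Producing one such output frame can be sketched schematically: two named materials receive the current frames of the disappearing and appearing videos as textures, and the scene is then rendered. The material names "NSG_VideoOut"/"NSG_VideoIn" and the render stand-in are illustrative assumptions.

```python
# Schematic sketch: build one transition frame from two input frames.

def make_transition_frame(scene_materials, frame_out, frame_in, render):
    # Replace the textures of the two correspondingly named materials.
    textures = dict(scene_materials)
    textures["NSG_VideoOut"] = frame_out  # frame from the disappearing video
    textures["NSG_VideoIn"] = frame_in    # frame from the appearing video
    return render(textures)

# Trivial "renderer" that just reports which video frames it received:
def fake_render(textures):
    return sorted(t for t in textures.values() if t.startswith("frame"))

materials = {"NSG_VideoOut": "checkerboard", "NSG_VideoIn": "checkerboard"}
print(make_transition_frame(materials, "frame_042_a", "frame_042_b", fake_render))
```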
In addition, the files and software used for producing a DVD menu will be described:
- one or more files describing the three-dimensional scenes with their three-dimensional animations;
- one or more description files describing the structure of the scene graphs and additional animation data (for example, the name of the 3D template, the type of the intermediate sequences, and so on);
- video imaging software providing the image data or video data and recombining the video data;
- a 3D engine which incorporates the image data and text data into the 3D scenes, formats the scenes in accordance with the input data and subsequently renders the 3D scenes.
In order to produce the DVD menu, in an embodiment of the invention, every possible menu combination and menu intermediate sequence is rendered individually when the DVD is produced, in accordance with the number of chapters. The menu combinations and menu intermediate sequences are burned onto the DVD in video files. In addition, navigation files (having the file-name extension ".ifo" and known from the DVD video disc standard) are produced. These navigation files allow the DVD player to jump to the corresponding sequences (that is, for example, to the beginning of a transition video).
In order to establish the menu structure, the correspondingly modeled 3D scenes are adapted in accordance with the number and structure of the available video chapters. Unneeded parts of the modeled three-dimensional scenes (that is, unneeded menu items) are automatically removed, so that they are not shown in the finally produced video frame sequence. In addition, user-editable text blocks are produced.
Thus, 3D menus are produced, wherein animation sequences are played between the menu pages. In addition, highlight masks are automatically produced from three-dimensional objects having predetermined names, so that highlight masks of arbitrary shapes can be created.
One of the key advantages of embodiments of the invention is that the menu designer (for example a 3D modeler) only needs to model the generic menu sequence in advance. The user of the DVD authoring software is not involved in this task. The adaptation and generation of the menu video sequences is carried out automatically in accordance with the characteristics of the chapter division.
In the following, it will be described how a plurality of film sequences can be linked (or combined) by concatenation. It is assumed here that the video film comprises 30 independent video clips. Thus, the whole film comprising, for example, 30 independent video clips can have a sequence of 29 transitions. Alternatively, if, for example, a fade-in effect at the beginning of the film and a fade-out effect at the end of the film are considered, there is a sequence of 31 transitions.
The 3D engine only processes the data of the current transition. In other words, in a first step the transition between the first video clip and the second video clip is rendered, in a second step the transition between the second video clip and the third video clip is calculated, and so on. From the point of view of the cutting (montage) software, the temporal course is as follows:
- the first part of the first video clip is encoded, and the encoded information is stored in the video stream of the whole film;
- the required image data (or video data or film data) from the end of the first video segment (video segment 1) and the beginning of the second video segment (video segment 2) are uploaded to the smart 3D engine (the end part of the first video segment and the beginning part of the second video segment forming part of the user-provided content);
- the image data (or video data or film data or video frame sequence) of the rendered transition are read from the smart 3D engine;
- the individually rendered images (or video frames) are encoded, and the encoded information is stored in the video stream of the whole film;
- the middle section of the second video segment is encoded, and the encoded information is stored in the video stream of the whole film;
- the required video data from the end of the second video segment (video segment 2) and the third video segment (video segment 3) are uploaded to the smart 3D engine;
- the image data of the rendered transition are read from the smart 3D engine;
- the individually rendered images (or video frames) are encoded, and the rendered information is stored in the video stream of the whole film.
The described process can be repeated until all required transitions have been calculated. It should be noted that, since the independent video segments and the transition sequences are stored in a single video file, a single video file can be produced by the concatenation described above.
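The concatenation workflow above can be condensed into a loop: encode the body of each clip, and between consecutive clips render and encode a transition built from the end of one clip and the start of the next. The `encode`-as-append model and the segment representation are stand-ins; the control flow is what the listed steps describe.

```python
# Condensed sketch of the concatenation workflow for the whole film.

def concatenate(clips, render_transition):
    stream = []  # the video stream of the whole film
    for i, clip in enumerate(clips):
        stream.append(("body", clip))  # encode the clip's own frames
        if i + 1 < len(clips):
            nxt = clips[i + 1]
            # Upload end of this clip + start of the next to the engine,
            # read back the rendered transition, and encode it too.
            stream.append(("transition", render_transition(clip, nxt)))
    return stream

fake_render = lambda a, b: f"{a}->{b}"
print(concatenate(["clip1", "clip2", "clip3"], fake_render))
```

With 30 clips this yields 29 transitions, consistent with the example above.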
With respect to the dynamic adaptation of the menu scenes, it should be noted that the authoring software decides on the assignment of the chapter buttons (the assignment of image data and text data). In addition, the authoring software decides which objects (from the scene model) are needed in a particular scene and which objects need to be adapted (for example, text contents). The corresponding decisions are made at the point in time when the DVD is created, for example when the menu videos are rendered. In a preferred embodiment of the invention, it is no longer possible to modify the menu structure after the DVD has been created.
In addition, it should be noted that, within the scope of the present invention, the term "high-level content" designates user-provided data, for example video streams, chapter images, image titles or highlight colors. The term "low-level content", on the other hand, describes the generically modeled 3D scenes (for example, scene models comprising placeholder objects or placeholder surfaces, which, however, are not yet adapted to the user-provided content). The term "metadata" describes which 3D model files together form a menu. It should be noted that a complete menu comprises a generic scene for the selection pages, and a plurality of animated intermediate sequences linking the individual menu pages by movements of standalone objects. In a preferred embodiment, different animation sequences are defined for an interaction with the chapter 1 button and for an interaction with the chapter 2 button. The metadata also comprise information on the individual menu sequences, on the names of the menus and on references to additional audio tracks.
With respect to the highlight regions and selection regions, it should be noted that the highlight regions and selection regions are specified by the respective grouping and naming of the relevant objects.
With respect to the generation of the meshes of the font characters, it should be noted that not all font characters contained in a font file are represented as 3D meshes. Rather, the mesh of a font character is calculated when the font character is used for the first time. Subsequently, the calculated mesh is used for representing that specific font character. As an example, this treatment of the font characters allows the text "Hello World" to be represented as three-dimensional text using only 7 3D meshes (rather than 10 3D meshes), since the 3D mesh of the character "l" can be used 3 times (in a translated manner) and that of the character "o" can be used twice.
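The lazy per-character mesh cache described above can be sketched as follows; the mesh representation is a stand-in, while the first-use caching behavior and the "Hello World" count are taken from the text.

```python
# Sketch of the lazy glyph-mesh cache: a character's 3D mesh is built on
# first use and reused (translated into place) afterwards.

class GlyphCache:
    def __init__(self):
        self.meshes = {}
        self.built = 0  # how many meshes were actually generated

    def mesh_for(self, char):
        if char not in self.meshes:
            self.built += 1
            self.meshes[char] = {"glyph": char}  # stand-in for mesh generation
        return self.meshes[char]

cache = GlyphCache()
for c in "Hello World":
    if c != " ":
        cache.mesh_for(c)
# 10 visible characters, but only 7 distinct meshes were generated:
print(cache.built)
```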
It should be noted here that the generation of the font characters differs from the generation of the remaining video frames. Every object or mesh apart from the 3D meshes of the font characters is provided by the designer (for example, the person who created the scene model, also designated as the "scene modeler"). Instead of 3D meshes for the font characters, the designer places correspondingly named boxes, which are replaced at runtime by the texts entered by the user (that is, by three-dimensional representations of the texts). The height and depth (or, more generally, the dimensions) of a box define the size of the three-dimensional font characters. The texture properties and material properties (for depicting the text characters) are also taken from the box. In other words, the three-dimensional representation of the text characters entered by the user has the same texture and material properties as the box.
In the following, possible user interactions that can be used in rendering the transitions will be described. In general, the appearance of a three-dimensional scene can be influenced from the outside (that is, by the user) via a dialog. In the description files mentioned above, individual attributes can be marked as editable. These attributes are represented in the dialog according to their types. As soon as the user changes an attribute, the changed attribute is taken into account in the scene. In this way, for example, object colors, background images and/or flight trajectories (of objects) can be changed within predetermined ranges.
With respect to the rendering speed, it should also be noted that, in an embodiment of the invention, the rendering can be interactive. Conventional cutting programs typically use the computer's central processor for rendering the effects. This is typically very slow, and the representation is not smooth. The inventive concept (for example the smart 3D engine) therefore uses the 3D graphics hardware (available in almost any computer nowadays). Only if no 3D graphics card is present is the slower CPU-based solution chosen. The use of a scene graph for representing the three-dimensional scenes contributes to a high-performance representation.
It should also be noted that the smart 3D engine is accessed from the outside world in a manner similar to a conventional 2D engine. However, the additional intermediate sequences are considered in the processing of the menus. In addition, most of the logic is encapsulated inside the smart 3D engine.
It should also be noted that the present invention can be implemented in the form of a computer program. In other words, depending on certain implementation requirements of the inventive methods, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium (for example a disk, DVD, CD, ROM, PROM, EPROM or flash memory storing electronically readable control signals) which cooperates with a programmable computer system such that the inventive methods are performed. Generally, the present invention is therefore a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive methods when the computer program product runs on a computer. In other words, the inventive methods are therefore a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.
In summary, the present invention creates a concept for the time-based production of video transitions, menu-to-video transitions and menu-to-menu transitions. In addition, the present invention allows the time-based production of interactive menus. The present invention therefore allows video media to be created in a user-friendly way.

Claims (23)

1. An apparatus for providing a sequence (116; 440, 450, 460, 470, 480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a predefined three-dimensional scene model (200, 300; 431, 446, 456, 466, 476; 810; 2440) modeling a scene and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the apparatus comprising:
a video frame generator (110; 2430) adapted to generate the sequence (440, 450, 460, 470, 480; 1, 2, ..., F-1, F) of a plurality of video frames on the basis of the scene model,
wherein the video frame generator is adapted to parse the scene model in order to identify, within the scene model, one or more scene model objects or surfaces having a predetermined name or a predetermined attribute, to obtain an identified scene model object or surface; and
wherein the video frame generator is adapted to insert a reference into the scene model, the reference indicating that the user-provided content is to be used as a texture of the identified surface, so that the scene model is adapted to the user's request, or
wherein the video frame generator is adapted to set a texture property of the identified object or surface to designate the user-provided content as the texture to be used, so that the scene model is adapted to the user's request; and
wherein the video frame generator is adapted to render the sequence of video frames on the basis of the scene model, such that the sequence of video frames shows a view of the scene described by the scene model as seen by an observer at an observation point, and such that the user-provided content is displayed on a surface (230, 232, 234; 432, 436) of the identified scene model object or on the identified surface, the relative positions of the scene model objects with respect to each other and with respect to the observation point being taken into account,
wherein the scene model (112; 200, 300; 431, 446, 456, 466, 476) defines the scene in terms of a list of geometric objects, properties of the objects appearing in the scene, and a definition of which part of the scene model is visible to an observer at the observation point; and
wherein the scene model (112; 200, 300; 431, 446, 456, 466, 476; 810; 2440) defines the scene in terms of a material property or a surface texture property of at least one scene model object (210; 432).
2. The apparatus (100; 2400) according to claim 1, wherein the scene model (112; 200, 300; 431, 446, 456, 466, 476; 810; 2440) defines the scene in terms of a motion of an object (210; 432; 812) relative to an observer (212; 438, 448, 482).
3. The apparatus (100; 2400) according to claim 1, wherein the video frame generator (110, 2430) is adapted to identify a surface (230, 232, 234; 434, 436) of a scene model object (210; 432) having a predetermined name, material property, texture property or surface property, to obtain an identified surface; and
wherein the video frame generator is adapted to produce the frames (440, 450, 460, 470, 480) of the sequence of video frames (116; 2460) such that frames of a user-provided video sequence (114; 2450) or of a user-provided image are displayed on the identified surface.
4. The apparatus (100; 2400) according to claim 1, wherein the video frame generator (110, 2430) is adapted to identify a first surface (230; 434) of a scene model object (230; 432) and a second surface (232; 436) of the scene model object, wherein the first surface has a first predetermined name, predetermined material property or predetermined texture property, and the second surface has a second predetermined name, predetermined material property or predetermined texture property,
the first predetermined name being different from the second predetermined name, the first predetermined material property being different from the second predetermined material property, or the first predetermined texture property being different from the second predetermined texture property;
wherein the video frame generator is adapted to produce the frames (440, 450, 460, 470, 480) of the video sequence (116; 2460) such that frames of a first user-provided video sequence (114, 2450) or of a first user-provided image are displayed on the first identified surface, and such that frames of a second user-provided video sequence (414; 2450) or of a second user-provided image are displayed on the second identified surface.
5. The apparatus (100; 2400) according to claim 1, wherein the video frame sequence generator (110; 2430) is adapted to identify a first surface (230; 432) of a scene model object (210; 432) and a second surface (232; 436) of the scene model object,
the first surface having a first predetermined name, a first predetermined material property or a first predetermined texture property, and
the second surface having a second predetermined name, a second predetermined material property or a second predetermined texture property,
the first name being different from the second name, the first material property being different from the second material property, or the first texture property being different from the second texture property;
wherein the video frame generator is adapted to produce the video sequence (116, 440, 450, 460, 470, 480; 2460) such that a frame sequence of a first user-provided sequence of video frames (114; 2450) is displayed on the first identified surface, and such that a frame sequence of a second user-provided video sequence (114; 2450) is displayed on the second identified surface.
6. The apparatus (100; 2400) according to claim 5, wherein the apparatus is adapted to receive a user input defining the first user-provided video sequence (114; 2450) and the second user-provided video sequence (114; 2450).
7. The apparatus (100; 2400) according to claim 5, wherein the video frame generator (110; 2430) is adapted to produce the sequence of video frames (116; 440, 450, 460, 470, 480; 2460) such that a first frame (440) of the produced sequence of video frames is a full-frame version of a frame of the first user-provided video sequence, and such that a last frame (480) of the produced sequence of video frames is a full-frame version of a frame of the second user-provided video sequence.
8. The apparatus (100; 2400) according to claim 5, wherein the video frame generator (110; 2430) is adapted to provide a gradual or smooth transition between the first frame (440) of the produced video sequence (116; 440, 450, 460, 470, 480; 2460) and the last frame (480) of the produced sequence of video frames.
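A gradual transition of the kind recited in claims 7 and 8 can be sketched as a per-frame blending weight that starts at 0 (showing only the first user-provided video full-frame) and ends at 1 (showing only the second). The function name and the linear ramp are illustrative assumptions, not taken from the patent:

```python
def blend_weight(frame_index, total_frames):
    """Linear cross-fade weight for frame `frame_index` of `total_frames`:
    0.0 shows only the first video, 1.0 shows only the second video."""
    if total_frames < 2:
        return 1.0
    return frame_index / (total_frames - 1)
```

A renderer could use this weight, for example, to interpolate camera position or surface opacity between the two full-frame end states.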
9. The apparatus (100; 2400) according to claim 1, wherein the video frame generator (110; 2430) is adapted to obtain, as the user-provided content (114; 2450), a user-defined text object displaying a user-defined text;
wherein the video frame generator (110; 2430) is adapted to identify, within the scene model (112; 200, 300; 431, 446, 456, 466, 476, 810; 2440), a scene model object (812) having a predetermined object name or a predetermined object property, the predetermined object name or the predetermined object property identifying the scene model object as a text placeholder object; and
wherein the video frame generator is adapted to produce the sequence (116; 440, 450, 460, 470, 480; 2460) such that the user-defined text object is displayed in replacement of the identified text placeholder object (812).
10. The apparatus (100; 2400) according to claim 9, wherein the video frame generator (110; 2430) is adapted to produce the sequence of video frames (116; 440, 450, 460, 470, 480; 2460) such that, throughout the sequence of video frames, a size of the representation of the user-defined text object is adapted to a size of the text placeholder object (812).
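Adapting a user-defined text to the extent of a text placeholder object, as claims 9 and 10 describe, amounts to scaling the text representation so that it fits the placeholder's size. A minimal 2D sketch with invented names (`fit_text_scale` and its box parameters are assumptions for illustration):

```python
def fit_text_scale(text_width, text_height, box_width, box_height):
    """Uniform scale factor so the rendered text fits entirely inside the
    placeholder box while preserving its aspect ratio."""
    return min(box_width / text_width, box_height / text_height)
```

In an animated scene the placeholder box size may change per frame, so the scale factor would be re-evaluated for each frame of the sequence.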
11. The apparatus (100; 2400) according to claim 1, wherein the apparatus is adapted to select, from a plurality of scene model objects forming the scene model, a subset of selected scene model objects in dependence on a number of menu items (912, 914, 916, 918, 920, 922, 932, 934, 936, 938, 940, 942; 1012, 1014, 1016, 1018, 1020; 1024, 1222, 1224, 1226, 1228; 1230, 1232) to be displayed in the produced sequence of video frames (116; 440, 450, 460, 470, 480; 2460), such that the selected scene model objects describe a sequence of video frames (116; 440, 450, 460, 470, 480; 2460) in which the number of menu items displayed in the video sequence is adapted to the number of menu items to be displayed, and
wherein the video frame generator is adapted to generate the sequence of video frames on the basis of the selected scene model objects.
12. The apparatus (100; 2400) according to claim 1, wherein the apparatus comprises a highlight-region scene model object identifier adapted to determine, from the scene model (112; 200, 300; 431, 446, 456, 466, 476; 2440), a set comprising at least one highlight-region scene model object,
the highlight-region scene model object having a predetermined object name or object property; and
wherein the apparatus comprises a highlight-region description provider adapted to provide a description of a highlight region, the highlight-region description defining a region of the video frames (440, 450, 460, 470, 480) in which at least one object of the set of highlight-region scene model objects is displayed.
13. The apparatus (100; 2400) according to claim 12, wherein the highlight-region description provider is adapted to describe the highlight region as a region of the video frames (440, 450, 460, 470, 480) defined by all pixels in which a highlight-region scene model object is displayed.
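The pixel-accurate highlight description of claim 13 (and the monochrome image of claim 18 below) can be sketched as a binary mask derived from a per-pixel object-ID buffer produced by the renderer. The buffer layout and the names `highlight_mask` and `object_id_buffer` are assumptions for illustration:

```python
def highlight_mask(object_id_buffer, highlight_ids, fg=255, bg=0):
    """Monochrome mask: `fg` wherever a highlight-group object covers the
    pixel, `bg` everywhere else."""
    return [[fg if pid in highlight_ids else bg for pid in row]
            for row in object_id_buffer]

# 2x3 toy buffer: object 3 belongs to the highlight group, object 0 does not.
ids = [[0, 3, 3],
       [0, 0, 3]]
mask = highlight_mask(ids, {3})
```

A DVD-style player could then use such a mask to know which pixels to tint when the corresponding menu button is selected.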
14. A method for providing a sequence (116; 440, 450, 460, 470, 480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a predefined three-dimensional scene model (200, 300; 431, 446, 456, 466, 476; 810; 2440) modeling a scene and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the method comprising:
generating the sequence (440, 450, 460, 470, 480; 1, 2, ..., F-1, F) of a plurality of video frames on the basis of the scene model,
wherein the scene model defines the scene in terms of a list of geometric objects, properties of the objects appearing in the scene, and a definition of which part of the scene model is visible to an observer at an observation point, and
wherein the scene model defines the scene in terms of a material property or surface property of at least one scene model object;
wherein generating the sequence of a plurality of video frames comprises:
parsing the scene model in order to identify (1830), within the scene model, one or more scene model objects or surfaces having a predetermined name or a predetermined attribute, to obtain an identified scene model object or surface;
inserting a reference into the scene model, the reference indicating that the user-provided content is to be used as a texture of the identified surface, so that the scene model is adapted to the user's request, or
setting a texture property of the identified object or surface to designate the user-provided content as the texture to be used, so that the scene model is adapted to the user's request; and
rendering (1840) the sequence of video frames on the basis of the scene model, such that the sequence of video frames shows a view of the scene described by the scene model as seen by an observer at the observation point, and such that the user-provided content is displayed on a surface (230, 232, 234; 432, 436) of the identified scene model object or on the identified surface, the relative positions of the scene model objects with respect to each other and with respect to the observation point being taken into account.
15. An apparatus (2400) for creating a menu structure of a video medium in dependence on a scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440) modeling a predefined scene, in dependence on information defining at least one menu-structure-related property, and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the apparatus comprising:
an apparatus (100; 2430) for providing a sequence of video frames (116; 440, 450, 460, 470, 480; 2460) according to claim 1,
wherein the apparatus (2430) for providing the sequence of video frames is adapted to produce the sequence of video frames on the basis of the scene model, on the basis of the additional information defining the at least one menu-structure-related property, and on the basis of the user-provided content.
16. The apparatus (2400) according to claim 15, wherein the menu-structure-related information comprises information about a grouping of elements;
wherein an i-th group of elements of the scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440) describes an i-th menu button (912, 914, 916, 918, 920, 922, 932, 934, 936, 938, 940, 942, 1012, 1014, 1016, 1018, 1020, 1024, 1222, 1224, 1226, 1228, 1230, 1232, 1410, 1412, 1414, 1416) for accessing a user-provided sequence of video frames (114; 2450);
wherein the apparatus (110; 2430) for providing the sequence of video frames (116; 440, 450, 460, 470, 480; 2460) is adapted to receive information about a number of user-provided video sequences to be included in the video medium;
wherein the apparatus (110; 2430) for providing the sequence of video frames is adapted to determine, using the information about the number of user-provided sequences of video frames, a number of menu buttons required for accessing the user-provided video sequences;
wherein the apparatus (110; 2430) for providing the sequence of video frames is adapted to identify groups of elements in the scene model, each identified group of elements describing a menu button;
wherein the apparatus (110; 2430) for providing the sequence of video frames is adapted to select a plurality of groups of elements from the scene model, each selected group of elements describing a menu button, such that the number of menu buttons described by the selected groups of elements is adapted to the required number of menu buttons for accessing the user-provided video sequences; and
wherein the apparatus (110; 2430) for providing the video sequence is adapted to produce the sequence of video frames such that the sequence of video frames displays the elements of the selected groups of elements, and such that additional objects of the scene model, which describe menu buttons originally intended for accessing user-provided sequences, are removed or reduced.
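The button-count adaptation of claim 16 can be sketched as: count the user-provided titles, keep that many button groups from the scene model, and drop the rest from the rendered menu. The function and group names are illustrative assumptions:

```python
def select_button_groups(button_groups, num_titles):
    """Keep as many button groups as there are titles to access;
    the remaining groups are removed from the rendered menu."""
    selected = button_groups[:num_titles]
    removed = button_groups[num_titles:]
    return selected, removed

# A template scene model with four button groups, but only two user titles.
groups = ["button1", "button2", "button3", "button4"]
selected, removed = select_button_groups(groups, 2)
```

The renderer would then display only the elements of `selected` and suppress (or shrink) the objects in `removed`.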
17. The apparatus (2400) according to claim 15, wherein the menu-structure-related information comprises information about which elements of the scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440) belong to a highlight group;
wherein the apparatus (110; 2430) for providing the sequence of video frames (116; 440, 450, 460, 470, 480; 2460) is adapted to produce a description of a region of the video frames (440, 450, 460, 470, 480) in which the objects of the highlight group are displayed.
18. The apparatus according to claim 17, wherein the description of the region of the video frames (440, 450, 460, 470, 480) in which the objects of the highlight group are displayed comprises a monochrome image having pixels of a first color where an object of the highlight group is displayed and pixels of a second color where no object of the highlight group is displayed.
19. The apparatus (2400) according to claim 15, wherein the menu-structure-related information comprises information about which type of video transition is described by the scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440);
wherein the apparatus for creating the menu structure comprises an apparatus for inserting the sequence of video frames (116; 440, 450, 460, 470, 480; 2460) produced by the video frame generator (110; 2430) into the menu structure of the video medium;
wherein the apparatus for creating the menu structure is adapted to determine a position of the sequence of video frames within the menu structure in dependence on the information about which type of video transition is described by the scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440); and
wherein the apparatus for creating the menu structure is adapted to recognize and handle at least one of the following types of video transitions:
a menu-to-menu transition,
a blank-screen-to-menu transition,
a menu-to-video-frame-sequence transition,
a video-frame-sequence-to-menu transition,
a video-frame-sequence-to-video-frame-sequence transition.
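The five transition types of claim 19 govern where a rendered sequence is placed within the menu structure. A sketch mapping each type to an illustrative placement rule (the enum members and placement strings are invented for illustration, not taken from the patent):

```python
from enum import Enum

class Transition(Enum):
    """The five transition types recognized by the menu-structure creator."""
    MENU_TO_MENU = "between two menus"
    BLANK_TO_MENU = "before the first menu"
    MENU_TO_VIDEO = "between a menu and a title"
    VIDEO_TO_MENU = "between a title and a menu"
    VIDEO_TO_VIDEO = "between two titles"

def placement(transition):
    """Illustrative placement of a rendered sequence in the menu structure."""
    return transition.value
```

An authoring tool could read the transition type from the scene model's menu-structure-related information and use such a mapping to decide the insertion point on the video medium.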
20. A method for creating a menu structure of a video medium in dependence on a scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440) modeling a predefined scene, in dependence on menu-structure-related information defining at least one menu-structure-related property, and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the method comprising:
providing a sequence of video frames (116; 440, 450, 460, 470, 480; 2460) according to claim 14,
wherein providing the sequence of video frames comprises: providing the sequence of video frames on the basis of the scene model, on the basis of the additional information defining the at least one menu-structure-related property, and on the basis of the user-provided content.
21. An apparatus (100; 2400) for providing a sequence (116; 440, 450, 460, 470, 480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a predefined three-dimensional scene model (200, 300; 431, 446, 456, 466, 476; 810; 2440) modeling a scene and in dependence on user-provided content (114; 2450) in the form of one or more three-dimensional objects, the scene model comprising at least one three-dimensional scene model object (210; 432; 812) having an object name or object property, the apparatus comprising:
a video frame generator (110; 2430) adapted to generate the sequence (440, 450, 460, 470, 480; 1, 2, ..., F-1, F) of a plurality of video frames on the basis of the scene model,
wherein the video frame generator is adapted to parse the scene model in order to identify, within the scene model, one or more scene model objects having a predetermined object name or a predetermined object property, to obtain an identified three-dimensional scene model object;
wherein the video frame generator is adapted to replace the identified scene model object with the user-provided content, so that the scene model is adapted to the user's request; and
wherein the video frame generator is adapted to render the sequence of video frames on the basis of the scene model, such that the user-provided content is displayed in replacement of the identified scene model object (812);
wherein the scene model (112; 200, 300; 431, 446, 456, 466, 476) defines the scene in terms of a list of geometric objects and properties of the objects appearing in the scene.
22. An apparatus for providing a sequence (116; 440, 450, 460, 470, 480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a predefined three-dimensional scene model (200, 300; 431, 446, 456, 466, 476; 810; 2440) modeling a scene and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the apparatus comprising:
means for generating the sequence (440, 450, 460, 470, 480; 1, 2, ..., F-1, F) of a plurality of video frames on the basis of the scene model,
wherein the scene model defines the scene in terms of a list of geometric objects, properties of the objects appearing in the scene, and a definition of which part of the scene model is visible to an observer at an observation point, and
wherein the scene model defines the scene in terms of a material property or surface property of at least one scene model object;
wherein the means for generating the sequence of a plurality of video frames comprises:
means for parsing the scene model in order to identify (1830), within the scene model, one or more scene model objects or surfaces having a predetermined name or a predetermined attribute, to obtain an identified scene model object or surface;
means for inserting a reference into the scene model, the reference indicating that the user-provided content is to be used as a texture of the identified surface, so that the scene model is adapted to the user's request, or
means for setting a texture property of the identified object or surface to designate the user-provided content as the texture to be used, so that the scene model is adapted to the user's request; and
means for rendering (1840) the sequence of video frames on the basis of the scene model, such that the sequence of video frames shows a view of the scene described by the scene model as seen by an observer at the observation point, and such that the user-provided content is displayed on a surface (230, 232, 234; 432, 436) of the identified scene model object or on the identified surface, the relative positions of the scene model objects with respect to each other and with respect to the observation point being taken into account.
23. An apparatus for creating a menu structure of a video medium in dependence on a scene model (112; 200, 300; 431, 446, 456, 466, 476; 800; 2440) modeling a predefined scene, in dependence on menu-structure-related information defining at least one menu-structure-related property, and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object property, the apparatus comprising:
means for providing a sequence of video frames (116; 440, 450, 460, 470, 480; 2460) according to claim 21,
wherein the means for providing the sequence of video frames comprises:
means for providing the sequence of video frames on the basis of the scene model, on the basis of the additional information defining the at least one menu-structure-related property, and on the basis of the user-provided content.
CN200780008655.1A 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program Expired - Fee Related CN101401130B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US78100606P 2006-03-10 2006-03-10
EP06005001.0 2006-03-10
EP06005001 2006-03-10
US60/781,006 2006-03-10
PCT/EP2007/000024 WO2007104372A1 (en) 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program

Publications (2)

Publication Number Publication Date
CN101401130A CN101401130A (en) 2009-04-01
CN101401130B true CN101401130B (en) 2012-06-27

Family

ID=40518515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200780008655.1A Expired - Fee Related CN101401130B (en) 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program

Country Status (3)

Country Link
JP (1) JP4845975B2 (en)
CN (1) CN101401130B (en)
RU (1) RU2433480C2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2523980C2 (en) * 2012-10-17 2014-07-27 Корпорация "САМУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method and system for displaying set of multimedia objects on 3d display
CN103325135B (en) * 2013-07-17 2017-04-12 天脉聚源(北京)传媒科技有限公司 Resource display method, device and terminal
CN107180136A (en) * 2017-06-02 2017-09-19 王征 A kind of system and method for the 3D rooms texture loading based on interior wall object record device
WO2022124419A1 (en) * 2020-12-11 2022-06-16 株式会社 情報システムエンジニアリング Information processing apparatus, information processing method, and information processing system
CN112947817B (en) * 2021-02-04 2023-06-09 汉纳森(厦门)数据股份有限公司 Page switching method and device for intelligent equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000196971A (en) * 1998-12-25 2000-07-14 Matsushita Electric Ind Co Ltd Video display device

Also Published As

Publication number Publication date
RU2008140163A (en) 2010-04-20
CN101401130A (en) 2009-04-01
JP4845975B2 (en) 2011-12-28
RU2433480C2 (en) 2011-11-10
JP2009529736A (en) 2009-08-20

Similar Documents

Publication Publication Date Title
US8462152B2 (en) Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program
CN102752639B (en) Metadata is used to process the method and apparatus of multiple video flowing
US8174523B2 (en) Display controlling apparatus and display controlling method
CN100471255C (en) Method for making and playing interactive video frequency with heat spot zone
CN100364322C (en) Method for dynamically forming caption image data and caption data flow
CN101193250B (en) System and method for generating frame information for moving images
US20100156893A1 (en) Information visualization device and information visualization method
WO2008004236A2 (en) Automatic generation of video from structured content
CN102224738A (en) Extending 2d graphics in a 3d gui
CN101401130B (en) Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program
CN101095130B (en) Methods and apparatuses for authoring declarative content for a remote platform
EP2428957A1 (en) Time stamp creation and evaluation in media effect template
US9620167B2 (en) Broadcast-quality graphics creation and playout
Grahn The media9 Package, v1. 14
CN116450588A (en) Method, device, computer equipment and storage medium for generating multimedia file
van Lammeren Geodata visualization: a rich picture of the future
Lee et al. Efficient 3D content authoring framework based on mobile AR
CN116774902A (en) Virtual camera configuration method, device, equipment and storage medium
Ming Post-Production of Digital Film and Television with Development of Virtual Reality Image Technology-Advance Research Analysis
KR20120137967A (en) Image content processing and sales system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: Karlsruhe

Patentee after: NERO AG

Address before: Byrd, Germany

Patentee before: Nero AG

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120627

Termination date: 20220103