WO2004066216A1 - Content synthesizing apparatus, content synthesizing method, content synthesizing program, computer-readable recording medium recording a content synthesizing program, data structure of content data, and computer-readable recording medium recording content data - Google Patents
- Publication number
- WO2004066216A1 (PCT/JP2004/000561)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content data
- data
- content
- script
- synthesizing
- Prior art date
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Definitions
- The present invention relates to a content synthesizing apparatus, a content synthesizing method, a content synthesizing program, a computer-readable recording medium recording the content synthesizing program, a data structure of content data, and a computer-readable recording medium recording content data.
- More particularly, the present invention relates to a content synthesizing apparatus suitable for synthesizing content data, a content synthesizing method, a content synthesizing program, a computer-readable recording medium recording the content synthesizing program, a data structure of the content data, and a computer-readable recording medium on which the content data is recorded.
- One object of the present invention is to provide a content synthesizing apparatus capable of controlling the synthesizing process from the content data side, a content synthesizing method, a content synthesizing program, a computer-readable recording medium recording the content synthesizing program, a data structure of content data, and a computer-readable recording medium on which the content data is recorded.
- Another object of the present invention is to provide a content synthesizing apparatus, a content synthesizing method, a content synthesizing program, and a computer-readable recording medium recording the content synthesizing program, none of which requires a new synthesizing script to be prepared for synthesizing content data.
- Another object of the present invention is to provide a data structure of content data, and a computer-readable recording medium on which the content data is recorded.
- A content synthesizing device receives input of first content data, which includes a synthesizing script describing the synthesis of content data, and of second content data.
- The content synthesizing device receives the input of the first content data, which includes the synthesizing script describing the synthesis of content data, and of the second content data, and synthesizes the input first content data with the input second content data based on the synthesizing script included in the input first content data. The synthesizing process is therefore controlled by the synthesizing script included in the first content data.
- Because the first content data includes the synthesizing script, there is no need to prepare a new synthesizing script when the first content data is synthesized with the second content data. As a result, it is possible to provide a content synthesizing device that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing the content data.
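The idea of first content data that carries its own synthesizing script can be sketched as follows. This is an illustrative sketch only: the dictionary layout, the `synthesis_script` field, and the `append` operation are assumed names, not the patent's actual implementation.

```python
def synthesize(first, second):
    """Combine `second` into `first`, driven by the script carried inside `first`."""
    script = first.get("synthesis_script")
    if script is None:
        raise ValueError("first content data carries no synthesizing script")
    # Copy the first content data so the inputs are left untouched.
    result = {"header": dict(first["header"]),
              "key_frames": list(first["key_frames"])}
    for step in script:
        if step["op"] == "append":
            # Splice the second content's key frames in at the scripted position.
            pos = step["position"]
            result["key_frames"][pos:pos] = second["key_frames"]
    result["header"]["frames"] = len(result["key_frames"])
    return result

first = {
    "header": {"frames": 2},
    "key_frames": ["F1", "F2"],
    "synthesis_script": [{"op": "append", "position": 2}],
}
second = {"header": {"frames": 1}, "key_frames": ["G1"]}
combined = synthesize(first, second)
```

Note that only `first` carries a script; `second` is plain data, which mirrors the point that no separate synthesizing script needs to be prepared.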
- The device further comprises an attribute determining unit that determines an attribute of the second content data, the synthesizing script includes a script corresponding to each of a plurality of attributes of content data, and the synthesizing processing unit synthesizes the input first content data with the input second content data based on the script corresponding to the determined attribute.
- The attribute of the second content data is determined by the content synthesizing device, and the first content data is synthesized with the second content data based on the script, among those in the synthesizing script included in the first content data, that corresponds to the determined attribute. The synthesizing process is therefore controlled by a script corresponding to the attribute of the second content data. As a result, the synthesizing process can be controlled from the content data side, and a synthesizing process according to the attribute of the content data can be performed.
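Selecting a per-attribute script from the synthesizing script can be sketched as a simple lookup. The attribute names and the `default` fallback entry are assumptions for illustration.

```python
def pick_script(synthesis_script, attribute):
    """Return the sub-script matching the second content's attribute,
    falling back to a default entry when the attribute is not listed."""
    return synthesis_script.get(attribute, synthesis_script.get("default", []))

# One sub-script per content-data attribute, as described above.
scripts_by_attribute = {
    "animation": [{"op": "add", "position": 0}],
    "still_image": [{"op": "insert", "frame": 1}],
    "default": [],
}
chosen = pick_script(scripts_by_attribute, "animation")
```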
- The device further includes a time acquisition unit that acquires the current time, the synthesizing script includes scripts corresponding to the time of synthesis, and the synthesizing processing unit synthesizes the input first content data with the input second content data based on the script corresponding to the acquired current time.
- The current time is acquired by the content synthesizing device, and the first content data is synthesized with the second content data based on the script, among those in the synthesizing script included in the first content data, that corresponds to the acquired current time. The synthesizing process is therefore controlled by a script corresponding to the time of synthesis. As a result, the synthesizing process can be controlled from the content data side, and a synthesizing process according to the time at which the content data is synthesized can be performed.
- The device further includes a position acquisition unit that acquires the current position of the content synthesizing device, the synthesizing script includes scripts corresponding to positions, and the synthesizing processing unit synthesizes the input first content data with the input second content data based on the script corresponding to the acquired current position.
- The current position of the content synthesizing device is acquired, and the first content data is synthesized with the second content data based on the script, among those in the synthesizing script included in the first content data, that corresponds to the acquired current position. The synthesizing process is therefore controlled by a script corresponding to the place of synthesis. As a result, the synthesizing process can be controlled from the content data side, and a synthesizing process according to the place where the content data is synthesized can be performed.
- The synthesizing script includes another synthesizing script, and the device further includes an adding unit that causes the synthesized content data to include the other synthesizing script.
- The synthesizing script includes location information indicating the location of another synthesizing script, and the device further includes an acquisition unit that acquires the other synthesizing script indicated by the location information and an adding unit that causes the synthesized content data to include the acquired other synthesizing script.
- The content synthesizing device acquires the other synthesizing script indicated by the location information included in the synthesizing script, and adds the acquired other synthesizing script to the synthesized content data. It is therefore possible to control the synthesizing process from the newly synthesized content data side.
- The first content data and the second content data include key frames defining frames of the animation data, and the synthesizing script includes a script describing insertion of data included in a key frame of the second content data into a predetermined key frame of the first content data.
- Based on the synthesizing script, which includes a script describing that data included in a key frame of the second content data is inserted into a predetermined key frame of the first content data, the content synthesizing device inserts the data included in the key frame of the input second content data into the predetermined key frame of the input first content data.
- The synthesizing process of importing the data included in the second content data into the first content data is thus controlled by the synthesizing script included in the content data. As a result, the synthesizing process of importing other content data can be controlled from the content data side.
- The first content data and the second content data include key frames defining frames of the animation data, and the synthesizing script includes a script describing that a key frame included in the second content data is to be added at a predetermined position in the first content data.
- Based on the synthesizing script, which includes a script describing that a key frame is added to a predetermined portion of the first content data, the content synthesizing device adds the key frame included in the input second content data to the predetermined portion of the input first content data.
- The synthesizing process of adding the second content data to the first content data is controlled by the synthesizing script included in the first content data. As a result, it is possible to control, from the content data side, the synthesizing process of adding other content data.
- The first content data includes a key frame that defines a frame of the animation data, the second content data is data that can be included in a key frame, and the synthesizing script includes a script describing that predetermined data included in a key frame of the first content data is changed to the second content data.
- Based on the synthesizing script, which includes a script describing that predetermined data included in a key frame of the first content data is changed to the second content data, the content synthesizing device changes the predetermined data included in the key frame of the input first content data to the input second content data.
- The synthesizing process of changing predetermined data included in the first content data to the second content data is controlled by the synthesizing script included in the first content data. As a result, it is possible to control, from the content data side, the synthesizing process of changing content data to other content data.
- the synthesizing script includes a script describing that a predetermined portion of the first content data is deleted.
- Based on the synthesizing script, which includes a script describing that a predetermined portion of the first content data is deleted, the content synthesizing device deletes the predetermined portion of the input first content data. The synthesizing process of deleting a predetermined portion of the first content data is therefore controlled by the synthesizing script included in the first content data. As a result, it is possible to control, from the content data side, the synthesizing process of deleting a predetermined portion of the content data.
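The editing operations a synthesizing script can describe (inserting data into a key frame, adding key frames, changing data, and deleting a portion) can be sketched as follows. Key frames are modeled here as lists of element names; the operation names and step fields are illustrative assumptions.

```python
def apply_step(frames, step, other_frames):
    """Apply one synthesizing-script step to the list of key frames in place."""
    op = step["op"]
    if op == "insert":
        # Merge the other content's frame data into one existing key frame.
        frames[step["at"]].extend(other_frames[step["src"]])
    elif op == "add":
        # Splice the other content's key frames in at a position.
        frames[step["at"]:step["at"]] = [list(f) for f in other_frames]
    elif op == "change":
        # Replace named data inside a key frame with other data.
        f = frames[step["at"]]
        f[f.index(step["old"])] = step["new"]
    elif op == "delete":
        # Remove a predetermined key frame entirely.
        del frames[step["at"]]
    return frames

frames = [["circle"], ["circle", "square"], ["square"]]
other = [["star"]]

apply_step(frames, {"op": "insert", "at": 0, "src": 0}, other)
apply_step(frames, {"op": "change", "at": 1, "old": "square", "new": "star"}, other)
apply_step(frames, {"op": "delete", "at": 2}, other)
```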
- A content synthesizing device includes an input receiving unit that receives input of first content data, which includes location information indicating the location of a synthesizing script describing the synthesis of content data, and of second content data; an acquisition unit that acquires the synthesizing script indicated by the location information included in the input first content data; and a synthesizing unit that synthesizes the input first content data with the input second content data based on the acquired synthesizing script.
- The content synthesizing device receives the input of the first content data, which includes the location information indicating the location of the synthesizing script describing the synthesis of content data, and of the second content data, acquires the synthesizing script indicated by the location information included in the input first content data, and synthesizes the input first content data with the input second content data based on the acquired synthesizing script. The synthesizing process is therefore controlled by the synthesizing script indicated by the location information included in the first content data. Also, since the first content data includes the location information of the synthesizing script, there is no need to prepare a new synthesizing script when synthesizing the first content data with the second content data. As a result, it is possible to provide a content synthesizing device that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing the content data.
- The synthesizing script includes location information indicating the location of another synthesizing script, the acquisition unit further acquires the other synthesizing script indicated by that location information, and an adding unit causes the synthesized content data to include the acquired other synthesizing script.
- The content synthesizing device further acquires the other synthesizing script indicated by the location information included in the synthesizing script, and adds the acquired other synthesizing script to the synthesized content data. The synthesizing process can therefore be controlled from the newly synthesized content data side.
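Resolving a synthesizing script through its location information, and carrying a follow-on script location in the synthesized result, can be sketched as below. The registry stands in for scripts reachable over the network 500; all paths and field names are assumptions for illustration.

```python
SCRIPT_REGISTRY = {  # stands in for scripts fetched over the network
    "scripts/append.json": {"steps": [{"op": "append"}],
                            "next_location": "scripts/cleanup.json"},
    "scripts/cleanup.json": {"steps": [{"op": "delete", "at": 0}]},
}

def fetch_script(location):
    """Acquire the synthesizing script indicated by the location information."""
    return SCRIPT_REGISTRY[location]

def synthesize_by_location(first, second):
    script = fetch_script(first["script_location"])
    result = {"key_frames": first["key_frames"] + second["key_frames"]}
    # If the acquired script names another script, embed that location in the
    # result so the next synthesis is controlled from the newly synthesized data.
    if "next_location" in script:
        result["script_location"] = script["next_location"]
    return result

result = synthesize_by_location(
    {"script_location": "scripts/append.json", "key_frames": ["F1"]},
    {"key_frames": ["G1"]},
)
```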
- A content synthesizing method is a method of synthesizing content data by a computer, comprising the steps of receiving input of first content data including a synthesizing script and of second content data, and synthesizing the input first content data with the input second content data based on the synthesizing script included in the input first content data.
- According to the present invention, it is possible to provide a content synthesizing method that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing the content data.
- A content synthesizing method is a method of synthesizing content data by a computer, comprising the steps of receiving input of first content data, which includes location information indicating the location of a synthesizing script, and of second content data; acquiring the synthesizing script indicated by the location information included in the input first content data; and synthesizing the input first content data with the input second content data based on the acquired synthesizing script.
- According to the present invention, it is possible to provide a content synthesizing method that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing the content data.
- A content synthesizing program is a program for synthesizing content data that causes a computer to execute the steps of receiving input of first content data including a synthesizing script and of second content data, and synthesizing the input first content data with the input second content data based on the synthesizing script included in the input first content data.
- As a result, it is possible to provide a content synthesizing program that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing content data, as well as a computer-readable recording medium recording the content synthesizing program.
- A content synthesizing program is a program for synthesizing content data that causes a computer to execute the steps of receiving input of first content data, which includes location information indicating the location of a synthesizing script, and of second content data; acquiring the synthesizing script indicated by the location information included in the input first content data; and synthesizing the input first content data with the input second content data based on the acquired synthesizing script.
- As a result, it is possible to provide a content synthesizing program that can control the synthesizing process from the content data side and that does not need a newly prepared synthesizing script for synthesizing content data, as well as a computer-readable recording medium recording the content synthesizing program.
- The data structure of the content data includes content and a synthesizing script used when a synthesizing process for synthesizing the content data with other content data is executed by a computer.
- the combining process for combining other content data with the content data is executed on the computer by the combining script included in the content data.
- The content data and the other content data include key frames that define frames of the animation data, and the synthesizing script includes a script describing that a key frame included in the other content data is added to a predetermined portion of the content data.
- Based on the synthesizing script, which includes a script describing the addition of a key frame, the computer adds the key frame included in the input other content data to the predetermined portion of the input content data.
- The synthesizing process of adding other content data to the content data is controlled by the synthesizing script included in the content data. As a result, the synthesizing process of adding other content data can be controlled from the content data side.
- The content data includes a key frame that defines a frame of the animation data, the other content data is data that can be included in a key frame, and the synthesizing script includes a script describing that predetermined data included in a key frame of the content data is changed to the other content data.
- Based on this synthesizing script, the computer changes the predetermined data included in the key frame of the input content data to the input other content data. The synthesizing process of changing predetermined data included in the content data to other content data is therefore controlled by the synthesizing script included in the content data. As a result, the synthesizing process of changing content data to other content data can be controlled.
- the synthesizing script includes a script that describes deleting a predetermined portion of the content data.
- the computer deletes a predetermined portion of the input content data based on a synthesis script including a script describing that a predetermined portion of the content data is deleted. For this reason, the synthesizing process for deleting a predetermined portion included in the content data is controlled by the synthesizing script included in the content data. As a result, it is possible to control the synthesizing process of deleting a predetermined portion of the content data from the content data side.
- FIG. 1 is a block diagram schematically showing the configuration of the content synthesizing apparatus according to the first embodiment.
- FIG. 2 is a diagram schematically illustrating the function of the content synthesizing apparatus according to the first embodiment.
- FIG. 3 is a flowchart illustrating a flow of a content synthesizing process executed by the content synthesizing apparatus according to the first embodiment.
- FIGS. 4A and 4B are diagrams illustrating the data structure of content data before being combined in the first combination example in the first embodiment.
- FIG. 5 is a diagram showing a data structure of the content data after being synthesized in the first synthesis example in the first embodiment.
- FIGS. 6A, 6B, 6C, and 6D show the images displayed when the content synthesized in the first synthesis example of the first embodiment is reproduced.
- FIGS. 7A and 7B are diagrams illustrating the data structure of content data before being combined in the second combination example in the first embodiment.
- FIG. 8 is a diagram showing the data structure of the content data after being synthesized in the second synthesis example in the first embodiment.
- FIGS. 9A, 9B, 9C, and 9D are diagrams for explaining the animation displayed when the content data synthesized in the second synthesis example in the first embodiment is reproduced.
- FIGS. 10A and 10B are diagrams showing the data structure of content data before being combined in the third combination example in the first embodiment.
- FIG. 11 is a diagram illustrating a data structure of the content data after being synthesized in the third synthesis example in the first embodiment.
- FIGS. 12A and 12B are diagrams showing the data structure of content data before being combined in the fourth combining example in the first embodiment.
- FIG. 13 is a diagram illustrating a data structure of the content data after being synthesized in the fourth synthesis example in the first embodiment.
- FIG. 14 is a diagram schematically illustrating the function of the content synthesizing apparatus according to the second embodiment.
- FIG. 15 is a flowchart showing the flow of a data synthesizing process executed by the content synthesizing device in the second embodiment.
- FIG. 16 is a flowchart showing the flow of an attribute discriminating process executed by the content synthesizing device according to the second embodiment.
- FIGS. 17A, 17B, and 17C are diagrams showing the data structure of content data before being combined in the first combination example in the second embodiment.
- FIGS. 18A, 18B, 18C, 18D, 18E, and 18F illustrate the animations displayed when the content data synthesized in the first synthesis example in the second embodiment is reproduced.
- FIG. 19 is a diagram illustrating an outline of functions of a content synthesizing apparatus according to the third embodiment.
- FIG. 20 is a flowchart illustrating a flow of a data synthesizing process performed by the content synthesizing apparatus according to the third embodiment.
- FIGS. 21A and 21B are diagrams illustrating a data structure of content data before being combined in the first combination example in the third embodiment.
- FIGS. 22A, 22B, 22C, 22D, 22E, and 22F are diagrams for explaining the animation displayed when the content data synthesized in the first synthesis example of the third embodiment is reproduced.
- FIG. 23 is a diagram illustrating an outline of functions of the content synthesizing apparatus according to the fourth embodiment.
- FIG. 24 is a flowchart showing the flow of a data synthesizing process executed by the content synthesizing device in the fourth embodiment.
- FIGS. 25A and 25B are diagrams showing the data structure of content data before being combined in the first combination example in the fourth embodiment.
- FIG. 26 is a flowchart illustrating the flow of a data combining process performed by the content combining device according to the fifth embodiment.
- FIGS. 27A and 27B are diagrams illustrating the data structure of content data before being combined in the first combination example in the fifth embodiment.
- FIG. 28 is a diagram illustrating a data structure of content data after being synthesized in the first synthesis example in the fifth embodiment.
- FIGS. 29A and 29B are diagrams illustrating the data structure of content data before being combined in the second combination example in the fifth embodiment.
- FIG. 30 is a diagram showing a data structure of the content data after being synthesized in the second synthesis example in the fifth embodiment.
- FIGS. 31A and 31B are diagrams illustrating the data structure of content data before being combined in the third combination example in the fifth embodiment.
- FIG. 32 is a diagram showing a data structure of the content data after being synthesized in the third synthesis example in the fifth embodiment.
- FIG. 33 is a diagram schematically illustrating functions of a content synthesizing apparatus according to the sixth embodiment.
- FIG. 34 is a flowchart showing the flow of a data synthesizing process executed by the content synthesizing device in the sixth embodiment.
- FIG. 1 is a block diagram schematically illustrating a configuration of the content synthesizing device 100 according to the first embodiment.
- content synthesizing apparatus 100 can be configured with a general-purpose computer such as a personal computer (hereinafter, referred to as “PC (Personal Computer)”).
- The content synthesizing device 100 includes a control unit 110 that controls the entire content synthesizing device 100, a storage unit 130 that stores predetermined information, an input unit 140, an output unit 150, a communication unit 160, and an external storage device 170. The control unit 110, the storage unit 130, the input unit 140, the output unit 150, the communication unit 160, and the external storage device 170 are connected to one another via a bus.
- The control unit 110 consists of a CPU (Central Processing Unit) and CPU auxiliary circuits, and controls the storage unit 130, the input unit 140, the output unit 150, and the external storage device 170. It executes predetermined processing in accordance with the program stored in the storage unit 130, processes data input from the input unit 140, the communication unit 160, and the external storage device 170, and outputs the processed data to the output unit 150, the communication unit 160, or the external storage device 170.
- The storage unit 130 includes a RAM (Random Access Memory) used as a work area necessary for executing programs in the control unit 110, and a ROM (Read Only Memory) storing the programs to be executed in the control unit 110. A magnetic disk storage device such as a hard disk drive (hereinafter, "HDD (Hard Disk Drive)") may be used to supplement the RAM.
- The input unit 140 is an interface for inputting signals from a keyboard, a mouse, or the like, through which necessary information can be input to the content synthesizing device 100.
- The output unit 150 is an interface for outputting signals to a display such as a liquid crystal display device or a cathode ray tube (hereinafter, "CRT (Cathode Ray Tube)"), through which necessary information can be output from the content synthesizing device 100.
- The communication unit 160 is a communication interface for connecting the content synthesizing device 100 to the network 500.
- the content synthesizing device 100 transmits and receives necessary information to and from other PCs and the like via the communication unit 160.
- The external storage device 170 reads programs or data recorded on the recording medium 171 and transmits them to the control unit 110. The external storage device 170 also writes necessary information to the recording medium 171 in accordance with instructions from the control unit 110.
- Computer-readable recording media that fixedly carry a program include magnetic tapes and cassette tapes; magnetic disks such as floppy (R) disks and hard disks; optical disks such as CD-ROMs (Compact Disc Read Only Memory) and DVDs (Digital Versatile Discs); magneto-optical disks such as MO (Magneto-Optical disk) and MD (MiniDisc); memory cards such as IC cards and optical cards; and semiconductor memories such as mask ROM, EPROM (Erasable Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), and flash ROM.
- the recording medium can be a medium that dynamically carries a program so that the program can be downloaded from the network 500.
- FIG. 2 is a diagram illustrating an outline of functions of the content synthesizing device 100 according to the first embodiment.
- control unit 110 of content synthesizing device 100 includes an input receiving unit 111 and a synthesizing processing unit 112.
- the storage unit 130 of the content synthesizing device 100 stores a plurality of pieces of content data.
- the content data includes content data including animation data and a synthesizing script, and content data including animation data.
- The content data may be any data that can be output by a content reproducing device such as a computer, for example moving image data such as animation data, still image data, music data, and graphic data.
- In the present embodiment, the content data includes animation data.
- the animation data includes a key frame which is data defining each frame of the animation data.
- the synthesizing script is information that defines a procedure in a synthesizing process for synthesizing certain content data with other content data, and is used when the synthesizing process is executed by the content synthesizing device 100.
- the synthesis script consists of control contents and parameters.
- the control content indicates the content of the synthesis processing.
- the parameter indicates the target of the compositing process.
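The control-content/parameter split described above can be sketched as a small record. The field names (`control`, `parameters`) and the example operation are assumptions for illustration, not the patent's notation.

```python
# One entry of a synthesizing script: the control content names what the
# synthesis step does, and the parameters name its target.
script_entry = {
    "control": "add_key_frames",
    "parameters": {"position": 2,       # where in the first content data
                   "source": "second"}, # which content data the step operates on
}

def describe(entry):
    """Render a script entry in a human-readable form."""
    return f'{entry["control"]} at position {entry["parameters"]["position"]}'
```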
- In the present embodiment, the synthesizing script is information that defines the procedure of a synthesizing process for synthesizing the animation data with other animation data.
- The content data stored in the storage unit 130 may be received in advance from another PC or the like via the network 500 through the communication unit 160 and stored in the storage unit 130, or may be read from the recording medium 171 by the external storage device 170 and stored in the storage unit 130.
- the input receiving unit 111 receives the input of the content data 10 including the synthesizing script stored in the storage unit 130 and the content data 20.
- the received content data 10 and content data 20 are output to the combination processing unit 112.
- Alternatively, the input receiving unit 111 may directly receive the input of the content data 10 including the synthesizing script and of the content data 20 from another PC or the like via the network 500 through the communication unit 160, or from the recording medium 171 through the external storage device 170.
- The synthesis processing unit 112 synthesizes the animation data included in the content data 10 with the animation data included in the content data 20 based on the synthesizing script included in the content data 10. The synthesis processing unit 112 then stores the synthesized content data 30 in the storage unit 130. Note that the synthesis processing unit 112 may transmit the synthesized content data 30 directly to another PC or the like via the network 500 through the communication unit 160, or may use the external storage device 170 to record it on the recording medium 171.
- FIG. 3 is a flowchart illustrating a flow of the content synthesizing process performed by the content synthesizing device 100 according to the first embodiment.
- In step S11, the input receiving unit 111 receives the input of the content data 10 including the synthesizing script stored in the storage unit 130 and of the content data 20.
- in step S12, the combining processing unit 112 determines whether or not the content data 10 or the content data 20 input in step S11 includes a combining script. If either of the content data includes a synthesizing script (Yes in step S12), the process proceeds to step S13. On the other hand, if no synthesizing script is included in either of the content data (No in step S12), the content synthesizing process ends.
- here, description is given assuming that the content data 10 input in step S11 includes the synthesizing script.
- note that, in step S12, if both of the content data include a synthesizing script, the content synthesizing process may be terminated, or the synthesizing script included in either one of the content data may be used in the subsequent steps.
- in step S13, the data synthesizing process is executed by the synthesis processing unit 112.
- the data synthesizing process is a process of synthesizing, based on the synthesizing script included in the content data 10 input in step S11, the animation data included in the content data 10 with the animation data included in the content data 20 input in step S11.
- in step S14, the content data 30 synthesized in step S13 is stored in the storage unit 130 by the synthesis processing unit 112, and the content synthesizing process ends.
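- The flow of steps S11 to S14 can be sketched as follows. This is an illustrative model only, assuming a plain-dict representation of content data; the names `synthesize`, `script`, and `keyframes` are not part of the embodiment, and the step-S13 merge shown here is a placeholder for the script-driven data synthesizing process described below.

```python
# Hypothetical sketch of the content synthesizing flow of FIG. 3 (steps S11-S14).
# Content data is modeled as a plain dict; all names are illustrative assumptions.

def synthesize(content10, content20, storage):
    # Step S12: does either content data include a synthesizing script?
    script = content10.get("script") or content20.get("script")
    if script is None:
        return None                      # no script: the process simply ends
    # Step S13: the data synthesizing process, driven by the script
    # (here a placeholder concatenation; the real process follows the script)
    merged = {"keyframes": content10["keyframes"] + content20["keyframes"],
              "script_used": script}
    # Step S14: store the synthesized content data 30
    storage.append(merged)
    return merged

storage = []
c10 = {"script": "insert objects", "keyframes": ["kf1", "kf2"]}
c20 = {"keyframes": ["kf1'"]}
result = synthesize(c10, c20, storage)
assert result["script_used"] == "insert objects"
assert len(storage) == 1
```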
- FIGS. 4A and 4B are diagrams illustrating the data structure of content data before being combined in the first combination example in the first embodiment.
- FIG. 4A is a diagram showing the data structure of content data 1A including a composite script.
- content data 1A includes a header, key frame 1 to key frame 4, and a synthesis script.
- the header includes data indicating the attributes of the animation data, such as the display size of the animation data, the number of key frames, and the playback time interval of each key frame.
- a key frame is data that defines a frame of animation data. The time at which each key frame is reproduced is determined from the reproduction time interval of each key frame. The frames between key frames are interpolated according to the frame rate, which indicates the number of frames that can be played back per second by the playback device that plays back the animation data. The key frames and the interpolated frames are then played back sequentially, thereby realizing the animation.
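- The interpolation described above can be sketched as follows, assuming linear interpolation of a figure's position between two key frames; the function name, the coordinate representation, and the choice of linear interpolation are illustrative assumptions, not the embodiment's method.

```python
# Sketch: the frames between two key-frame positions are filled in so that
# playback at the device's frame rate shows smooth motion. Linear interpolation
# and (x, y) tuples are assumptions for illustration.

def interpolate(p0, p1, steps):
    """Positions of the frames strictly between two key-frame positions."""
    return [(p0[0] + (p1[0] - p0[0]) * i / steps,
             p0[1] + (p1[1] - p0[1]) * i / steps) for i in range(1, steps)]

# A figure at (40, 20) in one key frame moving to (40, 80) in the next,
# with four frame intervals between the two key frames:
frames = interpolate((40, 20), (40, 80), steps=4)
assert frames == [(40.0, 35.0), (40.0, 50.0), (40.0, 65.0)]
```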
- Key frame 1 includes object data and image data.
- the object data is data indicating a shape, and includes shape data indicating the shape of the shape and position data indicating the position of the shape.
- the object data indicates that the figure has a circular shape and that the figure is slightly to the left of the upper center of the screen.
- the object data is shown in the form of an image displayed on the screen for easy understanding.
- the image data is data of an image displayed on the background of the animation, for example, data obtained by encoding an image of a pattern, a painting, a photograph, or the like in a predetermined encoding format. It is displayed until a key frame containing another image data is played.
- the key frame 2 includes object data and music data. The object data indicates the same figure as the object data included in the key frame 1, so the two are associated with each other. Here, the object data indicates that the figure is slightly to the left of the upper center of the screen. The music data is data for generating sound as the animation progresses, that is, data in which music or sound effects are encoded in an encoding format that allows a computer to generate sound. The same music is generated until a key frame containing other music data is played.
- Keyframe 3 and keyframe 4 each contain object data. These object data indicate the same figure as the object data included in the key frame 1 and the key frame 2, and are therefore associated with each other. That is, the object data included in the key frames 1 to 4 are associated with each other between the key frames. For this reason, by reproducing the animation data, the figure indicated by the object data is displayed moving as the key frames progress.
- Such an animation system is called a vector animation system.
- the object data included in the key frame 3 indicates that the figure is slightly below the center of the screen and slightly to the left of the center.
- the object data included in the key frame 4 indicates that the figure is slightly to the left of the center of the screen. Specifically, as keyframes 1 to 4 are played, the circular figure is initially slightly to the left of the center at the top of the screen, stays at the same position for a while, then starts moving downwards, After moving slightly below, it will move upward and move to near the center.
- the synthesis script included in the content data 1A includes “input of an object of another file” as a control content, and “key frame 2 to” as a parameter.
- the control content “input of an object of another file” indicates that the object data included in the animation data of another content data 2A is inserted at the target position designated by the parameter.
- the parameter “key frame 2 to” indicates that the target position of the combination process indicated by the control content is key frame 2 and the subsequent key frames of the animation data included in the content data 1A including the combination script.
- FIG. 4B is a diagram showing the data structure of the content data 2A.
- content data 2A includes a header and key frame 1 to key frame 2.
- Key frame 1 and key frame 2 include object data.
- the object data indicates that the shape of the figure is square and that the figure is slightly below the center of the screen.
- the object data included in the key frame 2 indicates the same figure as the figure indicated by the object data included in the key frame 1, and indicates that the figure is slightly above the center of the screen.
- the content synthesizing device 100 accepts the input of the content data 1A and the content data 2A, and determines whether or not the content data 1A or the content data 2A contains a synthesizing script. Since the synthesizing script is included in the content data 1A, the animation data included in the content data 1A is synthesized with the animation data included in the content data 2A based on the synthesizing script, and the content data 3A described later is stored.
- the composition script describes that the object data included in each key frame of the content data 2A is inserted into each key frame from the key frame 2 of the content data 1A including the composition script.
- the content synthesizing device 100 sets the key frame 1 of the content data 1A as a new key frame 1.
- the object data included in the key frame 1 of the content data 2A is inserted into the key frame 2 of the content data 1A, and is set as a new key frame 2.
- the object data included in the key frame 2 of the content data 2A is inserted into the key frame 3 of the content data 1A, and is set as a new key frame 3.
- the key frame 4 of the content data 1A is set as a new key frame 4.
- a header is generated based on the new key frames 1 to 4, and the header and the content data 3A including the new key frames 1 to 4 are synthesized and stored.
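- The first synthesis example described above can be sketched as follows; the dict/list modeling of key frames and object data and the function name `insert_objects` are illustrative assumptions, not the embodiment's actual data format.

```python
# Sketch of the first synthesis example: the object data of each key frame of
# content data 2A is inserted into the key frames of content data 1A, starting
# at key frame 2 (parameter "key frame 2 to"). All names are assumptions.

def insert_objects(target_kfs, source_kfs, start):
    """Insert source object data into target key frames from key frame `start` (1-based)."""
    merged = [dict(kf, objects=list(kf["objects"])) for kf in target_kfs]
    for offset, src in enumerate(source_kfs):
        merged[start - 1 + offset]["objects"] += src["objects"]
    return merged

data1a = [{"objects": ["circle@top-left"]},      # key frame 1
          {"objects": ["circle@top-left"]},      # key frame 2
          {"objects": ["circle@bottom-left"]},   # key frame 3
          {"objects": ["circle@center-left"]}]   # key frame 4
data2a = [{"objects": ["square@below-center"]},  # key frame 1
          {"objects": ["square@above-center"]}]  # key frame 2

data3a = insert_objects(data1a, data2a, start=2)
assert data3a[0]["objects"] == ["circle@top-left"]                       # unchanged
assert data3a[1]["objects"] == ["circle@top-left", "square@below-center"]
assert data3a[2]["objects"] == ["circle@bottom-left", "square@above-center"]
assert data3a[3]["objects"] == ["circle@center-left"]                    # unchanged
```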
- FIG. 5 is a diagram showing a data structure of the content data 3A after being synthesized in the first synthesis example in the first embodiment.
- the content data 3A, in which the content data 1A and the content data 2A are combined by the content synthesizing device 100, includes a header and key frames 1 to 4.
- the header is generated based on the key frames 1 to 4 of the synthesized content data 3A and included in the content data 3A.
- Key frame 1 is the same as key frame 1 of content data 1A described in FIG. 4A.
- the key frame 2 is obtained by inserting the object data included in the key frame 1 of the content data 2A described in FIG. 4B into the key frame 2 of the content data 1A.
- the key frame 3 is obtained by inserting the object data included in the key frame 2 of the content data 2A into the key frame 3 of the content data 1A.
- Key frame 4 is the same as key frame 4 of content data 1A.
- FIGS. 6A, 6B, 6C, and 6D show the images displayed when the content data 3A synthesized in the first synthesis example in the first embodiment is reproduced.
- FIGS. 6A to 6D are display screen diagrams corresponding to each key frame reproduced sequentially. Referring to FIG. 6A, first, in key frame 1, a circular figure is displayed slightly to the left of the upper center of the screen, and an image A represented by image data is displayed on the background. Between key frame 1 and key frame 2, the display screen displayed at the first key frame is continuously displayed.
- referring to FIG. 6B, in key frame 2, a square figure is displayed slightly below the center of the screen, and music A indicated by music data flows. Between keyframe 2 and keyframe 3, the circular figure moves down and the square figure moves up.
- referring to FIG. 6C, in key frame 3, a circular figure is displayed slightly below the center of the screen, slightly to the left of the center, and a square figure is displayed slightly above the center of the screen. Between keyframes 3 and 4, the circular shape moves upward and the square shape gradually disappears. Referring to FIG. 6D, in key frame 4, the circular shape stops slightly to the left of the center of the screen, and the square shape disappears completely.
- note that between the display screens corresponding to the key frames, display screens corresponding to the interpolated frames are displayed.
- as described above, the synthesizing process of inserting the object data included in the content data 2A into the content data 1A is controlled by the synthesizing script included in the content data 1A. As a result, the synthesizing process of inserting another content data 2A can be controlled from the content data 1A side.
- FIGS. 7A and 7B are diagrams illustrating the data structure of content data before being combined in the second combination example in the first embodiment.
- FIG. 7A is a diagram showing a data structure of content data 1B including a composite script.
- content data 1B includes a header, key frame 1 to key frame 4, and a synthesis script.
- Keyframes 1 to 4 each contain object data. Since these object data are the same as the object data included in key frame 1 to key frame 4 of content data 1A described in FIG. 4A, description thereof will not be repeated.
- the synthesizing script included in the content data 1B includes “input of an object in another file” as the first control content, and “keyframe 2 to” as the first parameter.
- the second control content includes “insert control data”, and the second parameter includes “(jump 4) key frame 2”. “Input of an object of another file” was explained with reference to FIG. 4A, so the explanation will not be repeated.
- Control contents “Insert control data” indicates that the target data specified by the parameter in parentheses is inserted into the target position specified by the parameter outside the parentheses.
- the parameter “(jump 4) key frame 2” indicates that the control data to be inserted is “jump 4”, and that the target position of the combining process indicated by the control content is the key frame 2 of the animation data included in the content data 1B including the combining script.
- the control data is data for controlling the playback device when a key frame of the animation data is played back.
- the reproducing device reproduces the key frame based on the control data when reproducing the key frame.
- FIG. 7B is a diagram showing a data structure of the content data 2B.
- Content data 2B shown in FIG. 7B is the same as content data 2A described in FIG. 4B, and therefore description thereof will not be repeated.
- the content synthesizing device 100 receives the input of the content data 1B and the content data 2B, and determines whether or not the content data 1B or the content data 2B includes a synthesizing script. Since the content data 1B includes the synthesizing script, the animation data included in the content data 1B is synthesized with the animation data included in the content data 2B based on the synthesizing script, and the content data 3B described later is stored.
- the composite script includes a script describing that the object data included in each key frame of the animation data of the content data 2B is inserted into each key frame from the key frame 2 of the animation data of the content data 1B including the composite script.
- the composite script also describes that the control data “jump 4”, indicating a jump to key frame 4, is inserted into the key frame 2 of the animation data of the content data 1B.
- the content synthesizing device 100 sets the key frame 1 of the content data 1B as a new key frame 1.
- the object data included in the key frame 1 of the content data 2B and the control data “jump 4” are inserted into the key frame 2 of the content data 1B to obtain a new key frame 2.
- the object data included in the key frame 2 of the content data 2B is inserted into the key frame 3 of the content data 1B to obtain a new key frame 3.
- the key frame 4 of the content data 1B is set as a new key frame 4.
- a header is generated based on the new key frames 1 to 4, and the content data 3B including the header and the new key frames 1 to 4 is synthesized and stored.
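- The second synthesis example can be sketched as follows: object data is inserted from key frame 2 onward, and the control data “jump 4” is additionally inserted into key frame 2. The dict representation and the function name are illustrative assumptions, not the embodiment's format.

```python
# Sketch of the second synthesis example: insert the object data of content
# data 2B from key frame 2 of content data 1B, and insert the control data
# "jump 4" into key frame 2. All names are assumptions for illustration.

def synthesize_2b(target_kfs, source_kfs, start, control_at, control):
    merged = [dict(kf) for kf in target_kfs]
    for offset, src in enumerate(source_kfs):
        i = start - 1 + offset
        merged[i] = dict(merged[i], objects=merged[i]["objects"] + src["objects"])
    merged[control_at - 1] = dict(merged[control_at - 1], control=control)
    return merged

data1b = [{"objects": ["circle"]} for _ in range(4)]          # key frames 1-4
data2b = [{"objects": ["square"]}, {"objects": ["square"]}]   # key frames 1-2

data3b = synthesize_2b(data1b, data2b, start=2, control_at=2, control="jump 4")
assert data3b[1]["objects"] == ["circle", "square"]   # new key frame 2
assert data3b[1]["control"] == "jump 4"
assert "control" not in data3b[0]                     # key frame 1 unchanged
```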
- FIG. 8 is a diagram showing a data structure of the content data 3B after being synthesized in the second synthesis example in the first embodiment.
- content data 3B obtained by synthesizing content data 1B and content data 2B by content synthesizing device 100 includes a header and key frames 1 to 4.
- the header has already been described with reference to FIG. 5, and the description will not be repeated.
- Key frame 1 is the same as key frame 1 of content data 1B described in FIG. 7A.
- key frame 2 is obtained by inserting the object data included in key frame 1 of the content data 2B described in FIG. 7B and the control data “jump 4” into key frame 2 of the content data 1B.
- the key frame 3 is obtained by inserting the object data included in the key frame 2 of the content data 2B into the key frame 3 of the content data 1B.
- FIGS. 9A, 9B, 9C, and 9D show the images displayed when the content data 3B synthesized in the second synthesis example of the first embodiment is reproduced.
- FIGS. 9A to 9D are display screen diagrams corresponding to each key frame reproduced sequentially. Referring to FIG. 9A, first, in key frame 1, a circular figure is displayed slightly to the left of the upper center of the screen. Between key frames 1 and 2, the display screen displayed at the first key frame is continuously displayed.
- referring to FIG. 9B, in key frame 2, a square figure also appears slightly below the center of the screen.
- because the control data “jump 4” is inserted in key frame 2, key frame 3 is excluded from being played back. Between keyframe 2 and keyframe 4, the circular shape moves downward and the square shape gradually disappears.
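- One way a playback device might honor the inserted control data can be sketched as follows: on playing a key frame whose control data is “jump N”, playback continues from key frame N, so key frame 3 of the content data 3B is never played. The function name and the control-string format are illustrative assumptions.

```python
# Sketch: compute the order in which key frames are played when control data
# of the form "jump N" is present. All names are assumptions for illustration.

def playback_order(keyframes):
    order, i = [], 1
    while i <= len(keyframes):
        order.append(i)
        control = keyframes[i - 1].get("control", "")
        if control.startswith("jump "):
            i = int(control.split()[1])   # continue from the named key frame
        else:
            i += 1
    return order

data3b = [{}, {"control": "jump 4"}, {}, {}]   # "jump 4" in key frame 2
assert playback_order(data3b) == [1, 2, 4]     # key frame 3 is skipped
```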
- as described above, the synthesizing process of inserting the object data included in the content data 2B into the content data 1B is controlled by the synthesizing script, including the plurality of scripts, included in the content data 1B.
- the synthesizing process for synthesizing other content data 2B from the content data 1B side can be controlled.
- FIG. 10A and FIG. 10B are diagrams showing the data structure of content data before being combined in the third combination example in the first embodiment.
- FIG. 10A is a diagram showing a data structure of content data 1C including a synthesis script.
- content data 1C includes a header, key frames 1 to 2, and a synthesizing script.
- Key frame 1 and key frame 2 include the same object data as key frame 2 and key frame 3 of the content data 1B described in FIG. 7A, respectively, and thus description thereof will not be repeated.
- the composite script included in the content data 1C includes “add keyframe” as the control content and “before key frame 1” as a parameter.
- the control content "add keyframe” indicates that each keyframe of the animation data included in the other content data 2C is inserted at the target position specified by the parameter.
- the parameter “before key frame 1” indicates that the target position of the composition processing indicated by the control content is before key frame 1 of the animation data included in the content data 1C including the composition script.
- FIG. 10B is a diagram showing the data structure of the content data 2C. Referring to FIG. 10B, content data 2C is the same as content data 2B described in FIG. 7B, and therefore description thereof will not be repeated.
- the content synthesizing device 100 receives the input of the content data 1C and the content data 2C, and determines whether or not the content data 1C or the content data 2C includes the synthesizing script. Since the content data 1C includes the synthesizing script, the animation data included in the content data 1C is synthesized with the animation data included in the content data 2C based on the synthesizing script, and the content data 3C described later is stored.
- the synthesizing script describes that the key frame included in the content data 2C is added before the key frame 1 of the content data 1C including the synthesizing script.
- content synthesizing device 100 adds key frame 1 and key frame 2 of the content data 2C before key frame 1 of the content data 1C, and sets them as new key frame 1 and key frame 2.
- the key frames 1 and 2 of the content data 1C are defined as the key frames 3 and 4 of the content data 3C.
- a header is generated based on the new key frames 1 to 4, and the header and the content data 3C including the new key frames 1 to 4 are synthesized and recorded.
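- The third synthesis example can be sketched as follows: the key frames of the content data 2C are placed in front of those of the content data 1C, and the result is renumbered in order. The list modeling is an illustrative assumption.

```python
# Sketch of the third synthesis example ("add keyframe" / "before key frame 1"):
# key frames of content data 2C are added before key frame 1 of content data 1C.
# The string labels are assumptions for illustration.

def add_keyframes_before(target_kfs, source_kfs):
    # New key frames 1..n come from the source; the target's follow, renumbered.
    return list(source_kfs) + list(target_kfs)

data1c = ["1C-kf1", "1C-kf2"]
data2c = ["2C-kf1", "2C-kf2"]
data3c = add_keyframes_before(data1c, data2c)
assert data3c == ["2C-kf1", "2C-kf2", "1C-kf1", "1C-kf2"]   # new key frames 1-4
```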
- FIG. 11 is a diagram showing a data structure of the content data 3C after being synthesized in the third synthesis example in the first embodiment.
- content data 3C obtained by synthesizing content data 1C and content data 2C by content synthesizing device 100 includes a header and key frames 1 to 4.
- the key frames 1 and 2 are the same as the key frames 1 and 2 of the content data 2C described with reference to FIG. 10B, respectively.
- Keyframe 3 and keyframe 4 are the same as key frame 1 and key frame 2, respectively, of the content data 1C described in FIG. 10A.
- the synthesizing process of adding the key frame included in the content data 2C to a predetermined portion of the content data 1C is controlled by the synthesizing script included in the content data 1C.
- FIGS. 12A and 12B are diagrams illustrating a data structure of content data before being combined in the fourth combining example in the first embodiment.
- FIG. 12A is a diagram showing a data structure of content data 1E including a synthesis script.
- content data 1E includes a header, key frame 1 to key frame 2, and a synthesis script.
- Key frame 1 and key frame 2 include object data A indicating a figure representing a face, object data B indicating a figure representing a speech balloon, and character data A.
- Object data A included in key frame 1 indicates that the figure representing the face is at the lower left of the screen.
- the object data A included in the key frame 2 indicates that the figure representing the face is at the lower right of the screen.
- Object data B included in key frame 1 indicates that the figure representing the balloon is at the upper right of the screen. Further, the object data B included in the key frame 2 indicates that the figure representing the balloon is at the top of the screen.
- Character data A included in key frame 1 and key frame 2 indicate that character data A is located inside object data B, respectively.
- the synthesizing script included in the content data 1E includes "change to data of another file” as the control content and "character data A” as a parameter.
- Control content “Change to data of another file” indicates that the target data specified by the parameter is changed to data included in other content data 2E.
- the parameter “character data A” indicates that the target data of the combination processing indicated by the control content is the character data A included in the key frames of the animation data included in the content data 1E including the combination script.
- FIG. 12B is a diagram showing the content data 2E to be changed.
- content data 2E includes character data composed of the character string “Hello, World!”.
- the content synthesizing device 100 receives the input of the content data 1E and the content data 2E, and determines whether or not the content data 1E or the content data 2E includes the synthesizing script. Since the content data 1E includes the synthesizing script, the content data 1E is synthesized with the content data 2E based on the synthesizing script, and the content data 3E described later is stored.
- the synthesizing script describes that predetermined data included in each key frame of the content data 1E is changed to the data included in the content data 2E.
- content synthesizing device 100 changes the character data A included in key frame 1 and key frame 2 of the content data 1E to the character data included in the content data 2E, that is, the character data consisting of the character string “Hello, World!”, and creates new key frames 1 and 2.
- a header is generated based on the new key frames 1 and 2, and the header and the content data 3E including the new key frames 1 and 2 are synthesized and stored.
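- The fourth synthesis example can be sketched as follows: the character data A in every key frame of the content data 1E is replaced by the character data of the content data 2E. The dict keys and the function name are illustrative assumptions.

```python
# Sketch of the fourth synthesis example ("change to data of another file" /
# "character data A"). All names are assumptions for illustration.

def change_data(keyframes, target_name, new_value):
    # Replace the named data item in every key frame, leaving the rest intact.
    return [{k: (new_value if k == target_name else v) for k, v in kf.items()}
            for kf in keyframes]

data1e = [{"face": "lower-left", "balloon": "upper-right", "character data A": ""},
          {"face": "lower-right", "balloon": "top", "character data A": ""}]

data3e = change_data(data1e, "character data A", "Hello, World!")
assert all(kf["character data A"] == "Hello, World!" for kf in data3e)
assert data3e[0]["face"] == "lower-left"   # other data unchanged
```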
- FIG. 13 is a diagram showing a data structure of the content data 3E after being synthesized in the fourth synthesis example in the first embodiment.
- content data 3E obtained by synthesizing the content data 1E and the content data 2E by the content synthesizing device 100 includes a header and key frame 1 to key frame 2.
- Keyframe 1 and keyframe 2 are obtained by changing the character data A of key frames 1 and 2 of the content data 1E described in FIG. 12A to the character data included in the content data 2E described in FIG. 12B.
- the synthesizing script included in the content data 1E can control the synthesizing process of changing predetermined data included in the content data 1E to other data.
- the combining process is controlled by the combining script included in the first content data.
- the first content data includes the synthesizing script, there is no need to prepare a new synthesizing script when synthesizing the first content data with the second content data.
- the synthesizing process can be controlled from the content data side, and there is no need to prepare a new synthesizing script necessary for synthesizing the content data.
- the processing performed by the content synthesizing apparatus 100 has been described.
- the present invention can also be regarded as a content synthesizing method in which the processing illustrated in FIG. 3 is executed by a computer, a content synthesizing program for causing a computer to execute the processing illustrated in FIG. 3, a computer-readable recording medium recording the content synthesizing program, and the data structure of the content data shown in FIGS. 4A, 7A, 10A, and 12A.
- the invention can be regarded as a computer-readable recording medium on which content data having the data structure is recorded.
- in the second embodiment, the synthesizing scripts described in the first embodiment include scripts corresponding to each of a plurality of attributes.
- FIG. 14 is a diagram schematically illustrating functions of the content synthesizing device 100A in the second embodiment.
- control unit 110A of content synthesizing device 100A includes an input reception unit 111, a synthesis processing unit 112A, and an attribute determining unit 113.
- the storage unit 130 of the content synthesizing device 100A stores a plurality of content data.
- the content data includes content data including the animation data and the synthesizing script, and content data including the animation data.
- Input receiving unit 111 has been described with reference to FIG. 2 in the first embodiment, and therefore description thereof will not be repeated.
- the synthesizing unit 112A sends the content data 20 to the attribute determining unit 113 if the synthesizing script included in the content data 10 includes scripts corresponding to each of a plurality of attributes of the animation data included in the content data 20.
- the attribute determining unit 113 determines the attribute of the animation data included in the content data 20 sent from the combining processing unit 112A, and returns the determination result to the combining processing unit 112A.
- the attribute of the animation data is an index indicating the characteristics of the animation data, such as the number of object data, the number of key frames, the number of image data, and the number of music data included in the animation data.
- for example, if the number of object data included in the content data 20 is W, the number of key frames is X, the number of image data is Y, and the number of music data is Z, the attribute discriminating unit 113 returns a character string of WXYZ to the synthesis processing unit 112A as the attribute of the animation data included in the content data 20.
- the attribute of the animation data is not limited to this.
- the attribute may be a number uniquely assigned to the animation data, a number specified by the contents of the animation data, information on the author of the animation data, or a combination of a plurality of these.
- the composition processing unit 112A synthesizes the animation data included in the input content data 10 with the animation data included in the content data 20, based on the script corresponding to the attribute of the animation data included in the content data 20 indicated by the determination result of the attribute determination unit 113. Then, the synthesis processing unit 112A stores the synthesized content data 30 in the storage unit 130.
- note that the synthesis processing unit 112A may transmit the synthesized content data 30 directly to another PC or the like via the network 500 by the communication unit 160, or may record it on the recording medium 171 by the external storage device 170.
- FIG. 15 is a flowchart illustrating the flow of the data combining process performed by the content combining device 100A in the second embodiment.
- the data synthesizing process is a process executed in step S13 of the content synthesizing process described in FIG. 3. Referring to FIG. 15, first, in step S21, an attribute determination process for determining the attribute of the animation data included in the content data 20 input in step S11 is performed by the attribute determining section 113. The attribute determination process will be described later with reference to FIG. 16. Next, in step S22, the synthesis processing unit 112A determines whether a script corresponding to the attribute of the animation data included in the content data 20 determined in step S21 is included in the synthesizing script of the content data 10.
- if the script corresponding to the attribute of the animation data included in the content data 20 is included in the synthesizing script of the content data 10 (Yes in step S22), in step S23 the synthesis processing unit 112A executes, based on the script corresponding to the attribute of the animation data included in the content data 20 determined in step S21, a synthesizing process for synthesizing the content data 10 input in step S11 with the content data 20 input in step S11, and the process returns to the content synthesizing process.
- on the other hand, when the script corresponding to the attribute of the animation data included in the content data 20 is not included in the synthesizing script of the content data 10 (No in step S22), the process returns to the content synthesizing process.
- FIG. 16 is a flowchart showing the flow of an attribute discriminating process executed by the content synthesizing apparatus 100A in the second embodiment.
- the attribute discriminating process is a process executed by the attribute discriminating unit 113 in step S21 of the data synthesizing process described in FIG. 15. Referring to FIG. 16, first, in step S31, the number W of object data included in the key frames of the content data 20 is determined. Next, in step S32, the number X of key frames included in the content data 20 is determined.
- in step S33, the number Y of image data included in the key frames of the content data 20 is determined. Further, in step S34, the number Z of music data included in the key frames of the content data 20 is determined.
- in step S35, based on the determinations in steps S31 to S34, a character string of WXYZ is returned to the data synthesizing process as the attribute of the animation data defined by the key frames included in the content data 20.
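- The attribute determination of steps S31 to S35 can be sketched as follows. The field widths (two digits for W and X, one digit for Y and Z) are an assumption inferred from the six-character examples such as “010200”; counting a figure once even when its object data recurs across key frames is likewise inferred from those examples.

```python
# Sketch of the attribute discriminating process of FIG. 16: concatenate the
# counts W (objects), X (key frames), Y (image data), Z (music data) into one
# attribute string. Field widths and the dict keys are assumptions.

def attribute(keyframes):
    # Object data for the same figure recurs in each key frame, so distinct
    # figures are counted once (the examples give W = 01 for one figure).
    w = len({obj for kf in keyframes for obj in kf.get("objects", [])})
    x = len(keyframes)
    y = sum(len(kf.get("images", [])) for kf in keyframes)
    z = sum(len(kf.get("music", [])) for kf in keyframes)
    return f"{w:02d}{x:02d}{y}{z}"

data2fa = [{"objects": ["square"]}, {"objects": ["square"]}]
data2fb = data2fa + [{"objects": ["square"]}]
assert attribute(data2fa) == "010200"
assert attribute(data2fb) == "010300"
```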
- FIG. 17A, FIG. 17B, and FIG. 17C are diagrams showing the data structure of content data before being combined in the first combination example in the second embodiment.
- FIG. 17A is a diagram showing a data structure of content data 1F including a synthesis script.
- content data 1F includes a header, key frames 1 to 3, and a synthesis script.
- Key frame 1 to key frame 3 include the same object data as key frame 2 to key frame 4 of the content data 1B described in FIG. 7A, respectively, and thus description thereof will not be repeated.
- the synthesis script included in the content data 1F includes a synthesis script corresponding to the attribute “010300” and a synthesis script corresponding to the attribute “010200”.
- the composite script corresponding to the attribute “010300” includes “input of an object of another file” as the control content and “key frame 1 to” as a parameter.
- the composite script corresponding to the animation data of the attribute “010200” includes “input of an object of another file” as the control content and “key frame 2 to” as a parameter.
- the attribute “010300” indicates that the number W of object data included in the animation data is 01, the number X of key frames is 03, the number Y of image data is 0, and the number Z of music data is 0. Similarly, the attribute “010200” indicates that the numbers of object data, image data, and music data included in the animation data are the same as for the attribute “010300”, and that the number X of key frames is 02.
- FIG. 17B is a diagram showing the data structure of the content data 2FA.
- Content data 2FA shown in FIG. 17B is the same as content data 2A described in FIG. 4B, and therefore description thereof will not be repeated.
- since the number W of object data included in the animation data of the content data 2FA is 01, the number X of key frames is 02, the number Y of image data is 0, and the number Z of music data is 0, the attribute of the animation data included in the content data 2FA is “010200”.
- FIG. 17C is a diagram showing the data structure of the content data 2FB.
- content data 2FB includes a header and key frame 1 to key frame 3.
- Keyframe 1 and keyframe 2 are the same as keyframe 1 and keyframe 2 of content data 2A described in FIG. 4B.
- Key frame 3 includes object data indicating the same figure as the object data shown in key frame 1 and key frame 2, and indicating that the figure is located at the lower center of the screen.
- therefore, the attribute of the animation data included in the content data 2FB is “010300”.
- when the content data 1F and the content data 2FA are input, the content synthesizing device 100A determines whether or not the content data 1F or the content data 2FA includes a synthesizing script. Since the synthesizing script is included in the content data 1F, the attribute of the animation data included in the content data 2FA is then determined. Since the attribute of the animation data included in the content data 2FA is “010200”, the animation data included in the content data 1F is synthesized with the content data 2FA based on the script corresponding to the attribute “010200”, and the new content data is stored.
- Similarly, the content synthesizing device 100A determines whether the content data 1F or the content data 2FB includes a synthesizing script. Since the synthesizing script is included in the content data 1F, the attribute of the animation data included in the content data 2FB is then determined. Since the attribute of the animation data included in the content data 2FB is "010300", the animation data included in the content data 1F is combined with the content data 2FB based on the script corresponding to the attribute "010300", and the new content data is stored.
- The synthesizing script includes a script corresponding to the attribute "010300" and a script corresponding to the attribute "010200".
- The script corresponding to the attribute "010300" describes that the object data included in each key frame of the animation data having the attribute "010300" is inserted into each key frame, starting from key frame 1, of the animation data included in the content data 1F that includes the synthesizing script.
- The script corresponding to the attribute "010200" describes that the object data included in each key frame of the animation data having the attribute "010200" is inserted into each key frame, starting from key frame 2, of the animation data included in the content data 1F that includes the synthesizing script.
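The attribute-dependent selection described above amounts to a lookup keyed by the determined attribute of the second content data. A minimal sketch, with hypothetical names and a plain dictionary standing in for the synthesizing script:

```python
# Hypothetical representation of the synthesizing script of content data 1F:
# a mapping from the attribute of the second content data to the combining
# rule to apply. "start" is the key frame of content data 1F at which the
# insertion of object data begins.
synthesizing_script = {
    "010300": {"control": "insert objects of another file", "start": 1},
    "010200": {"control": "insert objects of another file", "start": 2},
}

def select_script(script_table, attribute):
    """Return the combining rule for the determined attribute, or None when
    the synthesizing script has no entry for it (in which case no scripted
    combining is performed)."""
    return script_table.get(attribute)

rule = select_script(synthesizing_script, "010200")
print(rule["start"])  # insertion begins at key frame 2
```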
- content synthesizing apparatus 100 sets key frame 1 of content data 1F as a new key frame 1.
- The object data included in key frame 1 of the content data 2FA is inserted into key frame 2 of the content data 1F to make a new key frame 2.
- The object data included in key frame 2 of the content data 2FA is inserted into key frame 3 of the content data 1F to make a new key frame 3.
- A header is generated based on the new key frames 1 to 3, and content data including the header and the new key frames 1 to 3 is synthesized and stored.
- In the other case, the content synthesizing apparatus 100 inserts the object data included in key frame 1 of the content data 2FB into key frame 1 of the content data 1F to make a new key frame 1.
- The object data included in key frame 2 of the content data 2FB is inserted into key frame 2 of the content data 1F to make a new key frame 2.
- The object data included in key frame 3 of the content data 2FB is inserted into key frame 3 of the content data 1F to make a new key frame 3.
- A header is generated based on the new key frames 1 to 3, and content data including the header and the new key frames 1 to 3 is synthesized and stored.
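The key-frame-wise insertion carried out in both examples above can be sketched as follows. The list-of-lists model of key frames and all names are illustrative assumptions, not the apparatus's actual data layout:

```python
def combine_key_frames(first, second, start):
    """Insert the object data of each key frame of `second` into the key
    frames of `first`, beginning at key frame number `start` (1-based).
    Key frames are modelled as lists of object-data identifiers; key frames
    of `first` before `start` are carried over unchanged."""
    merged = [list(frame) for frame in first]   # copy so `first` stays intact
    for offset, frame in enumerate(second):
        index = start - 1 + offset
        if index < len(merged):
            merged[index].extend(frame)         # insert into an existing key frame
        else:
            merged.append(list(frame))          # or append past the end
    return merged

# Content data 1F: a circle moving down then up; 2FA: a square in two key frames.
frames_1f = [["circle@top"], ["circle@bottom"], ["circle@middle"]]
frames_2fa = [["square@low"], ["square@high"]]
print(combine_key_frames(frames_1f, frames_2fa, start=2))
```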
- FIGS. 18A, 18B, 18C, 18D, 18E, and 18F are diagrams illustrating the animation displayed when the content data synthesized in the first synthesis example in the second embodiment is reproduced.
- FIGS. 18A to 18C are display screen diagrams corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1F with the content data 2FA, which includes the animation data having the attribute "010200", is reproduced.
- In FIG. 18A, a circular figure is first displayed slightly to the left of the upper center of the screen. Between FIG. 18A and FIG. 18B, the circular figure moves downward. In FIG. 18B, the circular figure is displayed slightly below and to the left of the center of the screen, and a square figure is displayed slightly below the center of the screen. Between FIG. 18B and FIG. 18C, the circular figure moves upward, and the square figure moves upward at a faster rate than the circular figure. In FIG. 18C, the circular figure stops slightly to the left of the center of the screen, and the square figure stops slightly above the center of the screen.
- FIGS. 18D to 18F are display screen diagrams corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1F with the content data 2FB, which includes the animation data having the attribute "010300", is reproduced.
- In FIG. 18D, a circular figure is first displayed slightly to the left of the upper center of the screen, and a square figure is displayed slightly below the center of the screen. Between FIG. 18D and FIG. 18E, the circular figure moves down and the square figure moves up. In FIG. 18E, the circular figure is displayed slightly below and to the left of the center of the screen, and the square figure is displayed slightly above the center of the screen. Between FIG. 18E and FIG. 18F, the circular figure moves upward and the square figure moves downward. In FIG. 18F, the circular figure stops slightly to the left of the center of the screen, and the square figure stops at the lower center of the screen.
- As described above, the combining process of combining the content data 1F with other content data is controlled based on the script, included in the synthesizing script of the content data 1F, that corresponds to the determined attribute.
- As a result, the synthesizing process can be controlled from the content data side, and the synthesizing process can be performed in accordance with the attribute of the content data.
- the attribute of the content data has been described as the attribute of the animation data defined by the key frame included in the content data.
- However, the present invention is not limited to this; other indicators may be used.
- As described above, the input of the first content data including the synthesizing script describing the synthesis of content data and the input of the second content data are accepted, the attribute of the input second content data is determined, and the first content data is combined with the second content data based on the script, included in the synthesizing script of the input first content data, that corresponds to the determined attribute. Therefore, the combining process is controlled by the script corresponding to the attribute of the second content data.
- As a result, the synthesizing process can be controlled from the content data side, and the synthesizing process can be performed according to the attribute of the content data.
- the invention can be regarded as a computer-readable recording medium on which the content data is recorded.
- In the third embodiment, the synthesizing script described in the first embodiment includes a script corresponding to the time at which the content data is synthesized by the content synthesizing device 100B.
- FIG. 19 is a diagram showing an outline of functions of the content synthesizing device 100B in the third embodiment.
- control unit 110B of content synthesizing device 100B includes an input receiving unit 111, a synthesizing processing unit 112B, and a time obtaining unit 114.
- The storage unit 130 of the content synthesizing device 100B stores a plurality of content data.
- the content data includes content data including animation data and a synthesizing script, and content data including animation data.
- the input receiving unit 111 has been described with reference to FIG. 2 in the first embodiment, and therefore description thereof will not be repeated.
- When the synthesizing script included in the content data 10 input by the input receiving unit 111 includes a script corresponding to the time at which the content data is to be synthesized, the synthesizing processing unit 112B instructs the time acquisition unit 114 to acquire the time.
- The time acquisition unit 114 acquires the current time according to the instruction from the synthesizing processing unit 112B and sends it to the synthesizing processing unit 112B.
- the acquisition of the current time is realized by, for example, the clock function of the content synthesizing apparatus 100B, but may be realized by another method.
- The synthesizing processing unit 112B combines the content data 10 input by the input receiving unit 111 with the content data 20 input by the input receiving unit 111, based on the script, included in the synthesizing script of the content data 10, that corresponds to the time obtained by the time obtaining unit 114. Then, the synthesizing processing unit 112B causes the storage unit 130 to store the combined content data 30. Note that the synthesizing processing unit 112B may transmit the combined content data 30 directly to another PC or the like via the network 500 using the communication unit 160, or may record it on the recording medium 171 using the external storage device 170.
- FIG. 20 is a flowchart illustrating a flow of a data combining process executed by the content combining device 100B in the third embodiment.
- The data synthesizing process is a process executed in step S13 of the content synthesizing process described in FIG. Referring to FIG. 20, first, in step S41, it is determined whether the synthesizing script includes a script corresponding to the time. If the synthesizing script includes a script corresponding to the time (YES in step S41), the current time is obtained by the time obtaining unit 114 in step S42, and in step S43 the synthesizing processing unit 112B executes the combining process of combining the content data 10 input in step S11 with the content data 20 input in step S11, based on the script corresponding to the current time acquired in step S42. The process then returns to the content synthesizing process.
- If the synthesizing script does not include a script corresponding to the time (NO in step S41), the process returns to the content synthesizing process.
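The branch of steps S41 to S43 — pick the script matching the current time, or fall through when the synthesizing script has no time-dependent entry — might look like this sketch. The function name, the dictionary representation, and the keying on "AM"/"PM" are assumptions modelled on the example in this embodiment:

```python
from datetime import datetime

def select_time_script(script_table, now=None):
    """Return the script matching the time of synthesis: the "AM" entry
    before noon, the "PM" entry otherwise. Returns None when the synthesizing
    script contains no time-dependent entry (the NO branch of step S41)."""
    if not script_table:
        return None
    now = now or datetime.now()   # step S42: acquire the current time
    key = "AM" if now.hour < 12 else "PM"
    return script_table.get(key)

script = {"AM": {"start": 1}, "PM": {"start": 2}}
print(select_time_script(script, datetime(2004, 1, 23, 9, 0)))   # morning rule
print(select_time_script(script, datetime(2004, 1, 23, 15, 0)))  # afternoon rule
```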
- FIGS. 21A and 21B are diagrams illustrating a data structure of content data before being combined in the first combination example in the third embodiment.
- FIG. 21A is a diagram showing a data structure of content data 1I including a synthesis script.
- content data 1I includes a header, key frames 1 to 3, and a synthesizing script.
- Key frame 1 to key frame 3 are the same as key frame 1 to key frame 3 of content data 1F described with reference to FIG. 17A, and therefore description thereof will not be repeated.
- The synthesizing script included in the content data 1I includes a script corresponding to the time "AM" and a script corresponding to the time "PM".
- The script corresponding to the time "AM" includes "insert objects of another file" as the control content and "from key frame 1" as the parameter.
- The script corresponding to the time "PM" includes "insert objects of another file" as the control content and "from key frame 2" as the parameter. Details of the control content "insert objects of another file" and the parameters "from key frame 1" and "from key frame 2" have already been described with reference to FIG. 4A, and therefore description thereof will not be repeated.
- FIG. 21B is a diagram showing the data structure of the content data 2I.
- Content data 2I shown in FIG. 21B is the same as content data 2A described in FIG. 4B, and therefore, description thereof will not be repeated.
- A case will be described in which the content data 1I and the content data 2I are synthesized in the morning by the content synthesizing device 100B.
- When the input of the content data 1I and the content data 2I is received by the content synthesizing device 100B and the synthesizing script is included in the content data 1I, the object data included in each key frame of the content data 2I is inserted, based on the script corresponding to the morning, into each key frame from key frame 1 of the content data 1I, and new content data is stored.
- In the afternoon, the content synthesizing device 100B accepts the input of the content data 1I and the content data 2I, and since the synthesizing script is included in the content data 1I, the object data included in each key frame of the content data 2I is inserted, based on the script corresponding to the afternoon, into each key frame from key frame 2 of the content data 1I, and new content data is stored. The synthesizing script includes a script corresponding to the morning and a script corresponding to the afternoon.
- The script corresponding to the morning describes that the object data included in each key frame of the content data 2I is inserted into each key frame from key frame 1 of the content data 1I including the synthesizing script.
- The script corresponding to the afternoon describes that the object data included in each key frame of the content data 2I is inserted into each key frame from key frame 2 of the content data 1I including the synthesizing script.
- In the morning, the content synthesizing apparatus 100B inserts the object data included in key frame 1 of the content data 2I into key frame 1 of the content data 1I to make a new key frame 1. Next, the object data included in key frame 2 of the content data 2I is inserted into key frame 2 of the content data 1I to make key frame 2 of the new content data.
- Key frame 3 of the content data 1I is set as key frame 3 of the new content data.
- A header is generated based on the new key frames 1 to 3, and content data including the header and the new key frames 1 to 3 is synthesized and stored.
- In the afternoon, the content synthesizing device 100B sets key frame 1 of the content data 1I as a new key frame 1.
- Next, the object data included in key frame 1 of the content data 2I is inserted into key frame 2 of the content data 1I to make a new key frame 2.
- Then, the object data included in key frame 2 of the content data 2I is inserted into key frame 3 of the content data 1I to make key frame 3 of the new content data.
- FIGS. 22A, 22B, 22C, 22D, 22E, and 22F are diagrams for explaining the animation displayed when the content data synthesized in the first synthesis example in the third embodiment is reproduced.
- FIGS. 22A to 22C are display screen diagrams corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1I and the content data 2I in the morning is reproduced.
- FIGS. 22A to 22C are the same as FIGS. 18D to 18F, respectively, and therefore description thereof will not be repeated.
- FIGS. 22D to 22F are display screen diagrams corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1I and the content data 2I in the afternoon is reproduced.
- FIGS. 22D to 22F are the same as FIGS. 18A to 18C, respectively, and therefore description thereof will not be repeated.
- the combining process is controlled by the script included in the content data 1I according to the combining time.
- the synthesizing process can be controlled from the content data 1I side, and the synthesizing process according to the time at which the content data is synthesized can be performed.
- As described above, the input of the first content data including the synthesizing script describing the synthesis of content data and the input of the second content data are received, the current time is obtained, and the first content data is combined with the input second content data based on the script, included in the synthesizing script of the input first content data, that corresponds to the obtained current time.
- the combining process is controlled by a script corresponding to the combining time.
- the combining process can be controlled from the content data side, and the combining process can be performed according to the time at which the content data is combined.
- the processing performed by the content synthesizing apparatus 100B has been described.
- The content synthesizing method in which the processing shown in FIG. 20 is executed by a computer, a content synthesizing program for causing a computer to execute the processing shown in FIG. 20, a computer-readable recording medium recording the content synthesizing program, the data structure of the content data shown in FIG. 21A, and a computer-readable recording medium on which content data having the data structure is recorded can also be regarded as the invention.
- In the fourth embodiment, the synthesizing script described in the first embodiment includes a script corresponding to the position at which the content data is synthesized by the content synthesizing device 100C.
- FIG. 23 is a diagram showing an outline of functions of the content synthesizing device 100C in the fourth embodiment.
- control unit 110C of content synthesizing device 100C includes input accepting unit 111, synthesis processing unit 112C, and position acquisition unit 115.
- The storage unit 130 of the content synthesizing device 100C stores a plurality of content data.
- the content data includes content data including the animation data and the synthesizing script, and content data including the animation data.
- Input receiving unit 111 has been described with reference to FIG. 2 in the first embodiment, and therefore description thereof will not be repeated.
- When the synthesizing script included in the content data 10 input by the input receiving unit 111 includes a script corresponding to the position at which the content data is to be synthesized, the synthesizing processing unit 112C instructs the position acquisition unit 115 to acquire the position.
- The position obtaining unit 115 obtains the current position of the content synthesizing device 100C in response to an instruction from the synthesizing processing unit 112C, and sends the current position to the synthesizing processing unit 112C.
- the acquisition of the current position is realized by, for example, a GPS (Global Positioning System), but may be realized by another method.
- The synthesizing processing unit 112C combines the content data 10 input by the input receiving unit 111 with the content data 20 input by the input receiving unit 111, based on the script corresponding to the acquired position. Then, the synthesizing processing unit 112C stores the synthesized content data 30 in the storage unit 130. Note that the synthesizing processing unit 112C may transmit the synthesized content data 30 directly to another PC or the like via the network 500 using the communication unit 160, or may record it on the recording medium 171 using the external storage device 170.
- FIG. 24 is a flowchart illustrating the flow of the data combining process performed by the content combining device 100C in the fourth embodiment.
- The data synthesizing process is a process executed in step S13 of the content synthesizing process described in FIG. Referring to FIG. 24, first, in step S51, it is determined whether the synthesizing script includes a script corresponding to the position. If the synthesizing script includes a script corresponding to the position (YES in step S51), the current position is acquired by the position acquisition unit 115 in step S52, and in step S53 the synthesizing processing unit 112C executes the combining process of combining the content data 10 input in step S11 with the content data 20 input in step S11, based on the script corresponding to the current position acquired in step S52. The process then returns to the content synthesizing process. On the other hand, when the synthesizing script does not include a script corresponding to the position (NO in step S51), the process returns to the content synthesizing process.
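Steps S51 to S53 resolve the acquired current position to a place name that keys the synthesizing script. A minimal sketch, assuming a nearest-reference-point lookup over hypothetical coordinates; the real apparatus may resolve a GPS fix to a place in a different way:

```python
import math

# Hypothetical reference coordinates (approximate city centres) used to map a
# GPS fix to the place names that key the synthesizing script.
PLACES = {"Osaka": (34.69, 135.50), "Nara": (34.69, 135.80)}

def nearest_place(lat, lon):
    """Return the place name whose reference point is closest to the fix."""
    return min(PLACES, key=lambda p: math.hypot(PLACES[p][0] - lat,
                                                PLACES[p][1] - lon))

def select_position_script(script_table, lat, lon):
    """Sketch of steps S51-S53: resolve the current position to a place name
    and look up the corresponding script, or None when there is no match."""
    return script_table.get(nearest_place(lat, lon))

script = {"Osaka": {"start": 1}, "Nara": {"start": 2}}
print(select_position_script(script, 34.70, 135.49))  # fix near Osaka
print(select_position_script(script, 34.68, 135.82))  # fix near Nara
```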
- FIGS. 25A and 25B are diagrams showing the data structure of content data before being combined in the first combination example in the fourth embodiment.
- Content data 1J including a synthesizing script includes a header, key frames 1 to 3, and the synthesizing script.
- Key frame 1 to key frame 3 are the same as key frame 1 to key frame 3 of content data 1F described with reference to FIG. 17A, and therefore description thereof will not be repeated.
- The synthesizing script included in the content data 1J includes a script corresponding to the position "Osaka" and a script corresponding to the position "Nara".
- The script corresponding to the position "Osaka" includes "insert objects of another file" as the control content and "from key frame 1" as the parameter.
- The script corresponding to the position "Nara" includes "insert objects of another file" as the control content and "from key frame 2" as the parameter.
- Details of the control content "insert objects of another file" and the parameters "from key frame 1" and "from key frame 2" have already been described with reference to FIG. 4A, and thus description thereof will not be repeated.
- FIG. 25B is a diagram showing the data structure of the content data 2J.
- the content data 2J shown in FIG. 25B is the same as the content data 2A described in FIG. 4B. The description will not be repeated.
- A case will be described in which the content data 1J and the content data 2J are combined by the content synthesizing device 100C in Osaka.
- When the input of the content data 1J and the content data 2J is received by the content synthesizing device 100C, the synthesizing script is included in the content data 1J, and the position at which the data are synthesized is Osaka, the object data included in each key frame of the content data 2J is inserted, based on the script corresponding to Osaka, into each key frame from key frame 1 of the content data 1J, and new content data is stored.
- FIGS. 18D to 18F show the display screens corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1J and the content data 2J in Osaka is reproduced; therefore, the description will not be repeated.
- When the content synthesizing device 100C receives the input of the content data 1J and the content data 2J, the synthesizing script is included in the content data 1J, and the position at which the data are synthesized is Nara, the object data included in each key frame of the content data 2J is inserted, based on the script corresponding to Nara, into each key frame from key frame 2 of the content data 1J, and the synthesized content data is stored.
- The display screens corresponding to the key frames that are sequentially reproduced when the content data obtained by combining the content data 1J and the content data 2J in Nara is reproduced are shown in FIGS. 18A to 18C; the explanation is not repeated.
- The synthesizing script includes a script corresponding to Osaka and a script corresponding to Nara.
- The script corresponding to Osaka describes that the object data included in each key frame of the content data 2J is inserted into each key frame from key frame 1 of the content data 1J including the synthesizing script.
- The script corresponding to Nara describes that the object data included in each key frame of the content data 2J is inserted into each key frame from key frame 2 of the content data 1J including the synthesizing script.
- When the position at which the data are synthesized is Osaka, the content synthesizing device 100C inserts the object data included in key frame 1 of the content data 2J into key frame 1 of the content data 1J to make a new key frame 1. Next, the object data included in key frame 2 of the content data 2J is inserted into key frame 2 of the content data 1J to make a new key frame 2.
- Key frame 3 of the content data 1J is set as a new key frame 3.
- A header is generated based on the new key frames 1 to 3, and content data including the header and the new key frames 1 to 3 is synthesized and stored.
- When the position at which the data are synthesized is Nara, the content synthesizing device 100C sets key frame 1 of the content data 1J as a new key frame 1.
- Next, the object data included in key frame 1 of the content data 2J is inserted into key frame 2 of the content data 1J to make a new key frame 2.
- Then, the object data included in key frame 2 of the content data 2J is inserted into key frame 3 of the content data 1J to make a new key frame 3.
- A header is generated based on the new key frames 1 to 3, and content data including the header and the new key frames 1 to 3 is synthesized and stored.
- The display screens corresponding to the key frames that are sequentially reproduced when the content data synthesized in Osaka from the content data 1J and the content data 2J is reproduced are those described in FIGS. 18D to 18F.
- When the content data synthesized in Nara is reproduced, the display screens corresponding to the key frames that are sequentially reproduced are those described in FIGS. 18A to 18C.
- As described above, the input of the first content data including the synthesizing script and the input of the second content data are received, the current position is obtained, and the first content data is combined with the input second content data based on the script, included in the synthesizing script, that corresponds to the obtained current position.
- the combining process is controlled by a script corresponding to the place to be combined.
- the synthesizing process can be controlled from the content data side, and the synthesizing process can be performed according to the place where the content data is synthesized.
- the processing performed by the content synthesizing apparatus 100C has been described.
- A content synthesizing program for causing a computer to execute the processing, a computer-readable recording medium storing the content synthesizing program, the data structure of the content data shown in FIG. 25A, and a computer-readable recording medium on which content data having the data structure is recorded can also be regarded as the invention.
- In the fifth embodiment, a synthesis example in which animation data is encrypted by the content synthesizing device based on the synthesizing script included in content data, and a synthesis example in which the encrypted animation data is decrypted, will be described.
- the function of the content synthesizing device in the fifth embodiment is the same as the function of content synthesizing device 100 described in the second embodiment, and therefore description thereof will not be repeated.
- FIG. 26 is a flowchart illustrating the flow of a data combining process performed by the content combining device according to the fifth embodiment.
- This data synthesizing process is a process executed in step S23 of the data synthesizing process described in FIG. Referring to FIG. 26, first, in step S51, the combining process is executed by the synthesizing processing unit based on the synthesizing script. Then, in step S52, the synthesizing processing unit determines whether the synthesizing script includes a script indicating that a new synthesizing script is to be included.
- If such a script is included in the synthesizing script (YES in step S52), the synthesizing processing unit adds, in step S53, a new synthesizing script to the content data synthesized in step S51, and the process returns to the data synthesizing process described in FIG.
- If such a script is not included (NO in step S52), the process returns to the data synthesizing process described with reference to FIG.
- FIGS. 27A and 27B are diagrams illustrating the data structure of content data before being combined in the first combination example in the fifth embodiment.
- FIG. 27A is a diagram showing a data structure of content data 1D including a synthesis script.
- content data 1D includes a header, a key frame 1, and a synthesizing script.
- Key frame 1 includes the control data "repeat", which indicates that the key frames up to this key frame are repeatedly reproduced.
- the synthesizing script of the content data 1D includes “add key frame” as the first control content and “after key frame 1” as the first parameter.
- the second control content includes “addition of a synthesis script”, and the second parameter includes another synthesis script.
- The first control content "add key frame" was explained with reference to FIG. 10, so the explanation is not repeated.
- The first parameter "after key frame 1" indicates that the target position of the synthesis processing indicated by the control content is after key frame 1 of the animation data included in the content data 1D including the synthesizing script.
- the second control content “add synthesis script” indicates that the target data specified by the parameter is added to the content data 1D.
- The other synthesizing script, which is the second parameter, indicates the target data of the synthesis processing indicated by the control content.
- The second parameter is a synthesizing script corresponding to the attribute "000000".
- The synthesizing script corresponding to the attribute "000000" includes "delete key frame" as the control content and "key frame 1" as the parameter.
- The control content "delete key frame" indicates that the target key frame specified by the parameter is deleted.
- FIG. 27B is a diagram showing the data structure of the content data 2D.
- content data 2D includes a header and key frames 1 to 3.
- Key frames 1 to 3 are the same as key frames 2 to 4 of the content data 1B described in FIG. 7A, respectively, and thus description thereof will not be repeated.
- The content synthesizing device receives the input of the content data 1D and the content data 2D and determines whether the content data 1D or the content data 2D includes a synthesizing script. Since the content data 1D includes the synthesizing script, the content data 1D is combined with the content data 2D based on the synthesizing script, and the content data 3D described later is stored.
- The synthesizing script describes that the key frames included in the content data 2D are added after key frame 1 of the content data 1D including the synthesizing script, and that another synthesizing script is included in the synthesized content data.
- The content synthesizing device sets key frame 1 of the content data 1D, which includes the control data "repeat", as a new key frame 1.
- Next, key frames 1 to 3 included in the content data 2D are added after key frame 1 of the content data 1D to become new key frames 2 to 4.
- A header is generated based on the new key frames 1 to 4, and the content data 3D, including the header, the new key frames 1 to 4, and the new synthesizing script included in the synthesizing script of the content data 1D, is synthesized and stored.
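The combining step that produces the content data 3D — a "repeat" key frame placed in front of the payload key frames, plus a new synthesizing script attached to the result — can be sketched as follows. The dictionary representation of content data and all names are illustrative assumptions:

```python
def encrypt_content(repeat_frame, payload_frames, decrypt_script):
    """Sketch of the fifth-embodiment combining step: place a key frame whose
    control data is "repeat" in front of the payload key frames, and attach a
    new synthesizing script (here, one that deletes key frame 1 when the
    attribute of the other content data is "000000")."""
    return {
        "key_frames": [repeat_frame] + list(payload_frames),
        "synthesizing_script": decrypt_script,
    }

content_2d = ["frame-a", "frame-b", "frame-c"]          # animation to protect
decrypt_script = {"000000": {"control": "delete key frame", "target": 1}}
content_3d = encrypt_content({"control": "repeat"}, content_2d, decrypt_script)
print(len(content_3d["key_frames"]))  # 4 key frames: "repeat" plus 3 payload frames
```

Because reproduction repeats at the leading "repeat" key frame, the payload frames after it are never shown, which is the "encrypted" state the text describes.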
- FIG. 28 is a diagram illustrating a data structure of the content data 3D after being synthesized in the first synthesis example in the fifth embodiment.
- content data 3D in which content data 1D and content data 2D are synthesized by the content synthesizing device includes a header, key frames 1 to 4, and a synthesizing script. .
- the key frame 1 is the same as the key frame 1 of the content data 1D described in FIG. 27A.
- Key frames 2 to 4 are the same as key frames 1 to 3 of the content data 2D described with reference to FIG. 27B, respectively.
- the synthesizing script is another synthesizing script included in the synthesizing script for the content data 1D described in FIG. 27A.
- the synthesizing process of adding the key frame included in the content data 2D to a predetermined position of the content data 1D is controlled by the synthesizing script included in the content data 1D. As a result, it is possible to control the synthesizing process of adding another content data 2D from the content data 1D side.
- The content data 1D including the synthesizing script described in FIG. 27A and the content data 2D described in FIG. 27B are synthesized by the content synthesizing device to produce the content data 3D described in FIG. 28.
- When the content data 2D alone is reproduced, an animation in which a circular figure moves is played.
- When the content data 3D is reproduced, however, the control data "repeat" stored in key frame 1 causes reproduction to repeat at key frame 1, so the animation of the moving circular figure is not played even though the key frames for displaying it are included.
- Thus, the content data 2D can be set in a state where it cannot be reproduced; in this state, the content data 2D is in a so-called encrypted state, and the content data 1D combined with it corresponds to an encryption key.
- As described above, by generating content data in which a key frame containing the control data "repeat" is followed by the key frames contained in other content data, and which includes a new synthesizing script describing that, for a predetermined attribute, the key frame containing the control data "repeat" is deleted, the other content data can be encrypted.
- the synthesizing script included in the content data 1G controls a synthesizing process for deleting a predetermined portion included in the content data 1G. As a result, it is possible to control a combining process for deleting a predetermined portion of the content data 1G from the content data 1G side.
- FIGS. 29A and 29B are diagrams illustrating the data structure of content data before being combined in the second combination example in the fifth embodiment.
- FIG. 29A is a diagram showing a data structure of content data 1G including a synthesis script.
- The content data 1G shown in FIG. 29A is the same as the content data 3D described in FIG. 28, which was obtained by synthesizing the content data 1D with the content data 2D, and therefore description thereof will not be repeated.
- FIG. 29B is a diagram showing the data structure of the content data 2G.
- The content data 2G includes only a header. That is, the animation data included in the content data 2G is animation data having the attribute “00000000”.
- The content synthesizing device receives the input of the content data 1G and the content data 2G, and determines whether or not the content data 1G or the content data 2G includes a synthesizing script. Since the synthesizing script is included in the content data 1G, the attribute of the animation data included in the content data 2G is determined next. Since the attribute of the animation data included in the content data 2G is “00000000”, the content data 1G is synthesized with the content data 2G based on the script corresponding to the attribute “00000000”, and the content data 3G described later is stored.
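The script detection and attribute-based dispatch just described can be sketched roughly as follows. All structures and names (`synthesize`, a `script` table keyed by attribute) are illustrative assumptions rather than the patent's format.

```python
# Sketch of the attribute-based dispatch: the device checks which input
# carries a synthesizing script, reads the attribute of the other input's
# animation data, and runs the sub-script registered for that attribute.
# All structures here are illustrative assumptions.

def synthesize(data_a, data_b):
    if "script" in data_a:
        carrier, other = data_a, data_b
    elif "script" in data_b:
        carrier, other = data_b, data_a
    else:
        return None                                  # no script: nothing to do
    attribute = other["header"]["attribute"]
    operation = carrier["script"].get(attribute)     # sub-script for attribute
    return operation(carrier, other) if operation else None

content_1g = {"header": {"attribute": "11111111"},
              "script": {"00000000": lambda c, o: {"frames": c["frames"][1:]}},
              "frames": ["kf1", "kf2", "kf3", "kf4"]}
content_2g = {"header": {"attribute": "00000000"}}   # header only, no frames

content_3g = synthesize(content_1g, content_2g)
```

The point of the dispatch is that the content data carrying the script, not the device, decides what operation a given attribute triggers.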
- Here, the synthesizing script includes a script corresponding to the attribute “00000000”.
- The script corresponding to the attribute “00000000” describes that, when the attribute of the animation data included in the other content data 2G to be synthesized with the content data 1G is “00000000”, key frame 1 of the content data 1G is deleted.
- the content synthesizing device deletes the key frame 1 of the content data 1G.
- The key frames 2 to 4 of the content data 1G are set as new key frames 1 to 3.
- Then, a header is generated based on the new key frames 1 to 3, and the content data 3G including the header and the new key frames 1 to 3 is synthesized and stored.
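The deletion and renumbering step above can be sketched as follows. The dict layout and the helper name `delete_first_key_frame` are illustrative assumptions; the patent only specifies the effect (key frame 1 removed, remaining frames renumbered, header regenerated).

```python
# Sketch of the second synthesis example: key frame 1 is deleted, key frames
# 2-4 become the new key frames 1-3, and a header is regenerated from the new
# frame list. Field names are illustrative assumptions.

def delete_first_key_frame(content):
    new_frames = content["frames"][1:]                 # drop key frame 1
    new_header = {"attribute": content["header"]["attribute"],
                  "frame_count": len(new_frames)}      # header rebuilt from new frames
    return {"header": new_header, "frames": new_frames}

content_1g = {"header": {"attribute": "00000000", "frame_count": 4},
              "frames": [{"control": "repeat"},
                         {"shape": "circle", "x": 0},
                         {"shape": "circle", "x": 10},
                         {"shape": "circle", "x": 20}]}

content_3g = delete_first_key_frame(content_1g)
```

Because the deleted frame is the blocking “repeat” frame, the result is again playable, which is why this operation serves as decryption in the surrounding text.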
- FIG. 30 is a diagram illustrating a data structure of the content data 3G after being synthesized in the second synthesis example in the fifth embodiment.
- The content data 3G shown in FIG. 30, obtained by the content synthesizing device 100A synthesizing the content data 1G and the content data 2G, is the same as the content data 2D described in FIG. 27B, and therefore description thereof will not be repeated.
- In the synthesizing process shown in the second synthesis example of the fifth embodiment, the content data 1G, which is the same as the content data 3D described in FIG. 28 and includes the synthesizing script described in FIG. 29A, and the content data 2G described in FIG. 29B are synthesized by the content synthesizing device, and the content data 3G described in FIG. 30 is obtained.
- the content data 3G is the same as the content data 2D described in FIG. 27B.
- In this way, the content data 2D can be brought into a reproducible state. In this state, the content data 2D is in a so-called decrypted state.
- The content data 2G serves as a so-called decryption key for decrypting the content data 2D.
- FIGS. 31A and 31B are diagrams illustrating the data structure of content data before being combined in the third combination example in the fifth embodiment.
- Content data 1H is obtained by modifying the synthesizing script included in the content data 3D described in FIG. 28. That is, the other synthesizing script included in the synthesizing script of the content data 1D described in FIG. 27A is modified and synthesized with the content data 2D.
- The synthesizing script included in the content data 1H includes a synthesizing script corresponding to the attribute “00000000”.
- The synthesizing script corresponding to the attribute “00000000” contains “change control data” as the control content and “key frame 1 (jump 2)” as the parameter.
- the control content “change control data” indicates that the data at the target position specified by the parameter is changed to the target data specified by the parameter.
- The parameter “key frame 1 (jump 2)” indicates that key frame 1 of the animation data included in the content data including the synthesizing script is the target position of the synthesizing process indicated by the control content, and that the target data of that synthesizing process is the control data “jump 2”.
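The “change control data” operation can be sketched as below. This is a hedged illustration: the frame representation and the function name `change_control_data` are assumptions, and only the behavior (replace the data at the target position with the target data) follows the text.

```python
# Sketch of "change control data": the parameter names a target position
# (key frame 1) and target data (control data "jump 2"); the data at that
# position is replaced. The dict representation is an illustrative assumption.

def change_control_data(content, frame_number, new_control):
    frames = [dict(f) for f in content["frames"]]      # copy; leave input intact
    frames[frame_number - 1]["control"] = new_control  # key frames are 1-indexed
    return {"header": dict(content["header"]), "frames": frames}

content_1h = {"header": {"attribute": "00000000", "frame_count": 4},
              "frames": [{"control": "repeat"},
                         {"shape": "circle", "x": 0},
                         {"shape": "circle", "x": 10},
                         {"shape": "circle", "x": 20}]}

content_3h = change_control_data(content_1h, 1, "jump 2")
```

Changing “repeat” to “jump 2” makes playback skip over the blocking frame to key frame 2, so the animation becomes reproducible without the frame itself being deleted.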
- FIG. 31B is a diagram showing the data structure of the content data 2H.
- Content data 2H shown in FIG. 31B is the same as content data 2G described in FIG. 29B, and therefore description thereof will not be repeated.
- the content synthesizing device accepts the input of the content data 1H and the content data 2H, and determines whether or not the content data 1H or the content data 2H includes the synthesizing script.
- Since the synthesizing script is included in the content data 1H, the attribute of the animation data included in the content data 2H is determined next. Since the attribute of the animation data included in the content data 2H is “00000000”, the content data 1H is synthesized with the content data 2H based on the script corresponding to the attribute “00000000”, and the content data 3H described later is stored.
- Here, the synthesizing script includes a script corresponding to the attribute “00000000”.
- The script corresponding to the attribute “00000000” describes that, when the attribute of the animation data included in the other content data 2H to be synthesized with the content data 1H is “00000000”, the control data contained in key frame 1 of the animation data contained in the content data 1H is changed to the control data “jump 2”.
- Therefore, the content synthesizing device changes the control data “repeat” included in key frame 1 of the content data 1H to the control data “jump 2”, and sets it as a new key frame 1.
- Then, a header is generated based on the new key frames 1 to 4, and the content data 3H including the header and the new key frames 1 to 4 is synthesized and stored.
- FIG. 32 is a diagram showing a data structure of the content data 3H after being synthesized in the third synthesis example in the fifth embodiment.
- The content data 3H, obtained by the content synthesizing device synthesizing the content data 1H and the content data 2H, includes a header and key frames 1 to 4.
- In key frame 1, the control data of key frame 1 in the content data 1H has been changed to the control data “jump 2”.
- Key frames 2 to 4 are the same as key frames 1 to 3 of the content data 1D described in FIG. 27, and therefore description thereof will not be repeated.
- When the content data 3H is played back by the playback device, the same animation as when the content data 2D described in FIG. 27B is played back by the playback device is played.
- the content data 2 D can be decrypted.
- In this manner, content data that includes a synthesizing script describing that a key frame containing the control data “repeat” is added before the key frames contained in other content data, and that a new synthesizing script describing that the control data “repeat” corresponding to the predetermined attribute is changed to “jump 2” is included in the synthesized content data, can encrypt other content data as a so-called encryption key, and the encrypted content data can be decrypted by using content data including animation data of the predetermined attribute as a decryption key.
- In the fifth embodiment, the processing performed by the content synthesizing apparatus has been described. The present invention can also be regarded as a content synthesizing method in which the processing illustrated in FIG. 26 is executed by a computer, a content synthesizing program for causing a computer to execute the processing illustrated in FIG. 26, a computer-readable recording medium storing the content synthesizing program, the data structure of the content data shown in FIGS. 27A, 29A, and 31A, and a computer-readable recording medium on which content data having that data structure is recorded.
- FIG. 33 is a diagram schematically illustrating functions of the content synthesizing device 100D in the sixth embodiment.
- The control unit 110D of the content synthesizing device 100D includes an input receiving unit 111, a synthesizing processing unit 112D, and a synthesizing script acquisition unit 116.
- The storage unit 130 of the content synthesizing device 100D stores a plurality of content data.
- the content data includes content data including animation data and a composite script, and content data including animation data.
- Input receiving unit 111 has been described with reference to FIG. 2 in the first embodiment, and therefore description thereof will not be repeated.
- The synthesizing processing unit 112D instructs the synthesizing script acquisition unit 116 to acquire the synthesizing script. The synthesizing script acquisition unit 116 acquires the synthesizing script 40 according to the instruction from the synthesizing processing unit 112D and sends it to the synthesizing processing unit 112D.
- The location indicated by the location information of the synthesizing script is, for example, the storage unit 130 of the content synthesizing device 100D.
- The location is not limited to a location indicated by an address in a storage medium; it may be a location indicated by a URL (Uniform Resource Locator) or a location indicated by the path of the synthesizing script included in the recording medium 171.
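Resolving a script from its location information can be sketched as below. The scheme prefixes (`addr:`, `http://`), the in-memory storage dict, and the example URL are all illustrative assumptions; the patent only says the location may be a storage-unit address, a URL, or a path.

```python
# Sketch of resolving a synthesizing script from its location information,
# which may name a storage-unit address, a URL, or a file path on a recording
# medium. The prefixes and the storage dict are illustrative assumptions.

def resolve_script(location, storage):
    if location.startswith("addr:"):            # address within the storage unit
        return storage[location[len("addr:"):]]
    if location.startswith(("http://", "https://")):
        return ("fetch-from-url", location)     # would be downloaded in practice
    return ("read-from-path", location)         # path on a recording medium

storage_unit = {"0x10": {"attribute": "00000000", "op": "delete key frame 1"}}
script = resolve_script("addr:0x10", storage_unit)
```

Keeping resolution behind one function means the synthesizing processing unit never needs to know where a script physically lives, which is the point of carrying location information instead of the script itself.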
- Based on the synthesizing script acquired by the synthesizing script acquisition unit 116, the synthesizing processing unit 112D synthesizes the content data 10 received by the input receiving unit 111 with the content data 20 received by the input receiving unit 111. Then, the synthesizing processing unit 112D stores the synthesized content data 30 in the storage unit 130.
- The synthesizing processing unit 112D may transmit the synthesized content data 30 directly to another PC or the like via the network 500 by the communication unit 160, or may record it on the recording medium 171 by using the external storage device 170.
- In addition, the synthesizing processing unit 112D may include the synthesizing script acquired by the synthesizing script acquisition unit 116 in the newly synthesized content data.
- FIG. 34 is a flowchart illustrating the flow of the data synthesizing process performed by the content synthesizing device 100D according to the sixth embodiment.
- The data synthesizing process is the process executed in step S13 of the content synthesizing process described in FIG. 3.
- In step S61, the synthesizing script included in the content data 10 input in step S11 is interpreted by the synthesizing processing unit 112D.
- In step S62, it is determined whether or not the content data 10 includes the location information of the synthesizing script. If the content data 10 includes the location information of the synthesizing script (YES in step S62), in step S63, the synthesizing script acquisition unit 116 acquires the synthesizing script indicated by the location information of the synthesizing script.
- In step S64, based on the synthesizing script acquired in step S63, the synthesizing processing unit 112D executes the synthesizing process for synthesizing the content data 10 input in step S11 with the content data 20 input in step S11, and the process returns to the content synthesizing process.
- If the content data 10 does not include the location information of the synthesizing script (NO in step S62), the process returns to the content synthesizing process.
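The steps S61 through S64 can be sketched as a single function. This is a hedged paraphrase of the flowchart, not the actual implementation: the script format, the `script_store` lookup, and returning `None` on the NO branch are assumptions made for illustration.

```python
# Sketch of the data synthesizing flow of FIG. 34: interpret the script in the
# first content data (S61), check for location information (S62), acquire the
# referenced script (S63), then synthesize the two inputs with it (S64).
# The representations here are illustrative assumptions.

def data_synthesis(content_10, content_20, script_store):
    script = content_10.get("script", {})            # S61: interpret the script
    location = script.get("location")                # S62: location info present?
    if location is None:
        return None                                  # NO branch: nothing to do
    fetched = script_store[location]                 # S63: acquire the script
    return fetched(content_10, content_20)           # S64: synthesize with it

store = {"0x10": lambda a, b: {"frames": a["frames"] + b["frames"]}}
c10 = {"script": {"location": "0x10"}, "frames": [1, 2]}
c20 = {"frames": [3]}
result = data_synthesis(c10, c20, store)
```

The control flow mirrors the flowchart: only the presence of location information in the first content data triggers acquisition and synthesis.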
- As described above, in the content synthesizing device 100D in the sixth embodiment, the input of the first content data, which includes location information indicating the location of a synthesizing script describing the synthesis of content data, and the input of the second content data are received; the synthesizing script indicated by the location information included in the input first content data is acquired; and the input first content data is synthesized with the input second content data based on the acquired synthesizing script.
- As a result, the synthesizing process is controlled by the synthesizing script indicated by the location information included in the first content data.
- the first content data includes the location information of the synthesis script, there is no need to prepare a new synthesis script when synthesizing the first content data with the second content data.
- the synthesizing process can be controlled from the content data side, and it is not necessary to prepare a new synthesizing script necessary for synthesizing the content data.
- In addition, in the content synthesizing device 100D in the sixth embodiment, when the acquired synthesizing script includes location information indicating the location of another synthesizing script, the other synthesizing script indicated by that location information is acquired, and the acquired other synthesizing script is included in the synthesized content data. For this reason, the synthesizing process can also be controlled from the newly synthesized content data side.
- While the processing performed by the content synthesizing device 100D has been described in the sixth embodiment, the present invention can also be regarded as a content synthesizing method of executing the processing shown in FIG. 34 by a computer, a content synthesizing program for causing a computer to execute the processing shown in FIG. 34, and a computer-readable recording medium storing the content synthesizing program.