WO2007138911A1 - Character costume determination device, character costume determination method, and character costume determination program - Google Patents
- Publication number
- WO2007138911A1 (international application PCT/JP2007/060365; JP2007060365W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- character
- scenario
- clothing
- costume
- determination
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
- G06N5/025—Extracting rules from data
Definitions
- Character costume determination device, character costume determination method, and character costume determination program
- The present invention relates to a character costume determination device, a character costume determination method, and a character costume determination program for determining, from a scenario, the costume of a character appearing in that scenario.
- Known examples include mail applications for mobile phones that send decorated mail by freely combining images and mail templates, applications that display CG animation on the receiving screen in response to a specific pictogram or character string included in a received mail, and services that send messages accompanied by character gestures and facial expressions through simple key operations.
- Patent Document 1 and Patent Document 2 use a method in which character clothing and facial parts such as eyes and noses can be selected and combined.
- Patent Document 3 takes a technique in which the avatar (a character representing the user) to be attached to an e-mail is determined using the user's environment information (location, weather, time, and the like).
- Patent Document 1 Japanese Patent Laid-Open No. 2001-167287
- Patent Document 2 Japanese Patent Laid-Open No. 2003-346173
- Patent Document 3 Japanese Patent Laid-Open No. 11-346267
- An object of the present invention is to provide a character costume determination device, a character costume determination method, and a character costume determination program capable of determining a character costume that matches the content of a scenario without using a clothing specification from the user or the user's environment information.
- The character costume determination device of the present invention determines the costume of a character from a scenario, and adopts a configuration having a scenario classification section that classifies the scenario based on the continuity of the character's clothing, and a costume determination section that, for each scenario classification of the scenario classified by the scenario classification section, determines the costume of the character based on the contents of that scenario classification.
- The character costume determination method of the present invention determines the costume of a character from a scenario, and includes a scenario classification step of classifying the scenario based on the continuity of the character's clothing, and a costume determination step of determining, for each scenario classification of the scenario classified in the scenario classification step, the costume of the character based on the contents of that scenario classification.
- The character costume determination program of the present invention causes a computer to execute processing that determines the costume of a character from a scenario, and causes the computer to execute a scenario classification process of classifying the scenario based on the continuity of the character's clothing, and a costume determination process of determining, for each scenario classification of the scenario classified by the scenario classification process, the costume of the character based on the contents of that scenario classification.
- FIG. 1 is a block diagram showing a configuration of an animation creation device equipped with a character costume determination device according to an embodiment of the present invention.
- FIG. 2 is a diagram showing an example of a text sentence.
- FIG. 3 is a diagram showing an example of a semantic dictionary table stored in the semantic dictionary database of FIG. 1.
- FIG. 4 is a diagram showing an example of a scenario generated by the scenario generation unit of FIG.
- FIG. 5 is a diagram showing an example of the data configuration of the scenario of FIG. 4.
- FIG. 6 is a diagram showing an example of a clothing continuity determination rule table stored in the clothing continuity determination rule database of FIG. 1.
- FIG. 7 is a diagram showing an example of a scenario divided into continuous units of clothing by the scenario classification unit in FIG. 1.
- FIG. 8 is a diagram showing an example of a character data table stored in the character database of FIG.
- FIG. 9 is a diagram showing an example of a first clothing determination rule table stored in the clothing determination rule database of FIG.
- FIG. 10 is a diagram showing an example of a second clothing determination rule table stored in the clothing determination rule database of FIG. 1.
- FIG. 11 is a diagram showing an example of a scenario rewritten by the scenario rewriting unit in FIG. 1.
- FIG. 12 is a diagram showing an example of an animation generated by the animation generation unit in FIG.
- FIG. 13 is a flowchart showing the operation of the animation creation device in FIG.
- FIG. 14 is a flowchart showing the contents of the scenario generation process (step S2000) in FIG.
- FIG. 15 is a flowchart showing the contents of the character costume determination process (step S3000) in FIG. 13.
- FIG. 16 is a flowchart showing the contents of the clothing determination process (step S3300) in FIG.
- FIG. 1 is a block diagram showing a configuration of an animation creation device equipped with a character costume determination device according to an embodiment of the present invention.
- The animation creation device 100 shown in FIG. 1 has a function of generating an animation from a text sentence, and roughly includes a scenario generation unit 200, a character costume determination unit 300, and an animation generation unit 400.
- the scenario generation unit 200 includes a semantic dictionary database 210 and a language information extraction unit 220.
- The character costume determination unit 300 includes a clothing continuity determination rule database 310, a scenario classification unit 320, a character database 330, a clothing determination rule database 340, a costume determination unit 350, and a scenario rewriting unit 360.
- The animation creation device 100 is a system that receives a text sentence as input, and generates and outputs the animation corresponding to its content, or animation information (for example, a scenario) corresponding to that animation.
- The animation may be a two-dimensional animation such as Flash (registered trademark), or a three-dimensional CG animation using a well-known technology such as OpenGL (registered trademark) or DirectX (registered trademark).
- The animation may also be a series of still images presented along a story, such as a four-frame comic, or a plurality of still images displayed in succession along a story, such as a flip book.
- the content of the animation is not particularly limited. It may be an animation in which at least one character appears (for example, an animation of only an avatar whose clothes change).
- In order to generate an animation from a text sentence, an animation creation device generally includes a scenario generation unit that generates an animation scenario from the text sentence and an animation generation unit that generates, from the scenario, an animation in which characters appear.
- The animation creation device 100 according to the present embodiment additionally has a function for determining, by means of the character costume determination unit 300, the costume of the characters appearing in the animation when generating an animation from a text sentence.
- Here, clothing broadly means clothes, ornaments, and accessories.
- The costume of a character appearing in the animation is preferably a costume that can easily be inferred from the content of the message.
- For example, many people who read the text "I was doing kendo in the morning" will picture a scene in which the character is wearing a kendo uniform. This is because, from the linguistic information representing the act of "doing kendo", it is considered appropriate for the character to be dressed in a kendo uniform.
- In view of this, in the present embodiment, the scenario generation unit 200 is provided with the semantic dictionary database 210 and the language information extraction unit 220, and the character costume determination unit 300 is newly provided, so that the costume of the characters appearing in the animation is automatically determined from the scenario obtained from the input text sentence so as to match the content of the text.
- a mobile phone will be described as an example, but the present invention is not limited to this.
- The animation creation device equipped with the character costume determination device according to the present invention is equally applicable to various hardware such as a PC (Personal Computer), a PDA (Personal Digital Assistant), a video camera, and an electronic book reader. It is also applicable not only to e-mail software but also to various application software and services such as chat software, web bulletin boards, SNS (Social Network Service), blogs (weblogs), and diary creation tools.
- the animation creation device 100 is incorporated in, for example, a mail creation / display function of a mobile phone.
- The mobile phone has a function to create mail, a function to input the text of the created mail to the animation creation device 100, and a function to display the animation that is the output result of the animation creation device 100. These functions are started by user key operations.
- The mobile phone also has a function of inputting the text of a received mail to the animation creation device 100 and a function of displaying and saving the animation that is the output result of the animation creation device 100. This makes it possible to display the contents of a text sentence as an animation, not only for text sentences written by the user but also for text sentences written by others.
- FIG. 2 is a diagram illustrating an example of a text sentence.
- FIG. 2 shows example text sentences in four different genres: mail, mobile mail, blog, and diary.
- The content of many texts written for communication purposes consists of the actions, feelings, and scenes of the writer or the other party.
- This embodiment also targets texts with such a structure.
- the present embodiment can also be applied to movie and play scenarios, books, magazines, and newspaper articles.
- the scenario generation unit 200 receives a text sentence, generates an animation scenario by natural language analysis processing, and outputs the generated scenario.
- Natural language analysis processing is generally performed in the order of morphological analysis, syntax analysis, and semantic analysis.
- As a scenario generation method, a method of supplementing information such as subject, action, and location in the result of semantic analysis is known. Natural language analysis and animation scenario generation are described in detail in, for example, R. C. Schank, C. K. Riesbeck, Shun Ishizaki (translation), "Introduction to Natural Language Understanding", Soken Publishing, pages 224-258, so a detailed description is omitted here.
- The semantic dictionary database 210 stores a semantic dictionary table for extracting the linguistic information particularly necessary for determining a character's clothing and for generating a scenario. The semantic dictionary table consists of pairs of items and vocabulary.
- FIG. 3 is a diagram illustrating an example of a semantic dictionary table stored in the semantic dictionary database 210.
- a vocabulary 213 is registered in association with the item 212 as a key.
- The item 212 takes values such as "clothing designation" indicating the character's clothing designation, "act" indicating the character's action, "person" indicating the character's name, "emotion" indicating the character's emotion, "purpose" indicating the purpose of the character's action, "adjective" indicating an adjective expression of the scene, "place" indicating the location of the scene, "time" indicating the time of the scene, "weather" indicating the weather of the scene, and "pictogram" indicating a pictograph in the text sentence. Note that the item 212 in the semantic dictionary table 211 need not be composed of all of these items, and may be composed of at least one of them.
- one or more vocabularies are registered for each item 212.
- For example, "tuxedo" and "white robe" are registered as the vocabulary 213 corresponding to the item 212 "clothing designation".
- The language information extraction unit 220 performs text matching between the input text sentence (see FIG. 2) and the vocabulary included in the semantic dictionary table 211 (see FIG. 3) stored in the semantic dictionary database 210, extracts each matching vocabulary 213 and its item 212 as linguistic information, and generates and outputs a scenario based on the obtained linguistic information (vocabulary 213 and item 212). However, for the item 212 "person", the scenario is generated after changing the value of the item 212 to "subject" or "partner" according to the content of the text sentence by natural language processing, for example depending on whether the vocabulary 213 appears in the text sentence followed by the subject-marking particle "ga".
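The dictionary-matching step described above can be sketched as follows. This is a minimal illustration: the dictionary contents, function name, and data shapes are assumptions for explanation, not part of the disclosure.

```python
# A minimal sketch of dictionary-based linguistic information extraction:
# match vocabulary from a semantic dictionary against a text and emit
# (item, vocabulary) pairs. The dictionary contents are illustrative.
SEMANTIC_DICTIONARY = {
    "clothing designation": ["tuxedo", "white robe"],
    "act": ["ski", "swimming", "kendo"],
    "place": ["sea", "company", "Japanese inn", "home"],
    "time": ["morning", "night"],
}

def extract_linguistic_information(text):
    """Return (item, vocabulary) pairs whose vocabulary occurs in the text."""
    extracted = []
    for item, words in SEMANTIC_DICTIONARY.items():
        for word in words:
            if word in text:
                extracted.append((item, word))
    return extracted

print(extract_linguistic_information("I went to ski in the morning"))
```

A fuller implementation would use morphological analysis rather than raw substring matching, but the item/vocabulary pairing is the same idea.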
- the scenario generated by the scenario generation unit 200 is output to the character costume determination unit 300.
- FIG. 4 is a diagram illustrating an example of a scenario generated by the scenario generation unit 200.
- a scenario 230 shown in FIG. 4 is composed of a plurality of scenes 231.
- a scene 231 is a unit of animation scene, and generally indicates an animation in the same time zone and the same place.
- The animation corresponding to each scene 231 is reproduced sequentially in order from the top of the scenario 230.
- Each scene 231 further includes a plurality of directions 232.
- the direction 232 is a unit of animation direction, and generally indicates an animation at the same time.
- The animation corresponding to each direction 232 is played sequentially in order from the top of the scenario 230.
- Each direction 232 further includes an item 233 for language information and an item value 234 for language information.
- the item 233 indicates the item of the corresponding direction 232, and the content is described in the item value 234.
- the possible values for item 233 are, for example, “clothing designation”, “acting”, “subject”, “partner”, “emotion”, “purpose”, “adjective”, “location”, “time”, “weather” , And “Emoji”.
- the value that the item value 234 can take is an arbitrary character string.
- the character string of the item value 234 is determined based on the contents of the text sentence related to the corresponding item 233.
- FIG. 5 is a diagram showing an example of the data configuration of the scenario 230 shown in FIG. 4. The scenario 230 shown in FIG. 5 takes, for example, an XML (eXtensible Markup Language) format data structure.
- The value designated by "Scene id" corresponds to the scene 231 in FIG. 4, and the value designated by "Direction id" corresponds to the direction 232 in FIG. 4. "Location", "Subject", "Action", "Emotion", and "Time" correspond to the items 233 "location", "subject", "act", "emotion", and "time" in FIG. 4, and the character strings written with them correspond to the item values 234 in FIG. 4.
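An XML data structure of this kind can be illustrated with a hypothetical fragment; the tag names ("Scenario", "Scene", "Direction", "Subject", and so on) are assumptions in the spirit of FIG. 5, not the patent's exact schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML in the spirit of the FIG. 5 data structure; tag names
# and contents are illustrative assumptions.
scenario_xml = """
<Scenario>
  <Scene id="1">
    <Direction id="1"><Subject>Hanako</Subject><Action>ski</Action></Direction>
    <Direction id="2"><Subject>Hanako</Subject><Emotion>joy</Emotion></Direction>
  </Scene>
</Scenario>
"""

root = ET.fromstring(scenario_xml)
for scene in root.findall("Scene"):
    for direction in scene.findall("Direction"):
        # each child tag/text pair plays the role of an item / item value
        items = {child.tag: child.text for child in direction}
        print(scene.get("id"), direction.get("id"), items)
```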
- The character costume determination unit 300 receives the scenario generated by the scenario generation unit 200, determines character data representing the substance of the characters appearing in the animation using the character database 330, and outputs the character data. Specifically, the character costume determination unit 300 classifies the input scenario into clothing units using the clothing continuity determination rule database 310, and further determines the characters' costumes from the classified scenario using the character database 330 and the clothing determination rule database 340. The character costume determination unit 300 then outputs the costumed characters as character data.
- In the present embodiment, clothing and characters are treated as one unit, and the character database 330 is described as storing character data for each piece of clothing, but the present invention is not limited to this. Character data and clothing data may be handled as separate data. In that case, the character costume determination unit refers to a clothing database in addition to the character database, and outputs character data and clothing data.
- In the present embodiment, an arbitrary pointer specifying the substance of the character, such as a file name or URL indicating the costumed character, is output, but the form of the output is not limited to this. For example, depending on the implementation, the character data itself, which is the substance of the costumed character, may be output.
- The clothing continuity determination rule database 310 stores a clothing continuity determination rule table in which the rules necessary for determining the continuity of clothing in the scenario classification unit 320 (hereinafter "clothing continuity determination rules") are described. A clothing continuity determination rule is a rule for determining how long, and up to what timing, a character's clothing continues within a scenario, and indicates a condition under which the clothing continues. By using such clothing continuity determination rules, the timing for changing clothing can be determined.
- FIG. 6 is a diagram showing an example of the clothing continuity determination rule table stored in the clothing continuity determination rule database 310. The clothing continuity determination rule table 311 shown in FIG. 6 is composed of sets of an ID 312 and a clothing continuity determination rule 313. The clothing continuity determination rule 313 describes a rule necessary for classifying a scenario for determining clothing. In the example of FIG. 6, the clothing continuity determination rule table 311 is composed of three sets 311-1, 311-2, and 311-3. The set 311-1 describes, as the clothing continuity determination rule 313 corresponding to the ID 312 "ID1", the rule "Clothing is set for each character". The set 311-2 describes, as the clothing continuity determination rule 313 corresponding to the ID 312 "ID2", the rule "A character's clothing does not change between directions within a scene". The set 311-3 describes, as the clothing continuity determination rule 313 corresponding to the ID 312 "ID3", the rule "If the situation does not change even when the scene differs, the clothing of the previous scene is inherited". A scenario is classified so as to satisfy all the clothing continuity determination rules 313 described in the clothing continuity determination rule table 311. The clothing continuity determination rule 313 can be written in program code interpretable by a computer; here, the contents are expressed in natural language.
- The scenario classification unit 320 uses the clothing continuity determination rule table 311 (see FIG. 6) stored in the clothing continuity determination rule database 310 to divide the scenario 230 (see FIG. 4) into units of sections in which the character's clothing continues (hereinafter "scenario classifications"). A scenario classification means that the character's clothing continues as one specific outfit within the same scenario classification, while the character's clothing may differ between different scenario classifications.
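The classification step can be sketched as follows. This assumes a simplified rule in the spirit of ID3 of FIG. 6 (a new classification starts when the time or place changes); the scene data and function name are illustrative assumptions.

```python
# Hypothetical sketch of scenario classification: group consecutive scenes
# into one segment while (time, place) stays the same; clothing is then
# assumed continuous within each segment. Scene contents are illustrative.
def classify_scenario(scenes):
    """Split a list of scene dicts into segments of continuous clothing."""
    segments = []
    previous_key = None
    for scene in scenes:
        key = (scene.get("time"), scene.get("place"))
        if previous_key is None or key != previous_key:
            segments.append([])          # situation changed -> new segment
        segments[-1].append(scene)
        previous_key = key
    return segments

scenes = [
    {"id": 1, "time": "morning", "place": "ski slope"},
    {"id": 2, "time": "night", "place": "Japanese inn"},
    {"id": 3, "time": "night", "place": "Japanese inn"},
]
print([len(segment) for segment in classify_scenario(scenes)])
```

Scenes 2 and 3 share the same time and place, so they fall into one segment, mirroring how scenes 2 and 3 form one scenario classification in FIG. 7.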
- FIG. 7 is a diagram showing an example of a scenario divided into continuous units of clothing by the scenario classification unit 320.
- The divided scenario 321 shown in FIG. 7 is composed of a plurality of scenario classifications 322. A scenario classification 322 is a clothing-unit division of the scenario 230 shown in FIG. 4. Each scenario classification 322 is composed of one or more sections 326, and each section 326 is in turn composed of a subject 323, an item 324, and an item value 325. The subject 323 shows only the characters appearing in the scenes included in that scenario classification. Each section 326 describes the scenario divided for determining the costume of the character in that scenario classification; a section 326 is therefore set for each subject 323 for each scene.
- In the example of FIG. 7, the divided scenario 321 is composed of two scenario classifications 322. The scenario classification 322 "Scenario classification 1" consists of one section 326-1 corresponding to scene 1 shown in FIG. 4, and the scenario classification 322 "Scenario classification 2" consists of two sections 326-2 and 326-3 corresponding to scene 2 in FIG. 4 and two sections 326-4 and 326-5 corresponding to scene 3 in FIG. 4. "Scenario classification 1" describes, as section 326-1, the scenario for determining the clothing of the subject 323 "Hanako". "Scenario classification 2" describes, as sections 326-2 and 326-4, the scenarios for determining the clothing of the subject 323 "Hanako", and, as sections 326-3 and 326-5, the scenarios for determining the clothing of the subject 323 "Nozomi".
- The character database 330 shown in FIG. 1 stores a character data table for extracting character data using a pair of a character name and clothing as a key. The character data table is composed of character names, clothing, and character data.
- FIG. 8 is a diagram showing an example of a character data table stored in the character database 330.
- Character data table 331 shown in FIG. 8 includes character name 332 and clothing 333 as keys, and character data 334 corresponding thereto. Character data 334 is determined by a combination of name 332 and clothing 333.
- The character data 334 takes the form of a pointer specifying the character entity, such as a file name or URL indicating the costumed character. In the present embodiment, the costumed character is handled as one piece of character data, and thus the above configuration is adopted. However, character data and clothing data may be kept in separate files and combined when used for the animation. Since that method, which requires a separate clothing database, is not significantly different, its explanation is omitted here.
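The lookup keyed on a (character name, clothing) pair can be sketched as follows; the table contents are illustrative, with "Hana_skiwear.cg" borrowed from the FIG. 11 example.

```python
# Hypothetical sketch of the FIG. 8 character data table: character data
# (here, a file name acting as a pointer to the character entity) is looked
# up by the pair (character name, clothing). Table contents are illustrative.
CHARACTER_DATA_TABLE = {
    ("Hanako", "ski wear"): "Hana_skiwear.cg",
    ("Hanako", "yukata"): "Hana_yukata.cg",
    ("Nozomi", "yukata"): "Nozo_yukata.cg",
}

def lookup_character_data(name, clothing):
    """Return a pointer to the costumed character entity, or None."""
    return CHARACTER_DATA_TABLE.get((name, clothing))

print(lookup_character_data("Hanako", "ski wear"))
```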
- the clothing determination rule database 340 stores a first clothing determination rule table and a second clothing determination rule table.
- the first clothing determination rule table is a table showing the correspondence between language information and clothing.
- In a clothing determination rule, the linguistic information can be described by combining multiple pieces of linguistic information using logical symbols (e.g., AND, OR, parentheses). By combining multiple pieces of linguistic information with logical symbols in this way, multiple pieces of linguistic information can be processed together.
- The second clothing determination rule table consists of at least one piece of meta-knowledge for resolving contradictions and conflicts. By using such meta-knowledge, even if contradictions or conflicts arise, they can be resolved and a single outfit can be determined.
- FIG. 9 is a diagram illustrating an example of a first clothing determination rule table stored in the clothing determination rule database 340.
- The first clothing determination rule table 341 shown in FIG. 9 includes sets of an ID 342, character clothing 343, and linguistic information 344. The linguistic information 344 is composed of items and item values (see FIG. 4 and FIG. 7), and can also be described by combining multiple pairs of items and item values with logical symbols (e.g., AND, OR, parentheses).
- the first clothing determination rule table 341 includes nine groups 341-1 to 341-9 having IDs 342 from “1” to “9”.
- The set 341-1 indicates that when the item of the linguistic information 344 is "act" and the corresponding item value is "ski", the clothing 343 "ski wear" is appropriate. The set 341-2 indicates that when the item of the linguistic information 344 is "place" and the corresponding item value is "Japanese inn", the clothing 343 "yukata" is appropriate. The set 341-3 indicates that the clothing 343 "swimsuit" is appropriate when the item of the linguistic information 344 is "act" and the corresponding item value is "swimming", when the item is "purpose" and the corresponding item value is "swimming", or when the item "place" has the item value "sea" and the item "season" has the item value "summer". The set 341-4 indicates that when the item of the linguistic information 344 is "subject" and the corresponding item value is "groom", the clothing 343 "tuxedo" is appropriate. The set 341-5 indicates that when the item of the linguistic information 344 is "pictogram" and the corresponding item value is "(jeans mark)", the clothing 343 "jeans" is appropriate. The set 341-6 indicates that when the item of the linguistic information 344 is "weather" and the corresponding item value is "rain", the clothing 343 "raincoat" is appropriate. The set 341-7 indicates that the clothing 343 "suit" is appropriate when the item of the linguistic information 344 is "place" and the corresponding item value is "company", when the item is "purpose" and the corresponding item value is "work", or when the item is "partner" and the corresponding item value is "wife's parents". The set 341-8 indicates that the clothing 343 "outing wear" is appropriate when the item of the linguistic information 344 is "place" and the corresponding item value is "restaurant", "hotel", or "Japanese inn". The set 341-9 indicates that the clothing 343 "pajamas" is appropriate when the item "place" has the item value "home" and the item "time" has the item value "night".
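A rule table of this kind can be sketched as follows; the condition encoding (nested AND/OR tuples) and the rule subset are illustrative paraphrases of FIG. 9, not the patent's program form.

```python
# Hypothetical sketch of the first clothing determination rule table:
# each rule maps a condition over (item, item value) pairs to a clothing.
# Conditions combine pairs with AND/OR, encoded here as nested tuples.
CLOTHING_RULES = [
    (1, "ski wear", ("act", "ski")),
    (2, "yukata", ("place", "Japanese inn")),
    (3, "swimsuit", ("OR", ("act", "swimming"), ("purpose", "swimming"),
                     ("AND", ("place", "sea"), ("season", "summer")))),
    (6, "raincoat", ("weather", "rain")),
]

def condition_holds(condition, facts):
    """Evaluate a nested AND/OR condition against extracted item values."""
    if condition[0] == "AND":
        return all(condition_holds(c, facts) for c in condition[1:])
    if condition[0] == "OR":
        return any(condition_holds(c, facts) for c in condition[1:])
    item, value = condition
    return facts.get(item) == value

def matching_clothing(facts):
    """Return (rule id, clothing) for every rule whose condition holds."""
    return [(rule_id, clothing) for rule_id, clothing, cond in CLOTHING_RULES
            if condition_holds(cond, facts)]

print(matching_clothing({"place": "sea", "season": "summer"}))
```

Several rules may fire at once; resolving such conflicts is the role of the second table's meta-knowledge.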
- FIG. 10 is a diagram showing an example of a second clothing determination rule table stored in the clothing determination rule database 340.
- the second costume determination rule table 345 shown in FIG. 10 is composed of a set of ID 346 and meta-knowledge 347.
- Meta-knowledge 347 is knowledge about how to operate the rules, and is a knowledge rule showing how to resolve a contradiction or conflict when one occurs. In the example of FIG. 10, the second clothing determination rule table 345 includes four sets 345-1 to 345-4 having IDs 346 from "1" to "4". The set 345-1 describes the meta-knowledge 347 "Adopt the result of the rule whose AND-connected conditions are the strictest". The set 345-2 describes the meta-knowledge 347 "Take over the previous clothing". The set 345-3 describes the meta-knowledge 347 "Use the following priority for items: clothing designation > act > purpose > place > subject > partner > emotion > adjective > time > weather > pictogram". The set 345-4 describes the meta-knowledge 347 "Give priority in ascending order of rule ID". The meta-knowledge 347 can be written in program code interpretable by a computer; here, the contents are expressed in natural language.
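Meta-knowledge such as the item priority of set 345-3 can be sketched as follows; the candidate format (triggering item, clothing) is an assumption for illustration.

```python
# Hypothetical sketch of meta-knowledge like set 345-3 of FIG. 10: when
# several clothing candidates conflict, keep the one whose triggering item
# ranks highest in a fixed priority order. Candidate data is illustrative.
ITEM_PRIORITY = ["clothing designation", "act", "purpose", "place", "subject",
                 "partner", "emotion", "adjective", "time", "weather", "pictogram"]

def resolve_conflict(candidates):
    """candidates: list of (triggering item, clothing). Return the winner."""
    best = min(candidates, key=lambda c: ITEM_PRIORITY.index(c[0]))
    return best[1]

# "act" outranks "place" in the priority list, so ski wear wins over yukata.
print(resolve_conflict([("place", "yukata"), ("act", "ski wear")]))
```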
- The costume determination unit 350 determines the costume of the character from the classified scenario (see FIG. 7) using the character data table 331 (see FIG. 8) stored in the character database 330 and the first clothing determination rule table 341 (see FIG. 9) and second clothing determination rule table 345 (see FIG. 10) stored in the clothing determination rule database 340, and outputs the costumed character data.
- the scenario rewriting unit 360 rewrites the scenario input from the scenario generation unit 200 using the character data with clothing output from the clothing determination unit 350. Specifically, the scenario rewriting unit 360 rewrites the item value 234 corresponding to the item 233 “subject” in the input scenario to the character data with clothing determined by the clothing determining unit 350.
- FIG. 11 is a diagram illustrating an example of a scenario rewritten by the scenario rewriting unit 360.
- The rewritten scenario 370 shown in FIG. 11 is the same as the pre-rewrite scenario 230 shown in FIG. 4, except that, for example, the item value "Hanako" in the direction 232-2 is rewritten to "Hana_skiwear.cg".
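The rewriting step can be sketched as follows; the scenario representation (a list of direction dictionaries) and the mapping are assumptions for illustration.

```python
# Hypothetical sketch of the scenario rewriting step: replace each subject
# item value with the pointer to the costumed character data chosen for it.
def rewrite_scenario(directions, costumed):
    """Replace "subject" values with costumed character data pointers."""
    rewritten = []
    for direction in directions:
        d = dict(direction)  # leave the input scenario untouched
        if d.get("subject") in costumed:
            d["subject"] = costumed[d["subject"]]
        rewritten.append(d)
    return rewritten

directions = [{"subject": "Hanako", "act": "ski"}]
print(rewrite_scenario(directions, {"Hanako": "Hana_skiwear.cg"}))
```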
- In the present embodiment, the scenario rewriting unit 360 is provided in the character costume determination unit 300, but the present invention is not limited to this. The costumed character data output from the costume determination unit 350 of the character costume determination unit 300 may be sent as it is to the animation generation unit 400, without providing the scenario rewriting unit 360 in the character costume determination unit 300.
- the scenario generated by the scenario generation unit 200 is output not only to the character costume determination unit 300 but also to the animation generation unit 400.
- Animation generation section 400 generates an animation using at least a scenario after rewriting, that is, a scenario including character data with clothes (see FIG. 11).
- The animation generation unit 400 determines the placement and performance of the characters based on the rewritten scenario and, if necessary, adds camera work, lighting, stylized expressions, BGM, sound effects, and the like, thereby generating and outputting the animation as a 2D animation or a 3D animation. It is also possible to generate the animation using the scenario from the scenario generation unit 200 and the costumed character data from the costume determination unit 350 of the character costume determination unit 300.
- FIG. 12 is a diagram illustrating an example of an animation generated by the animation generation unit 400.
- Each animation shown in FIG. 12A, FIG. 12B, and FIG. 12C is actually a moving image, but one scene is cut out and shown here as a still image.
- FIG. 12A shows a scene where Hanako is skiing while wearing ski wear, and corresponds to "Direction 2" (direction 232-2) of "Scene 1" in the rewritten scenario 370 shown in FIG. 11.
- FIG. 12B shows a scene where Hanako is wearing a yukata, and corresponds to "Direction 3" (direction 232-6) of "Scene 2" and "Direction 3" (direction 232-9) of "Scene 3" in the rewritten scenario 370 shown in FIG. 11.
- FIG. 12C shows a scene where Hanako, wearing jeans, makes a peace sign.
- Although not shown, the animation creation device 100 has a CPU (Central Processing Unit) and memories such as a ROM (Read Only Memory) and a RAM (Random Access Memory). The ROM stores a control program, and the RAM is a working memory.
- the functions of each unit shown in Fig. 1 are realized by the CPU executing the control program.
- The mobile phone described above includes a communication circuit as existing hardware, and can send the animation information generated by the animation creation device 100 to communication devices such as other mobile phones and personal computers via a mobile phone network.
- the above mobile phone can receive a text sentence, a scenario, and animation information transmitted from another mobile phone or a base station via a communication circuit.
- FIG. 13 is a flowchart showing the operation of the animation creating apparatus 100 shown in FIG.
- In step S1000, the scenario generation unit 200 determines whether or not a text sentence has been input. A text sentence is input, for example, by the user operating the mobile phone, or by reception from the outside through the communication circuit. The user also instructs the start of animation creation by operating the mobile phone. If a text sentence has been input (S1000: YES), the process proceeds to step S2000; if not (S1000: NO), the process waits until a text sentence is input.
- In step S2000, the scenario generation unit 200 executes the scenario generation process. Specifically, it takes the text sentence as input and, through natural language analysis, generates and outputs an animation scenario from the input text sentence. This scenario generation process will be described later.
- In step S3000, the character clothing determination unit 300 executes the character clothing determination process. Specifically, it takes the scenario generated in step S2000 as input and, using the clothing continuation determination rule database 310, the character database 330, and the clothing determination rule database 340, determines and outputs clothed character data representing the appearance of each character in the animation. This character clothing determination process will be described later.
- In step S4000, the animation generation unit 400 executes the animation generation process. Specifically, it generates an animation based on the scenario generated in step S2000 and the clothed character data determined in step S3000. This animation generation process will be described later.
- The animation file format is not limited here.
- The animation file format may be not only a video file format such as MPEG (Moving Picture Experts Group) or AVI (Audio Video Interleave), but also a data format for CG animation, a script language format, a Flash animation format, or the like.
- In step S5000, the animation generation unit 400 outputs the animation generated in step S4000.
- The mobile phone provided with the animation creating apparatus 100 has a display unit that displays the animation output from the animation creating apparatus 100.
- A similar device is also provided at the communication partner of the mobile phone equipped with the animation creating apparatus 100. Therefore, by sending the animation to the communication partner, the animation matching the contents of the text sentence input by the user of the mobile phone equipped with the animation creating apparatus 100 can be shown to the user at the other end.
- FIG. 14 is a flowchart showing the contents of the scenario generation process (step S2000) of FIG. 13. This scenario generation process is executed by the scenario generation unit 200.
- In step S2100, it is determined whether or not a text sentence (see FIG. 2) has been input to the animation creating apparatus 100. If a text sentence has been input (S2100: YES), the process proceeds to step S2200; if not (S2100: NO), the process waits until a text sentence is input.
- In step S2200, morphological analysis, syntactic analysis, and semantic analysis are executed in sequence as natural language analysis of the input text sentence, and the analysis result is output.
- In step S2300, the language information extraction unit 220 performs text matching between the analysis result of step S2200 and the vocabulary contained in the semantic dictionary table 211 (see FIG. 3) stored in the semantic dictionary database 210, and extracts each matching vocabulary 213 together with its item 212 as language information.
- In step S2400, an animation scenario (see FIG. 4) that matches the contents of the input text sentence is generated based on the analysis result of step S2200 and the language information extracted in step S2300.
- In step S2500, the scenario generated in step S2400 is output to the character clothing determination unit 300, and the process then returns to the flowchart of FIG. 13.
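The dictionary matching of step S2300 can be sketched roughly as follows. The dictionary entries, function name, and data shapes here are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of language-information extraction (step S2300):
# vocabulary from a semantic dictionary (like table 211) is matched
# against the analyzed words, yielding (item, value) pairs that later
# become directions in the scenario.

SEMANTIC_DICTIONARY = {
    # vocabulary 213 -> item 212 (entries invented for illustration)
    "ski area": "place",
    "skiing": "performance",
    "fun": "emotion",
    "Japanese inn": "place",
    "night": "time",
    "meal": "performance",
}

def extract_language_info(analyzed_words):
    """Return (item, value) pairs for words found in the dictionary."""
    info = []
    for word in analyzed_words:
        if word in SEMANTIC_DICTIONARY:
            info.append((SEMANTIC_DICTIONARY[word], word))
    return info

print(extract_language_info(["ski area", "skiing", "fun"]))
# -> [('place', 'ski area'), ('performance', 'skiing'), ('emotion', 'fun')]
```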
- The scenario 230 generated from the text sentence is divided into three scenes: up to the second sentence, the third sentence, and the fourth sentence.
- The first scene 1 consists of three directions 232-1 to 232-3: Direction 1 (direction 232-1) indicating that the "place" is a "ski area", Direction 2 (direction 232-2) indicating that the "performance" is "skiing", and Direction 3 (direction 232-3) indicating that the "emotion" is "fun".
- The second scene 2 consists of three directions 232-4 to 232-6: Direction 1 (direction 232-4) indicating that the "place" is a "Japanese inn", Direction 2 (direction 232-5) indicating that the "time" is "night", and Direction 3 (direction 232-6) indicating that the "performance" is "meal".
- The third scene 3 consists of Direction 1 (direction 232-7) indicating that the "place" is a "Japanese inn", Direction 2 (direction 232-8) indicating that the "time" is "night", and Direction 3 (direction 232-9) indicating that the "performance" is "staying".
- In the mobile phone according to the present embodiment, a character (avatar) representing the user of the mobile phone is registered in advance.
- The scenario generation unit 200 sets the registered character's name as the subject when a text sentence has no subject. This handles the characteristic of Japanese in which the writer, as subject, is often omitted, particularly in text sentences used for communication.
- In the example scenario 230 shown in FIG. 4, "Nozomi" is set in this way as the subject of Direction 3 of Scene 2 and Direction 3 of Scene 3, which have no subject in the description of scenario 230.
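The subject-completion behavior described above can be sketched as follows, assuming a simple list-of-dicts representation for directions; the structure and function name are assumptions made for illustration.

```python
# Sketch of the subject-completion rule: when a direction has no
# subject, the pre-registered avatar name is filled in. The name
# "Nozomi" follows the example in the text.

REGISTERED_CHARACTER = "Nozomi"

def complete_subjects(directions):
    """Fill in the registered character as subject where missing."""
    for d in directions:
        if not d.get("subject"):
            d["subject"] = REGISTERED_CHARACTER
    return directions

directions = [{"item": "performance", "value": "meal", "subject": None}]
print(complete_subjects(directions))
# -> [{'item': 'performance', 'value': 'meal', 'subject': 'Nozomi'}]
```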
- FIG. 15 is a flowchart showing the contents of the character clothing determination process (step S3000) of FIG. 13. This character clothing determination process is executed by the character clothing determination unit 300.
- In step S3100, it is determined whether or not a scenario (see FIG. 4) generated by the scenario generation unit 200 has been input. If a scenario has been input (S3100: YES), the process proceeds to step S3200; if not (S3100: NO), the process waits until a scenario is input.
- In step S3200, the scenario classification unit 320 determines the sections (scenario sections) in which a character's clothing continues, so that all of the clothing continuation determination rules 313 described in the clothing continuation determination rule table 311 (see FIG. 6) stored in the clothing continuation determination rule database 310 are satisfied. It then divides the scenario 230 shown in FIG. 4 on the basis of the determined sections, generates the classified scenario 321 shown in FIG. 7, and outputs it to the clothing determination unit 350. As an example, the processing procedure for creating the classified scenario 321 shown in FIG. 7 from the scenario 230 shown in FIG. 4 is shown below.
- The scenario classification unit 320 first stores the input scenario 230 in a memory (not shown). Next, following the "ID1" clothing continuation determination rule 313 (set 311-1) in the clothing continuation determination rule table 311, it scans the scenario 230 in order and creates a separate table for each character appearing in the scenario 230.
- The scenario classification unit 320 then scans the scenario 230 again in order and enters the items and values of each direction of the scenario 230 into the corresponding tables in order. At this time, in accordance with the "ID2" clothing continuation determination rule 313 (set 311-2), directions within the same scene are entered into the same table. When the place or time changes with respect to the immediately preceding scene, the scenario classification unit 320 divides the table in accordance with the "ID3" clothing continuation determination rule 313 (set 311-3) and enters the direction items and values into a new table.
- Direction 232-7 "Japanese inn" and direction 232-8 "night" belong to a different scene from direction 232-6, but since neither the time nor the place changes with respect to directions 232-4 to 232-6, the "Hanako" table is not divided and the entries follow in the same table. The processing for "Nozomi" is the same.
- Finally, the scenario classification unit 320 integrates the created tables to create the classified scenario 321 shown in FIG. 7. Specifically, the scenario classification unit 320 groups the items of each table generated for each subject into scenario sections, and rearranges the scenario sections and the direction items and values so as to follow the order of the original scenario 230.
- As a result, a classified scenario 321 divided into two scenario sections 322 is obtained.
- In the classified scenario 321, scene 2 and scene 3 are combined into one scenario section.
- That is, scene 2 and scene 3 are determined to be a section in which the character's clothing continues.
- The classified scenario 321 is output to the clothing determination unit 350.
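The splitting behavior described above can be sketched roughly as follows for a single character; rule ID1 (one table per character) would simply repeat this per subject. The data structures are assumptions, not the patent's implementation.

```python
# Sketch of scenario classification (step S3200): directions of the same
# scene stay together (rule ID2), and a new section is started whenever
# place or time changes with respect to the previous scene (rule ID3).

def classify(scenes):
    """scenes: list of dicts like {"place": ..., "time": ...}.
    Returns a list of scenario sections (each a list of scenes) in which
    the character's clothing is judged to continue."""
    sections = []
    current = []
    prev = None
    for scene in scenes:
        changed = prev is not None and (
            scene.get("place") != prev.get("place")
            or scene.get("time") != prev.get("time")
        )
        if changed:  # rule ID3: a place/time change starts a new section
            sections.append(current)
            current = []
        current.append(scene)
        prev = scene
    if current:
        sections.append(current)
    return sections

scenes = [
    {"place": "ski area", "time": None},
    {"place": "Japanese inn", "time": "night"},
    {"place": "Japanese inn", "time": "night"},
]
print(len(classify(scenes)))  # -> 2: scenes 2 and 3 share one section
```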
- In step S3300, the clothing determination unit 350 executes the clothing determination process for each scenario section. Specifically, using the character data table 331 (see FIG. 8) stored in the character database 330, and the first clothing determination rule table 341 (see FIG. 9) and the second clothing determination rule table 345 (see FIG. 10) stored in the clothing determination rule database 340, it determines the character's clothing from the classified scenario (see FIG. 7) and outputs clothed character data. This clothing determination process will be described in detail later.
- In step S3400, the scenario rewriting unit 360 rewrites the scenario input to the character clothing determination unit 300 using the clothed character data determined in step S3300, and outputs the rewritten scenario, which includes the clothed characters.
- Specifically, the item value 234 corresponding to the item 233 "subject" is rewritten to the clothed character data determined in step S3300, and the rewritten scenario is output.
- FIG. 16 is a flowchart showing the contents of the clothing determination process (step S3300) of FIG. 15. This clothing determination process is executed by the clothing determination unit 350.
- In step S3310, from among the scenario sections constituting the scenario classified in step S3200 of FIG. 15 (see FIG. 7) and the subjects included in those scenario sections, a combination for which determination and output of character data has not yet been completed is selected.
- In step S3320, clothing candidates are listed from the input classified scenario using the first clothing determination rule table 341 (see FIG. 9) stored in the clothing determination rule database 340.
- In step S3330, the character data table 331 (see FIG. 8) stored in the character database 330 is searched using the name of the subject 323 in the classified scenario 321 and each of the clothing candidates listed in step S3320 as keys, and the corresponding character data 334 is acquired.
- In step S3340, it is determined whether or not there is no corresponding character data 334 as the processing result of step S3330. If there is no corresponding character data 334 (S3340: YES), the process proceeds to step S3350; if there is one or more corresponding character data 334 (S3340: NO), the process proceeds to step S3360.
- In step S3350, since no corresponding character data could be acquired for the given name of the subject 323 and clothing, character data corresponding to the default clothing is searched for.
- Specifically, the character data table 331 is searched using only the name of the subject 323 as a key, the character data 334 whose clothing 333 is "default" is acquired, and the process then proceeds to step S3380.
- The character database 330 is configured so that, when a search is executed using only the name of the subject 323 as a key, the character data 334 whose clothing 333 is "default" is extracted. Further, if no corresponding character data 334 is obtained even when only the name of the subject 323 is used as a key, arbitrary character data 334 is extracted.
- In step S3360, it is further determined whether or not there are a plurality of corresponding character data 334 as the processing result of step S3330. If there are a plurality of corresponding character data 334 (S3360: YES), the process proceeds to step S3370; if not, that is, if there is exactly one corresponding character data 334 (S3360: NO), the process proceeds immediately to step S3380.
- In step S3370, in order to narrow the corresponding character data 334 down to one, the pieces of meta-knowledge 347 described in the second clothing determination rule table 345 (see FIG. 10) stored in the clothing determination rule database 340 are applied one at a time in the order of their IDs 346, until a single character data 334 is determined.
- Specifically, the meta-knowledge 347 with ID 346 "ID1" is applied first to try to narrow down the character data 334; if more than one candidate remains, the meta-knowledge 347 with the next ID 346, "ID2", is applied, and the meta-knowledge 347 is applied in order until the character data is narrowed down to one.
- In step S3380, the single character data 334 obtained in step S3330, step S3350, or step S3370 is output.
- In step S3390, it is determined whether or not a combination of an unprocessed scenario section and subject remains. If such a combination remains (S3390: YES), the process returns to step S3310; if processing has been completed for all combinations of scenario sections and subjects (S3390: NO), the process returns to the flowchart of FIG. 15. As a result, the series of processes in steps S3320 to S3380 is performed for each scenario section and each subject of the classified scenario 321, so that the number of character data 334 finally output from the clothing determination unit 350 equals the number of combinations of scenario sections and subjects in the classified scenario 321.
- For example, if "place: Japanese inn" and "performance: swim" exist in the same scenario section of the classified scenario, then in step S3320 two clothing candidates, "yukata" and "swimsuit", are obtained from set 341-2 and set 341-3 of the first clothing determination rule table 341. In this case, since there are a plurality of corresponding character data 334, the pieces of meta-knowledge 347 described in the second clothing determination rule table 345 are applied one by one.
- First, the meta-knowledge 347 described in set 345-1 is "adopt the result of a rule whose conditions are connected by AND". In this case, since neither set 341-2 nor set 341-3 of the first clothing determination rule table 341 has such compound conditions, the candidates cannot be narrowed down and the next meta-knowledge 347 is applied.
- The meta-knowledge 347 described in the next set 345-2 is "take over the previous clothing". In this case, since there is no previous clothing, the next meta-knowledge 347 is applied.
- The meta-knowledge 347 described in the next set 345-3 is "use the following priority for items", and it states that "performance" takes precedence over "place". Accordingly, the language information 344 "performance: swim" is adopted, and the clothing 343 is determined to be "swimsuit". If multiple candidates still remained, the meta-knowledge 347 described in the last set 345-4, "give priority in ascending order of rule ID", would be applied, so the candidates can always be narrowed down to one.
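Steps S3320 to S3370 can be sketched as follows. The rule contents mirror the worked example above (yukata for "place: Japanese inn", swimsuit for "performance: swim", "performance" preferred over "place"); everything else, including names and data shapes, is an illustrative assumption.

```python
# Sketch of clothing determination: list candidates from first-rule
# matches (S3320), fall back to default clothing when nothing matches
# (S3350), and narrow multiple candidates with meta-knowledge (S3370).

FIRST_RULES = [
    {"id": 2, "condition": ("place", "Japanese inn"), "clothing": "yukata"},
    {"id": 3, "condition": ("performance", "swim"), "clothing": "swimsuit"},
]

ITEM_PRIORITY = ["performance", "place"]  # meta-knowledge of set 345-3

def determine_clothing(language_info):
    """language_info: list of (item, value) pairs from one scenario section."""
    candidates = [r for r in FIRST_RULES if r["condition"] in language_info]
    if not candidates:
        return "default"  # step S3350 fallback
    if len(candidates) > 1:
        # item priority first, then ascending rule ID (set 345-4)
        candidates.sort(key=lambda r: (ITEM_PRIORITY.index(r["condition"][0]),
                                       r["id"]))
    return candidates[0]["clothing"]

info = [("place", "Japanese inn"), ("performance", "swim")]
print(determine_clothing(info))  # -> swimsuit
```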
- FIG. 17 is a flowchart showing the contents of the animation generation process (step S4000) of FIG. 13. This animation generation process is executed by the animation generation unit 400.
- Since steps S4200 to S4500 are described in detail in, for example, Japanese Patent Application Laid-Open No. 2005-44181, detailed description thereof is omitted here.
- In step S4100, it is determined whether or not the rewritten scenario (including the clothed character data) output in step S3400 of FIG. 15 (see FIG. 11) has been input. If the rewritten scenario has been input (S4100: YES), the process proceeds to step S4200; if not (S4100: NO), the process waits until the rewritten scenario is input.
- In step S4200, the arrangement of the character is determined from the place and the subject's performance in the input rewritten scenario 370.
- "Arrangement" includes specific coordinates and postures. The simplest method of determining the character arrangement is, for example, to refer to a database that stores in advance, for each combination of place and performance, where characters should be arranged.
- In step S4300, the motion of the character is determined from the subject's performance in the input rewritten scenario 370.
- "Motion" includes specific shapes of movement, their timing, and their duration.
- The simplest method of determining a character's motion is, for example, to refer to a database in which performance and motion are associated one-to-one.
- In step S4400, effects such as camera work and lighting are determined from the place and the subject's performance in the input rewritten scenario 370.
- The simplest method of determining the effects is, for example, to refer to a database in which performance and effects are associated one-to-one.
- In step S4500, an animation composed of the characters included in the input scenario and the arrangement information, motion information, and effect information determined in steps S4200 to S4400 is generated and output.
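The simple database-lookup approach described for steps S4200 to S4400 can be sketched as dictionaries keyed by (place, performance) or by performance alone; all keys and values below are invented placeholders, not data from the patent.

```python
# Sketch of the lookup tables for arrangement (S4200), motion (S4300),
# and effects (S4400), each consulted with the scenario's place and
# performance values.

ARRANGEMENT_DB = {("ski area", "skiing"): {"pos": (0.2, 0.5), "pose": "standing"}}
MOTION_DB = {"skiing": "ski_loop"}
EFFECT_DB = {"skiing": "follow_camera"}

def plan_animation(place, performance):
    """Gather the per-direction animation parameters via table lookups."""
    return {
        "arrangement": ARRANGEMENT_DB.get((place, performance)),  # S4200
        "motion": MOTION_DB.get(performance),                     # S4300
        "effect": EFFECT_DB.get(performance),                     # S4400
    }

plan = plan_animation("ski area", "skiing")
print(plan["motion"])  # -> ski_loop
```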
- As described above, according to the present embodiment, a scenario is generated from the input text sentence using the semantic dictionary database 210, and the character's clothing is determined from that scenario using the clothing continuation determination rule database 310, the character database 330, and the clothing determination rule database 340.
- Therefore, it is possible to determine character clothing that matches the contents of the input text sentence, that is, the contents of the generated scenario, without using a clothing designation by the user or the user's environment information.
- For example, depending on the words contained in the text sentence, the character can be displayed wearing a swimsuit, or, if the text sentence includes words such as "I came to the interview venue", wearing a suit. Thereby, it is possible to automatically determine character clothing that matches the contents of the scenario without using a clothing designation by the user or the user's environment information.
- In general, the animation to be generated is composed of a plurality of performances, a plurality of characters, and a plurality of scenes.
- To determine a character's clothing in such an animation, the important points are how to process multiple pieces of language information, how to resolve contradictions and conflicts among them, and how to determine over what range the clothing continues and at what timing it changes.
- In the present embodiment, since the clothing continuation determination rule database 310 is provided, it is possible to determine over what range the clothing continues and at what timing it changes.
- Furthermore, since the clothing determination rule database 340 including the first clothing determination rule table 341 and the second clothing determination rule table 345 is provided, multiple pieces of language information can be processed, and even when contradictions or conflicts exist, they can be resolved and a single clothing can be determined.
- In the present embodiment, clothing continuation determination rules in which a scenario section is composed of one or more scenes are applied.
- However, clothing continuation determination rules in which a scenario section is composed of other units, such as directions, may be applied instead.
- A clothing continuation determination rule may also be applied such that the span from the middle of one scene to the middle of another scene is classified into one scenario section.
- A plurality of each type of table, such as the clothing continuation determination rule table, the first clothing determination rule table, the second clothing determination rule table, and the character data table, may also be prepared and switched according to the user's preference. This makes it possible to determine character clothing that matches both the contents of the scenario and the user's preferences.
- In the present embodiment, the case where the animation creation device equipped with the character clothing determination device according to the present invention is applied to a mobile phone has been described as an example.
- However, the present invention is not limited to this, and can also be applied to various kinds of hardware, application software, and services that constitute a system.
- A character clothing information generation device according to an aspect of the present invention automatically generates a character's clothing from language information obtained from an input text sentence, and employs a configuration comprising: a language information classification unit that classifies the language information for determining the character's clothing with reference to a clothing continuation determination rule storage unit storing clothing continuation determination rules for determining the continuity of clothing; and a clothing determination unit that determines the character's clothing from the language information classified by the language information classification unit, with reference to a clothing determination rule storage unit storing first clothing determination rules indicating relationships between language information and clothing.
- A character clothing information generation method according to an aspect of the present invention automatically generates a character's clothing from language information obtained from an input text sentence, and includes: a language information classification step of classifying the language information for determining the character's clothing with reference to a clothing continuation determination rule storage unit storing clothing continuation determination rules for determining the continuity of clothing; and a clothing determination step of determining the character's clothing from the language information classified in the language information classification step, with reference to a clothing determination rule storage unit storing first clothing determination rules indicating relationships between language information and clothing.
- A character clothing information generation program according to an aspect of the present invention causes a computer to execute, in order to automatically generate a character's clothing from language information obtained from an input text sentence: a language information classification step of classifying the language information for determining the character's clothing with reference to a clothing continuation determination rule storage unit storing clothing continuation determination rules for determining the continuity of clothing; and a clothing determination step of determining the character's clothing from the language information classified in the language information classification step, with reference to a clothing determination rule storage unit storing first clothing determination rules indicating relationships between language information and clothing.
- The character clothing determination device according to the present invention can determine character clothing that matches the contents of a scenario without using a clothing designation by the user or the user's environment information, and is useful for mobile terminal devices, personal computers, game machines, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Processing Or Creating Images (AREA)
- Information Transfer Between Computers (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/302,866 US8140460B2 (en) | 2006-05-30 | 2007-05-21 | Character outfit autoconfiguration device, character outfit autoconfiguration method, and character outfit autoconfiguration program |
JP2008517850A JP4869340B2 (ja) | 2006-05-30 | 2007-05-21 | キャラクタ服飾決定装置、キャラクタ服飾決定方法、およびキャラクタ服飾決定プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-150364 | 2006-05-30 | ||
JP2006150364 | 2006-05-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007138911A1 true WO2007138911A1 (ja) | 2007-12-06 |
Family
ID=38778430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/060365 WO2007138911A1 (ja) | 2006-05-30 | 2007-05-21 | キャラクタ服飾決定装置、キャラクタ服飾決定方法、およびキャラクタ服飾決定プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US8140460B2 (ja) |
JP (1) | JP4869340B2 (ja) |
CN (1) | CN101375314A (ja) |
WO (1) | WO2007138911A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016118992A (ja) * | 2014-12-22 | 2016-06-30 | カシオ計算機株式会社 | 画像生成装置、画像生成方法及びプログラム |
JP2020140326A (ja) * | 2019-02-27 | 2020-09-03 | みんとる合同会社 | コンテンツ生成システム、及びコンテンツ生成方法 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101473335B1 (ko) * | 2008-02-05 | 2014-12-16 | 삼성전자 주식회사 | 애니메이션 기반의 메시지 전송을 위한 장치 및 방법 |
US9319640B2 (en) * | 2009-12-29 | 2016-04-19 | Kodak Alaris Inc. | Camera and display system interactivity |
WO2012055100A1 (en) * | 2010-10-27 | 2012-05-03 | Nokia Corporation | Method and apparatus for identifying a conversation in multiple strings |
CN103645903B (zh) * | 2013-12-18 | 2016-06-08 | 王飞 | 一种脚本生成装置及方法 |
US10074200B1 (en) * | 2015-04-22 | 2018-09-11 | Amazon Technologies, Inc. | Generation of imagery from descriptive text |
CN105574912A (zh) * | 2015-12-15 | 2016-05-11 | 南京偶酷软件有限公司 | 一种自然语言转换为动画分镜头剧本数据的方法 |
US10432559B2 (en) | 2016-10-24 | 2019-10-01 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US10454857B1 (en) * | 2017-01-23 | 2019-10-22 | Snap Inc. | Customized digital avatar accessories |
US11199957B1 (en) * | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11748570B2 (en) | 2020-04-07 | 2023-09-05 | International Business Machines Corporation | Automated costume design from dynamic visual media |
CN113050795A (zh) * | 2021-03-24 | 2021-06-29 | 北京百度网讯科技有限公司 | 虚拟形象的生成方法及装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08123976A (ja) * | 1994-10-28 | 1996-05-17 | Matsushita Electric Ind Co Ltd | アニメーション作成装置 |
JPH08263681A (ja) * | 1995-03-22 | 1996-10-11 | Matsushita Electric Ind Co Ltd | アニメーション作成装置およびその方法 |
JPH09153145A (ja) * | 1995-12-01 | 1997-06-10 | Matsushita Electric Ind Co Ltd | エージェント表示装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4054897B2 (ja) | 1998-03-26 | 2008-03-05 | 雅信 鯨田 | 会話のための装置 |
JP2001167287A (ja) * | 1999-10-25 | 2001-06-22 | Cmaker Corp | キャラクター生成方法及びそれを利用した絵文書生成方法 |
JP2003346173A (ja) * | 2002-05-24 | 2003-12-05 | Tokei Jikake:Kk | キャラクタ・アニメーションの配信システム |
JP2005149481A (ja) * | 2003-10-21 | 2005-06-09 | Zenrin Datacom Co Ltd | 音声認識を用いた情報入力を伴う情報処理装置 |
US7812840B2 (en) * | 2004-11-30 | 2010-10-12 | Panasonic Corporation | Scene modifier representation generation apparatus and scene modifier representation generation method |
US7973793B2 (en) * | 2005-06-10 | 2011-07-05 | Panasonic Corporation | Scenario generation device, scenario generation method, and scenario generation program |
-
2007
- 2007-05-21 JP JP2008517850A patent/JP4869340B2/ja not_active Expired - Fee Related
- 2007-05-21 CN CNA2007800032814A patent/CN101375314A/zh active Pending
- 2007-05-21 US US12/302,866 patent/US8140460B2/en not_active Expired - Fee Related
- 2007-05-21 WO PCT/JP2007/060365 patent/WO2007138911A1/ja active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08123976A (ja) * | 1994-10-28 | 1996-05-17 | Matsushita Electric Ind Co Ltd | アニメーション作成装置 |
JPH08263681A (ja) * | 1995-03-22 | 1996-10-11 | Matsushita Electric Ind Co Ltd | アニメーション作成装置およびその方法 |
JPH09153145A (ja) * | 1995-12-01 | 1997-06-10 | Matsushita Electric Ind Co Ltd | エージェント表示装置 |
Non-Patent Citations (1)
Title |
---|
TERASAKI T. ET AL.: "Object Shiko ni yoru Dosa Data no Kanri Shuho to Shizen Gengo Kara no Animation Seisei System (An Object-Orientated Motion Database and Animation System Using Natural Language)", VISUAL COMPUTING GRAPHICS AND CAD GODO SYMPOSIUM 2004 YOKOSHU, 3 June 2004 (2004-06-03), pages 197 - 202, XP003019810 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016118992A (ja) * | 2014-12-22 | 2016-06-30 | カシオ計算機株式会社 | 画像生成装置、画像生成方法及びプログラム |
JP2020140326A (ja) * | 2019-02-27 | 2020-09-03 | みんとる合同会社 | コンテンツ生成システム、及びコンテンツ生成方法 |
Also Published As
Publication number | Publication date |
---|---|
CN101375314A (zh) | 2009-02-25 |
US8140460B2 (en) | 2012-03-20 |
JPWO2007138911A1 (ja) | 2009-10-01 |
JP4869340B2 (ja) | 2012-02-08 |
US20100010951A1 (en) | 2010-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4869340B2 (ja) | キャラクタ服飾決定装置、キャラクタ服飾決定方法、およびキャラクタ服飾決定プログラム | |
JP5980432B2 (ja) | 拡張現実見本の生成 | |
CN104933113B (zh) | 一种基于语义理解的表情输入方法和装置 | |
US11620001B2 (en) | Pictorial symbol prediction | |
CN113569088B (zh) | 一种音乐推荐方法、装置以及可读存储介质 | |
US20140164506A1 (en) | Multimedia message having portions of networked media content | |
CN104854539B (zh) | 一种对象搜索方法及装置 | |
US20140164507A1 (en) | Media content portions recommended | |
US20140163980A1 (en) | Multimedia message having portions of media content with audio overlay | |
Waugh | ‘My laptop is an extension of my memory and self’: Post-Internet identity, virtual intimacy and digital queering in online popular music | |
CN115212561B (zh) | 基于玩家的语音游戏数据的服务处理方法及相关产品 | |
US20150067538A1 (en) | Apparatus and method for creating editable visual object | |
CN107786432A (zh) | 信息展示方法、装置、计算机装置及计算可读存储介质 | |
CN108885555A (zh) | 基于情绪的交互方法和装置 | |
CN113746874A (zh) | 一种语音包推荐方法、装置、设备及存储介质 | |
CN114969282B (zh) | 基于富媒体知识图谱多模态情感分析模型的智能交互方法 | |
CN111813236B (zh) | 输入方法、装置、电子设备及可读存储介质 | |
Johnson | Josō or “gender free”? Playfully queer “lives” in visual kei | |
CN110837307A (zh) | 一种输入法及其系统 | |
CN110489581A (zh) | 一种图像处理方法和设备 | |
JP2012253673A (ja) | Html文書に基づく短編動画作品の自動制作 | |
Calderon | Body politix: QTIBPOC/NB drag revolutions in Vancouver | |
CN111324466A (zh) | 信息处理方法、设备、系统及存储介质 | |
CN112040329B (zh) | 动态处理并播放多媒体内容的方法及多媒体播放装置 | |
CN117009574B (zh) | 热点视频模板的生成方法、系统、设备及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07743799 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200780003281.4 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008517850 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12302866 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 07743799 Country of ref document: EP Kind code of ref document: A1 |