WO2011155068A1 - Character generating system, character generating method, and program - Google Patents

Character generating system, character generating method, and program

Info

Publication number
WO2011155068A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
similar
information
texture
parts
Prior art date
Application number
PCT/JP2010/059967
Other languages
French (fr)
Japanese (ja)
Inventor
正夫 桑原
奈生人 小湊
和満 盛山
Original Assignee
株式会社アルトロン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社アルトロン filed Critical 株式会社アルトロン
Priority to PCT/JP2010/059967 priority Critical patent/WO2011155068A1/en
Priority to EP10852904.1A priority patent/EP2581881A1/en
Priority to JP2012519194A priority patent/JP5632469B2/en
Publication of WO2011155068A1 publication Critical patent/WO2011155068A1/en
Priority to US13/693,623 priority patent/US8497869B2/en

Classifications

    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/04: Texture mapping
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/655: Generating or modifying game content automatically by game devices or servers from real world data, by importing photos, e.g. of the player
    • A63F 13/822: Strategy games; Role-playing games
    • A63F 2300/1093: Input arrangements for converting player-generated signals into game device control signals, comprising photodetecting means, e.g. a camera using visible light
    • A63F 2300/5553: Details of game data or player data management using player registration data; user representation in the game field, e.g. avatar
    • A63F 2300/6009: Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6607: Rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • A63F 2300/695: Involving elements of the real world in the game world; imported photos, e.g. of the player

Definitions

  • the present invention relates to a character generation system, a character generation method, and a program for generating a character in a virtual space realized using a computer.
  • More particularly, the present invention relates to a character generation system, a character generation method, and a program for generating the face of a character in a virtual space realized using a computer such as a portable game device as an anime-style portrait based on captured face image information of an object.
  • Conventionally, an appearing character displayed in a virtual space such as a game realized using a computer such as a portable game device has been displayed by the user selecting a desired character from a plurality of characters prepared in advance.
  • There are also known an image processing apparatus and an image processing method for pasting a captured image, taken by a user with a digital camera or the like, onto an appearing character displayed in a virtual space.
  • For example, an image processing apparatus and method are disclosed in which captured two-dimensional image information is cut out in a predetermined range using a frame that specifies a clipping range and is provided with gauges corresponding to feature positions of a virtual three-dimensional object, and the cut-out two-dimensional image information is pasted onto the virtual three-dimensional object.
  • the present invention has been made to solve the above-described problems.
  • It is an object of the present invention to provide a character generation system, a character generation method, and a program that can easily generate the face of a character in a virtual space realized using a computer such as a portable game device as an anime-style portrait based on captured face image information of an object.
  • A character generation system according to a first aspect of the present invention uses a computer to generate the face of a character, displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on captured face image information of an object. The computer comprises: captured image acquisition means for acquiring the captured face image information of the object from captured image information of the object stored in a predetermined storage unit; face part texture generation means for extracting feature points of predetermined face parts and color information of predetermined pixels from the acquired captured face image information, setting, based on the extracted color information, similar skin color information similar to the skin color of the object as the skin color of the head texture of the character, selecting, based on the extracted feature points, similar anime-like parts corresponding to and similar to the face parts of the object, and generating face part textures corresponding to the face parts by setting the arrangement of the selected similar anime-like parts; shape model generation means for generating a head shape model based on the feature points of the predetermined face parts; and texture pasting means for replacing the color information of the head texture other than the face part textures pasted on the head shape model of the character with the similar skin color information set by the face part texture generation means, and pasting the face part textures generated by the face part texture generation means on the head shape model of the character.
  • According to a second aspect of the present invention, in the character generation system of the first aspect, the face part texture generation means comprises: feature point extraction means for acquiring, from the captured face image information of the object acquired by the captured image acquisition means, position information of the feature points of the predetermined face parts and the color information of the predetermined pixels; part selection means for selecting, based on the position information of the feature points acquired by the feature point extraction means, the similar anime-like part similar to the shape of each face part of the object from among a plurality of anime-like parts prepared in advance for that face part; skin color setting means for selecting, based on the color information of the predetermined pixels acquired by the feature point extraction means, the similar skin color information similar to the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and part arrangement setting means for setting, based on the position information of the feature points acquired by the feature point extraction means, the arrangement of the similar anime-like parts selected by the part selection means, and generating the face part textures based on the set arrangement.
  • According to a third aspect of the present invention, in the character generation system of the second aspect, when a face part is composed of two left and right basic face parts, the part selection means converts the position information of the feature points of one of the basic face parts into position information moved line-symmetrically with respect to the face center line, estimates the shape of the basic face part from the average of the position information of the feature points of the two left and right basic face parts, and selects the similar anime-like part corresponding to the estimated shape from among the plurality of anime-like parts; when a face part is composed of one basic face part, the part selection means estimates the shape of the face part from the position information of its feature points and selects the similar anime-like part corresponding to the estimated shape from among the plurality of anime-like parts.
  • According to a fourth aspect of the present invention, in the character generation system of the second or third aspect, the positions at which the face part textures are pasted on the head shape model are preset; in the face part textures corresponding to the nose and the mouth, the similar anime-like parts of the nose and the mouth are arranged at positions preset within the textures; the similar anime-like parts of the eyes and the eyebrows are arranged line-symmetrically with respect to the face center line, with one coordinate preset within the texture and the other calculated from the position information of the feature points; and the face part textures corresponding to the nose, mouth, eyes, and eyebrows are thus generated.
  • According to a fifth aspect of the present invention, in the character generation system of the second or third aspect, the positions at which the face part textures are pasted on the head shape model are preset, and the part arrangement setting means calculates the positions of the similar anime-like parts within the face part textures based on the position information of the feature points of the face parts acquired by the feature point extraction means, and generates the face part textures based on the calculated positions.
  • According to a sixth aspect of the present invention, in the character generation system of the fourth or fifth aspect, the part arrangement setting means selects, from among a plurality of model face part textures prepared for each face part, a similar model face part texture similar to the arrangement of the similar anime-like parts for that face part, and sets the selected similar model face part texture as the face part texture.
  • According to a seventh aspect of the present invention, in the character generation system of any one of the first to sixth aspects, the shape model generation means selects, based on the feature points of the face contour, a basic head shape model similar to the face contour from among a plurality of contour shape models prepared in advance for face contours, selects, based on the feature points of a predetermined face part other than the face contour, a similar part shape model similar to that face part from among a plurality of part shape models prepared in advance for that face part, and generates the head shape model by combining the selected basic head shape model and similar part shape model.
  • According to an eighth aspect of the present invention, in the character generation system of any one of the first to sixth aspects, the shape model generation means selects, based on the feature points of the face contour, the head shape model similar to the face contour from among a plurality of reference shape models prepared in advance for face contours.
  • According to a ninth aspect of the present invention, the character generation system of any one of the first to eighth aspects further comprises: display control means for controlling the display unit to display face arrangement guide information indicating the arrangement of the predetermined face parts together with the object; and imaging control means for controlling an imaging unit to image the object and store the captured image information of the object in the predetermined storage unit; wherein the captured image acquisition means, by interlocking the display control means and the imaging control means, acquires the captured face image information of the object, imaged based on the face arrangement guide information while the guide information and the object are displayed on the display unit, from the captured image information of the object stored in the predetermined storage unit.
  • According to a tenth aspect of the present invention, the character generation system of any one of the first to ninth aspects further comprises input control means for controlling an input unit to input various information including captured image information and store the captured image information of the object in the predetermined storage unit; wherein the captured image acquisition means, by interlocking the display control means and the input control means, displays the face arrangement guide information and the input captured image information of the object on the display unit, and acquires the captured face image information of the object based on the face arrangement guide information from the captured image information of the object stored in the predetermined storage unit.
  • According to an eleventh aspect of the present invention, in the character generation system of the ninth or tenth aspect, the predetermined face parts whose arrangement is indicated by the face arrangement guide information include at least the face contour.
  • A character generation method according to a first aspect of the present invention uses a computer to generate the face of a character, displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on captured face image information of an object. The computer executes: (a) a step of acquiring the captured face image information of the object from captured image information of the object stored in a predetermined storage unit; (b) a step of acquiring, from the captured face image information acquired in step (a), position information of the feature points of the predetermined face parts and the color information of the predetermined pixels; (c) a step of generating a head shape model of the character based on the feature points of the predetermined face parts acquired in step (b); (d) a step of setting, based on the color information of the predetermined pixels acquired in step (b), similar skin color information similar to the skin color of the object as the skin color of the head texture of the character, selecting, based on the feature points acquired in step (b), similar anime-like parts corresponding to and similar to the face parts of the object, and generating face part textures by setting the arrangement of the selected similar anime-like parts; and (e) a step of replacing the color information of the skin portion of the head texture pasted on the head shape model generated in step (c) with the similar skin color information set in step (d), and pasting the face part textures generated in step (d) on the head shape model.
  • According to a second aspect of the character generation method of the present invention, step (d) comprises: (d1) selecting, based on the position information of the feature points acquired in step (b), the similar anime-like part similar to the shape of each face part of the object from among a plurality of anime-like parts prepared in advance for that face part; (d2) selecting, based on the color information of the predetermined pixels acquired in step (b), the similar skin color information similar to the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and (d3) setting, based on the position information of the feature points acquired in step (b), the arrangement of the similar anime-like parts selected in step (d1), and generating the face part textures based on the set arrangement.
  • According to a third aspect of the character generation method of the present invention, in step (d1), when a face part is composed of two left and right basic face parts, the position information of the feature points of one of the basic face parts is converted into position information moved line-symmetrically with respect to the face center line, the shape of the basic face part is estimated from the average of the position information of the feature points of the two left and right basic face parts, and the similar anime-like part corresponding to the estimated shape is selected from among the plurality of anime-like parts; when a face part is composed of one basic face part, the shape of the face part is estimated from the position information of its feature points, and the similar anime-like part corresponding to the estimated shape is selected from among the plurality of anime-like parts.
  • According to a fourth aspect of the character generation method of the present invention, in step (d3), the positions at which the face part textures are pasted on the head shape model are preset; the similar anime-like parts of the nose and the mouth are arranged at positions preset within the corresponding face part textures; the similar anime-like parts of the eyes are arranged line-symmetrically with respect to the face center line, with the vertical position preset within the face part texture and the horizontal position calculated from the position information of the feature points; the similar anime-like parts of the eyebrows are likewise arranged line-symmetrically, with the horizontal position preset within the face part texture and the vertical position calculated from the position information of the feature points; and the face part textures corresponding to the nose, mouth, eyes, and eyebrows are thus generated.
  • According to a fifth aspect of the character generation method of the present invention, in step (d3), the positions at which the face part textures are pasted on the head shape model are preset, the positions of the similar anime-like parts within the face part textures are calculated based on the position information of the feature points acquired in step (b), and the face part textures are generated based on the calculated positions.
  • According to a sixth aspect of the character generation method of the present invention, in step (d3), a similar model face part texture similar to the arrangement of the similar anime-like parts for a face part is selected from among a plurality of model face part textures prepared for that face part, and the selected similar model face part texture is set as the face part texture.
  • According to a seventh aspect of the character generation method of the present invention, in step (c), a basic head shape model similar to the face contour is selected, based on the feature points of the face contour, from among a plurality of contour shape models prepared in advance for face contours; a similar part shape model similar to a predetermined face part other than the face contour is selected, based on the feature points of that face part, from among a plurality of part shape models prepared in advance for that face part; and the head shape model is generated by combining the selected basic head shape model and similar part shape model.
  • According to an eighth aspect of the character generation method of the present invention, in step (c), the head shape model similar to the face contour is selected, based on the feature points of the face contour, from among a plurality of reference shape models prepared in advance.
  • According to a ninth aspect, the character generation method of any one of the first to eighth aspects further comprises, before step (a), (f) a step of displaying face arrangement guide information indicating the arrangement of the predetermined face parts together with the object on the display unit, imaging the object based on the face arrangement guide information, and storing the captured image information of the object in the predetermined storage unit; and in step (a), the captured face image information of the object based on the face arrangement guide information is acquired from the captured image information of the object stored in the predetermined storage unit.
  • According to a tenth aspect, the character generation method of any one of the first to ninth aspects further comprises, before step (a), (g) a step of inputting the captured image information of the object and storing it in the predetermined storage unit; and in step (a), while the face arrangement guide information indicating the arrangement of the predetermined face parts and the captured image information of the object input in step (g) are displayed on the display unit, the captured face image information of the object based on the face arrangement guide information is acquired from the captured image information of the object stored in the predetermined storage unit.
  • According to an eleventh aspect, in the character generation method of the ninth or tenth aspect, the predetermined face parts whose arrangement is indicated by the face arrangement guide information include at least the face contour.
  • A program according to a first aspect of the present invention causes a computer to execute processing for generating the face of a character, displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on captured face image information of an object; specifically, it causes the computer to execute processing for realizing each means of the character generation system according to any one of the first to eleventh aspects of the present invention.
  • According to the present invention, the face of a character in a virtual space realized using a computer such as a portable game device can easily be generated as an anime-style caricature based on captured face image information of an object photographed with a digital camera or the like.
  • By generating a character appearing in a virtual space such as a game as an anime-style portrait based on the user's captured face image information, the user can identify more strongly with the character, so that more entertaining game software can be built.
  • Furthermore, since the data volume of a character using an anime-style caricature is small compared with that of a character using a photographic image, game software with faster processing in which more characters can appear can be built.
  • FIGS. 1(a) and 1(b) show examples of characters 70 generated by the character generation system 10 according to an embodiment of the present invention; FIG. 1(a) shows an example of a female character 70a, and FIG. 1(b) shows an example of a male character 70b.
  • FIG. 6(a) is a diagram for explaining feature points, and FIG. 6(b) is an enlarged view of the right eye region.
  • FIGS. 7(a) to 7(c) show several examples of anime-like parts for female eyebrows.
  • FIGS. 8(a) to 8(c) show several examples of anime-like parts for female eyes.
  • FIGS. 9(a) to 9(c) show several examples of anime-like parts for a female mouth.
  • FIGS. 10(a) to 10(c) show several examples of anime-like parts for male eyebrows.
  • FIGS. 11(a) to 11(c) show several examples of anime-like parts for male eyes.
  • FIGS. 12(a) to 12(c) show several examples of anime-like parts for a male mouth.
  • FIG. 13 is a plan view of the head shape model 60 seen from the front, for explaining the arrangement of the similar anime-like parts in the face part texture 51.
  • FIG. 14 illustrates the basic head shape model 71 in a state without a nose and the similar part shape model 72; (a) is a plan view of the front of the face, and (b) is a plan view of the side of the face.
  • FIG. 15 illustrates the head shape model 60; (a) is a plan view of the front of the face, and (b) is a plan view of the side of the face.
  • FIG. 16 is an example of a flowchart showing the processing procedure of a program that causes a computer to execute each step of the character generation method.
  • The character generation system 10 according to an embodiment of the present invention generates, as an anime-style portrait based on captured face image information of an object (for example, the user), the face of a character displayed on a display unit as a presence in a virtual space realized by software such as a game running on a computer.
  • FIG. 1 shows examples of characters 70 in a virtual space realized by game software running on a portable game device, generated by the character generation system 10 according to an embodiment of the present invention; (a) shows an example of a female character 70a, and (b) shows an example of a male character 70b.
  • The captured face image information of the object is the face portion of captured image information of a person taken with a digital camera or the like. Captured image information acquired directly from an imaging unit connected to or built into the computer may be used, or captured image information taken in the past and input from an input unit connected to the computer may be used. The image information may also be obtained by partially correcting one piece of captured image information or by combining a plurality of captured images into one piece of captured face image information.
  • An anime-style caricature based on the captured face image information of the object is a face that is similar to the captured face image information of the object and looks like a person appearing in a cartoon or animation, with emphasized cuteness or similar stylization.
  • FIG. 2 is a diagram showing a schematic configuration of the portable game device 20 that executes the character generation system 10 according to the embodiment of the present invention.
  • the portable game device 20 includes a CPU (central processing unit) 21, a ROM 22, a RAM 23, a display unit 24, an imaging unit 25, and an input unit 26.
  • the CPU 21 implements the character generation system 10 by reading out and executing necessary information from the ROM 22 storing software and data for realizing the character generation system 10 to be executed by the portable game device 20.
  • the RAM 23 also functions as a data storage device and software execution work area necessary for realizing the character generation system 10 executed by the portable game device 20.
  • the display unit 24 outputs display information (for example, information for prompting the user to perform work, face arrangement guide information, etc.) in accordance with a command from the CPU 21.
  • The face arrangement guide information is information that sets a position reference and a range for extracting the captured face image information, i.e., the face portion, from the captured image information of the subject, and is display information indicating the arrangement of predetermined face parts.
  • the predetermined face parts are parts such as eyes, nose, mouth, face outline, and eyebrows constituting the face, and include at least the face outline.
  • FIG. 3 is an example of a display screen on which face arrangement guide information is displayed. As shown in FIG. 3, the face arrangement guide information indicates a nose position 31 and a face outline 32.
  • the face outline 32 corresponds to the head shape model of the character 70.
  • the imaging unit 25 images a subject in accordance with a command from the CPU 21 and stores captured image information of the captured subject in the RAM 23.
  • the input unit 26 is, for example, an operation button, a communication device, an external storage device, or the like, and inputs input information (for example, information on the operation button operated by the user, captured image information, etc.) according to a command from the CPU 21. And stored in the RAM 23.
  • FIG. 4 is a diagram showing an example of a system configuration in the character generation system 10 according to the embodiment of the present invention.
  • The character generation system 10 includes a display control unit 11, an imaging control unit 12, an input control unit 13, a captured image information acquisition unit 14, a face part texture generation unit 15, a shape model generation unit 16, and a texture pasting unit 17.
  • the face part texture generation unit 15 includes a feature point extraction unit 151, a part selection unit 152, a skin color setting unit 153, and a part arrangement setting unit 154.
  • The display control unit 11 controls the display unit 24; it takes out necessary information, such as face arrangement guide information indicating the arrangement of predetermined face parts and information prompting the user to perform operations, from the guide information storage unit 19, and generates and displays display information from the extracted information.
  • the display control unit 11 controls all processes of the display unit 24, not limited to the above-described control.
  • the imaging control unit 12 controls the imaging unit 25 to image a subject, and causes the character information storage unit 18 to store captured image information of the captured subject.
  • the input control unit 13 controls the corresponding input unit 26 to input various input information (for example, captured image information captured in the past), and causes the character information storage unit 18 to store necessary information.
  • the captured image information of the subject is stored in the character information storage unit 18, but it may be stored in a temporary storage unit.
  • the captured image information acquisition unit 14 causes the display control unit 11 and the imaging control unit 12 to work together to display face arrangement guide information indicating the arrangement of a predetermined facial part and an object (subject) on the display unit 24. Then, the face of the subject is imaged so as to match the arrangement of the face parts in the face arrangement guide information (see FIG. 3). In particular, the face of the subject is imaged by matching the width and chin contour (lower contour) of the face contour 32 with the width and chin contour of the face of the subject.
  • the captured image information acquisition unit 14 acquires facial captured image information from captured image information of the captured subject in order to generate an animation-like caricature based on the captured facial image information.
  • the captured face image information is acquired from the captured image information of the subject imaged by the imaging unit 25, but may be acquired from the captured image information of the subject input from the input unit 26.
  • Alternatively, the captured image information acquisition unit 14 interlocks the display control unit 11 and the input control unit 13 to display the face arrangement guide information indicating the arrangement of predetermined face parts and the captured image information of the subject on the display unit 24, and acquires the captured face image information matching the face arrangement guide information while relatively aligning the arrangement of the face parts in the face arrangement guide information with that in the captured image information of the subject.
  • The face part texture generation unit 15 has the function of generating, based on the captured face image information of the object, the face part textures corresponding to face parts such as the nose, mouth, eyes, and eyebrows, to be pasted on the head shape model 60 (see FIGS. 15(a) and 15(b)) of the character 70 that becomes the anime-style caricature; it includes a feature point extraction unit 151, a part selection unit 152, a skin color setting unit 153, and a part arrangement setting unit 154.
  • FIG. 5 is a plan view of the head shape model 60 for explaining the facial part texture as viewed from the front.
  • For each face part texture 51 (for example, the mouth face part texture 51a, nose face part texture 51b, eye face part texture 51c, and eyebrow face part texture 51d), the position at which it is pasted on the head shape model 60 is set in advance.
  • the face part texture 51 is a texture in which a similar anime-like part 52 similar to the face part of the target object is arranged.
  • the eye face part texture 51c is a texture in which two similar anime-like parts 52 selected based on the basic face parts of the right eye and the left eye are arranged.
  • In FIG. 5, the face part textures 51 are arranged so as not to overlap each other; however, the face part textures 51 may overlap as long as different similar anime-like parts 52 do not overlap each other.
  • In this description, the right eye and the left eye are each called a basic face part, and the left and right eyes together are called a face part.
  • A part consisting of a single element, such as the mouth, is called both a basic face part and a face part.
  • the texture pasted on the head shape model 60 is referred to as a head texture 50, and the head texture 50 other than the face part texture 51 is referred to as a blank portion 54. That is, the head texture 50 pasted on the head shape model 60 is composed of the face part texture 51 and the margin part 54.
  • the feature point extraction unit 151 of the face part texture generation unit 15 extracts feature points of predetermined facial parts and color information of predetermined pixels from the face captured image information acquired by the captured image information acquisition unit 14. That is, the position information of the feature points of a predetermined facial part is acquired from the captured face image information acquired by the captured image information acquisition unit 14, and the color information of the predetermined pixel is acquired.
  • FIG. 6A is a diagram for explaining feature points
  • FIG. 6B is an enlarged view of the right eye region 41.
  • eyebrows, eyes, nose, mouth, and facial contours are set as predetermined facial parts.
  • the points indicated by black circles indicate feature points.
  • the feature points are points that characterize the shape of a predetermined facial part.
  • the seven black circles in the right eye region 41 are the right eye feature points 43a to 43g.
  • the color information of the predetermined pixel is, for example, the color information of the pixel at the nose vertex 42 and the color information of the pixels at the positions 43a and 44a under both eyes.
  • Note that "right" and "left" in "right eye" and "left eye" refer to the right side and the left side of the drawing.
  • the color information is color information represented by RGB, YUV, or the like.
  • The part selection unit 152 of the face part texture generation unit 15 estimates the shape of a face part based on the position information of the feature points of the predetermined face part extracted by the feature point extraction unit 151, and selects a similar anime-like part 52 similar to the estimated shape from among a plurality of anime-like parts prepared in advance.
  • The anime-like parts are deformed (stylized) drawings of the corresponding face parts.
  • a plurality of anime-like parts are prepared based on gender, race, age, and the like.
  • FIGS. 7 to 12 are diagrams showing examples of anime-like parts corresponding to face parts.
  • FIGS. 7(a) to 7(c) show several examples of anime-like parts for female eyebrows.
  • FIGS. 8(a) to 8(c) show several examples of anime-like parts for female eyes.
  • FIGS. 9(a) to 9(c) show several examples of anime-like parts for a female mouth.
  • FIGS. 10(a) to 10(c) show several examples of anime-like parts for male eyebrows.
  • FIGS. 11(a) to 11(c) show several examples of anime-like parts for male eyes.
  • FIGS. 12(a) to 12(c) show several examples of anime-like parts for a male mouth.
  • For example, for an eye, the shape is estimated from the eye width (the distance between points 43b and 43f), the eye height (the distance between points 43a and 43d), and the eye inclination (for example, the angle θ between the line segment 49 connecting points 43g and 43b and the line segment 48 extending horizontally from point 43g in the left-right (X) direction). Among the plurality of anime-like eye parts, the one closest to the estimated eye width, eye height, and eye inclination becomes the similar anime-like part 52 for the eyes.
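  • As a concrete illustration of this nearest-part selection, the following minimal Python sketch computes the three eye metrics from the labeled feature points and picks the prepared part whose metrics are closest; the point labels (43a to 43g), the equal weighting of the three metrics, and the data structures are assumptions made for illustration, not details taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class EyeMetrics:
    width: float    # distance between points 43b and 43f
    height: float   # distance between points 43a and 43d
    incline: float  # angle (degrees) of segment 49 (43g-43b) vs. horizontal segment 48

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_metrics(pts):
    """pts: feature points as {'43a': (x, y), ..., '43g': (x, y)}."""
    dx = pts["43b"][0] - pts["43g"][0]
    dy = pts["43b"][1] - pts["43g"][1]
    return EyeMetrics(width=dist(pts["43b"], pts["43f"]),
                      height=dist(pts["43a"], pts["43d"]),
                      incline=math.degrees(math.atan2(dy, dx)))

def select_similar_part(measured, candidates):
    """candidates: {part_name: EyeMetrics}; returns the closest part name."""
    def gap(name):
        c = candidates[name]
        return (abs(c.width - measured.width) + abs(c.height - measured.height)
                + abs(c.incline - measured.incline))
    return min(candidates, key=gap)
```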
  • Since the eyebrows and the eyes are each composed of two left and right basic face parts, estimating the shapes separately for the left and right sides could yield different shapes on each side. Considering the balance of the whole face, giving the left and right sides the same shape (shapes that are line-symmetric with respect to the face center line) keeps the face balanced.
  • Therefore, the position information of the feature points of the left eye is converted into position information moved line-symmetrically with respect to the face center line, that is, horizontally flipped into right-eye positions; the position information of the eye feature points is then calculated as the average of the two sets of right-eye feature point positions, and the eye shape is estimated from that average. The same applies to the eyebrows.
  • For a face part consisting of a single basic face part, such as the mouth, the shape is estimated directly from the extracted feature point position information.
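  • The left/right averaging just described can be sketched as below; feature points are plain (x, y) tuples and face_center_x stands for the x-coordinate of the face center line, both representational assumptions for illustration.

```python
def mirror_about_center(point, face_center_x):
    """Reflect a point line-symmetrically about the vertical face center line."""
    x, y = point
    return (2.0 * face_center_x - x, y)

def averaged_right_eye(right_pts, left_pts, face_center_x):
    """Mirror the left-eye points onto the right side, then average pairwise."""
    mirrored = [mirror_about_center(p, face_center_x) for p in left_pts]
    return [((rx + mx) / 2.0, (ry + my) / 2.0)
            for (rx, ry), (mx, my) in zip(right_pts, mirrored)]
```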
  • the color information of the pupil may be extracted by the feature point extraction unit 151, and the color information of the pupil of the similar anime-like part 52 may be changed based on the extracted color information.
  • The skin color setting unit 153 of the face part texture generation unit 15 calculates color information similar to the skin color of the object (hereinafter, similar skin color information) based on the color information of the predetermined pixels extracted by the feature point extraction unit 151, and sets the calculated similar skin color information as the skin color of the head texture 50 of the character 70.
  • For example, when the color information of the predetermined pixels extracted by the feature point extraction unit 151 is the color information of the pixel at the nose vertex 42 and of the pixels 43a and 44a under both eyes, the average of the color information of these three pixels is calculated and set as the similar skin color information.
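  • A minimal sketch of this three-pixel averaging, assuming 8-bit RGB tuples sampled at the nose vertex and under each eye (the sample values themselves are made up):

```python
def similar_skin_color(nose_px, under_right_eye_px, under_left_eye_px):
    """Average three sampled (r, g, b) pixels into the similar skin color."""
    samples = [nose_px, under_right_eye_px, under_left_eye_px]
    return tuple(sum(channel) // len(samples) for channel in zip(*samples))

# Example: similar_skin_color((224, 182, 160), (230, 190, 168), (226, 186, 162))
# returns (226, 186, 163), which becomes the skin color of the head texture 50.
```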
  • The part arrangement setting unit 154 of the face part texture generation unit 15 sets, based on the position information of the feature points of the predetermined face parts extracted by the feature point extraction unit 151, the arrangement within the face part texture 51 of the similar anime-like parts 52 selected by the part selection unit 152, and generates the face part texture 51 based on the set positions.
  • FIG. 13 is a plan view seen from the front of the head shape model 60 for explaining the arrangement of the similar animation-like parts 52 in the face part texture 51.
  • As shown in FIG. 13, in the face part textures 51a, 51b, 51c, 51d to be pasted on the head shape model 60, point 61 indicates the arrangement of the similar anime-like part 52 of the nose, point 62 that of the mouth, point 63 that of the right eye, point 64 that of the left eye, point 65 that of the right eyebrow, and point 66 that of the left eyebrow.
  • The positions on the head shape model 60 at which the face part textures 51a, 51b, 51c, 51d are pasted are set in advance corresponding to the head shape model 60.
  • Point 61, the arrangement of the similar anime-like part 52 of the nose, and point 62, the arrangement of the similar anime-like part 52 of the mouth, are preset.
  • For the eyes, the vertical position (height direction: Y direction) of the similar anime-like parts 52 is preset, and the horizontal position (width direction: X direction) is calculated from the feature points of both eyes. That is, at the preset vertical position, the distance 67 between the similar anime-like parts 52 of both eyes is calculated from the feature points of both eyes, and the parts are arranged line-symmetrically with respect to the face center line 69.
  • Conversely, for the eyebrows, the horizontal position of the similar anime-like parts 52 is preset, and the vertical position is calculated from the feature points of both eyebrows; the similar anime-like parts 52 of both eyebrows are likewise arranged line-symmetrically with respect to the face center line 69.
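  • The following sketch expresses these placement rules in texture coordinates; the preset constants (EYE_Y, BROW_X_OFFSET) and the normalized coordinate system are hypothetical values chosen only to show which coordinate is fixed and which is computed from feature points.

```python
EYE_Y = 0.42          # hypothetical preset vertical position of the eye parts
BROW_X_OFFSET = 0.18  # hypothetical preset horizontal offset of each eyebrow part

def place_eyes(center_x, eye_distance):
    """Eyes: Y preset, X spacing (distance 67) from feature points,
    line-symmetric about the face center line 69."""
    half = eye_distance / 2.0
    return {"right_eye": (center_x - half, EYE_Y),
            "left_eye":  (center_x + half, EYE_Y)}

def place_brows(center_x, brow_y_from_features):
    """Eyebrows: X preset, Y from feature points, line-symmetric as well."""
    return {"right_brow": (center_x - BROW_X_OFFSET, brow_y_from_features),
            "left_brow":  (center_x + BROW_X_OFFSET, brow_y_from_features)}
```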
  • the arrangement of the similar anime-like parts 52 with respect to the face part texture 51 is not limited to the arrangement process described above, and the arrangement of the similar anime-like parts 52 in all the face part textures 51 is determined based on the feature points of the face parts. It may be calculated. Further, the arrangement of the similar animation-like parts 52 with respect to the face part texture 51 may be fixed in advance.
  • Alternatively, a plurality of model face part textures may be prepared for each face part; a similar model face part texture similar to the arrangement of the similar anime-like parts 52 in the face part texture 51 may then be selected, and the selected similar model face part texture set as the face part texture 51.
  • the facial part texture generation unit 15 generates the facial part texture 51 corresponding to the facial part of the cartoon-like portrait based on the captured face image information of the subject. Then, the generated facial part texture 51 is stored in the character information storage unit 18.
  • The shape model generation unit 16 has the function of generating the head shape model 60 based on the feature points of the predetermined face parts; the margin part 54 of the head texture 50 is pasted on the generated head shape model 60.
  • Specifically, based on the face contour feature points acquired by the face part texture generation unit 15, a contour shape model similar to the face contour (hereinafter, basic head shape model) 71 is selected from among a plurality of contour shape models prepared in advance, and a part shape model similar to a predetermined face part (hereinafter, similar part shape model) 72 is selected from among a plurality of part shape models prepared in advance for that face part. The head shape model 60 is then generated by combining the selected basic head shape model 71 and similar part shape model 72.
  • The basic head shape model 71 is a head shape model lacking the part shape model portion corresponding to the predetermined face part; for example, when the predetermined face part is the nose, a head shape model without a nose is the basic head shape model 71.
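  • A sketch of this select-and-combine step, using the nose as the example part; the scoring function and the model records are assumptions standing in for whatever similarity measure an actual implementation would use.

```python
def select_nearest(models, measured_feats, score):
    """score(model, measured_feats) -> smaller means more similar."""
    return min(models, key=lambda m: score(m, measured_feats))

def build_head_shape(contour_models, nose_models, contour_feats, nose_feats, score):
    basic_head = select_nearest(contour_models, contour_feats, score)  # model 71
    similar_nose = select_nearest(nose_models, nose_feats, score)      # model 72
    # "Combining" attaches the selected nose part to the noseless head.
    return {"base": basic_head, "parts": [similar_nose]}               # model 60
```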
  • FIGS. 14(a) and 14(b) are diagrams for explaining the basic head shape model 71 in a state without a nose and the similar part shape model 72; FIG. 14(a) is a plan view of the front of the face, and FIG. 14(b) is a plan view of the side of the face.
  • FIGS. 15(a) and 15(b) are diagrams for explaining the head shape model 60; FIG. 15(a) is a plan view of the front of the face, and FIG. 15(b) is a plan view of the side of the face.
  • In FIGS. 14(a), 14(b), 15(a), and 15(b), the similar part shape model 72 of the nose is described as an example.
  • Alternatively, a head shape model 60 similar to the face contour may be selected, based on the face contour feature points acquired by the face part texture generation unit 15, from among a plurality of reference shape models prepared in advance. Further, one preset head shape model 60 may be stored in the character information storage unit 18 and taken out from the character information storage unit 18.
  • The texture pasting unit 17 pastes the face part textures 51 generated by the face part texture generation unit 15 and the margin part 54 of the head texture 50 on the head shape model 60 generated by the shape model generation unit 16, and replaces the color information of the margin part 54 and the color information of the skin portion of the face part textures 51 with the similar skin color information set by the skin color setting unit 153 of the face part texture generation unit 15.
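  • A sketch of this color replacement, assuming the head texture is a grid of (r, g, b) pixels and a boolean mask (an illustrative assumption) marks the margin part 54 and skin portions to recolor:

```python
def replace_skin_color(texture, skin_mask, similar_skin_rgb):
    """texture: rows of (r, g, b) tuples; skin_mask: parallel grid of bools
    marking the margin/skin pixels to replace with the similar skin color."""
    return [[similar_skin_rgb if masked else px
             for px, masked in zip(row, mask_row)]
            for row, mask_row in zip(texture, skin_mask)]
```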
  • A plurality of parts corresponding to hairstyles, clothes, accessories, and the like are stored in advance in the character information storage unit 18 according to the situation, gender, and the like of the game, and the user can also set these parts.
  • In the above description, the texture pasting unit 17 replaces the color information of the margin part 54 of the head texture 50 and the color information of the skin portion of the face part textures 51 with the similar skin color information; however, the shape model generation unit 16 may perform the process of replacing the color information of the margin part 54 with the similar skin color information, and the part arrangement setting unit 154 may perform the process of replacing the color information of the skin portion of the face part textures 51 with the similar skin color information.
  • As for the activation timing, the character generation system 10 may be started before an RPG game begins, or it may be operated to generate the face of a new character 70 that appears in the virtual space while the RPG game is in progress. That is, the character generation system 10 can be activated at any timing the user desires.
  • FIG. 16 is an example of a flowchart showing a processing procedure of a program that causes a computer to execute each step of the character generation method according to the embodiment of the present invention.
  • First, the captured face image information that is the basis for generating the anime-style portrait is acquired (step 101: S101).
  • Specifically, while face arrangement guide information indicating the arrangement of predetermined face parts and the subject are displayed on the display unit 24, the face of the subject is imaged by the imaging unit 25 so as to match the arrangement of the face parts in the face arrangement guide information, and the captured image information of the subject is acquired.
  • The captured face image information is then acquired from the captured image information of the imaged subject.
  • Here, the captured image information of the subject imaged by the imaging unit 25 is used, but captured image information input from the input unit 26 may be used instead. The image information may also be obtained by partially correcting one piece of captured image information or by combining a plurality of captured images into one piece of captured face image information.
  • Next, feature points of the predetermined face parts and color information of predetermined pixels are extracted from the captured face image information (step 102: S102). That is, the position information of the feature points of the predetermined face parts is acquired from the captured face image information acquired in step 101, and the color information of the predetermined pixels is acquired.
  • the predetermined facial parts are eyebrows, eyes, nose, mouth, and face outline.
  • the color information of the predetermined pixel is the color information of the pixel at the nose vertex 42 in FIGS. 6A and 6B and the color information of the pixels 43a and 44a under both eyes.
Next, the head shape model 60 is generated based on the position information of the feature points of the predetermined face parts extracted in step 102 (step 103: S103). That is, based on the face contour feature points acquired by the face part texture generation unit 15, a basic head shape model 71 similar to the face contour is selected from among a plurality of contour shape models prepared in advance; similar part shape models 72 similar to the corresponding face parts are selected from among a plurality of part shape models prepared in advance for predetermined face parts; and the head shape model 60 is generated by combining the selected basic head shape model 71 and the similar part shape models 72. Alternatively, the head shape model 60 similar to the face contour may be selected from among a plurality of reference shape models prepared in advance, based on the face contour feature points acquired by the face part texture generation unit 15. Further, one head shape model 60 may be stored in advance in the character information storage unit 18 and simply taken out from the character information storage unit 18.
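A minimal sketch of the selection in step 103 follows, assuming each candidate contour shape model carries a reference set of contour feature points and that similarity is measured as a simple sum of squared point distances; the scoring function and the model representation are illustrative assumptions, not the patent's prescribed method.

```python
def contour_distance(contour_points, model_reference_points):
    """Sum of squared distances between corresponding contour feature points."""
    return sum((px - qx) ** 2 + (py - qy) ** 2
               for (px, py), (qx, qy) in zip(contour_points, model_reference_points))

def select_basic_head_model(contour_points, candidate_models):
    """Step 103: pick the prepared contour shape model most similar to the face contour.

    `candidate_models` is assumed to be a list of objects with a
    `reference_points` attribute holding that model's contour feature points.
    """
    return min(candidate_models,
               key=lambda m: contour_distance(contour_points, m.reference_points))
```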
Next, the shapes of the face parts are estimated based on the position information of the feature points of the predetermined face parts extracted in step 102 (step 104: S104). For example, the shape of an eye is estimated based on the eye width (the distance between the points 43b and 43f), the eye height (the distance between the points 43a and 43d), and the inclination of the eye indicating whether it is drooping or upturned (the angle θ between the line connecting the point 43g to the point 43b and the horizontal line extending from the point 43g). Then, a similar anime-style part 52 resembling the shape of the face part estimated in step 104 is selected from among a plurality of anime-style parts prepared in advance (step 105: S105).
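The following sketch illustrates steps 104 and 105 for the eye, computing the width, height, and inclination angle θ from the feature points and then picking the prepared anime-style part whose stored shape descriptor is closest; the descriptor fields and the equal weighting of the comparison are assumptions for illustration only.

```python
import math

def estimate_eye_shape(p43a, p43b, p43d, p43f, p43g):
    """Step 104: estimate eye width, height, and inclination from feature points.

    Each argument is an (x, y) feature point position as in FIG. 6(b).
    """
    width = math.dist(p43b, p43f)
    height = math.dist(p43a, p43d)
    # Angle between the line 43g -> 43b and the horizontal line through 43g.
    theta = math.atan2(p43b[1] - p43g[1], p43b[0] - p43g[0])
    return width, height, theta

def select_similar_eye_part(shape, eye_parts):
    """Step 105: choose the prepared anime-style eye part closest to the estimate.

    `eye_parts` is assumed to be a list of objects with `width`, `height`,
    and `theta` attributes describing each prepared part.
    """
    w, h, t = shape
    return min(eye_parts,
               key=lambda p: (p.width - w) ** 2 + (p.height - h) ** 2 + (p.theta - t) ** 2)
```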
Next, similar skin color information resembling the skin color of the object is calculated based on the color information of the predetermined pixels extracted in step 102 (step 106: S106). The color information of the extracted predetermined pixels is the color information of the pixel at the nose vertex 42 and of the pixels 43a and 44a below both eyes; the average of the color information of these three pixels is calculated, and the calculated color information is set as the similar skin color information.
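A minimal sketch of step 106, assuming RGB color information; averaging per channel over the three sampled pixels is the calculation described above, and the sample values used in the usage example are made up.

```python
def similar_skin_color(nose_42, under_right_eye_43a, under_left_eye_44a):
    """Step 106: average the RGB color of the three sampled pixels.

    Each argument is an (R, G, B) tuple sampled in step 102.
    """
    samples = (nose_42, under_right_eye_43a, under_left_eye_44a)
    return tuple(sum(channel) // len(samples) for channel in zip(*samples))

# Example: three slightly different skin tones average to one similar skin color.
print(similar_skin_color((224, 172, 140), (218, 166, 132), (220, 170, 138)))
# -> (220, 169, 136)
```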
Next, the arrangement of the similar anime-style parts 52 within the face part textures 51 is set, and the face part textures 51 are generated based on the set arrangement of the similar anime-style parts 52 (step 107: S107). For example, the point 61, which is the arrangement position of the similar anime-style part 52 of the nose, and the point 62, which is the arrangement position of the similar anime-style part 52 of the mouth, are set in advance. For the eyes, the vertical position of the similar anime-style parts 52 is preset, and the horizontal positions are calculated from the feature points of both eyes: the distance 67 between the similar anime-style parts 52 of both eyes is calculated from the feature points of both eyes, and the similar anime-style parts 52 of both eyes are arranged line-symmetrically with respect to the face center line 69, as shown in the sketch after this paragraph. For the eyebrows, the horizontal positions of the similar anime-style parts 52 are preset, and the vertical position is calculated from the feature points of both eyebrows; the similar anime-style parts 52 of both eyebrows are likewise arranged line-symmetrically with respect to the face center line 69. The arrangement processing is not limited to the above: the arrangement of the similar anime-style parts 52 in all the face part textures 51 may be calculated based on the feature points of the face parts, or the arrangement of the similar anime-style parts 52 with respect to the face part textures 51 may be fixed in advance. Alternatively, a similar model face part texture resembling the arrangement of the similar anime-style parts 52 with respect to the face part texture 51 may be selected from among a plurality of model face part textures, and the selected similar model face part texture may be set as the face part texture 51.
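The following sketch illustrates the eye arrangement just described: the vertical position is preset, the eye spacing comes from the feature points, and the two parts are placed line-symmetrically about the face center line 69. The coordinate conventions, the parameter names, and the interpretation of the distance 67 as the gap between the two parts are illustrative assumptions.

```python
def place_eye_parts(center_line_x, preset_eye_y, eye_distance_67, part_width):
    """Step 107 (eyes): place both eye parts line-symmetrically about the
    face center line 69 at a preset vertical position.

    Returns the top-left (x, y) positions of the right-eye and left-eye
    similar anime-style parts 52 within the head texture.
    """
    half_gap = eye_distance_67 / 2.0
    right_eye_x = center_line_x - half_gap - part_width  # right eye: left of the center line as viewed
    left_eye_x = center_line_x + half_gap                # left eye: right of the center line as viewed
    return (right_eye_x, preset_eye_y), (left_eye_x, preset_eye_y)

# Example: center line at x=128, eyes 60 px apart, each part 32 px wide.
print(place_eye_parts(128, 70, 60, 32))
# -> ((66.0, 70), (158.0, 70))
```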
Next, the face part textures 51 generated in step 107 are pasted onto the head shape model 60 generated in step 103, and the color information of the margin part 54 of the head texture 50 and the color information of the skin portions of the face part textures 51 are replaced with the similar skin color information set in step 106 (step 108: S108). In this way, the character 70 with the anime-style portrait is generated. In this example, the color information of the margin part 54 of the head texture 50 and the color information of the skin portions of the face part textures 51 are replaced with the similar skin color information in step 108; however, the process of replacing the color information of the margin part 54 with the similar skin color information may be performed in step 103, and the process of replacing the color information of the skin portions of the face part textures 51 with the similar skin color information may be performed in step 107.
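A minimal sketch of the color replacement in step 108, assuming a mutable 2D pixel grid and an `is_margin(x, y)` predicate reporting whether a pixel lies outside every face part texture 51; treating every margin pixel as skin is an illustrative simplification.

```python
def replace_margin_with_skin(head_texture, skin_rgb, is_margin):
    """Step 108: overwrite the margin part 54 with the similar skin color.

    `head_texture` is assumed to be a mutable 2D grid of (R, G, B) pixels,
    indexed as head_texture[y][x].
    """
    for y, row in enumerate(head_texture):
        for x, _ in enumerate(row):
            if is_margin(x, y):
                row[x] = skin_rgb
    return head_texture
```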
As described above, the face of the character 70 in the virtual space realized using the portable game device 20 can be easily generated as an anime-style portrait based on the face captured image information of an object photographed with a digital camera or the like. In addition, by generating the face of the character 70 appearing in a virtual space such as a game as an anime-style portrait based on the user's face captured image information, the user can empathize more with the character 70, and more entertaining game software can be built. Furthermore, the amount of data for a character 70 using an anime-style portrait is small compared with that for a character 70 using a photographic image, so game software can be built that increases the processing speed of the game or that allows more characters 70 to appear.
In the present embodiment, the character generation system 10 is a system using the portable game device 20, but the system is not limited to this; it can be applied to the generation of 2D or 3D characters represented as presences in virtual spaces realized using arcade game machines, home game machines, mobile phones, stand-alone computers, workstation computer systems, network computer systems, and the like.

Abstract

Disclosed are a character generation system, a character generation method, and a program in which the face of a character within a virtual space realized using a computer such as a mobile game device is easily generated as an anime-style portrait based on captured face image information of an object. The character generation system (10) includes a captured image information acquisition unit (14), a face part texture generation unit (15), a shape model generation unit (16), and a texture pasting unit (17). The face part texture generation unit (15) has a function of generating part textures corresponding to the face parts to be pasted onto the head shape model of a character (70) that becomes the anime-style portrait based on the captured face image information of the object, and consists of a feature point extraction unit (151), a part selection unit (152), a skin color setting unit (153), and a part arrangement setting unit (154).

Description

Character generation system, character generation method, and program
The present invention relates to a character generation system, a character generation method, and a program for generating a character in a virtual space realized using a computer. In particular, the present invention relates to a character generation system, a character generation method, and a program for generating the face of a character in a virtual space realized using a portable game device as an anime-style portrait based on face captured image information of an object.
Conventionally, a character appearing in a virtual space such as a game realized using a computer such as a portable game device has been displayed by having the user select a desired character from among a plurality of characters prepared in advance and displaying the selected character in the virtual space as the appearing character.
Image processing apparatuses and image processing methods have also been proposed that paste a captured image the user cares about, taken with a digital camera or the like, onto a character displayed in the virtual space. For example, Patent Document 1 discloses an image processing apparatus and an image processing method that capture two-dimensional image information, cut out a predetermined range of the two-dimensional image information using a clipping-range designation frame provided with gauges corresponding to feature positions of a virtual three-dimensional object, and paste the cut-out two-dimensional image information onto the virtual three-dimensional object.
JP 2000-235656 A
However, when two-dimensional image information of a face portion (face captured image information) is cut out from two-dimensional image information captured by a digital camera or the like and pasted onto a virtual three-dimensional object (head shape model), as disclosed in Patent Document 1, the amount of data for generating the character becomes large. In a device with a small storage capacity, such as a portable game device, this causes problems: generating the character takes time, the processing speed becomes slow when the generated character is moved in the virtual space, and the number of characters that can appear is limited. There are also cases where it is not desirable to display an unmodified image of the subject's face as the character's face, or where it cannot be displayed for reasons of personal information protection.
The present invention has been made to solve the above problems, and an object of the present invention is to provide a character generation system, a character generation method, and a program capable of easily generating the face of a character in a virtual space realized using a computer such as a portable game device as an anime-style portrait based on face captured image information of an object.
The following invention is provided to solve the conventional problems described above.
A character generation system according to a first aspect of the present invention is a character generation system that uses a computer to generate the face of a character displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on face captured image information of an object, wherein the computer includes: captured image acquisition means for acquiring the face captured image information of the object from captured image information of the object stored in a predetermined storage unit; face part texture generation means for extracting feature points of predetermined face parts and color information of predetermined pixels from the acquired face captured image information, setting, based on the extracted color information of the predetermined pixels, similar skin color information similar to the skin color of the object to serve as the skin color of the character's head texture, selecting, based on the extracted feature points, similar anime-style parts corresponding to and resembling the face parts of the object, and generating face part textures corresponding to the face parts by setting the arrangement of the selected similar anime-style parts; shape model generation means for generating the head shape model based on the feature points of the predetermined face parts; and texture pasting means for replacing the color information of the head texture other than the face part textures pasted on the character's head shape model with the similar skin color information set by the face part texture generation means, and for pasting the face part textures generated by the face part texture generation means onto the character's head shape model.
In a character generation system according to a second aspect of the present invention, the face part texture generation means of the system according to the first aspect includes: feature point extraction means for acquiring, from the face captured image information of the object acquired by the captured image acquisition means, the position information of the feature points of the predetermined face parts and the color information of the predetermined pixels; part selection means for selecting, based on the position information of the feature points acquired by the feature point extraction means, similar anime-style parts resembling the shapes of the face parts of the object from among a plurality of anime-style parts prepared in advance for each face part; skin color setting means for selecting, based on the color information of the predetermined pixels acquired by the feature point extraction means, the similar skin color information resembling the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and part arrangement setting means for setting, based on the position information of the feature points acquired by the feature point extraction means, the arrangement of the similar anime-style parts selected by the part selection means and for generating the face part textures based on the set arrangement of the similar anime-style parts.
In a character generation system according to a third aspect of the present invention, when a face part consists of two left and right basic face parts, the part selection means of the system according to the second aspect converts the position information of the feature points of either the left or right basic face part into position information mirrored line-symmetrically about the face center line, estimates the shape of the basic face part from the average of the position information of the feature points of the two left and right basic face parts, and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts; when a face part consists of a single basic face part, the part selection means estimates the shape of the face part from the position information of the feature points of the face part and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts.
In a character generation system according to a fourth aspect of the present invention, in the face part textures whose positions on the head shape model are preset, the part arrangement setting means of the system according to the second or third aspect arranges the similar anime-style parts of the nose and mouth at preset positions within their face part textures; arranges the similar anime-style parts of the left and right eyes line-symmetrically with respect to the face center line at a preset vertical position within the eye face part texture, with the spacing between the left and right eyes calculated from the position information of the feature points; arranges the similar anime-style parts of the left and right eyebrows line-symmetrically with respect to the face center line at a preset horizontal position within the eyebrow face part texture, at a vertical position calculated from the position information of the feature points; and generates the face part textures corresponding to the nose, mouth, eyes, and eyebrows based on the arrangement of the corresponding similar anime-style parts.
In a character generation system according to a fifth aspect of the present invention, in the face part textures whose positions on the head shape model are preset, the part arrangement setting means of the system according to the second or third aspect calculates the positions of the similar anime-style parts within the face part textures based on the position information of the feature points of the face parts acquired by the feature point extraction means, and generates the face part textures based on the calculated positions of the similar anime-style parts.
In a character generation system according to a sixth aspect of the present invention, the part arrangement setting means of the system according to the fourth or fifth aspect selects, from among a plurality of model face part textures corresponding to a face part, a similar model face part texture resembling the arrangement of the similar anime-style parts corresponding to that face part, and sets the selected similar model face part texture as the face part texture.
In a character generation system according to a seventh aspect of the present invention, the shape model generation means of the system according to any one of the first to sixth aspects selects, based on the feature points of the face contour, a basic head shape model similar to the face contour from among a plurality of contour shape models prepared in advance for the face contour; selects, based on the feature points of predetermined face parts other than the face contour, similar part shape models resembling those face parts from among a plurality of part shape models prepared in advance; and generates the head shape model by combining the selected basic head shape model and the similar part shape models.
In a character generation system according to an eighth aspect of the present invention, the shape model generation means of the system according to any one of the first to sixth aspects selects, based on the feature points of the face contour, the head shape model similar to the face contour from among a plurality of reference shape models prepared in advance.
In a character generation system according to a ninth aspect of the present invention, in the system according to any one of the first to eighth aspects, the computer further includes: display control means for controlling the display unit to display face arrangement guide information indicating the arrangement of predetermined face parts together with the object; and imaging control means for controlling an imaging unit to image the object and to store the captured image information of the object in the predetermined storage unit; and the captured image acquisition means links the display control means and the imaging control means so that the object is imaged based on the face arrangement guide information while the face arrangement guide information and the object are displayed on the display unit, and acquires the face captured image information of the object from the captured image information of the object stored in the predetermined storage unit.
In a character generation system according to a tenth aspect of the present invention, in the system according to any one of the first to ninth aspects, the computer further includes input control means for controlling an input unit to input various information including the captured image information of the object as input information and to store the captured image information of the object in the predetermined storage unit; and the captured image acquisition means links the display control means and the input control means so that the face arrangement guide information and the input captured image information of the object are displayed on the display unit, and acquires the face captured image information of the object based on the face arrangement guide information from the captured image information of the object stored in the predetermined storage unit.
In a character generation system according to an eleventh aspect of the present invention, in the system according to the ninth or tenth aspect, the predetermined face parts whose arrangement is indicated by the face arrangement guide information include at least the face contour.
A character generation method according to a first aspect of the present invention is a character generation method that uses a computer to generate the face of a character displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on face captured image information of an object, wherein the computer performs: (a) a step of acquiring the face captured image information of the object from captured image information of the object stored in a predetermined storage unit; (b) a step of acquiring, from the face captured image information acquired in step (a), the position information of the feature points of predetermined face parts and the color information of predetermined pixels; (c) a step of generating the character's head shape model based on the feature points of the predetermined face parts acquired in step (b); (d) a step of setting, based on the color information of the predetermined pixels acquired in step (b), similar skin color information similar to the skin color of the object to serve as the skin color of the character's head texture, selecting, based on the feature points acquired in step (b), similar anime-style parts corresponding to and resembling the face parts of the object, and generating face part textures corresponding to the face parts by setting the arrangement of the selected similar anime-style parts; and (e) a step of pasting the face part textures generated in step (d) onto the character's head shape model and replacing the color information of the skin portion of the head texture pasted on the head shape model generated in step (c) with the similar skin color information set in step (d).
In a character generation method according to a second aspect of the present invention, step (d) of the method according to the first aspect includes: (d1) a step of selecting, based on the position information of the feature points acquired in step (b), similar anime-style parts resembling the shapes of the face parts of the object from among a plurality of anime-style parts prepared in advance for each face part; (d2) a step of selecting, based on the color information of the predetermined pixels acquired in step (b), the similar skin color information resembling the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and (d3) a step of setting, based on the position information of the feature points acquired in step (b), the arrangement of the similar anime-style parts selected in step (d1) and generating the face part textures based on the set arrangement of the similar anime-style parts.
In a character generation method according to a third aspect of the present invention, in step (d1) of the method according to the second aspect, when a face part consists of two left and right basic face parts, the position information of the feature points of either the left or right basic face part is converted into position information mirrored line-symmetrically about the face center line, the shape of the basic face part is estimated from the average of the position information of the feature points of the two left and right basic face parts, and the similar anime-style part corresponding to the estimated shape is selected from among the plurality of anime-style parts; when a face part consists of a single basic face part, the shape of the face part is estimated from the position information of the feature points of the face part, and the similar anime-style part corresponding to the estimated shape is selected from among the plurality of anime-style parts.
In a character generation method according to a fourth aspect of the present invention, in step (d3) of the method according to the second or third aspect, in the face part textures whose positions on the head shape model are preset, the similar anime-style parts of the nose and mouth are arranged at preset positions within their face part textures; the similar anime-style parts of the left and right eyes are arranged line-symmetrically with respect to the face center line at a preset vertical position within the eye face part texture, with the spacing between the left and right eyes calculated from the position information of the feature points; the similar anime-style parts of the left and right eyebrows are arranged line-symmetrically with respect to the face center line at a preset horizontal position within the eyebrow face part texture, at a vertical position calculated from the position information of the feature points; and the face part textures corresponding to the nose, mouth, eyes, and eyebrows are generated based on the arrangement of the corresponding similar anime-style parts.
In a character generation method according to a fifth aspect of the present invention, in step (d3) of the method according to the second or third aspect, in the face part textures whose positions on the head shape model are preset, the positions of the similar anime-style parts within the face part textures are calculated based on the position information of the feature points acquired in step (b), and the face part textures are generated based on the calculated positions of the similar anime-style parts.
In a character generation method according to a sixth aspect of the present invention, in step (d3) of the method according to the fourth or fifth aspect, a similar model face part texture resembling the arrangement of the similar anime-style parts corresponding to a face part is selected from among a plurality of model face part textures corresponding to that face part, and the selected similar model face part texture is set as the face part texture.
In a character generation method according to a seventh aspect of the present invention, in step (c) of the method according to any one of the first to sixth aspects, a basic head shape model similar to the face contour is selected, based on the feature points of the face contour, from among a plurality of contour shape models prepared in advance for the face contour; similar part shape models resembling predetermined face parts other than the face contour are selected, based on the feature points of those face parts, from among a plurality of part shape models prepared in advance; and the head shape model is generated by combining the selected basic head shape model and the similar part shape models.
In a character generation method according to an eighth aspect of the present invention, in step (c) of the method according to any one of the first to sixth aspects, the head shape model similar to the face contour is selected, based on the feature points of the face contour, from among a plurality of reference shape models prepared in advance.
In a character generation method according to a ninth aspect of the present invention, in the method according to any one of the first to eighth aspects, the computer further performs, before step (a), (f) a step of imaging the object based on face arrangement guide information indicating the arrangement of predetermined face parts while displaying the face arrangement guide information and the object on the display unit, and of storing the captured image information of the object in the predetermined storage unit; and in step (a), the face captured image information of the object is acquired, based on the face arrangement guide information, from the captured image information of the object stored in the predetermined storage unit.
In a character generation method according to a tenth aspect of the present invention, in the method according to any one of the first to ninth aspects, the computer further performs, before step (a), (g) a step of inputting the captured image information of the object and storing it in the predetermined storage unit; and in step (a), the face captured image information of the object is acquired, based on the face arrangement guide information, from the captured image information of the object stored in the predetermined storage unit, while the face arrangement guide information indicating the arrangement of predetermined face parts and the captured image information of the object input in step (g) are displayed on the display unit.
In a character generation method according to an eleventh aspect of the present invention, in the method according to the ninth or tenth aspect, the predetermined face parts whose arrangement is indicated by the face arrangement guide information include at least the face contour.
A program according to a first aspect of the present invention is a program that causes a computer to execute processing for generating the face of a character displayed on a display unit as a presence in a virtual space, as an anime-style portrait based on face captured image information of an object, the program causing the computer to execute processing that realizes each means of the character generation system according to any one of the first to eleventh aspects of the present invention.
According to the present invention, the face of a character in a virtual space realized using a computer such as a portable game device can be easily generated as an anime-style portrait based on face captured image information of an object photographed with a digital camera or the like. In addition, by generating the face of a character appearing in a virtual space such as a game as an anime-style portrait based on the user's face captured image information, the user can empathize more with the character, and more entertaining game software can be built. Furthermore, the amount of data for a character using an anime-style portrait is small compared with that for a character using a photographic image, so game software can be built that runs faster or that allows more characters to appear.
FIG. 1 shows an example of a character 70 generated by the character generation system 10 according to an embodiment of the present invention; (a) shows a female character 70a and (b) shows a male character 70b.
FIG. 2 is a diagram showing a schematic configuration of the portable game device 20 that executes the character generation system 10 according to an embodiment of the present invention.
FIG. 3 is an example of a display screen on which face arrangement guide information is displayed.
FIG. 4 is a diagram showing an example of the system configuration of the character generation system 10 according to an embodiment of the present invention.
FIG. 5 is a plan view, seen from the front, of the head shape model 60 for explaining the face part textures.
FIG. 6(a) is a diagram for explaining feature points, and FIG. 6(b) is an enlarged view of the right eye region 41.
FIGS. 7(a) to (c) show examples of anime-style parts for female eyebrows.
FIGS. 8(a) to (c) show examples of anime-style parts for female eyes.
FIGS. 9(a) to (c) show examples of anime-style parts for female mouths.
FIGS. 10(a) to (c) show examples of anime-style parts for male eyebrows.
FIGS. 11(a) to (c) show examples of anime-style parts for male eyes.
FIGS. 12(a) to (c) show examples of anime-style parts for male mouths.
FIG. 13 is a plan view, seen from the front, of the head shape model 60 for explaining the arrangement of the similar anime-style parts in the face part texture 51.
FIG. 14 illustrates the basic head shape model 71 without a nose and the similar part shape model 72; (a) is a plan view of the front of the face, and (b) is a plan view of the side of the face.
FIG. 15 illustrates the head shape model 60; (a) is a plan view of the front of the face, and (b) is a plan view of the side of the face.
FIG. 16 is an example of a flowchart showing the processing procedure of a program that causes a computer to execute each step of the character generation method according to an embodiment of the present invention.
An embodiment of the present invention will be described with reference to the drawings. The embodiment described below is for explanation and does not limit the scope of the present invention. Accordingly, those skilled in the art can adopt embodiments in which each or all of these elements are replaced by equivalents, and such embodiments are also included in the scope of the present invention.
A character generation system 10 according to an embodiment of the present invention generates, in a virtual space realized by software such as a game running on a computer, the face of a character displayed on a display unit as a presence in the virtual space, as an anime-style portrait based on face captured image information of an object (for example, the user). FIG. 1 shows an example of a character 70 in a virtual space realized by game software running on a portable game device, generated by the character generation system 10 according to an embodiment of the present invention; (a) shows a female character 70a, and (b) shows a male character 70b.
Here, the face captured image information of the object is captured image information of the face portion within captured image information of a person taken with a digital camera or the like. It may be captured image information acquired directly from an imaging unit connected to or built into the computer, or previously captured image information input from an input unit connected to the computer. It may also be image information obtained by partially correcting one piece of captured image information or by combining a plurality of captured images into one piece of face captured image information. An anime-style portrait based on the face captured image information of the object is a face that resembles the face captured image information of the object and looks like a person appearing in a cartoon or animation, with its cuteness emphasized.
First, a schematic configuration of a computer that executes the character generation system 10 according to an embodiment of the present invention will be described. In the present embodiment, the portable game device 20 is taken as an example of the computer.
FIG. 2 is a diagram showing a schematic configuration of the portable game device 20 that executes the character generation system 10 according to an embodiment of the present invention. As shown in FIG. 2, the portable game device 20 includes a CPU (central processing unit) 21, a ROM 22, a RAM 23, a display unit 24, an imaging unit 25, and an input unit 26.
The CPU 21 realizes the character generation system 10 by reading out and executing the necessary information from the ROM 22, which stores the software and data for realizing the character generation system 10 executed on the portable game device 20. The RAM 23 functions as a storage device for the data necessary to realize the character generation system 10 and as a working area for software execution.
The display unit 24 outputs display information (for example, information prompting the user to perform an operation, face arrangement guide information, etc.) in accordance with commands from the CPU 21. Here, the face arrangement guide information is information for setting a position reference and a range for extracting the face captured image information corresponding to the face portion from the captured image information of the subject, and is display information indicating the arrangement of predetermined face parts. The predetermined face parts are the parts constituting the face, such as the eyes, nose, mouth, face contour, and eyebrows, and include at least the face contour. FIG. 3 is an example of a display screen on which the face arrangement guide information is displayed. As shown in FIG. 3, the face arrangement guide information indicates the position 31 of the nose and the contour 32 of the face. The face contour 32 corresponds to the head shape model of the character 70.
The imaging unit 25 images a subject in accordance with commands from the CPU 21 and stores the captured image information of the imaged subject in the RAM 23. The input unit 26 is, for example, operation buttons, a communication device, or an external storage device, and inputs input information (for example, information on operation buttons operated by the user, captured image information, etc.) in accordance with commands from the CPU 21 and stores it in the RAM 23.
Next, the system configuration of the character generation system 10 according to an embodiment of the present invention will be described. FIG. 4 is a diagram showing an example of the system configuration of the character generation system 10 according to an embodiment of the present invention.
As shown in FIG. 4, the character generation system 10 includes a display control unit 11, an imaging control unit 12, an input control unit 13, a captured image information acquisition unit 14, a face part texture generation unit 15, a shape model generation unit 16, a texture pasting unit 17, a character information storage unit 18, and a guide information storage unit 19. The face part texture generation unit 15 includes a feature point extraction unit 151, a part selection unit 152, a skin color setting unit 153, and a part arrangement setting unit 154.
The display control unit 11 controls the display unit 24 to retrieve the necessary information, such as the face arrangement guide information indicating the arrangement of predetermined face parts and information prompting the user to perform operations, from the guide information storage unit 19, and to generate and display the display information from the retrieved information. The display control unit 11 controls not only the above but all processing of the display unit 24. The imaging control unit 12 controls the imaging unit 25 to image a subject and stores the captured image information of the imaged subject in the character information storage unit 18. The input control unit 13 controls the corresponding input unit 26 to input various input information (for example, captured image information taken in the past) and stores the necessary information in the character information storage unit 18. Although the captured image information of the subject is described here as being stored in the character information storage unit 18, it may instead be stored in a temporary storage unit.
The captured image information acquisition unit 14 links the display control unit 11 and the imaging control unit 12 to display the face arrangement guide information indicating the arrangement of predetermined face parts together with the object (subject) on the display unit 24, and has the subject's face imaged so as to match the arrangement of the face parts in the face arrangement guide information (see FIG. 3). In particular, the subject's face is imaged so that the width and chin contour (lower contour) of the face contour 32 match the width and chin contour of the subject's face.
The captured image information acquisition unit 14 also acquires the face captured image information from the captured image information of the imaged subject in order to generate the anime-style portrait based on the face captured image information. The face captured image information is acquired from the captured image information of the subject imaged by the imaging unit 25, but it may instead be acquired from captured image information of the subject input from the input unit 26. In that case, the captured image information acquisition unit 14 links the display control unit 11 and the input control unit 13 to display the face arrangement guide information indicating the arrangement of predetermined face parts and the captured image information of the subject on the display unit 24, relatively aligns the arrangement of the face parts in the face arrangement guide information with the arrangement of the face parts in the captured image information of the subject, and acquires the face captured image information matching the face arrangement guide information.
The face part texture generation unit 15 has the function of generating the face part textures corresponding to face parts such as the nose, mouth, eyes, and eyebrows, to be pasted onto the head shape model 60 (see FIGS. 15(a) and (b)) of the character 70 that becomes the anime-style portrait based on the face captured image information of the object, and includes the feature point extraction unit 151, the part selection unit 152, the skin color setting unit 153, and the part arrangement setting unit 154. FIG. 5 is a plan view, seen from the front, of the head shape model 60 for explaining the face part textures.
As shown in FIG. 5, the positions at which the face part textures 51 (for example, the mouth face part texture 51a, the nose face part texture 51b, the eye face part texture 51c, and the eyebrow face part texture 51d) are pasted onto the head shape model 60 are set in advance. A face part texture 51 is a texture in which similar anime-style parts 52 resembling the face parts of the object are arranged. For example, as shown in FIG. 5, the eye face part texture 51c is a texture in which two similar anime-style parts 52, selected based on the right-eye and left-eye basic face parts, are arranged. In FIG. 5, the face part textures 51 are arranged so as not to overlap one another, but they may overlap as long as different similar anime-style parts 52 do not overlap each other.
In this specification, when a part occurs as a left-right pair, as the eyes do, the right eye and the left eye are each called a basic facial part, and the left and right eyes together are called a facial part. When there is only one instance, as with the mouth, the mouth is called a basic facial part and/or a facial part. The texture pasted onto the head shape model 60 is called the head texture 50, and the portion of the head texture 50 other than the facial part textures 51 is called the margin portion 54. In other words, the head texture 50 pasted onto the head shape model 60 consists of the facial part textures 51 and the margin portion 54.
The feature point extraction unit 151 of the facial part texture generation unit 15 extracts feature points of predetermined facial parts and color information of predetermined pixels from the captured face image information acquired by the captured image information acquisition unit 14. That is, it obtains the position information of the feature points of the predetermined facial parts and the color information of the predetermined pixels from that captured face image information. FIG. 6(a) is a diagram for explaining the feature points, and FIG. 6(b) is an enlarged view of the right eye region 41. In FIGS. 6(a) and 6(b), the eyebrows, eyes, nose, mouth, and face outline are set as the predetermined facial parts, and the points marked with black circles are the feature points. A feature point is a point that characterizes the shape of a predetermined facial part; for example, the seven black circles in the right eye region 41 are the right-eye feature points 43a to 43g. The color information of the predetermined pixels is, for example, the color information of the pixel at the tip of the nose 42 and of the pixels at the positions 43a and 44a under both eyes. Here, "right" and "left" for the eyes refer to the right and left sides as seen in the drawing, and the color information is color information expressed in RGB, YUV, or the like.
The part selection unit 152 of the facial part texture generation unit 15 estimates the shape of each predetermined facial part from the position information of its feature points extracted by the feature point extraction unit 151, and selects, from a plurality of anime-style parts prepared in advance, a similar anime-style part 52 resembling the estimated shape. Here, an anime-style part is a stylized (deformed) picture of the corresponding facial part, and multiple anime-style parts are prepared according to sex, race, age, and the like. FIGS. 7 to 12 show examples of anime-style parts corresponding to facial parts: FIGS. 7(a) to 7(c) show examples for female eyebrows, FIGS. 8(a) to 8(c) for female eyes, FIGS. 9(a) to 9(c) for a female mouth, FIGS. 10(a) to 10(c) for male eyebrows, FIGS. 11(a) to 11(c) for male eyes, and FIGS. 12(a) to 12(c) for a male mouth.
As shown in FIGS. 6(a) and 6(b), in the case of an eye, for example, its shape is estimated from the eye width (the distance between points 43b and 43f), the eye height (the distance between points 43a and 43d), and the eye tilt indicating drooping or upturned eyes (the angle θ between the line segment 49 connecting points 43g and 43b and the line segment 48 extending horizontally from point 43g in the left-right direction (X direction)). Accordingly, among the plurality of anime-style eye parts, the one closest to the estimated eye width, eye height, and eye tilt becomes the similar anime-style part 52 for the eyes.
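Purely as an illustration (no such code appears in the disclosure), the following Python sketch shows one way the eye width, height, and tilt could be computed from the feature points of FIG. 6(b) and used to pick the closest prepared part; the dictionary keys, candidate metrics, and distance measure are all assumptions:

```python
import math

def eye_metrics(pts):
    """Estimate eye width, height, and tilt from feature points keyed
    like FIG. 6(b) (hypothetical keys "43a"..."43g", values are (x, y))."""
    width = abs(pts["43f"][0] - pts["43b"][0])    # spacing of 43b and 43f
    height = abs(pts["43d"][1] - pts["43a"][1])   # spacing of 43a and 43d
    dx = pts["43b"][0] - pts["43g"][0]            # segment 49 (43g to 43b)
    dy = pts["43b"][1] - pts["43g"][1]            # vs. horizontal segment 48
    theta = math.degrees(math.atan2(dy, dx))
    return (width, height, theta)

def select_similar_part(metrics, candidates):
    """Return the prepared anime-style part whose stored (width, height,
    tilt) triple is nearest to the estimated metrics."""
    return min(candidates,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(metrics, c["metrics"])))

candidates = [  # hypothetical stand-ins for the parts of FIGS. 8(a)-(c)
    {"id": "eye_a", "metrics": (30.0, 12.0, 5.0)},
    {"id": "eye_b", "metrics": (34.0, 10.0, -3.0)},
    {"id": "eye_c", "metrics": (28.0, 14.0, 10.0)},
]
pts = {"43a": (12, 20), "43b": (30, 25), "43d": (12, 34),
       "43f": (0, 25), "43g": (15, 27)}
print(select_similar_part(eye_metrics(pts), candidates)["id"])
```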
Since the eyebrows and eyes each consist of two basic facial parts, left and right, estimating their shapes separately can yield different shapes on each side. Considering the balance of the face as a whole, however, the balance is better preserved when the left and right sides have the same shape (a shape symmetric about the face center line). For the eyes, therefore, the position information of, for example, the left eye is converted into position information mirrored about the face center line, that is, flipped horizontally onto the right-eye position; the feature point positions of the two resulting right eyes are then averaged to calculate the eye feature point positions, from which the eye shape is estimated. The eyebrows are handled in the same way. For the nose, mouth, and face outline, the shape is estimated directly from the extracted feature point positions. For the eyes, the pupil color information may also be extracted by the feature point extraction unit 151, and the pupil color of the similar anime-style part 52 may be changed according to the extracted color information.
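A minimal sketch of this mirror-and-average symmetrization, assuming each feature point is an (x, y) tuple and the face center line is the vertical line x = cx (the function and key names are hypothetical):

```python
def mirror_about_center(p, cx):
    """Reflect a point about the vertical face center line x = cx."""
    x, y = p
    return (2 * cx - x, y)

def symmetrized_feature_points(right_pts, left_pts, cx):
    """Flip the left-side feature points onto the right side and average
    them with the right-side points, feature by feature; the averaged
    points are then used to estimate a single left-right shape."""
    averaged = {}
    for key, (rx, ry) in right_pts.items():
        mx, my = mirror_about_center(left_pts[key], cx)
        averaged[key] = ((rx + mx) / 2.0, (ry + my) / 2.0)
    return averaged
```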
The skin color setting unit 153 of the facial part texture generation unit 15 calculates color information similar to the skin color of the object (hereinafter called similar skin color information) from the color information of the predetermined pixels extracted by the feature point extraction unit 151, and sets the calculated similar skin color information as the color information of the head texture 50 of the character 70. For example, when the extracted color information consists of the pixel at the tip of the nose 42 and the pixels at positions 43a and 44a under both eyes, the average of the color information of these three pixels is calculated and used as the similar skin color information.
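As a small illustrative sketch (the image is assumed to be a row-major grid of (r, g, b) tuples; nothing in the disclosure fixes this layout), the three-pixel averaging could look like:

```python
def similar_skin_color(image, sample_points):
    """Average the colors of the sampled pixels, e.g. the nose tip 42 and
    the under-eye positions 43a and 44a, to get the similar skin color."""
    samples = [image[y][x] for (x, y) in sample_points]
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

# Hypothetical usage with a 2x2 image and three (x, y) sample points:
img = [[(200, 170, 150), (210, 180, 160)],
       [(190, 160, 140), (205, 175, 155)]]
print(similar_skin_color(img, [(0, 0), (1, 0), (0, 1)]))
```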
The part arrangement setting unit 154 of the facial part texture generation unit 15 sets the arrangement, within the facial part textures 51, of the similar anime-style parts 52 selected by the part selection unit 152, based on the position information of the feature points of the predetermined facial parts extracted by the feature point extraction unit 151, and generates the facial part textures 51 based on the set positions of the similar anime-style parts 52.
FIG. 13 is a front plan view of the head shape model 60 for explaining the arrangement of the similar anime-style parts 52 within the facial part textures 51. As shown in FIG. 13, within the facial part textures 51a, 51b, 51c, and 51d pasted onto the head shape model 60, point 61 indicates the position of the similar anime-style nose part 52, point 62 that of the mouth part, point 63 that of the right-eye part, point 64 that of the left-eye part, point 65 that of the right-eyebrow part, and point 66 that of the left-eyebrow part.
The positions on the head shape model 60 at which the facial part textures 51a, 51b, 51c, and 51d are pasted are set in advance in correspondence with the head shape model 60.
For the nose facial part texture 51b and the mouth facial part texture 51a, point 61 (the position of the similar anime-style nose part 52) and point 62 (the position of the similar anime-style mouth part 52) are set in advance. For the arrangement of the similar anime-style eye parts 52 in the eye facial part texture 51c, the vertical position (height direction: Y direction) of the eye parts is set in advance, while the horizontal position (width direction: X direction) is calculated from the feature points of both eyes. That is, at the preset vertical position of the eye parts, the spacing 67 between the two similar anime-style eye parts 52 is calculated from the feature points of both eyes, and the two parts are arranged symmetrically about the face center line 69. For the arrangement of the similar anime-style eyebrow parts 52 in the eyebrow facial part texture 51d, the horizontal position of the eyebrow parts is set in advance, while the vertical position is calculated from the feature points of both eyebrows, and the two parts are likewise arranged symmetrically about the face center line 69.
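A hedged sketch of the eye placement rule just described, assuming texture-space coordinates with a preset height eye_y and center line x = cx; the feature-point key used for the spacing is a hypothetical choice:

```python
def place_eye_parts(right_eye_pts, left_eye_pts, cx, eye_y):
    """Place the two similar anime-style eye parts symmetrically about
    the face center line 69 at the preset height; the spacing 67 comes
    from the feature points (here, hypothetically, pupil-center points)."""
    spacing = abs(left_eye_pts["center"][0] - right_eye_pts["center"][0])
    right_pos = (cx - spacing / 2.0, eye_y)
    left_pos = (cx + spacing / 2.0, eye_y)
    return right_pos, left_pos
```

The eyebrow rule is the transpose of this: the horizontal position is preset and the vertical position is taken from the eyebrow feature points.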
The arrangement of the similar anime-style parts 52 within the facial part textures 51 is not limited to the placement processing described above; the positions of the similar anime-style parts 52 within all the facial part textures 51 may be calculated from the feature points of the facial parts, or the arrangement of the similar anime-style parts 52 within the facial part textures 51 may be fixed in advance.
Furthermore, instead of a facial part texture 51 in which the similar anime-style parts 52 are arranged according to the placement described above, a similar model facial part texture resembling that arrangement may be selected from a plurality of model facial part textures prepared for the facial part, and the selected similar model facial part texture may be set as the facial part texture 51.
In this way, the facial part texture generation unit 15 generates the facial part textures 51 corresponding to the facial parts of the anime-style portrait based on the captured face image information of the subject, and stores the generated facial part textures 51 in the character information storage unit 18.
The shape model generation unit 16 has the function of generating the head shape model 60 based on the feature points of the predetermined facial parts. The margin portion 54 of the head texture 50 is pasted onto the generated head shape model 60.
From a plurality of contour shape models prepared in advance for the head shape model 60, a head-contour part shape model resembling the face outline (hereinafter called the basic head shape model) 71 is selected based on the face outline feature points obtained by the facial part texture generation unit 15; and from a plurality of part shape models prepared in advance for a predetermined facial part, a part shape model resembling that facial part (hereinafter called the similar part shape model) 72 is selected. The selected basic head shape model 71 and similar part shape model 72 are then combined (merged) to generate the head shape model 60. Here, the basic head shape model 71 is a head shape model lacking the part shape model portion corresponding to the predetermined facial part; for example, when the predetermined facial part is the nose, the basic head shape model 71 is a head shape model without a nose.
FIGS. 14(a) and 14(b) are diagrams for explaining the basic head shape model 71, here without a nose, and the similar part shape model 72; FIG. 14(a) is a plan view of the front of the face, and FIG. 14(b) is a plan view of the side of the face. FIGS. 15(a) and 15(b) are diagrams for explaining the head shape model 60; FIG. 15(a) is a plan view of the front of the face, and FIG. 15(b) is a plan view of the side of the face. In FIGS. 14(a), 14(b), 15(a), and 15(b), a similar part shape model 72 of the nose is used as the example. The basic head shape model 71 selected based on the face outline feature points, shown in FIGS. 14(a) and 14(b), is combined with the similar nose part shape model 72 selected based on the nose feature points to generate the head shape model 60 shown in FIGS. 15(a) and 15(b).
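Purely as an assumption-laden sketch of this selection-and-merge step (the descriptor, distance function, and mesh representation are all invented for illustration):

```python
def nearest_model(models, descriptor, distance):
    """Pick the prepared shape model whose stored descriptor is closest
    to the descriptor derived from the extracted feature points."""
    return min(models, key=lambda m: distance(m["descriptor"], descriptor))

def build_head_shape_model(contour_models, nose_models,
                           contour_desc, nose_desc, distance):
    """Select the basic head shape model 71 and the similar nose part
    shape model 72, then merge them into the head shape model 60."""
    base = nearest_model(contour_models, contour_desc, distance)
    nose = nearest_model(nose_models, nose_desc, distance)
    return {"mesh": base["mesh"] + nose["mesh"]}  # naive concatenation
```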
Although the generation of the head shape model 60 has been described above as combining the basic head shape model 71 with the similar part shape model 72, the head shape model 60 may instead be selected, based on the face outline feature points obtained by the facial part texture generation unit 15, from a plurality of reference shape models prepared in advance as one resembling the face outline. Alternatively, a single preset head shape model 60 may be stored in the character information storage unit 18 and retrieved from there.
The texture pasting unit 17 pastes the facial part textures 51 generated by the facial part texture generation unit 15 onto the head shape model 60 generated by the shape model generation unit 16, and replaces the color information of the margin portion 54 of the head texture 50 and of the skin portions of the facial part textures 51 with the similar skin color information set by the skin color setting unit 153 of the facial part texture generation unit 15.
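For illustration only, the recoloring step could be sketched as below; the predicate deciding which texels belong to the margin portion 54 or to skin areas of the facial part textures 51 is assumed, not specified by the disclosure:

```python
def replace_skin_color(texture, skin_rgb, is_skin_texel):
    """Return a copy of the head texture in which every margin/skin
    texel is replaced by the similar skin color."""
    return [[skin_rgb if is_skin_texel(x, y) else texture[y][x]
             for x in range(len(texture[0]))]
            for y in range(len(texture))]
```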
A plurality of parts corresponding to hairstyles, clothing, accessories, and so on are stored in advance in the character information storage unit 18 according to the situation of the game or the like, sex, and so on; the user can be allowed to select desired parts, or default parts can be set. Although the texture pasting unit 17 described above performs the processing of replacing the color information of the margin portion 54 of the head texture 50 and of the skin portions of the facial part textures 51 with the similar skin color information, the replacement for the margin portion 54 may instead be performed by the shape model generation unit 16, and the replacement for the skin portions of the facial part textures 51 by the part arrangement setting unit 154 of the facial part texture generation unit 15.
Next, the activation timing of the character generation system 10 according to one embodiment of the present invention will be described. In the case of a role-playing game (RPG), for example, the character generation system 10 can be activated before the RPG is started to generate the faces of all the characters 70 that will appear in the RPG's virtual space; it can also be operated while the RPG is in progress to generate the face of a character 70 that newly appears in the virtual space. That is, the character generation system 10 can be activated at any timing the user desires.
With the character generation system 10 according to the embodiment of the present invention described above, the face of a character 70 in a virtual space realized using the portable game device 20 can easily be generated as an anime-style portrait based on captured face image information of an object photographed with a digital camera or the like. By generating the face of a character 70 appearing in the virtual space of a game or the like as an anime-style portrait based on the user's own captured face image information, the user can empathize more strongly with that character 70, enabling more entertaining game software to be built. Moreover, since a character 70 using an anime-style portrait requires less data than one using a photographic image, game software can be built that runs faster or that brings more characters 70 on stage.
Next, a character generation method according to one embodiment of the present invention will be described.
FIG. 16 is an example of a flowchart showing the processing procedure of a program that causes a computer to execute each step of the character generation method according to the embodiment of the present invention.
To generate a character, captured face image information to serve as the source of the anime-style portrait is first acquired (step 101: S101). Here, while face arrangement guide information indicating the arrangement of predetermined facial parts and the subject are displayed on the display unit 24, the subject's face is imaged by the imaging unit 25 so as to match the facial part arrangement of the guide information, and the captured image information of the subject is acquired; the captured face image information is then obtained from it. Although captured image information produced by the imaging unit 25 is used here, captured image information entered through the input unit 26 may be used instead. The image information may also be a single piece of captured face image information produced by partially retouching one captured image or by compositing several captured images.
Next, feature points of the predetermined facial parts and color information of predetermined pixels are extracted from the captured face image information (step 102: S102). That is, the position information of the feature points of the predetermined facial parts and the color information of the predetermined pixels are obtained from the captured face image information acquired in step 101. Here, the predetermined facial parts are, for example, the eyebrows, eyes, nose, mouth, and face outline, and the color information of the predetermined pixels is, for example, that of the pixel at the tip of the nose 42 and of the pixels at positions 43a and 44a under both eyes in FIGS. 6(a) and 6(b).
Next, the head shape model 60 is generated based on the position information of the feature points of the predetermined facial parts extracted in step 102 (step 103: S103). As described above for the shape model generation unit 16, for example, a basic head shape model 71 resembling the face outline is selected from a plurality of contour shape models prepared in advance, based on the face outline feature points obtained by the facial part texture generation unit 15; a similar part shape model 72 resembling the predetermined facial part is selected from a plurality of part shape models prepared in advance for that part; and the selected basic head shape model 71 and similar part shape model 72 are combined to generate the head shape model 60.
Alternatively, the head shape model 60 may be selected, based on the face outline feature points obtained by the facial part texture generation unit 15, from a plurality of reference shape models prepared in advance as one resembling the face outline, or a single preset head shape model 60 may be stored in the character information storage unit 18 and retrieved from there.
Next, the shape of each facial part is estimated based on the position information of its feature points extracted in step 102 (step 104: S104). As described above with reference to FIGS. 6(a) and 6(b), in the case of an eye, for example, the shape is estimated from the eye width (the distance between points 43b and 43f), the eye height (the distance between points 43a and 43d), and the eye tilt indicating drooping or upturned eyes (the angle θ between the line segment connecting points 43g and 43b and the line segment extending horizontally from point 43g in the left-right direction).
Next, a similar anime-style part 52 resembling the facial part shape estimated in step 104 is selected from a plurality of anime-style parts prepared in advance (step 105: S105). Then, similar skin color information resembling the skin color of the object is calculated based on the color information of the predetermined pixels extracted in step 102 (step 106: S106). As described above with reference to FIGS. 6(a) and 6(b), when the extracted color information is, for example, that of the pixel at the tip of the nose 42 and of the pixels at positions 43a and 44a under both eyes, the average of the color information of these three pixels is calculated and used as the similar skin color information.
Next, the arrangement of the similar anime-style parts 52 within the facial part textures 51 is set based on the position information of the feature points of the predetermined facial parts extracted in step 102, and the facial part textures 51 are generated based on the set positions of the similar anime-style parts 52 (step 107: S107). As described above with reference to FIG. 13, for the nose facial part texture 51b and the mouth facial part texture 51a, point 61 (the position of the similar anime-style nose part 52) and point 62 (the position of the similar anime-style mouth part 52) are set in advance. For the eye facial part texture 51c, the vertical position of the similar anime-style eye parts 52 is set in advance and the horizontal position is calculated from the feature points of both eyes: the spacing 67 between the two eye parts is calculated from the feature points, and the parts are arranged symmetrically about the face center line 69. For the eyebrow facial part texture 51d, the horizontal position of the similar anime-style eyebrow parts 52 is set in advance and the vertical position is calculated from the feature points of both eyebrows, with the parts arranged symmetrically about the face center line 69. The placement processing is not limited to the above; the positions of the similar anime-style parts 52 within all the facial part textures 51 may be calculated from the feature points of the facial parts, or the arrangement may be fixed in advance.
Furthermore, instead of a facial part texture 51 in which the similar anime-style parts 52 are arranged according to the placement described above, a similar model facial part texture resembling that arrangement may be selected from a plurality of model facial part textures prepared for the facial part, and the selected similar model facial part texture may be set as the facial part texture 51.
Finally, the facial part textures 51 generated in step 107 are pasted onto the head shape model 60 generated in step 103, and the color information of the margin portion 54 of the head texture 50 and of the skin portions of the facial part textures 51 is replaced with the similar skin color information set in step 106 (step 108: S108). A character 70 bearing the anime-style portrait is thereby generated. Although the replacement of the color information of the margin portion 54 and of the skin portions of the facial part textures 51 with the similar skin color information is performed in step 108 here, the replacement for the margin portion 54 may instead be performed in step 103, and the replacement for the skin portions of the facial part textures 51 in step 107.
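As a reading aid for the flow of FIG. 16 (every callable here is a hypothetical stand-in for the corresponding step, not an interface defined by the disclosure), the order of steps S101 to S108 can be expressed as:

```python
def generate_character(steps, stored_image):
    """Drive steps S101-S108 in order; 'steps' maps a step id to a
    callable implementing that step."""
    face_img = steps["S101"](stored_image)            # acquire face image
    feats, colors = steps["S102"](face_img)           # feature points, pixel colors
    head_model = steps["S103"](feats)                 # head shape model 60
    shapes = steps["S104"](feats)                     # estimate part shapes
    parts = steps["S105"](shapes)                     # similar anime-style parts 52
    skin = steps["S106"](colors)                      # similar skin color
    textures = steps["S107"](parts, feats)            # facial part textures 51
    return steps["S108"](head_model, textures, skin)  # paste and recolor
```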
With the character generation method according to the embodiment of the present invention described above, the face of a character 70 in a virtual space realized using the portable game device 20 can easily be generated as an anime-style portrait based on captured face image information of an object photographed with a digital camera or the like. By generating the face of a character 70 appearing in the virtual space of a game or the like as an anime-style portrait based on the user's own captured face image information, the user can empathize more strongly with that character 70, enabling more entertaining game software to be built. Moreover, since a character 70 using an anime-style portrait requires less data than one using a photographic image, game software can be built that runs faster or that brings more characters 70 on stage.
The character generation system 10 according to the embodiment of the present invention described above uses the portable game device 20, but the invention is not limited to this; it can be applied to the generation of two-dimensional or three-dimensional characters represented as entities in a virtual space realized using an arcade game machine, a home game console, a mobile phone, a stand-alone computer, a workstation computer system, a network computer system, or the like.
DESCRIPTION OF SYMBOLS
10 : Character generation system
11 : Display control unit
12 : Imaging control unit
13 : Input control unit
14 : Captured image information acquisition unit
15 : Facial part texture generation unit
16 : Shape model generation unit
17 : Texture pasting unit
18 : Character information storage unit
19 : Guide information storage unit
50 : Head texture
51 : Facial part texture
52 : Similar anime-style part
54 : Margin portion
60 : Head shape model
70 : Character
71 : Basic head shape model
72 : Similar part shape model
151 : Feature point extraction unit
152 : Part selection unit
153 : Skin color setting unit
154 : Part arrangement setting unit

Claims (23)

1.  A character generation system that uses a computer to generate the face of a character displayed on a display unit as an entity in a virtual space as an anime-style portrait based on captured face image information of an object, the computer comprising:
    captured image acquisition means for acquiring the captured face image information of the object from captured image information of the object stored in a predetermined storage unit;
    facial part texture generation means for extracting feature points of predetermined facial parts and color information of predetermined pixels from the captured face image information of the object acquired by the captured image acquisition means, setting, based on the extracted color information of the predetermined pixels, similar skin color information resembling the skin color of the object to serve as the skin color of a head texture of the character, selecting, based on the extracted feature points, similar anime-style parts corresponding to and resembling the facial parts of the object, and generating facial part textures corresponding to the facial parts by setting the arrangement of the selected similar anime-style parts;
    shape model generation means for generating a head shape model of the character based on the feature points of the predetermined facial parts; and
    texture pasting means for replacing the color information of the head texture other than the facial part textures pasted on the head shape model of the character with the similar skin color information set by the facial part texture generation means, and pasting the facial part textures generated by the facial part texture generation means onto the head shape model of the character.
2.  The character generation system according to claim 1, wherein the facial part texture generation means comprises:
    feature point extraction means for acquiring the position information of the feature points of the predetermined facial parts and the color information of the predetermined pixels from the captured face image information of the object acquired by the captured image acquisition means;
    part selection means for selecting, based on the position information of the feature points acquired by the feature point extraction means, the similar anime-style parts resembling the shapes of the facial parts of the object from among a plurality of anime-style parts prepared in advance for the facial parts;
    skin color setting means for selecting, based on the color information of the predetermined pixels acquired by the feature point extraction means, the similar skin color information resembling the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and
    part arrangement setting means for setting, based on the position information of the feature points acquired by the feature point extraction means, the arrangement of the similar anime-style parts corresponding to the facial parts selected by the part selection means, and generating the facial part textures based on the set arrangement of the similar anime-style parts.
3.  The character generation system according to claim 2, wherein the part selection means:
    when a facial part consists of two basic facial parts, left and right, converts the position information of the feature points of one of the two basic facial parts into position information mirrored about the face center line, estimates the shape of the basic facial part from the average of the feature point position information of the two basic facial parts, and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts; and
    when a facial part consists of a single basic facial part, estimates the shape of the facial part from the position information of its feature points and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts.
4.  The character generation system according to claim 2 or 3, wherein the part arrangement setting means, in the facial part textures corresponding to the facial parts whose pasting positions on the head shape model are set in advance:
    arranges the similar anime-style nose part and the similar anime-style mouth part at preset positions within the nose and mouth facial part textures;
    arranges the similar anime-style parts of the left and right eyes symmetrically about the face center line at a preset vertical position within the eye facial part texture, with the spacing between the left and right eyes calculated from the position information of the feature points;
    arranges the similar anime-style parts of the left and right eyebrows symmetrically about the face center line at a preset horizontal position within the eyebrow facial part texture, at a vertical position calculated from the position information of the feature points; and
    generates the facial part textures corresponding to the nose, mouth, eyes, and eyebrows based on the arrangements of the similar anime-style parts corresponding to the nose, mouth, eyes, and eyebrows.
5.  The character generation system according to claim 2 or 3, wherein the part arrangement setting means, in the facial part textures corresponding to the facial parts whose pasting positions on the head shape model are set in advance:
    calculates the positions of the similar anime-style parts within the facial part textures corresponding to the facial parts, based on the position information of the feature points of the facial parts acquired by the feature point extraction means; and
    generates the facial part textures based on the calculated positions of the similar anime-style parts.
6.  The character generation system according to claim 4 or 5, wherein the part arrangement setting means selects, from among a plurality of model facial part textures corresponding to the facial parts, a similar model facial part texture resembling the arrangement of the similar anime-style parts corresponding to the facial parts, and sets the selected similar model facial part texture as the facial part texture.
7.  The character generation system according to any one of claims 1 to 6, wherein the shape model generation means:
    selects, from among a plurality of contour shape models prepared in advance for the face outline, a basic head shape model resembling the face outline based on the feature points of the face outline;
    selects, from among a plurality of part shape models prepared in advance for a predetermined facial part other than the face outline, a similar part shape model resembling the predetermined facial part based on the feature points of the predetermined facial part; and
    generates the head shape model by combining the selected basic head shape model and the selected similar part shape model.
8.  The character generation system according to any one of claims 1 to 6, wherein the shape model generation means selects the head shape model resembling the face outline from among a plurality of reference shape models prepared in advance for the face outline, based on the feature points of the face outline.
9.  The character generation system according to any one of claims 1 to 8, wherein the computer further comprises:
    display control means for controlling the display unit to display on the display unit face arrangement guide information indicating the arrangement of the predetermined facial parts and the object; and
    imaging control means for controlling an imaging unit to image the object and to store the captured image information of the object in the predetermined storage unit;
    and wherein the captured image acquisition means causes the display control means and the imaging control means to operate in concert so as to display the face arrangement guide information and the object on the display unit, and acquires, from the captured image information of the object stored in the predetermined storage unit, the captured face image information of the object imaged in accordance with the face arrangement guide information.
10.  The character generation system according to any one of claims 1 to 9, wherein the computer further comprises input control means for controlling an input unit to input various information including the captured image information of the object as input information and to store the captured image information of the object in the predetermined storage unit;
    and wherein the captured image acquisition means causes the display control means and the input control means to operate in concert so as to display the face arrangement guide information and the input captured image information of the object on the display unit, and acquires the captured face image information of the object, based on the face arrangement guide information, from the captured image information of the object stored in the predetermined storage unit.
11.  The character generation system according to claim 9 or 10, wherein the predetermined facial parts whose arrangement is indicated by the face arrangement guide information include at least the face outline.
12.  A character generation method for using a computer to generate the face of a character displayed on a display unit as an entity in a virtual space as an anime-style portrait based on captured face image information of an object, the method comprising causing the computer to perform:
    (a) a step of acquiring the captured face image information of the object from captured image information of the object stored in a predetermined storage unit;
    (b) a step of acquiring the position information of feature points of predetermined facial parts and the color information of predetermined pixels from the captured face image information of the object acquired in step (a);
    (c) a step of generating a head shape model of the character based on the feature points of the predetermined facial parts acquired in step (b);
    (d) a step of setting, based on the color information of the predetermined pixels acquired in step (b), similar skin color information resembling the skin color of the object to serve as the skin color of a head texture of the character, selecting, based on the feature points acquired in step (b), similar anime-style parts corresponding to and resembling the facial parts of the object, and generating facial part textures corresponding to the facial parts by setting the arrangement of the selected similar anime-style parts; and
    (e) a step of pasting the facial part textures generated in step (d) onto the head shape model of the character and replacing the color information of the skin portions of the head texture pasted on the head shape model generated in step (c) with the similar skin color information set in step (d).
13.  The character generation method according to claim 12, wherein step (d) comprises:
    (d1) a step of selecting, based on the position information of the feature points acquired in step (b), the similar anime-style parts resembling the shapes of the facial parts of the object from among a plurality of anime-style parts prepared in advance for the facial parts;
    (d2) a step of selecting, based on the color information of the predetermined pixels acquired in step (b), the similar skin color information resembling the skin color of the object from among a plurality of pieces of skin color information prepared in advance; and
    (d3) a step of setting, based on the position information of the feature points acquired in step (b), the arrangement of the similar anime-style parts corresponding to the facial parts selected in step (d1), and generating the facial part textures based on the set arrangement of the similar anime-style parts.
14.  The character generation method according to claim 13, wherein step (d1):
    when a facial part consists of two basic facial parts, left and right, converts the position information of the feature points of one of the two basic facial parts into position information mirrored about the face center line, estimates the shape of the basic facial part from the average of the feature point position information of the two basic facial parts, and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts; and
    when a facial part consists of a single basic facial part, estimates the shape of the facial part from the position information of its feature points and selects the similar anime-style part corresponding to the estimated shape from among the plurality of anime-style parts.
15.  The character generation method according to claim 13 or 14, wherein step (d3), in the facial part textures corresponding to the facial parts whose pasting positions on the head shape model are set in advance:
    arranges the similar anime-style nose part and the similar anime-style mouth part at preset positions within the nose and mouth facial part textures;
    arranges the similar anime-style parts of the left and right eyes symmetrically about the face center line at a preset vertical position within the eye facial part texture, with the spacing between the left and right eyes calculated from the position information of the feature points;
    arranges the similar anime-style parts of the left and right eyebrows symmetrically about the face center line at a preset horizontal position within the eyebrow facial part texture, at a vertical position calculated from the position information of the feature points; and
    generates the facial part textures corresponding to the nose, mouth, eyes, and eyebrows based on the arrangements of the similar anime-style parts corresponding to the nose, mouth, eyes, and eyebrows.
16.  The character generation method according to claim 13 or 14, wherein step (d3), in the facial part textures corresponding to the facial parts whose pasting positions on the head shape model are set in advance:
    calculates the positions of the similar anime-style parts within the facial part textures corresponding to the facial parts, based on the position information of the feature points acquired in step (b); and
    generates the facial part textures based on the calculated positions of the similar anime-style parts.
17.  The character generation method according to claim 15 or 16, wherein step (d3) selects, from among a plurality of model facial part textures corresponding to the facial parts, a similar model facial part texture resembling the arrangement of the similar anime-style parts corresponding to the facial parts, and sets the selected similar model facial part texture as the facial part texture.
  18.  The character generation method according to any one of claims 12 to 17, wherein the step (c):
     selects, from a plurality of contour shape models prepared in advance for the face contour, a basic head shape model similar to the face contour, based on the feature points of the face contour;
     selects, from a plurality of part shape models prepared in advance for a predetermined face part other than the face contour, a similar part shape model similar to the predetermined face part, based on the feature points of the predetermined face part; and
     generates the head shape model by combining the selected basic head shape model and the selected similar part shape model.
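A sketch of the two-stage selection and combination in claim 18, with models represented as plain dictionaries and an assumed dissimilarity function `dissim` (lower means more similar); real head shape models would be 3-D meshes and the merge would operate on geometry:

```python
def build_head_model(contour_feats, part_feats, contour_models, part_models, dissim):
    """Compose a head shape model from the closest contour and part shape models."""
    # Stage 1: pick the basic head shape model nearest to the face contour.
    base = min(contour_models, key=lambda m: dissim(contour_feats, m))
    # Stage 2: pick the similar part shape model nearest to the face part.
    part = min(part_models, key=lambda m: dissim(part_feats, m))
    # Combine: swap the matching region of the base model for the selected part.
    merged = dict(base)
    merged["parts"] = dict(base.get("parts", {}))
    merged["parts"][part["name"]] = part
    return merged
```

Claim 19 below is the simpler variant: a single nearest-neighbour selection of the whole head shape model from the reference shape models, with no part-wise merge.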
  19.  The character generation method according to any one of claims 12 to 17, wherein the step (c) selects, from a plurality of reference shape models prepared in advance for the face contour, the head shape model similar to the face contour, based on the feature points of the face contour.
  20.  The character generation method according to any one of claims 12 to 19, further comprising, before the step (a), a step (f) in which the computer images the object while displaying, on the display unit, the object together with face arrangement guide information indicating the arrangement of predetermined face parts, and stores the captured image information of the object in the predetermined storage unit,
     wherein the step (a) acquires the captured face image information of the object from the captured image information of the object stored in the predetermined storage unit, based on the face arrangement guide information.
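For illustration, one way the acquisition in step (a) could use the guide: cropping the stored capture by the guide's frame, assuming the face arrangement guide information carries a face-contour rectangle in image coordinates (claim 22 below states the guide covers at least the face contour).

```python
def crop_face_by_guide(image, guide_rect):
    """Extract the captured face image region indicated by the placement guide.

    image: H x W (x C) NumPy array; guide_rect: (x, y, width, height) of the
    guide's face-contour frame in image coordinates (an assumed representation).
    """
    x, y, w, h = guide_rect
    return image[y:y + h, x:x + w].copy()
```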
  21.  The character generation method according to any one of claims 12 to 20, further comprising, before the step (a), a step (g) in which the computer inputs the captured image information of the object and stores it in the predetermined storage unit,
     wherein the step (a) acquires the captured face image information of the object from the captured image information of the object stored in the predetermined storage unit, based on the face arrangement guide information, while displaying, on the display unit, the face arrangement guide information indicating the arrangement of the predetermined face parts together with the captured image information of the object input in the step (g).
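The display step in claims 20 and 21 shows the guide and the image together; a minimal alpha-blend sketch, assuming both are float NumPy image arrays of identical shape with values in [0, 1]:

```python
def overlay_guide(frame, guide, alpha=0.35):
    """Blend the face arrangement guide over the image shown on the display unit.

    alpha controls the guide's opacity; purely illustrative of the display step.
    """
    return (1.0 - alpha) * frame + alpha * guide
```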
  22.  The character generation method according to claim 20 or 21, wherein the predetermined face parts whose arrangement is indicated by the face arrangement guide information include at least the face contour.
  23.  A program for causing a computer to execute processing for generating the face of a character, displayed on a display unit as an entity in a virtual space, as an anime-style caricature based on captured face image information of an object, the program causing the computer to execute processing that realizes each means of the character generation system according to any one of claims 1 to 11.
PCT/JP2010/059967 2010-06-11 2010-06-11 Character generating system, character generating method, and program WO2011155068A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/JP2010/059967 WO2011155068A1 (en) 2010-06-11 2010-06-11 Character generating system, character generating method, and program
EP10852904.1A EP2581881A1 (en) 2010-06-11 2010-06-11 Character generating system, character generating method, and program
JP2012519194A JP5632469B2 (en) 2010-06-11 2010-06-11 Character generation system, character generation method and program
US13/693,623 US8497869B2 (en) 2010-06-11 2012-12-04 Character generating system, character generating method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/059967 WO2011155068A1 (en) 2010-06-11 2010-06-11 Character generating system, character generating method, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/693,623 Continuation US8497869B2 (en) 2010-06-11 2012-12-04 Character generating system, character generating method, and program

Publications (1)

Publication Number Publication Date
WO2011155068A1 (en) 2011-12-15

Family

ID=45097697

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/059967 WO2011155068A1 (en) 2010-06-11 2010-06-11 Character generating system, character generating method, and program

Country Status (4)

Country Link
US (1) US8497869B2 (en)
EP (1) EP2581881A1 (en)
JP (1) JP5632469B2 (en)
WO (1) WO2011155068A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5279930B1 * 2012-03-27 2013-09-04 Toshiba Corporation Server, electronic device, server control method, server control program
JP6152125B2 * 2015-01-23 2017-06-21 Nintendo Co., Ltd. Program, information processing apparatus, information processing system, and avatar image generation method
US20160284122A1 (en) * 2015-03-26 2016-09-29 Intel Corporation 3d model recognition apparatus and method
US9786032B2 (en) * 2015-07-28 2017-10-10 Google Inc. System for parametric generation of custom scalable animated characters on the web
US10360708B2 (en) * 2016-06-30 2019-07-23 Snap Inc. Avatar based ideogram generation
CN110136236B * 2019-05-17 2022-11-29 Tencent Technology (Shenzhen) Co., Ltd. Personalized face display method, device and equipment for three-dimensional character and storage medium
CN110728256A * 2019-10-22 2020-01-24 Shanghai SenseTime Intelligent Technology Co., Ltd. Interaction method and device based on vehicle-mounted digital person and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3671257B2 * 1995-04-21 2005-07-13 Casio Computer Co., Ltd. Composite pattern output device
JP2003016431A (en) * 2001-06-28 2003-01-17 Mitsubishi Electric Corp Generating device for portrait
US6828972B2 (en) * 2002-04-24 2004-12-07 Microsoft Corp. System and method for expression mapping
US6919892B1 (en) * 2002-08-14 2005-07-19 Avaworks, Incorporated Photo realistic talking head creation system and method
CA2654960A1 (en) * 2006-04-10 2008-12-24 Avaworks Incorporated Do-it-yourself photo realistic talking head creation system and method
US8831379B2 (en) * 2008-04-04 2014-09-09 Microsoft Corporation Cartoon personalization
JP2010066853A (en) * 2008-09-09 2010-03-25 Fujifilm Corp Image processing device, method and program
WO2011155067A1 * 2010-06-11 2011-12-15 Altron Corporation Character generation system, character generation method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000235656A (en) 1999-02-15 2000-08-29 Sony Corp Image processor, method and program providing medium
JP2001222725A (en) * 2000-02-07 2001-08-17 Sharp Corp Image processor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013179971A1 * 2012-05-30 2013-12-05 Konami Digital Entertainment Co., Ltd. Server device, avatar information processing system, avatar information processing method, and program
EP2789373A1 * 2013-04-11 2014-10-15 Kabushiki Kaisha Square Enix (also trading as "Square Enix Co., Ltd.") Video game processing apparatus and video game processing program
US9710974B2 (en) 2013-04-11 2017-07-18 Kabushiki Kaisha Square Enix Video game processing apparatus and video game processing program
JP2014210213A * 2014-08-21 2014-11-13 Square Enix Co., Ltd. Video game processor, and video game processing program
JP2019145108A * 2018-02-23 2019-08-29 Samsung Electronics Co., Ltd. Electronic device for generating image including 3D avatar with facial movements reflected thereon, using 3D avatar for face
US11798246B2 (en) 2018-02-23 2023-10-24 Samsung Electronics Co., Ltd. Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same
WO2020183961A1 * 2019-03-12 2020-09-17 Sony Corporation Image processing device, image processing method, and program
US11798227B2 (en) 2019-03-12 2023-10-24 Sony Group Corporation Image processing apparatus and image processing method
JP7202045B1 * 2022-09-09 2023-01-11 PocketRD Co., Ltd. 3D avatar generation device, 3D avatar generation method and 3D avatar generation program
WO2024053235A1 (en) * 2022-09-09 2024-03-14 株式会社PocketRD Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program

Also Published As

Publication number Publication date
JPWO2011155068A1 (en) 2013-08-01
US20130120425A1 (en) 2013-05-16
US8497869B2 (en) 2013-07-30
JP5632469B2 (en) 2014-11-26
EP2581881A1 (en) 2013-04-17

Similar Documents

Publication Publication Date Title
JP5632469B2 (en) Character generation system, character generation method and program
US9563975B2 (en) Makeup support apparatus and method for supporting makeup
JP4359784B2 (en) Face image synthesis method and face image synthesis apparatus
KR101190686B1 (en) Image processing apparatus, image processing method, and computer readable recording medium
CN109410298B (en) Virtual model manufacturing method and expression changing method
JP5603452B1 (en) Video game processing apparatus and video game processing program
CN107204033B (en) The generation method and device of picture
US7653220B2 (en) Face image creation device and method
JP6302132B2 (en) Image processing apparatus, image processing system, image processing method, and program
JP2011209887A (en) Method and program for creating avatar, and network service system
CN112669447A (en) Model head portrait creating method and device, electronic equipment and storage medium
JP5659228B2 (en) Character generation system, character generation method and program
KR20160030037A (en) Portrait generating device, portrait generating method
TW201002399A (en) Image processing device, method for controlling an image processing device, and an information storage medium
JP2020016961A (en) Information processing apparatus, information processing method, and information processing program
US20110057954A1 (en) Image processing apparatus, method, program and recording medium for the program
JP2006120128A (en) Image processing device, image processing method and image processing program
KR100608840B1 (en) Method for synthesis of 3d avata model of handset
JP5857606B2 (en) Depth production support apparatus, depth production support method, and program
CN112819932B (en) Method, system and storage medium for manufacturing three-dimensional digital content
JP6685094B2 (en) Image processing apparatus, image processing method, and computer program
JP2004179845A (en) Image processing method and apparatus thereof
WO2019044333A1 (en) Simulation device, simulation method, and computer program
JP2001216531A (en) Method for displaying participant in three-dimensional virtual space and three-dimensional virtual space display device
JP7076861B1 (en) 3D avatar generator, 3D avatar generation method and 3D avatar generation program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10852904

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012519194

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010852904

Country of ref document: EP