WO2024053235A1 - Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program - Google Patents

Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program Download PDF

Info

Publication number
WO2024053235A1
WO2024053235A1 PCT/JP2023/025313 JP2023025313W WO2024053235A1 WO 2024053235 A1 WO2024053235 A1 WO 2024053235A1 JP 2023025313 W JP2023025313 W JP 2023025313W WO 2024053235 A1 WO2024053235 A1 WO 2024053235A1
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
dimensional
head
surface image
image
Prior art date
Application number
PCT/JP2023/025313
Other languages
French (fr)
Japanese (ja)
Inventor
内田茂樹
川岸孝輔
籾倉宏哉
Original Assignee
株式会社PocketRD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社PocketRD filed Critical 株式会社PocketRD
Publication of WO2024053235A1 publication Critical patent/WO2024053235A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a three-dimensional avatar generation device, etc. that creates three-dimensional computer graphics data based on imaging data of a person or the like.
  • Patent Documents 1 and 2 both disclose examples in which an avatar imitating the player himself or a co-player is used in a computer game that uses a head-mounted display to represent a virtual space.
  • Patent Documents 1 and 2 discloses how to create an avatar imitating a person such as a player.
  • computer games and the like are not only those with a realistic world view that imitates the real world, but also those that are set in a specific creative world (for example, those that express the world in paintings or animations), and the latter
  • the avatar used in the game be designed to match the worldview of the work while maintaining the characteristics of the person such as the player, but Patent Documents 1 and 2 do not disclose any technology for creating such an avatar. No disclosure.
  • modeling tasks involve synthesizing images of the user's figure from multiple directions to generate a three-dimensional shape, and automatically modeling the surface shape to some extent based on this. is not impossible.
  • automate the skeleton construction and skinning operations which are tasks related to the internal structure, from images of the user's figure (surface) alone, and these tasks require long hours of manual labor by skilled engineers. I have no choice but to do it. In view of the fact that such a time-consuming process is required, it is not realistic to create a character using general three-dimensional computer graphics for each user's avatar.
  • a 3D avatar is generated by directly compositing videos of the user's figure, the user's figure will of course be faithfully reproduced, but such 3D avatars will be created by the services used. does not necessarily fit into the worldview of Furthermore, using a three-dimensional avatar that faithfully reproduces the user's figure in computer games, SNS, etc. in which an unspecified number of users participate may pose a problem from the viewpoint of privacy protection.
  • the present invention has been made in view of the above-mentioned problems, and is designed to reflect the characteristics of real people, protect the privacy of those people, and adapt to the world view set by the services used. It is an object of the present invention to provide a device, method, and program for easily generating a suitable, high-quality three-dimensional avatar.
  • the three-dimensional avatar generation device provides a three-dimensional surface shape of an object expressed using a group of vertices that define a positional relationship in a three-dimensional space, and a three-dimensional surface shape that is A three-dimensional avatar generation device that generates a three-dimensional avatar including a surface image displayed on the object, the positional relationship of feature points corresponding to each element constituting the head shape among the vertices based on the facial image of the object.
  • a three-dimensional shape generating means for generating a three-dimensional surface shape of the head of a specified object; and a surface image regarding all or a part of the surface shape of the head;
  • the positional relationship between two or more feature points among the plurality of partial surface images in which the positional relationship of the feature points corresponding to each element constituting the head shape has been specified is the three-dimensional image of the head that has a corresponding relationship.
  • a partial surface image selection means for selecting a partial surface image that matches the positional relationship between two or more feature points of the surface shape, and a partial surface image selected by the partial surface image selection means are synthesized.
  • a head avatar generator that generates a head avatar based on the three-dimensional surface shape of the head and the composite surface image. It is characterized by having a means.
  • the three-dimensional avatar generation device selects the partial surface image selected by the partial surface image selection means;
  • the method is characterized in that the combined surface image is generated by combining the facial image of the object with an image of a portion corresponding to the area of the partial surface image.
  • the three-dimensional avatar generation device derives a synthesis ratio of the partial surface image on the three-dimensional surface shape of the head in the synthetic surface image.
  • a composition ratio deriving means for deriving a composition ratio, variation information of the ownership name of the three-dimensional avatar generated with respect to a non-fungible token corresponding to the generated three-dimensional avatar, identification information of a partial surface image constituting the composite surface image, and the information output means for outputting at least the synthesis ratio to a first distributed network in which a first distributed ledger in which the synthesis ratio is recorded; and the three-dimensional information recorded on the first distributed ledger.
  • a distribution revenue calculating means for calculating a distribution ratio between the ownership name of the three-dimensional avatar and the partial surface image regarding the revenue generated for the generated three-dimensional avatar based on the ownership name of the partial surface image recorded in the It is characterized by having the following.
  • the three-dimensional avatar generation method includes a three-dimensional surface shape of an object expressed using a group of vertices that define a positional relationship in a three-dimensional space, and a three-dimensional surface
  • a three-dimensional avatar generation method for generating a three-dimensional avatar including a surface image displayed on a shape comprising: generating a three-dimensional avatar including a surface image displayed on a shape, the method comprising: generating feature points corresponding to each element constituting the head shape among the vertices based on a facial image of the object; a three-dimensional shape generation step of generating a three-dimensional surface shape of the head of the object whose positional relationship has been specified; and a surface image regarding all or a part of the surface shape of the head;
  • the positional relationships between two or more feature points of the head that have been specified the positional relationships between two or more feature points of the head that have been specified.
  • a partial surface image selection step of selecting a partial surface image that matches the positional relationship between two or more feature points of the three-dimensional surface shape; and combining the partial surface images selected in the partial surface image selection step.
  • a synthetic surface image generation step of generating a synthetic surface image that is a surface image that conforms to a three-dimensional surface shape of the head; and a head generating a head avatar based on the three-dimensional surface shape of the head and the synthetic surface image.
  • the method is characterized by including an avatar generation step.
  • the three-dimensional avatar generation method derives a synthesis ratio of the partial surface image on the three-dimensional surface shape of the head in the synthetic surface image.
  • a distribution revenue calculation step of calculating a distribution ratio between the ownership name of the three-dimensional avatar and the partial surface image regarding the revenue generated for the generated three-dimensional avatar, based on the ownership name of the partial surface image recorded in the It is characterized by including the following.
  • the three-dimensional avatar generation program is configured to display a three-dimensional surface shape and a three-dimensional surface shape of the target object expressed by a group of vertices based on the facial image of the target object.
  • a three-dimensional avatar generation program that causes a computer to generate a three-dimensional avatar including a surface image, the computer having the computer generate a three-dimensional surface of an object expressed using a group of vertices that define positional relationships in three-dimensional space.
  • a three-dimensional avatar generation device that generates a three-dimensional avatar including a shape and a surface image displayed on the three-dimensional surface shape, the elements constituting the head shape among the vertices based on the facial image of the object.
  • a three-dimensional shape generation function that generates a three-dimensional surface shape of the head of an object in which the positional relationship of feature points corresponding to The positional relationship between two or more feature points is determined from among the plurality of partial surface images in which the positional relationship of the feature points corresponding to each element constituting the head shape in the whole or part of the region is specified.
  • a partial surface image selection function that selects a partial surface image that matches the positional relationship between two or more feature points of the three-dimensional surface shape of the head, and a partial surface image selected by the partial surface image selection function.
  • a composite surface image generation function that generates a composite surface image that is a surface image that conforms to the three-dimensional surface shape of the head by combining;
  • a head avatar generation function for generating an avatar.
  • the present invention it is easy to create a high-quality three-dimensional avatar that reflects the characteristics of a real person, protects the privacy of the person, and conforms to the world view set by the service used. This has the effect of generating
  • FIG. 1 is a schematic diagram showing the configuration of a three-dimensional avatar generation device according to a first embodiment
  • FIG. It is a flowchart for explaining the operation of the head shape generation unit 4 in the embodiment. It is a flowchart for explaining the operation of the partial surface image selection section 6 and the composite surface image generation section 7 in the embodiment. It is a flowchart for explaining the operation of the avatar composition unit 13 in the embodiment.
  • FIG. 2 is a schematic diagram showing the configuration of a three-dimensional avatar generation device according to a second embodiment. It is a schematic diagram showing the composition of a three-dimensional avatar generation device concerning a modification.
  • the three-dimensional avatar generation device includes a facial image input unit 1 for inputting a facial image of an object, and a head body database that stores the head body avatar.
  • a head shape generation unit 4 generates three-dimensional shape information that is information about the three-dimensional surface shape of the head of the object based on the information, and a partial surface image that is a surface image about a partial region of the three-dimensional surface shape of the head.
  • a composite surface image generation unit 7 that generates a composite surface image that is a surface image in a dimensional surface shape; a head avatar generation unit 9 that generates a head avatar based on the three-dimensional shape information of the head and the composite surface image;
  • a torso database 10 that stores information on a torso avatar to be combined with a head avatar, a hair database 11 that stores information about a hair avatar to be combined with a head avatar, and torso avatars stored in the torso database 10 and hair database 11.
  • the device includes a component avatar selection section 12 that selects a hair avatar, and an avatar synthesis section 13 that synthesizes the head avatar, torso avatar, and hair avatar to generate an integrated whole body avatar.
  • the facial image input unit 1 is for inputting a facial image of an object for which a three-dimensional avatar is to be generated.
  • the "facial image” may be a three-dimensional stereoscopic image taken by a 3D scanner or the like, but in the first embodiment, it is a two-dimensional image.
  • the facial image input unit 1 may be a data input mechanism for simply inputting data from the outside, or may be configured to include an imaging camera or the like and directly acquire facial images.
  • the facial image inputted by the facial image input unit 1 is an image related to the face of a living thing including a person or a character imitating the living thing, and more specifically, a facial image inputted by the facial image input unit 1 is an image related to the face of a living thing including a person or a character imitating the same.
  • the image includes parts corresponding to the eyes, nose, mouth, eyebrows, ears, chin, etc., and detailed parts thereof. Multiple feature points are set, and ideally it is desirable that the image includes parts corresponding to all the feature points, but if it contains a certain percentage (for example, about 70%) of the parts, this implementation will be effective. There is no problem with avatar generation in this embodiment, and even if the ratio is less than this, as long as three or more feature points can be extracted, avatar generation itself in this embodiment is possible.
  • the facial image includes the eyes, nose, and mouth.
  • the head body database 2 is for storing information regarding head body avatars.
  • "Head body avatar” is three-dimensional computer graphics information consisting of the head of a person of average build, for example, and specifically, the head (the part above the neck of a person). .), preferably in addition to this, skeletal information regarding a skeletal structure for controlling head movements, etc., and relationships regarding the relationship between the surface shape and the skeletal structure. information.
  • the head body database 2 stores head body avatars with such information in different formats (data format, usage, feature points (hereinafter referred to as head body avatars) in order to distinguish them from feature points derived from facial images. It has a function of storing a plurality of base avatars corresponding to feature points in a base avatar (referred to as "basic feature points”) with different definitions.
  • “Surface shape information” is information regarding the surface shape of three-dimensional computer graphics, which corresponds to the three-dimensional shape of organs such as eyes and nose, and skin on the human body surface.
  • the information may be in a format in which the entire surface is defined as a collection of minute units such as voxels and the position information of each minute unit is recorded, but from the perspective of reducing the burden on information processing, so-called modeling processing is performed. It is desirable that the three-dimensional surface shape be expressed by defining a predetermined number of vertices and the manner of connection between the vertices.
  • edges connecting vertices are formed based on information about vertices and the connection mode between vertices, an area surrounded by three or more edges is defined as a surface (polygon), and a set of surfaces ( The surface shape is specified by the mesh).
  • the application target of the present invention is not limited to these, and the surface shape information includes position information of a plurality of vertices and/or a plurality of faces arranged corresponding to the surface shape. If it is, it can be used as surface shape information in the present invention.
  • the format includes either information regarding the absolute position or information regarding the relative position with respect to the skeletal structure consisting of joints and bones. In this embodiment, not only the former position information but also the latter position information is included, and the position of each vertex changes while maintaining the relative positional relationship according to changes in joint position, bone position, length, etc. It shall be.
  • Skleton information is information regarding the internal structure that corresponds to the skeleton of a human body and serves as a reference when creating motion in three-dimensional computer graphics.
  • the information format may be a skeletal structure consisting of bones and joints with a predetermined thickness and size, similar to the skeletal structure of the human body, but it may also be a skeletal structure consisting of bones and joints of a predetermined thickness and size. (represented as points) and bones (represented as lines) located between joints and corresponding to bones in the human body.
  • the application of the present invention is not limited to these information formats, and includes parts such as joints that can be translated and rotated and also function as a fulcrum in relation to adjacent parts (in the present invention, These are collectively referred to as "joints.”
  • Other formats are also acceptable.
  • Relationship information is information that defines the relationship between skeletal information and surface shape information, and more specifically, it is information that defines the relationship between skeletal information and surface shape information. This is information that specifies how much each vertex should follow and operate. If the surface shape were configured to 100% follow the movements of the joints and bones, the character would behave like a tin robot and lack a sense of reality, even though it is a human character. Therefore, when generating three-dimensional computer graphics of a person or the like, it is desirable to set in advance information regarding how much the movement of adjacent bones and joints is followed for each part of the surface shape. Also in this embodiment, for each vertex constituting the surface shape information, numerical information indicating followability with respect to bones and/or joints adjacent to the vertex is set as relevance information.
  • the work of generating relevance information is called skinning processing, weight editing, etc., and weight values are generally used for relevance information as well, but the relevance information in the present invention is not limited to these, and the above-mentioned All information that satisfies the conditions shall be included.
  • the head element selection unit 3 selects a head element avatar suitable for generating a head avatar of an object from among a plurality of head element avatars stored in the head element database 2. It is something.
  • the head body database 2 stores head body avatars in different data formats depending on the format and usage, and the head body selection unit 3 selects the usage etc. from among the plurality of head body avatars. It has a function of selecting an appropriate head body avatar according to the situation and outputting it to the head shape generation section 4. More specifically, the head body selection unit 3 acquires information regarding the torso avatar selected by the parts avatar selection unit 12 (described later), and selects a head body avatar in a data format consistent with the selected torso avatar. It has the function to select. Furthermore, after satisfying such conditions, it has a function of selecting a head body avatar from a plurality of head body avatars mechanically or according to the user's selection.
  • the head shape generation section 4 generates a head shape of the object based on the facial image of the object input through the facial image input section 1 and the head element avatar selected by the head element selection section 3. This is for generating three-dimensional shape information that is information about the three-dimensional surface shape of the part.
  • the head shape generation unit 4 works with a feature point extraction unit 14 that extracts the positional relationship between feature points in a two-dimensional facial image, and a A positional relationship derivation unit 15 that derives a three-dimensional positional relationship, the coordinates of the feature points from which the three-dimensional positional relationship has been derived, and each vertex that constitutes the three-dimensional surface shape of the head body avatar.
  • a coordinate transformation unit 16 that performs coordinate transformation on the coordinates of the face image
  • a position adjustment unit 17 that adjusts the positional relationship of the feature points on the facial image extracted by the feature point extraction unit 14; and a region formed by the feature points of the facial image.
  • a body deformation unit 19 that moves the apex of the head body avatar to match the position of the feature point of the face image.
  • the feature point extraction unit 14 is for extracting feature points in the facial image of the target object by performing image analysis on the facial image of the target object.
  • the feature point extraction unit 14 has a function of extracting feature points from a target facial image by using two-dimensional image analysis technology such as face recognition technology.
  • the feature points extracted by the feature point extraction unit 14 have the same definition as the basic feature points of the head element avatar selected by the head element selection unit 3.
  • the positional relationship deriving unit 15 is for deriving a three-dimensional positional relationship between the feature points of the facial image extracted by the feature point extracting unit 14.
  • the specific derivation mechanism uses a machine learning model that is trained by giving the two-dimensional feature points in the facial image as the input layer and the three-dimensional positional relationships of the feature points as the output layer as training data. It is also possible to derive the value by using the above method, or other mechanisms may be used.
  • the depth (z coordinate) of each feature point is calculated using the x and y coordinates of each feature point. It may also be derived using a machine learning model trained by giving coordinates as input layer and z coordinate as output layer as teacher data.
  • the coordinate conversion unit 16 is for converting the coordinates indicating the position of the feature points extracted from the facial image, and also for converting the coordinates of each vertex including the basic feature points in the head element avatar. Since the feature points of the facial image and the coordinates of each vertex of the head body avatar belong to separate coordinate systems, it is not possible to generate three-dimensional shape information of the object's head based on the two as they are. Therefore, the coordinate transformation unit 16 performs predetermined transformation processing on each coordinate system to develop both on the same coordinate space.
  • the coordinate conversion unit 16 sets a relative coordinate system with a specific feature point (for example, the vertex corresponding to the tip of the nose) as the origin, and converts each of the feature points extracted from the facial image and the head body avatar.
  • a specific feature point for example, the vertex corresponding to the tip of the nose
  • coordinate transformation processing is performed so that each specific feature point is located at the origin.
  • the coordinate transformation unit 16 also has a function of performing a coordinate transformation process of returning the coordinate system related to the information to the coordinate system of the head body avatar after generating the three-dimensional shape information of the head.
  • the position adjustment unit 17 is for adjusting the position of the feature points of the face image after coordinate transformation with the basic feature points of the head body avatar.
  • the feature points extracted from the facial image and each vertex of the head body avatar that have been moved to the common coordinates by the processing of the coordinate conversion unit 16 have a specific feature point in a corresponding relationship as the origin.
  • the positional relationships of other feature points and vertices are different.For example, the feature points of a face image and the basic feature points and vertices of the head body avatar are misaligned in the direction the face is facing. The situation is as follows.
  • the position adjustment unit 17 is for adjusting this deviation. For example, assuming a line segment connecting two feature points corresponding to both cheeks, the position adjustment unit 17 adjusts the line segment at the feature point of the facial image and the head body avatar. The position adjustment unit 17 performs a process such as rotating one side to adjust the position so that the line segments are parallel to each other.
  • the position adjustment unit 17 also has a function of performing shear correction on the coordinates of feature points extracted from the facial image. Depending on the processing method for extracting feature points of a facial image, shearing may occur in the position coordinates of each feature point. In this case, the position adjustment unit 17 has a function of calculating shear amounts for each of the three axes of the three-dimensional coordinate system regarding the feature points of the facial image, and canceling the shear amounts.
  • the enlarging/reducing unit 18 is for changing the position coordinates of the feature points of the facial image so that the space formed by the feature points of the facial image is expanded or reduced. Specifically, the enlarging/reducing unit 18 expands the space formed by the feature points of the facial image and the space formed by the basic feature points corresponding to the feature points of the facial image among the basic feature points of the head body avatar. It has a function of changing the positional coordinates of the feature points of the facial image while maintaining the mutual positional relationship so that the width of the former space is comparable to the width of the latter space.
  • the "space formed by feature points” may be a space formed based on the same definition for each of the feature points of the facial image and the basic feature points of the head body avatar.
  • a rectangular parallelepiped (so-called three-dimensional bounding box) whose sides are made of straight lines parallel to the x-axis, y-axis, or z-axis, which is formed so as to contain each feature point and have a minimum volume, or
  • the feature points may be connected by line segments, and a spatial region formed by a surface formed by three or more line segments may be defined as a "space formed by the feature points.”
  • the widths are about the same is not limited to the case where the volumes completely match.
  • the volume of the space formed by the feature points of the facial image is 0.9 times the volume of the space formed by the basic feature points corresponding to the feature points of the facial image among the basic feature points of the head body avatar. This refers to a range of ⁇ 1.1 times, but it is also possible to set a range other than this.
  • the element body deforming unit 19 transforms the three-dimensional surface shape of a general-purpose head material so that it approaches the three-dimensional surface shape of the object by moving the position of the basic feature points of the head element avatar. It is for the purpose of Specifically, the element transformation unit 19 converts the position coordinates of the basic feature points of the head element avatar that correspond to the feature points of the face image into the position coordinates of the feature points of the face image. Has a function. In addition, the elemental body deformation unit 19 also performs basic feature points such that continuity between the basic feature points and other vertices is maintained for the vertices of the head elemental body avatar that are located in the vicinity of the corresponding basic feature points.
  • a head shape is generated that has the characteristics of the object in the face region of the three-dimensional surface shape, while remaining in the shape of the head elemental body avatar in other regions.
  • the elemental body deforming unit 19 not only moves the positions of the basic feature points of the head elemental body avatar to approach the three-dimensional surface shape of the object, but also changes the movement direction and movement of the position of each basic feature point. Record information about distance.
  • the information regarding the movement direction and movement distance corresponds to relative position information indicating how much the feature points in the target facial image have changed with respect to the position of the basic feature points in the element body in the basic state before deformation. Therefore, in comparing the position of the feature point with a partial surface image in the feature point comparison unit 22, which will be described later, it is used as information indicating the position of the feature point in the three-dimensional shape information of the head of the object.
  • the partial surface image database 5 is for storing partial surface images that are surface images regarding a partial region of the three-dimensional surface shape of the head.
  • the partial surface image database 5 includes information on a part of the surface of three-dimensional shape information, such as organs formed on the face such as eyes, nose, mouth, ears, and eyebrows, as well as cheeks, forehead, and chin. It has a function of storing information regarding a partial surface image, which is a surface image regarding a skin area such as a skin area.
  • “Surface image” is an image related to at least one of color/pattern and texture in the three-dimensional surface shape of the head.
  • “Texture” is also expressed as texture, and refers to external features consisting of minute irregularities on the surface.
  • the outer surface forms edges that connect vertices based on information about the vertices and the connection mode between the vertices, and an area surrounded by three or more edges is defined as a surface (polygon).
  • the surface shape is specified by a set of surfaces (mesh). Since such a method approximates a realistic surface shape, it does not include information regarding texture such as minute irregularities on the surface.
  • the information regarding texture is created separately and added to the outer surface formed by modeling processing to create a realistic appearance.
  • the information regarding the texture is specifically information regarding the secondary image added on the outer surface, which forms a pattern that reflects Yin and Yang on a two-dimensional plane, such as a height map or a normal map. By doing so, irregularities, etc. are expressed in a pseudo manner.
  • the surface image in the first embodiment includes both color/pattern and texture on the surface of the head shape, it may be composed of only one of them.
  • the information on each partial surface image stored in the partial surface image database 5 includes image information on the partial surface image, that is, information on the color, pattern, and texture in a certain area, and information on the three-dimensional positions of feature points in the area. and information regarding classification.
  • Information regarding the three-dimensional position of a feature point refers to the deviation of a predetermined feature point from the basic feature point of a general-purpose head element that corresponds to this, that is, the head element. This is information regarding the movement distance and movement direction from the basic feature point in , and is relative position information with the position of the basic feature point of the head body as a reference.
  • “Information regarding classification” is information indicating to which classification the partial surface image belongs among a plurality of classifications determined in advance according to predetermined conditions.
  • the "predetermined conditions" can be arbitrary; for example, the generated three-dimensional avatar may be classified differently depending on the design that is suitable for each service such as a game or SNS. Alternatively, the classification may be different depending on the designer who created the partial surface image. Alternatively, different classifications may be made for partial surface images that match the style of a famous painter, illustrator, animator, manga artist, etc., or the world view of a particular movie, painting, animation, or manga. When classifying partial surface images that are consistent with the style and worldview, a machine learning model trained by giving partial surface images as input layer and classification as training data is used to classify individual partial surface images. It may be possible to classify the information, or it may be classified using other mechanisms. Furthermore, the surface image in the partial surface image can be in any form, for example, it may be realistic, cartoon-like, or illustrative.
  • the partial surface image selection unit 6 selects a surface that matches the three-dimensional surface shape of the object's head generated by the head shape generation unit 4 from among the plurality of partial surface images stored in the partial surface image database 5. This is for selecting a partial surface image consisting of a shape.
  • the partial surface image selection unit 6 includes a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images.
  • a feature point comparison unit 22 that compares the position information with the position information of the feature points of the three-dimensional surface shape of the object's head, and calculates position information about each part of the three-dimensional surface shape of the object's head based on the comparison result.
  • the image determination unit 23 includes an image determination unit 23 that determines which partial surface images match, and an image output unit 24 that outputs the partial surface images determined to match to the composite surface image generation unit 7.
  • the classification selection unit 21 is for selecting classification items regarding a plurality of partial surface images stored in the partial surface image database 5.
  • a configuration is adopted in which a classification item is automatically selected that is suitable for the service in which the generated three-dimensional avatar is used.
  • the configuration may be such that the classification items are selected, or the classification items may be selected according to the user's preferences analyzed based on the browsing history of web pages and the like.
  • the feature point comparison unit 22 is for comparing the feature points of each partial surface image belonging to the selected classification item with the positions of the feature points in corresponding parts of the three-dimensional surface shape of the object's head. It is. Specifically, the feature point comparison unit 22 compares the positional relationship of feature points of both sides that correspond to each other, and provides information as to whether the positions match or not, and if they do not match, how far apart they are. Outputs information regarding whether the In the comparison of the positional relationships by the feature point comparison unit 22, the position information of the feature points on the surface of the three-dimensional shape information of the object's head is based on the position information of the head element body avatar recorded by the element body transformation unit 19. Relative position information containing the direction and amount of movement from the position of the feature point. Further, the feature point comparison unit 22 may compare the position coordinates of all feature points in a correspondence relationship when comparing feature points, but may also compare only some predetermined feature points. good.
  • the image determination unit 23 selects features of the three-dimensional shape information of the object's head for each region of the three-dimensional surface shape of the head from among the plurality of partial surface images. This is for determining a partial surface image that includes feature points that are close to the position of the point. Specifically, the image determination unit 23 determines the partial surface image with the smallest total value of the difference in distance between each feature point to be compared derived by the feature point comparison unit 22. However, it may be possible to determine the partial surface image with the largest number of feature points with matching positions, or it may be possible to determine the partial surface image having a surface shape that matches the surface shape of the relevant portion according to another algorithm. Good too.
  • the image output unit 24 is for outputting the partial surface image determined by the image determination unit 23 to the composite surface image generation unit 7.
  • the image output unit 24 outputs information regarding at least one of color/pattern and texture, the position coordinates of feature points, and the partial surface image regarding which part of the three-dimensional surface shape of the head. It has a function to output information indicating whether the
  • the composite surface image generation unit 7 synthesizes partial surface images related to each part selected by the partial surface image selection unit 6 by arranging them on each part in a manner that matches the three-dimensional surface shape of the head of the object. This is for generating surface images.
  • the composite surface image generation unit 7 includes a surface shape adjustment unit 26 that adjusts the surface shape so that the three-dimensional surface shape of the partial surface image matches the three-dimensional surface shape of the head of the object;
  • An image synthesis unit 27 arranges the partial surface image whose shape has been adjusted on the surface shape of the object's head, and performs interpolation processing on the gap region between the partial surface images that occurs in the three-dimensional surface shape of the object's head. and a complement processing unit 28 that performs the processing.
  • the surface shape adjustment unit 26 changes the surface shape of the partial surface image selected by the partial surface image selection unit 6 so that the feature points match the positions of the feature points in the three-dimensional surface shape of the object's head. It is for.
  • the surface shape adjustment unit 26 also changes the color, pattern, and texture of the surface image in accordance with the movement of the feature points in the partial surface image. In addition to moving the position of the color, pattern, and texture on the feature point, the position of the color, pattern, and texture near the feature point is also changed according to the distance from the feature point to prevent an unnatural surface image. It is desirable to move the
  • the image synthesis unit 27 is for arranging the partial surface image that has been adjusted by the surface shape adjustment unit 26 on a corresponding region of the surface shape of the object's head. Specifically, the image synthesis unit 27 determines that the position of the feature point of each partial surface image that has been subjected to shape adjustment by the surface shape adjustment unit 26 is a feature in the corresponding portion of the three-dimensional surface shape of the object's head. It has the function of arranging points so that their positions match.
  • the complementation processing section 28 detects the case where the partial surface image is not placed on the entire surface and a gap area occurs. Second, it is used to perform complementary processing on the void area. Depending on the shape of each partial surface image to be placed, a gap may occur between adjacent partial surface images, and the color, pattern, and texture of the gap cannot be determined.
  • the complementation processing unit 28 performs complementation processing for setting the color, pattern, and texture of the gap portion to match the color, pattern, and texture of the neighboring partial surface image, so that the gap portion does not look unnatural.
  • the specific content of the interpolation process is to divide the area formed by the main color (for example, skin color) in the partial surface image that exists around the gap into a large number of minute areas, and to extract the color information in each area.
  • a representative color (S model ) is determined by combining some of the frequently appearing colors (for example, the colors ranked 1st to 5th in frequency).
  • S max the maximum value
  • S median median value
  • S mean average value
  • the complementation processing unit 28 performs a blurring process to make the boundary unclear so that the boundary between the gap and the adjacent partial surface image does not stand out unnaturally.
  • the display area of the color/pattern of the void part and the partial surface image are each enlarged near the boundary to form an area where both overlap, and in the overlap area, the respective mixing ratios are set to 10:0 and 9. :1, ...2:8, 1:9, 0:10 to make the boundary unclear.
  • the head avatar generation section 9 synthesizes the three-dimensional surface shape of the head of the object formed by the head shape generation section 4 and the composite surface image generated by the composite surface image generation section 7 to generate a head avatar. This is for generating a partial avatar.
  • a three-dimensional surface image that has the three-dimensional surface shape of the object's head and reflects the contents of the partial surface image selected by the partial surface image selection section 6 is created as a surface image.
  • the head avatar is completed.
  • the torso database 10 is for storing torso avatars.
  • the torso avatars that are stored have various shapes and designs, and there are many different data formats (different formats, uses, definitions of basic feature points, etc.).
  • the various data formats of the torso avatar are the same as the various data formats of the head body avatar, and more specifically, when a specific one is selected as the torso avatar, A partial body avatar is selected.
  • information regarding the position where the head body avatar is connected in the torso avatar, information regarding the direction of the reference axis of the head body avatar at the time of connection, and Information regarding the size of the head element body avatar to be connected is defined.
  • the avatar compositing section 13 uses this information to perform a whole-body avatar compositing process.
  • the torso avatar may be an imitation of a realistic human body, an animal, an imaginary creature, or even an animated character.
  • the shape of the torso avatar may be 8-headed, 2-headed, or any other shape, and may be nude or wearing clothes, shoes, accessories, etc.
  • the hair database 11 is for storing hair avatars.
  • the hair avatar is formed to match the outer surface shape near the top of each head element avatar stored in the head element database 2.
  • the part avatar selection section 12 is for selecting a torso avatar and a hair avatar.
  • the selection algorithm of the torso avatar by the part avatar selection unit 12 may be one that is automatically selected depending on the purpose, or one that is selected based on instructions from the user (for example, the person who is the subject of the facial image).
  • the selection of the hair avatar in the first embodiment it is assumed that a hair avatar whose shape matches that of the object with respect to the two-dimensional shape of the hair region projected in one direction (for example, the front direction) is selected. More specifically, the component avatar selection unit 12 clusters the two-dimensional silhouette of the hair avatar and the two-dimensional silhouette of the user's hair using the k-means method or principal component analysis to determine their mutual similarity. A hair avatar having a silhouette similar to that of the user's hair is selected.
  • the avatar synthesis section 13 is for synthesizing the torso avatar and hair avatar selected by the parts avatar selection section 12 with the head avatar generated by the head avatar generation section 9 to generate an integrated whole body avatar. It is.
  • the avatar synthesis unit 13 includes a position adjustment unit 29 that aligns the torso avatar and the head avatar when composing them, and a position adjustment unit 29 that aligns the torso avatar with the head avatar, and a reference axis of the head avatar and the head avatar at the time of connection set in the torso avatar.
  • a reference axis adjustment section 30 that adjusts the reference axes so that they are in the same direction
  • a size adjustment section 31 that adjusts the size of the head avatar
  • a synthesis processing section 32 that synthesizes the head avatar with the torso avatar and the hair avatar. Be prepared.
  • the position adjustment unit 29 is for moving the relative position of the head avatar with respect to the torso avatar to the position of the connection point in the torso avatar.
  • the positional information of the connecting point of the head avatar is set in the torso avatar in advance, and the position adjustment unit 29 adjusts the relative position so that the connecting point of the torso avatar and the connecting point of the head avatar match, based on this positional information. Adjust the position.
  • the connection point in the torso avatar is a place corresponding to the neck between both shoulders in the case of an avatar imitating a normal person, but is not limited to such a place. If it imitates an imaginary creature such as a synthetic beast, it is possible to provide a connection point at a location that matches the design; for example, a connection point may be provided in the palm of the right hand.
  • the position adjustment unit 29 also adjusts the positions of the head avatar and hair avatar.
  • the hair avatar is formed to match the outer surface shape near the top of the head body avatar, which is the source of the head avatar, and the positional relationship is determined at the vertices of both. With reference to this correspondence, the position adjustment unit 29 performs position adjustment so that the hair avatar is placed at an appropriate position on the head avatar.
  • the reference axis adjustment unit 30 is for adjusting the directional relationship between the reference axis for the head avatar, which is set in advance for the torso avatar, and the reference axis for the head avatar, so that they match.
  • the direction adjustment by the reference axis adjustment unit 30 is performed, for example, by rotation processing on the torso avatar or head avatar.
  • the size adjustment unit 31 is for adjusting the size of the head avatar so that it becomes the size preset for the torso avatar.
  • the torso avatar information regarding the volume of the head avatar part when an integrated whole avatar is synthesized is determined, and if the volume of the head avatar differs from this, the size adjustment unit 31 adjusts the volume of the torso avatar.
  • the head avatar is enlarged or reduced so that it matches the setting information.
  • the position adjustment section 29 performs alignment
  • the reference axis adjustment section 30 matches the directions of the reference axes of the head avatar and the body avatar
  • the size adjustment section 31 adjusts the size of the head avatar. This is for compositing the completed head avatar, torso avatar, and hair avatar.
  • the synthesis processing unit 32 combines mesh loops (referring to closed curves formed by vertices located at the ends of the connection points) formed at the respective connection points of the head avatar and the body avatar. By doing so, a bond is formed in the surface shape portion.
  • a new line segment is formed between the vertices of the head avatar and the vertices of the body avatar for which the correspondence relationship is defined in advance, and the surface formed by the newly formed line segment is made into a partial polygon. It is also possible to construct a new mesh structure. Further, the head avatar and the torso avatar may be combined by integrating vertices with defined correspondences into one vertex. The same applies to the combination of the head avatar and the hair avatar.
  • the compositing processing unit 32 also performs a merging process on their skeletal structures (bones). Regarding the bone joining process, it is desirable to connect the head bone or a similar bone that is generally provided in a head avatar and the neck bone or a similar bone that is generally provided in a torso avatar in a format that sets a parent-child relationship. .
  • the feature point extraction section 14 extracts feature points from the input facial image (step S101), and the positional relationship derivation section 15 derives the three-dimensional positional relationship of the feature points (step S102).
  • the coordinate conversion unit 16 converts the position information of the feature points extracted from the facial image and the position information of the basic feature points of the head body avatar to a specific feature point (in this embodiment, a point corresponding to the tip of the nose). ) is converted into position coordinates of a coordinate system with the origin as the origin (step S103), and each feature point extracted from the facial image matches the position of the basic feature point of the corresponding head body avatar.
  • the position is adjusted (step S104), and if shearing of position coordinates occurs during feature point extraction, correction is performed (step S105).
  • the scaling unit 18 matches the size of the space formed by the feature points extracted from the facial image with the space formed by the basic feature points of the corresponding head body avatar. Then, the position coordinates are changed (step S106). Then, the element body deforming unit 19 converts the position coordinates of the basic feature points of the head element avatar into the positions of the corresponding feature points of the face image in a state where the width of the space formed by the feature points is approximately the same. A process is performed to move the position coordinates to the coordinates (step S107), and finally, a process is performed to return the position coordinates of all vertices of the head element body avatar to the original coordinate system by the coordinate conversion unit 16 (step S108). The operation of the head shape generation unit 4 is thus completed, and the three-dimensional shape information of the head of the object, which has the characteristics of the object derived from the facial image of the object, is completed.
  • a classification item is selected by the classification selection unit 21 (step S201), and feature points of a partial surface image belonging to the selected classification item are compared with feature points that have a corresponding relationship in the three-dimensional surface shape of the object's head.
  • a comparison process is performed regarding the positional relationship (step S202).
  • the image determination unit 23 selects a partial surface image that has the feature point closest to the feature point in the three-dimensional shape information of the object's head (step S203).
  • the selected partial surface image is processed by the surface shape adjustment unit 26 to move its feature points so as to match the three-dimensional surface shape of the object's head (step S204), and the partial surface image is , is placed on the corresponding area in the three-dimensional surface shape of the head of the target part by the image synthesis unit 27 (step S205), and after performing complementation processing as necessary (step S206), the head of the target part is A composite surface image is generated in which the selected partial surface images are arranged in each region on the three-dimensional surface shape of the surface.
  • the position adjustment unit 29 adjusts the positions of the connection points set for the head avatar and the connection points set for the torso avatar to match (step S301), and adjusts the positions of the head avatar and hair avatar. (Step S302).
  • the reference axis adjustment unit 30 performs adjustment so that the direction of the reference axis for the head avatar set in advance for the torso avatar matches the direction of the reference axis set for the head avatar (step S303).
  • the size adjustment unit 31 performs an enlargement/reduction process so that the size of the head avatar becomes the size set in advance for the body avatar (step S304).
  • the head avatar, torso avatar, and hair avatar are combined by the synthesis processing unit 32 (step S305), thereby completing an integrated whole-body avatar.
  • the three-dimensional avatar generation device generates a three-dimensional surface shape of the head of a three-dimensional avatar based on a facial image of a target object, and a surface image to be placed on the three-dimensional surface shape.
  • a composite surface image is used, which is obtained by combining separately prepared partial surface images having shapes close to the three-dimensional surface shape.
  • the three-dimensional avatar generation device reflects the physical characteristics of the object (separated eyes, large mouth, high nose, etc.) while also generating specific colors and A partial surface image that differs from the real thing in the pattern is used, thereby making it possible to generate a three-dimensional avatar that has the characteristics of the person but also has a surface image that differs from the appearance of the person.
  • Such three-dimensional avatars give acquaintances who know the person's appearance a friendly impression with the person's characteristics, but third parties cannot guess the person's specific appearance, so they are used by an unspecified number of people. It has the advantage that the user's privacy can be appropriately protected even when used in participating computer games, SNS, etc.
  • the three-dimensional avatar generation device selects partial surface images of a predetermined classification from among a large number of partial surface images according to a predetermined classification to generate a composite surface image.
  • Embodiment 2 Next, a three-dimensional avatar generation device according to a second embodiment will be described.
  • components with the same names and the same symbols as in Embodiment 1 shall exhibit the same functions as the components in Embodiment 1, unless otherwise specified. .
  • the three-dimensional avatar generation device has, in addition to the configuration shown in the first embodiment, a non-fungible token and a partial surface image that correspond to the generated three-dimensional avatar.
  • the token generation unit 33 generates non-fungible tokens that correspond to individual partial surface images recorded in the database, and the composition ratio of the partial surface image to the original target person's facial image in the generated three-dimensional avatar.
  • a combination ratio derivation unit 34 that derives the combination ratio
  • an information output unit 35 that outputs information regarding the partial surface images constituting the three-dimensional avatar including the derived combination ratio, and distribution of profits obtained by the three-dimensional avatar based on the combination ratio.
  • the token generation unit 33 is for generating non-fungible tokens corresponding to three-dimensional avatars and individual partial surface images.
  • a “non-fungible token” is a so-called NFT (Non-Fungible-Token), that is, a token that has unique data and cannot be replaced with other tokens, such as Ethereum (registered trademark). It is published based on the standard ERC721 or other predetermined standards.
  • the transaction history of issued non-fungible tokens is recorded in a distributed ledger stored in a distributed network corresponding to each token.
  • block chain technology is used as a distributed ledger management technology using a distributed network.
  • “Blockchain technology” is a technology that uses cryptographic technology to synchronize data between multiple computers that make up a distributed network.
  • a “distributed ledger” is one in which each block consists of a collection of records agreed upon among multiple computers and information for connecting with other blocks (information about the previous block).
  • a distributed ledger is constructed by connecting multiple blocks.
  • the distributed ledger management technology in Embodiment 2 is not necessarily limited to blockchain technology, and other technologies may be used as long as they perform similar functions.
  • the distributed ledger for each token records the transaction history regarding the ownership name of the three-dimensional avatar or partial surface image that corresponds to the token.
  • the second embodiment by referring to the information recorded in the distributed ledger regarding each token, the current holding name of the three-dimensional avatar and partial surface image corresponding to each token can be determined.
  • a distributed ledger corresponding to a three-dimensional avatar is referred to as a first distributed ledger, and a distributed network in which the first distributed ledger is stored is referred to as a first distributed network;
  • the distributed ledger corresponding to the partial surface image is referred to as a second distributed ledger, and the distributed network in which the second distributed ledger is stored is referred to as a second distributed network.
  • type networks can be composed of the same network.
  • a plurality of second distributed ledgers exist according to the number of partial surface images.
  • information regarding the partial surface image used for the 3D avatar is recorded in the distributed ledger.
  • information on the overall three-dimensional surface shape of the object's head in the three-dimensional avatar which is derived by the synthesis ratio deriving unit 34 described later, is provided.
  • the composite ratio of the individual partial surface images is recorded.
  • the identifier of each token is linked one-to-one with the identification information of the corresponding three-dimensional avatar and partial surface image. More directly, the identifier of the token and the identification information of the three-dimensional avatar or the like may have the same content, but there is no problem in an embodiment in which a correspondence relationship is set between the identifiers of the token and the identification information of the three-dimensional avatar or the like.
  • the synthesis ratio derivation unit 34 is for deriving the synthesis ratio of partial surface images in the three-dimensional surface shape of the head for the three-dimensional avatar synthesized by the avatar synthesis unit 13.
  • the composition ratio derivation unit 34 derives the composition ratio based on the ratio of the area of the area replaced by a specific partial surface image to the total surface area of the head surface excluding the hair part of the three-dimensional avatar. I am planning to do so.
  • the synthesis ratio may be derived individually for all partial surface images, in the second embodiment, it is derived collectively for each partial surface image belonging to the same ownership name.
  • the information output unit 35 sends information regarding the partial surface image used for the three-dimensional avatar, including the combination ratio derived by the combination ratio derivation unit 34 and identification information of the partial surface image used for combination, to the distributed network. It is for output. Specifically, the information output unit 35 is directly or indirectly connected to a decentralized network regarding tokens that correspond to three-dimensional avatars, and outputs information such as the synthesis ratio to the decentralized network. It has the function of Although the specific configuration of the information output unit 35 can be arbitrary as long as it can realize such a function, in the second embodiment, the information output unit 35 is configured to handle transactions and secrets whose contents include information regarding the composition ratio, etc. The mode is such that a digital signature generated using the key is output.
  • the electronic signature is decrypted in the distributed network and when it is confirmed that the transaction is genuine, the contents of the transaction are recorded in the distributed ledger.
  • the information output unit 35 can record the change in the holding name in the information output unit 35. It is also possible to output ownership change information consisting of a transaction and an electronic signature using a private key to a distributed network that stores a distributed ledger regarding each token.
  • the distribution revenue calculation unit 36 calculates the amount of revenue generated when a 3D avatar is used as a character in commercial game software, and is calculated by the person who owns the 3D avatar and the partial surface image that makes up the 3D avatar. This is to calculate the distribution ratio to the person who holds the name.
  • the distribution revenue calculation unit 36 is directly or indirectly connected to the distributed network, and refers to the transaction history recorded in the distributed ledger regarding non-fungible tokens corresponding to the three-dimensional avatar and each partial surface image. Then, the person who currently owns the three-dimensional avatar, etc. is determined.
  • the distributed revenue calculation unit 36 calculates the 3D avatar based on the combination ratio derived by the combination ratio derivation unit 34, the identification information of the used partial surface image, the ownership name of the 3D avatar and the partial surface image, The distribution ratio and recipients of the profits generated will be calculated.
  • the specific method of calculating the distribution ratio the simplest structure is to derive it by multiplying the total revenue by the composite ratio, but other methods are possible as long as it is calculated using the composite ratio. But that's fine.
  • the distributed revenue calculation unit 36 weights each region on the head surface according to its importance, and for example, for a partial surface image including the eyes, a predetermined value that is greater than 1 is multiplied by the synthesis ratio.
  • the distribution ratio may be the result of multiplying the coefficients.
  • the composition ratio derivation unit 34 derives the composition ratio
  • the distribution revenue calculation unit 36 calculates the revenue distribution ratio based on the derived composition ratio
  • the three-dimensional avatar It has the advantage of being able to easily and fairly distribute profits when profits are earned.
  • the synthesis ratio is derived using a certain algorithm (for example, in the second embodiment, the synthesis ratio is calculated by calculating the ratio of the area occupied by the partial surface image to the total surface area of the head surface excluding the hair part of the three-dimensional avatar). ), and the calculation of the revenue distribution ratio is also the same, so it is possible to easily realize fair revenue distribution without making arbitrary judgments in the series of operations.
  • non-fungible tokens are generated corresponding to each of the three-dimensional avatar and the partial surface image, and distribution revenue is calculated based on the recorded contents of the distributed ledger provided corresponding to each token.
  • the structure is such that the division 36 adopts the revenue distribution ratio.
  • the three-dimensional avatar generation device does not generate a composite surface image by simply arranging the selected partial surface images; It has a configuration that generates a composite surface image by combining the facial image of the target object.
  • the 3D avatar generation device includes a 3D image generation unit 37 that converts a 2D facial image input through the facial image input unit 1 into 3D;
  • a synthetic surface image generation unit 38 superimposes the facial image converted into a dimensional image and the partial surface image selected by the partial surface image selection unit 6, and a synthesis ratio is derived in consideration of the superposition mode etc. by the synthetic surface image generation unit 38.
  • a combination ratio deriving unit 39 is provided.
  • the three-dimensional image generation unit 37 is for converting the facial image of the two-dimensional object input through the facial image input unit 1 into a three-dimensional image. Specifically, the three-dimensional image generation unit 37 moves the two-dimensional feature points of the facial image so as to correspond to the three-dimensional positional relationship derived by the positional relationship derivation unit 15, and also moves the two-dimensional feature points of the facial image to correspond to the three-dimensional positional relationship derived by the positional relationship derivation unit 15. It has a function of making the facial image three-dimensional by moving the position of the color, pattern, etc. in the surrounding area according to the movement of the feature point. Note that parts that are not represented in the two-dimensional facial image (for example, skin on the sides and back) may be left blank, or may be supplemented in the same manner as the complementation processing section 28. good.
  • the composite surface image generation unit 38 generates a composite surface image by combining the partial surface image selected by the partial surface image selection unit 6 and the facial image made three-dimensional by the three-dimensional image generation unit 37. It is for. Specifically, in addition to the complementary processing unit 28 in the second embodiment, the composite surface image generation unit 38 includes a surface shape adjustment unit 40 that changes the shape of not only the partial surface image but also the three-dimensional facial image.
  • the apparatus includes an image synthesis section 41 that synthesizes the partial surface image and the facial image that have undergone surface shape adjustment processing by superimposing them at a predetermined mixing ratio.
  • the surface shape adjustment section 40 has a function of adjusting not only the surface shape of the partial surface image like the surface shape adjustment section 26 in the first and second embodiments, but also the surface shape of the facial image. Specifically, when the positions of a predetermined feature point on the partial surface image and the corresponding feature point on the face image are different, the surface shape adjustment unit 40 adjusts the feature point so that it matches the position of one of them. In addition to adjusting the position of the other, it has a function of moving the positions of both feature points to match their positions. The amount of movement of both feature points in the surface shape adjustment unit 40 may be determined by the user's selection, or the amount of movement of both feature points may be determined based on the results of deep learning etc. so that the shape becomes more natural or more characteristic.
  • the amount of movement of the feature point may also be determined. Furthermore, the amount of movement of both feature points may be set to a uniform value for all partial surface images and facial images, or may be set to a different value for each partial surface image, or may be set to a different value for each feature point in the partial surface image. It may be a different value. Note that the content of determining the amount of movement of both feature points includes a case where only the feature points of the surface shape of the partial surface image are moved, and in this case, the surface shape adjustment unit 40 The same processing as that of the shape adjustment section 26 will be performed.
  • the surface shape adjustment unit 40 moves the position of the feature point on the facial image
  • the surface shape adjustment unit 40 provides the head shape generation unit 4 and the synthesis ratio derivation unit 39 with respect to the position change information (for example, movement direction and movement distance). Output against.
  • the head shape generation unit 4 adjusts the position of the feature point in the three-dimensional shape information of the object's head so that it matches the position of the adjusted feature point based on the variation information.
  • the combination ratio derivation unit 39 derives the combination ratio while also referring to the fluctuation information.
  • the image synthesis unit 41 does not simply arrange the partial surface image on the three-dimensional surface shape of the object like the image synthesis unit 27 in the first and second embodiments, but instead arranges the partial surface image and the partial surface image. It has a function of arranging facial images (three-dimensionalized by the three-dimensional image generation unit 37) corresponding to the area in a superimposed state at a predetermined mixing ratio. Specifically, the image synthesis unit 41 mixes the partial surface image and the facial image, which have the same shape by adjusting the position of feature points by the surface shape adjustment unit 40, at a predetermined mixing ratio (for example, 8 for the partial surface image, 8 for the facial image, etc.).
  • a predetermined mixing ratio for example, 8 for the partial surface image, 8 for the facial image, etc.
  • the image synthesis unit 41 can also perform image synthesis with a mixing ratio of the partial surface image and the facial image of 10:0; however, in this case, the image synthesis unit 41 may The same processing as the combining section 27 will be performed.
  • the image synthesis section 41 outputs information regarding the mixing ratio to the synthesis ratio derivation section 39.
  • the combination ratio derivation unit 39 derives the combination ratio while considering the information regarding the mixture ratio.
  • the synthesis ratio derivation unit 39 is for deriving the synthesis ratio of partial surface images on the head surface of the generated three-dimensional avatar.
  • the facial image and three-dimensional surface shape of the subject are transformed not only by simply arranging the partial surface image, but also by moving the feature points according to the shape of the partial surface image. Since a configuration is adopted in which a surface image and a facial image are superimposed and displayed at a predetermined mixing ratio, the synthesis ratio deriving unit 39 calculates the degree of deformation of the facial image and three-dimensional surface shape and the mixture of the partial surface image and the facial image.
  • the composite ratio is derived by also referring to the ratio.
  • the combination ratio deriving unit 38 determines the degree of deformation of the three-dimensional surface shape in the region of the target partial surface image, for example, by multiplying the average moving distance of the feature points in the region by a predetermined coefficient. , derive the synthesis ratio in this modification.
  • the derived combination ratio is output to the distributed network as in the second embodiment, and the distribution revenue calculation unit 36 calculates the revenue distribution ratio based on the combination ratio.
  • the synthesis ratio deriving unit 39 derives the synthesis ratio by referring not only to the area ratio but also to the degree of movement of feature points in the surface shape of the object's head and the mixing ratio with the partial surface image.
  • the modified example also has the advantage that it is possible to derive a synthesis ratio that appropriately evaluates the degree of contribution of a partial surface image in a wide variety of synthetic surface images, and that it is possible to calculate an appropriate and fair revenue distribution ratio.
  • the facial image of the object inputted through the facial image input section 1 is a two-dimensional image, but there is no need to limit it to such an aspect.
  • a three-dimensional facial image may also be used.
  • a three-dimensional facial image it becomes possible to omit the processing by the positional relationship derivation unit 15 and the processing by the three-dimensional image generation unit 37, and a three-dimensional avatar generation device is realized with a simpler configuration. There is also the advantage of being able to do so.
  • an avatar consisting only of the head is used as the head body avatar, but an avatar including not only the head but also a torso portion may be used as the head body avatar.
  • a head body avatar consisting only of a head it may be configured such that the head avatar becomes an independent avatar without being combined with a body avatar or the like.
  • a head avatar formed integrally with the hair may be generated.
  • the moving means in the claims does not need to be interpreted as being limited to aspects such as the coordinate transformation unit 16.
  • the three-dimensional positional relationship of the feature points of the face image is derived on the same coordinate system as the vertices of the head element avatar, without coordinate transformation. It is also possible to move the vertices of the partial element body avatar. Furthermore, regarding the movement/rotation of the feature points of the face image, the basic feature points of the head body avatar, or the group of vertices, it is possible to move not only one of them, but also both of them. .
  • the present invention easily generates a high-quality three-dimensional avatar that reflects the characteristics of a real person, protects the privacy of the person, and conforms to the worldview set by the service used. It can be used as a technology to

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

[Problem] To provide a device, a method, and a program with which it is possible to easily generate a high-quality three-dimensional avatar that reflects the features of an actual person or the like, while protecting the privacy of the person or the like. [Solution] Embodiment 1 relates to a three-dimensional avatar generation device that comprises: a face image input unit 1 for inputting a face image of an object; a head element database 2 for storage concerning head element avatars; a partial surface image database 5 that stores partial surface images that are surface images related to partial regions of the surface of three-dimensional shapes of heads; a partial surface image selection unit 6 that selects a partial surface image matching the three-dimensional shape of the head of the object; a synthesized surface image generation unit 7 that generates, on the basis of the selected partial surface image, a synthesized surface image that is a surface image on the surface of the three-dimensional shape of the head of the object; and a head avatar generation unit 9 that generates a head avatar on the basis of the three-dimensional shape information of the head and the synthesized surface image.

Description

3次元アバター生成装置、3次元アバター生成方法及び3次元アバター生成プログラム3D avatar generation device, 3D avatar generation method, and 3D avatar generation program
 本発明は、人物等を対象とした撮像データに基づく3次元コンピュータグラフィックスデータを作成する3次元アバター生成装置等に関するものである。 The present invention relates to a three-dimensional avatar generation device, etc. that creates three-dimensional computer graphics data based on imaging data of a person or the like.
 近年、コンピュータ等の電子計算機における処理能力向上等に伴い、3次元のコンピュータグラフィックスを活用したゲーム、動画コンテンツの活用が進められている。すなわち、かつては2次元アニメーションで表現されていた著名なコンピュータゲームのシリーズが3DCGを用いてリメイクされ、また、古典的な童話を原作とした映画が3DCGを用いて作製されて世界的に大ヒットしており、今や3次元コンピュータグラフィックスは、動画コンテンツにおける表現態様において、事実上の標準として広く活用されている。 In recent years, with improvements in the processing power of computers and other electronic computers, the use of games and video content that utilizes three-dimensional computer graphics is progressing. In other words, famous computer game series that were once expressed in 2D animation have been remade using 3DCG, and movies based on classic fairy tales have been made using 3DCG and become huge hits worldwide. Three-dimensional computer graphics is now widely used as a de facto standard in terms of expression in video content.
 3次元コンピュータグラフィックスの発展形として、実在の人物等をモデルとした写実的な3次元コンピュータグラフィックスを利用することも提案されている。これは、コンピュータゲーム等のキャラクターとしてゲームをプレイする人物、あるいは共同でプレイする相手方の人物等を模したアバター(分身としてのキャラクター)を使用するものであって、アバターを使用することにより作品世界への没入感が向上し、よりリアルな体験としてゲームを楽しむことが可能となる。 As an advanced form of three-dimensional computer graphics, it has also been proposed to use realistic three-dimensional computer graphics modeled after real people. This uses an avatar (character as an alter ego) that imitates the person playing the game as a character in a computer game, or the person of the partner with whom the game is played. This improves the sense of immersion and allows you to enjoy the game as a more realistic experience.
 特許文献1、2は、いずれもヘッドマウントディスプレイを使用して仮想空間を表現するコンピュータゲームにおいて、プレイヤー自身、あるいは共同プレイヤーの姿を模したアバターを使用した例について開示している。 Patent Documents 1 and 2 both disclose examples in which an avatar imitating the player himself or a co-player is used in a computer game that uses a head-mounted display to represent a virtual space.
特開2019-012509号公報JP 2019-012509 Publication 特開2019-139673号公報JP 2019-139673 Publication
 しかし、特許文献1、2のいずれにおいても、プレイヤー等の人物を模したアバターをどのように作成するかについては開示されていない。特に、コンピュータゲーム等は現実世界を模した写実的な世界観のものだけでなく、特定の創作世界を舞台にしたもの(たとえば絵画的、アニメーション的に世界を表現したもの)も存在し、後者にて使用するアバターはプレイヤー等の人物の特徴を維持しつつも作品の世界観に合わせたデザインとすることが好ましいところ、特許文献1、2にはそのようなアバターを作成する技術については何ら開示がない。 However, neither of Patent Documents 1 and 2 discloses how to create an avatar imitating a person such as a player. In particular, computer games and the like are not only those with a realistic world view that imitates the real world, but also those that are set in a specific creative world (for example, those that express the world in paintings or animations), and the latter It is preferable that the avatar used in the game be designed to match the worldview of the work while maintaining the characteristics of the person such as the player, but Patent Documents 1 and 2 do not disclose any technology for creating such an avatar. No disclosure.
 理屈上は、一般的な3次元コンピュータグラフィックスと同様の手法によってユーザのアバターを作成することは可能である。しかしながら、かかる方法にてアバターを作成すること現実的でない。なぜならば、3次元コンピュータグラフィックスによるキャラクター作成は、まず人体における皮膚等に相当するメッシュ構造についてモデリング作業等を行い、人体における骨格に相当するスケルトン構造の構築作業を行い、さらに、人体における筋肉、腱に相当する機能すなわち骨格と皮膚との関連付けであるスキニング作業を行うという様々な作業を行う必要があるためである。 In theory, it is possible to create a user's avatar using a method similar to general three-dimensional computer graphics. However, it is not practical to create an avatar using such a method. This is because character creation using 3D computer graphics involves first modeling a mesh structure that corresponds to the skin of a human body, then constructing a skeleton structure that corresponds to the skeleton of a human body, and then building a skeleton structure that corresponds to the skeleton of a human body. This is because it is necessary to perform various tasks such as skinning, which is a function corresponding to a tendon, that is, an association between the skeleton and the skin.
 これらの作業のうち、たとえばモデリング作業については、複数方向からユーザの姿形を撮影した映像を合成して立体形状を生成し、これに基づきある程度は自動的に表面形状に関するモデリング作業等を行うことは不可能ではない。しかしながら、ユーザの姿形(表面)の画像のみからは内部構造に関する作業であるスケルトンの構築作業及びスキニング作業を自動化することはできず、これらの作業は熟練した技術者による長時間の手作業によって行うほかない。これだけの手間を要する事実に鑑みて、個々のユーザのアバターについて、一般的な3次元コンピュータグラフィックスによるキャラクター作成作業を行うことは、現実的でない。 Among these tasks, for example, modeling tasks involve synthesizing images of the user's figure from multiple directions to generate a three-dimensional shape, and automatically modeling the surface shape to some extent based on this. is not impossible. However, it is not possible to automate the skeleton construction and skinning operations, which are tasks related to the internal structure, from images of the user's figure (surface) alone, and these tasks require long hours of manual labor by skilled engineers. I have no choice but to do it. In view of the fact that such a time-consuming process is required, it is not realistic to create a character using general three-dimensional computer graphics for each user's avatar.
 また、ユーザの姿形を撮影した映像をそのまま合成等して3次元アバターを生成した場合、当然のことながらユーザの姿形が忠実に再現されるところ、そのような3次元アバターは利用するサービスの世界観に必ずしも適合しない。また、ユーザの姿形を忠実に再現した3次元アバターを不特定多数のユーザが参加するコンピュータゲームやSNS等で使用することは、プライバシー保護の観点からは問題となるおそれがある。 In addition, if a 3D avatar is generated by directly compositing videos of the user's figure, the user's figure will of course be faithfully reproduced, but such 3D avatars will be created by the services used. does not necessarily fit into the worldview of Furthermore, using a three-dimensional avatar that faithfully reproduces the user's figure in computer games, SNS, etc. in which an unspecified number of users participate may pose a problem from the viewpoint of privacy protection.
 本発明は上記の課題に鑑みてなされたものであって、実在の人物等の特徴を反映しつつも当該人物等のプライバシーを保護し、かつ使用されるサービス等にて設定される世界観に適合した高品質な3次元アバターを容易に生成する装置、方法及びプログラムを提供することを目的とする。 The present invention has been made in view of the above-mentioned problems, and is designed to reflect the characteristics of real people, protect the privacy of those people, and adapt to the world view set by the services used. It is an object of the present invention to provide a device, method, and program for easily generating a suitable, high-quality three-dimensional avatar.
 上記目的を達成するため、請求項1にかかる3次元アバター生成装置は、3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成装置であって、対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成手段と、頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択手段と、前記部分表面画像選択手段によって選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成手段と、前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成手段と、を備えたことを特徴とする。 In order to achieve the above object, the three-dimensional avatar generation device according to claim 1 provides a three-dimensional surface shape of an object expressed using a group of vertices that define a positional relationship in a three-dimensional space, and a three-dimensional surface shape that is A three-dimensional avatar generation device that generates a three-dimensional avatar including a surface image displayed on the object, the positional relationship of feature points corresponding to each element constituting the head shape among the vertices based on the facial image of the object. a three-dimensional shape generating means for generating a three-dimensional surface shape of the head of a specified object; and a surface image regarding all or a part of the surface shape of the head; The positional relationship between two or more feature points among the plurality of partial surface images in which the positional relationship of the feature points corresponding to each element constituting the head shape has been specified is the three-dimensional image of the head that has a corresponding relationship. A partial surface image selection means for selecting a partial surface image that matches the positional relationship between two or more feature points of the surface shape, and a partial surface image selected by the partial surface image selection means are synthesized. and a head avatar generator that generates a head avatar based on the three-dimensional surface shape of the head and the composite surface image. It is characterized by having a means.
 また、上記目的を達成するため、請求項2にかかる3次元アバター生成装置は、上記の発明において、前記合成表面画像生成手段は、前記部分表面画像選択手段によって選択された前記部分表面画像と、前記対象物の顔面画像のうち前記部分表面画像の領域に対応した部分の画像とを合成することにより前記合成表面画像を生成することを特徴とする。 Further, in order to achieve the above object, the three-dimensional avatar generation device according to claim 2 is provided, in the above invention, wherein the composite surface image generation means selects the partial surface image selected by the partial surface image selection means; The method is characterized in that the combined surface image is generated by combining the facial image of the object with an image of a portion corresponding to the area of the partial surface image.
 また、上記目的を達成するため、請求項3にかかる3次元アバター生成装置は、上記の発明において、前記合成表面画像における前記頭部の3次元表面形状上における前記部分表面画像の合成比率を導出する合成比導出手段と、生成した3次元アバターと対応関係にある非代替性トークンに関して生成され前記3次元アバターの保有名義の変動情報、前記合成表面画像を構成する部分表面画像の識別情報及び前記合成比率が記録される第1の分散型台帳が保存される第1の分散型ネットワークに対し少なくとも前記合成比率を出力する情報出力手段と、前記第1の分散型台帳に記録された前記3次元アバターの保有名義、前記識別情報及び前記合成比率と、前記部分表面画像と対応関係にある非代替性トークンに関して生成され前記部分表面画像の保有名義の変動情報が記録される第2の分散型台帳に記録された前記部分表面画像の保有名義とに基づき、生成された前記3次元アバターについて生ずる収益に関する前記3次元アバター及び前記部分表面画像の保有名義間における分配比率を算出する分配収益算出手段と、を備えたことを特徴とする。 Further, in order to achieve the above object, the three-dimensional avatar generation device according to claim 3, in the above invention, derives a synthesis ratio of the partial surface image on the three-dimensional surface shape of the head in the synthetic surface image. a composition ratio deriving means for deriving a composition ratio, variation information of the ownership name of the three-dimensional avatar generated with respect to a non-fungible token corresponding to the generated three-dimensional avatar, identification information of a partial surface image constituting the composite surface image, and the information output means for outputting at least the synthesis ratio to a first distributed network in which a first distributed ledger in which the synthesis ratio is recorded; and the three-dimensional information recorded on the first distributed ledger. a second distributed ledger that is generated regarding the ownership name of the avatar, the identification information, the synthesis ratio, and a non-fungible token in correspondence with the partial surface image, and records fluctuation information of the ownership name of the partial surface image; a distribution revenue calculating means for calculating a distribution ratio between the ownership name of the three-dimensional avatar and the partial surface image regarding the revenue generated for the generated three-dimensional avatar based on the ownership name of the partial surface image recorded in the It is characterized by having the following.
 また、上記目的を達成するため、請求項4にかかる3次元アバター生成方法は、3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成方法であって、対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成ステップと、頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択ステップと、前記部分表面画像選択ステップにおいて選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成ステップと、前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成ステップと、を含むことを特徴とする。 In addition, in order to achieve the above object, the three-dimensional avatar generation method according to claim 4 includes a three-dimensional surface shape of an object expressed using a group of vertices that define a positional relationship in a three-dimensional space, and a three-dimensional surface A three-dimensional avatar generation method for generating a three-dimensional avatar including a surface image displayed on a shape, the method comprising: generating a three-dimensional avatar including a surface image displayed on a shape, the method comprising: generating feature points corresponding to each element constituting the head shape among the vertices based on a facial image of the object; a three-dimensional shape generation step of generating a three-dimensional surface shape of the head of the object whose positional relationship has been specified; and a surface image regarding all or a part of the surface shape of the head; Among the plurality of partial surface images in which the positional relationships of feature points corresponding to each element constituting the head shape in the region have been specified, the positional relationships between two or more feature points of the head that have a corresponding relationship are determined. a partial surface image selection step of selecting a partial surface image that matches the positional relationship between two or more feature points of the three-dimensional surface shape; and combining the partial surface images selected in the partial surface image selection step. a synthetic surface image generation step of generating a synthetic surface image that is a surface image that conforms to a three-dimensional surface shape of the head; and a head generating a head avatar based on the three-dimensional surface shape of the head and the synthetic surface image. The method is characterized by including an avatar generation step.
 また、上記目的を達成するため、請求項5にかかる3次元アバター生成方法は、上記の発明において、前記合成表面画像における前記頭部の3次元表面形状上における前記部分表面画像の合成比率を導出する合成比導出ステップと、生成した3次元アバターと対応関係にある非代替性トークンに関して生成され前記3次元アバターの保有名義の変動情報、前記合成表面画像を構成する部分表面画像の識別情報及び前記合成比率が記録される第1の分散型台帳が保存される第1の分散型ネットワークに対し少なくとも前記合成比率を出力する情報出力ステップと、前記第1の分散型台帳に記録された前記3次元アバターの保有名義、前記識別情報及び前記合成比率と、前記部分表面画像と対応関係にある非代替性トークンに関して生成され前記部分表面画像の保有名義の変動情報が記録される第2の分散型台帳に記録された前記部分表面画像の保有名義とに基づき、生成された前記3次元アバターについて生ずる収益に関する前記3次元アバター及び前記部分表面画像の保有名義間における分配比率を算出する分配収益算出ステップと、を含むことを特徴とする。 Further, in order to achieve the above object, the three-dimensional avatar generation method according to claim 5, in the above invention, derives a synthesis ratio of the partial surface image on the three-dimensional surface shape of the head in the synthetic surface image. a synthesis ratio deriving step of deriving a synthesis ratio, variation information of the ownership name of the three-dimensional avatar generated regarding the non-fungible token corresponding to the generated three-dimensional avatar, identification information of the partial surface image constituting the synthetic surface image, and the an information output step of outputting at least the composite ratio to a first distributed network in which a first distributed ledger in which the composite ratio is recorded; and the three-dimensional information recorded in the first distributed ledger. a second distributed ledger that is generated regarding the ownership name of the avatar, the identification information, the synthesis ratio, and a non-fungible token in correspondence with the partial surface image, and records fluctuation information of the ownership name of the partial surface image; a distribution revenue calculation step of calculating a distribution ratio between the ownership name of the three-dimensional avatar and the partial surface image regarding the revenue generated for the generated three-dimensional avatar, based on the ownership name of the partial surface image recorded in the It is characterized by including the following.
 また、上記目的を達成するため、請求項6にかかる3次元アバター生成プログラムは、対象物の顔面画像に基づき、頂点群によって表現した対象物の3次元表面形状および3次元表面形状上に表示された表面画像を含む3次元アバターをコンピュータに生成させる3次元アバター生成プログラムであって、前記コンピュータに対し、3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成装置であって、対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成機能と、頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択機能と、前記部分表面画像選択機能によって選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成機能と、前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成機能と、を実現させることを特徴とする。 Further, in order to achieve the above object, the three-dimensional avatar generation program according to claim 6 is configured to display a three-dimensional surface shape and a three-dimensional surface shape of the target object expressed by a group of vertices based on the facial image of the target object. A three-dimensional avatar generation program that causes a computer to generate a three-dimensional avatar including a surface image, the computer having the computer generate a three-dimensional surface of an object expressed using a group of vertices that define positional relationships in three-dimensional space. A three-dimensional avatar generation device that generates a three-dimensional avatar including a shape and a surface image displayed on the three-dimensional surface shape, the elements constituting the head shape among the vertices based on the facial image of the object. a three-dimensional shape generation function that generates a three-dimensional surface shape of the head of an object in which the positional relationship of feature points corresponding to The positional relationship between two or more feature points is determined from among the plurality of partial surface images in which the positional relationship of the feature points corresponding to each element constituting the head shape in the whole or part of the region is specified. a partial surface image selection function that selects a partial surface image that matches the positional relationship between two or more feature points of the three-dimensional surface shape of the head, and a partial surface image selected by the partial surface image selection function. A composite surface image generation function that generates a composite surface image that is a surface image that conforms to the three-dimensional surface shape of the head by combining; A head avatar generation function for generating an avatar.
 本発明によれば、実在の人物等の特徴を反映しつつも当該人物等のプライバシーを保護し、かつ使用されるサービス等にて設定される世界観に適合した高品質な3次元アバターを容易に生成するという効果を奏する。 According to the present invention, it is easy to create a high-quality three-dimensional avatar that reflects the characteristics of a real person, protects the privacy of the person, and conforms to the world view set by the service used. This has the effect of generating
実施の形態1にかかる3次元アバター生成装置の構成を示す模式図である。1 is a schematic diagram showing the configuration of a three-dimensional avatar generation device according to a first embodiment; FIG. 実施の形態における頭部形状生成部4の動作について説明するためのフローチャートである。It is a flowchart for explaining the operation of the head shape generation unit 4 in the embodiment. 実施の形態における部分表面画像選択部6及び合成表面画像生成部7の動作について説明するためのフローチャートである。It is a flowchart for explaining the operation of the partial surface image selection section 6 and the composite surface image generation section 7 in the embodiment. 実施の形態におけるアバター合成部13の動作について説明するためのフローチャートである。It is a flowchart for explaining the operation of the avatar composition unit 13 in the embodiment. 実施の形態2にかかる3次元アバター生成装置の構成を示す模式図である。FIG. 2 is a schematic diagram showing the configuration of a three-dimensional avatar generation device according to a second embodiment. 変形例にかかる3次元アバター生成装置の構成を示す模式図である。It is a schematic diagram showing the composition of a three-dimensional avatar generation device concerning a modification.
 以下、本発明の実施の形態を図面に基づいて詳細に説明する。以下の実施の形態においては、本発明の実施の形態として最も適切と考えられる例について記載するものであり、当然のことながら、本発明の内容を本実施の形態にて示された具体例に限定して解すべきではない。同様の作用・効果を奏する構成であれば、実施の形態にて示す具体的構成以外のものであっても、本発明の技術的範囲に含まれることは勿論である。 Hereinafter, embodiments of the present invention will be described in detail based on the drawings. The following embodiments describe examples that are considered to be the most appropriate embodiments of the present invention, and it goes without saying that the content of the present invention is not limited to the specific examples shown in this embodiment mode. It should not be interpreted in a limited manner. It goes without saying that even configurations other than the specific configurations shown in the embodiments are included in the technical scope of the present invention as long as they provide similar functions and effects.
(実施の形態1)
 まず、実施の形態1にかかる3次元アバター生成装置について説明する。図1に示すとおり、本実施の形態1にかかる3次元アバター生成装置は、対象物の顔面画像を入力するための顔面画像入力部1と、頭部素体アバターについて記憶する頭部素体データベース2と、頭部素体データベース2に記憶された複数の頭部素体アバターの中から所定のものを選択する頭部素体選択部3と、対象物の顔面画像及び頭部素体アバターに基づき対象物の頭部の3次元表面形状に関する情報である3次元形状情報を生成する頭部形状生成部4と、頭部の3次元表面形状の一部領域に関する表面画像である部分表面画像を記憶する部分表面画像データベース5と、対象物の頭部の3次元表面形状に適合する部分表面画像を選択する部分表面画像選択部6と、選択した部分表面画像に基づき対象物の頭部の3次元表面形状における表面画像である合成表面画像を生成する合成表面画像生成部7と、頭部の3次元形状情報及び合成表面画像に基づき頭部アバターを生成する頭部アバター生成部9と、頭部アバターと組み合わせる胴体アバターの情報について記憶する胴体データベース10と、頭部アバターと組み合わせる頭髪アバターの情報について記憶する頭髪データベース11と、胴体データベース10に記憶された胴体アバター及び頭髪データベース11に記憶された頭髪アバターを選択する部品アバター選択部12と、頭部アバターと胴体アバター及び頭髪アバターを合成して一体の全身アバターを生成するアバター合成部13とを備える。
(Embodiment 1)
First, a three-dimensional avatar generation device according to the first embodiment will be described. As shown in FIG. 1, the three-dimensional avatar generation device according to the first embodiment includes a facial image input unit 1 for inputting a facial image of an object, and a head body database that stores the head body avatar. 2, a head body selection unit 3 that selects a predetermined one from a plurality of head body avatars stored in the head body database 2, and a head body selection unit 3 that selects a predetermined one from a plurality of head body avatars stored in the head body database 2; A head shape generation unit 4 generates three-dimensional shape information that is information about the three-dimensional surface shape of the head of the object based on the information, and a partial surface image that is a surface image about a partial region of the three-dimensional surface shape of the head. A partial surface image database 5 to be stored, a partial surface image selection section 6 that selects a partial surface image that matches the three-dimensional surface shape of the object's head, and a partial surface image selection section 6 that selects a partial surface image that matches the three-dimensional surface shape of the object's head based on the selected partial surface image. A composite surface image generation unit 7 that generates a composite surface image that is a surface image in a dimensional surface shape; a head avatar generation unit 9 that generates a head avatar based on the three-dimensional shape information of the head and the composite surface image; A torso database 10 that stores information on a torso avatar to be combined with a head avatar, a hair database 11 that stores information about a hair avatar to be combined with a head avatar, and torso avatars stored in the torso database 10 and hair database 11. The device includes a component avatar selection section 12 that selects a hair avatar, and an avatar synthesis section 13 that synthesizes the head avatar, torso avatar, and hair avatar to generate an integrated whole body avatar.
 顔面画像入力部1は、3次元アバターの生成対象である対象物の顔面画像を入力するためのものである。「顔面画像」は、3Dスキャナ等によって撮影された3次元的な立体画像でもよいが、本実施の形態1では2次元的な画像とする。顔面画像入力部1は、具体的には、単に外部からデータを入力するためのデータ入力機構としてもよいし、撮像カメラ等を備え直接的に顔面画像を取得する構成としてもよい。顔面画像入力部1にて入力される顔面画像は、人物を含む生物又はこれを模したキャラクターを対象物とした顔面に関する画像であり、より具体的には、あらかじめ定めた顔面上の特徴点(たとえば目、鼻、口、眉、耳、あご等及びこれらにおける詳細な各部位)に相当する部位を含む画像である。特徴点は複数設定されており、理想的には全ての特徴点に相当する部位を含む画像であることが望ましいが、ある程度の割合(たとえば7割程度)の部位を含んでいれば本実施の形態におけるアバター生成に問題はなく、また、これ以下の割合であっても、3以上の特徴点が抽出可能であれば、本実施の形態におけるアバター生成自体は可能である。部位に関しては、顔面画像に目、鼻、口が含まれていることが望ましい。 The facial image input unit 1 is for inputting a facial image of an object for which a three-dimensional avatar is to be generated. The "facial image" may be a three-dimensional stereoscopic image taken by a 3D scanner or the like, but in the first embodiment, it is a two-dimensional image. Specifically, the facial image input unit 1 may be a data input mechanism for simply inputting data from the outside, or may be configured to include an imaging camera or the like and directly acquire facial images. The facial image inputted by the facial image input unit 1 is an image related to the face of a living thing including a person or a character imitating the living thing, and more specifically, a facial image inputted by the facial image input unit 1 is an image related to the face of a living thing including a person or a character imitating the same. For example, the image includes parts corresponding to the eyes, nose, mouth, eyebrows, ears, chin, etc., and detailed parts thereof. Multiple feature points are set, and ideally it is desirable that the image includes parts corresponding to all the feature points, but if it contains a certain percentage (for example, about 70%) of the parts, this implementation will be effective. There is no problem with avatar generation in this embodiment, and even if the ratio is less than this, as long as three or more feature points can be extracted, avatar generation itself in this embodiment is possible. Regarding parts, it is desirable that the facial image includes the eyes, nose, and mouth.
 頭部素体データベース2は、頭部素体アバターに関する情報について記憶するためのものである。「頭部素体アバター」とは、たとえば平均的な体格からなる人物等の頭部からなる3次元コンピュータグラフィックス情報であり、具体的には、頭部(人物において首から上の部分をいう。)に関する表面の3次元的形状に関する表面形状情報を備え、望ましくはこれに加えて頭部の動作等を制御するための骨格構造に関する骨格情報と、表面形状と骨格構造の関連性に関する関連性情報とを備える。頭部素体データベース2は、このような情報を備えた頭部素体アバターについて、異なるフォーマット(データ形式、用途、特徴点(以下、顔面画像から導出される特徴点と区別するため、頭部素体アバターにおける特徴点を「基本特徴点」という。)の定義態様等が異なるもの)に対応した複数の素体アバターを記憶する機能を有する。 The head body database 2 is for storing information regarding head body avatars. "Head body avatar" is three-dimensional computer graphics information consisting of the head of a person of average build, for example, and specifically, the head (the part above the neck of a person). .), preferably in addition to this, skeletal information regarding a skeletal structure for controlling head movements, etc., and relationships regarding the relationship between the surface shape and the skeletal structure. information. The head body database 2 stores head body avatars with such information in different formats (data format, usage, feature points (hereinafter referred to as head body avatars) in order to distinguish them from feature points derived from facial images. It has a function of storing a plurality of base avatars corresponding to feature points in a base avatar (referred to as "basic feature points") with different definitions.
 「表面形状情報」とは、人体表面における目、鼻等の器官及び皮膚等の立体的な形状に相当する、3次元コンピュータグラフィックスの表面形状に関する情報である。情報の形式としては、表面全体をボクセル等の微小単位の集合と規定し各微小単位の位置情報を記録した形式としてもよいが、情報処理上の負担軽減の観点から、いわゆるモデリング処理を行い、所定数の頂点及び頂点間の接続態様を規定することによって、3次元表面形状を表現した形式とすることが望ましい。モデリング処理を行った場合、頂点及び頂点間の接続態様に関する情報に基づき頂点間を結ぶ辺が形成され、3本以上の辺によって囲まれた領域が面(ポリゴン)として定義され、面の集合(メッシュ)によって、表面形状が特定されることとなる。ただし、本発明の適用対象はこれらに限定されるものではなく、表面形状情報としては、表面形状に対応して配置された複数の頂点及び/又は複数の面の位置情報を含んで構成されるものであれば、本発明における表面形状情報として使用することが可能である。 "Surface shape information" is information regarding the surface shape of three-dimensional computer graphics, which corresponds to the three-dimensional shape of organs such as eyes and nose, and skin on the human body surface. The information may be in a format in which the entire surface is defined as a collection of minute units such as voxels and the position information of each minute unit is recorded, but from the perspective of reducing the burden on information processing, so-called modeling processing is performed. It is desirable that the three-dimensional surface shape be expressed by defining a predetermined number of vertices and the manner of connection between the vertices. When modeling is performed, edges connecting vertices are formed based on information about vertices and the connection mode between vertices, an area surrounded by three or more edges is defined as a surface (polygon), and a set of surfaces ( The surface shape is specified by the mesh). However, the application target of the present invention is not limited to these, and the surface shape information includes position information of a plurality of vertices and/or a plurality of faces arranged corresponding to the surface shape. If it is, it can be used as surface shape information in the present invention.
 また、表面形状情報に含まれる複数の頂点の一部については、その位置情報及び頂点間の接続態様に関する情報に加え、その頂点の意味内容に関する情報についても記録されているものとする。たとえば、目、鼻、口等の部位における特定箇所及び各部位における詳細な位置関係(目尻、瞳、鼻頭、口角等)に対応する特徴点に該当する頂点については、右目の目尻に該当する、等の情報が付されるものとする。また、頂点の位置情報に関しては、絶対的な位置に関する情報と、ジョイント、ボーンからなる骨格構造に対する相対的な位置に関する情報のいずれか一方を含む形式が望ましい。本実施の形態では前者のみならず後者の位置情報も含むものとし、ジョイントの位置変化、ボーンの位置、長さ等の変化に応じて、各頂点は相対的な位置関係を維持しつつ位置が変化するものとする。 Further, for some of the plurality of vertices included in the surface shape information, in addition to the positional information and information regarding the connection mode between the vertices, information regarding the meaning and content of the vertices is also recorded. For example, for vertices that correspond to feature points corresponding to specific points in parts such as the eyes, nose, mouth, etc. and detailed positional relationships among the parts (outer corners of the eyes, pupils, tip of the nose, corners of the mouth, etc.), vertices corresponding to the outer corners of the right eye, The following information shall be attached. Furthermore, regarding the position information of the vertices, it is desirable that the format includes either information regarding the absolute position or information regarding the relative position with respect to the skeletal structure consisting of joints and bones. In this embodiment, not only the former position information but also the latter position information is included, and the position of each vertex changes while maintaining the relative positional relationship according to changes in joint position, bone position, length, etc. It shall be.
 「骨格情報」とは、人体における骨格等に相当する、3次元コンピュータグラフィックスにおいて動作を作出する際等において基準となる内部構造に関する情報である。情報の形式としては、人体における骨格構造等と同様に所定の太さ、大きさを有する骨や関節からなる骨格構造としてもよいが、いわゆるスケルトンと称される、人体における関節等に相当するジョイント(点として表現される。)と、ジョイント間に位置し、人体における骨に相当するボーン(線として表現される。)の集合によって表現される形式とすることが望ましい。ただし、本発明の適用対象がこれらの情報形式に限定されることはなく、関節等のように平行移動・回転移動可能であると共に隣接部分との関係で支点としても機能する部分(本発明ではこれらを総称して「ジョイント」という。)と、骨等のように平行移動・回転移動のみ可能とした部分(これらを総称して「ボーン」という。)に関する情報によって構成されたものであれば他の形式でもよい。 "Skeleton information" is information regarding the internal structure that corresponds to the skeleton of a human body and serves as a reference when creating motion in three-dimensional computer graphics. The information format may be a skeletal structure consisting of bones and joints with a predetermined thickness and size, similar to the skeletal structure of the human body, but it may also be a skeletal structure consisting of bones and joints of a predetermined thickness and size. (represented as points) and bones (represented as lines) located between joints and corresponding to bones in the human body. However, the application of the present invention is not limited to these information formats, and includes parts such as joints that can be translated and rotated and also function as a fulcrum in relation to adjacent parts (in the present invention, These are collectively referred to as "joints." Other formats are also acceptable.
 「関連性情報」とは、骨格情報と表面形状情報との間の関連性を規定する情報であり、より具体的には、骨格構造に含まれるジョイント、ボーンの動作に対し、表面形状を形成する各頂点がどの程度追従して動作するかについて規定する情報である。仮に表面形状がジョイント、ボーンの動作に100%追従する構成の場合、人間等のキャラクターであるにもかかわらずブリキ製ロボットのような動作となり現実感に乏しいキャラクターとなってしまう。そのため、人物等の3次元コンピュータグラフィックスを生成する際には、表面形状の各部分ごとに、近接するボーン、ジョイントの移動に対しどの程度追従するかに関する情報を予め設定することが望ましい。本実施の形態においても、表面形状情報を構成する各頂点に関して、これと近接するボーン及び/又はジョイントに対する追従性を示す数値情報を設定したものを関連性情報として設定する。なお、関連性情報の生成作業はスキニング処理、ウェイト編集等と称され関連性情報についてもウェイト値が一般に使用されるところ、本発明における関連性情報はこれらに限定されることはなく、上述の条件を満たす情報全てを含むこととする。 "Relationship information" is information that defines the relationship between skeletal information and surface shape information, and more specifically, it is information that defines the relationship between skeletal information and surface shape information. This is information that specifies how much each vertex should follow and operate. If the surface shape were configured to 100% follow the movements of the joints and bones, the character would behave like a tin robot and lack a sense of reality, even though it is a human character. Therefore, when generating three-dimensional computer graphics of a person or the like, it is desirable to set in advance information regarding how much the movement of adjacent bones and joints is followed for each part of the surface shape. Also in this embodiment, for each vertex constituting the surface shape information, numerical information indicating followability with respect to bones and/or joints adjacent to the vertex is set as relevance information. Note that the work of generating relevance information is called skinning processing, weight editing, etc., and weight values are generally used for relevance information as well, but the relevance information in the present invention is not limited to these, and the above-mentioned All information that satisfies the conditions shall be included.
 頭部素体選択部3は、頭部素体データベース2に記憶された複数の頭部素体アバターの中から、対象物の頭部アバター生成に適した頭部素体アバターを選択するためのものである。頭部素体データベース2は、フォーマットや用途に合わせて異なるデータ形式の頭部素体アバターを記憶しており、頭部素体選択部3は、複数の頭部素体アバターの中から用途等に合わせて適切な頭部素体アバターを選択し、頭部形状生成部4に対し出力する機能を有する。より具体的には、頭部素体選択部3は、後述する部品アバター選択部12により選択された胴体アバターに関する情報を取得し、選択された胴体アバターと整合するデータ形式の頭部素体アバターを選択する機能を有する。また、かかる条件を満たしたうえで、複数の頭部素体アバターの中から機械的に、あるいはユーザの選択にしたがって頭部素体アバターを選択する機能を有する。 The head element selection unit 3 selects a head element avatar suitable for generating a head avatar of an object from among a plurality of head element avatars stored in the head element database 2. It is something. The head body database 2 stores head body avatars in different data formats depending on the format and usage, and the head body selection unit 3 selects the usage etc. from among the plurality of head body avatars. It has a function of selecting an appropriate head body avatar according to the situation and outputting it to the head shape generation section 4. More specifically, the head body selection unit 3 acquires information regarding the torso avatar selected by the parts avatar selection unit 12 (described later), and selects a head body avatar in a data format consistent with the selected torso avatar. It has the function to select. Furthermore, after satisfying such conditions, it has a function of selecting a head body avatar from a plurality of head body avatars mechanically or according to the user's selection.
 頭部形状生成部4は、顔面画像入力部1を介して入力された対象物の顔面画像と、頭部素体選択部3によって選択された頭部素体アバターとに基づき、対象物の頭部の3次元表面形状に関する情報である3次元形状情報を生成するためのものである。具体的には、頭部形状生成部4は、2次元の顔面画像における特徴点の位置関係を抽出する特徴点抽出部14と、抽出された2次元的な位置関係に基づき、特徴点間の立体的な、3次元的な位置関係を導出する位置関係導出部15と、3次元的な位置関係が導出された特徴点の座標及び頭部素体アバターの3次元表面形状を構成する各頂点の座標について座標変換を行う座標変換部16と、特徴点抽出部14によって抽出された顔面画像上の特徴点の位置関係を調整する位置調整部17と、顔面画像の特徴点によって形成される領域を拡大又は縮小する拡大縮小部18と、顔面画像の特徴点の位置と一致するよう頭部素体アバターの頂点を移動させる素体変形部19とを備える。 The head shape generation section 4 generates a head shape of the object based on the facial image of the object input through the facial image input section 1 and the head element avatar selected by the head element selection section 3. This is for generating three-dimensional shape information that is information about the three-dimensional surface shape of the part. Specifically, the head shape generation unit 4 works with a feature point extraction unit 14 that extracts the positional relationship between feature points in a two-dimensional facial image, and a A positional relationship derivation unit 15 that derives a three-dimensional positional relationship, the coordinates of the feature points from which the three-dimensional positional relationship has been derived, and each vertex that constitutes the three-dimensional surface shape of the head body avatar. a coordinate transformation unit 16 that performs coordinate transformation on the coordinates of the face image; a position adjustment unit 17 that adjusts the positional relationship of the feature points on the facial image extracted by the feature point extraction unit 14; and a region formed by the feature points of the facial image. and a body deformation unit 19 that moves the apex of the head body avatar to match the position of the feature point of the face image.
 特徴点抽出部14は、対象物の顔面画像に対し画像分析を行うことで対象物の顔面画像における特徴点を抽出するためのものである。具体的には、特徴点抽出部14は、顔認証技術等の2次元画像分析技術を利用することにより、対象となる顔面画像の中から特徴点を抽出する機能を有する。なお、特徴点抽出部14において抽出される特徴点は、頭部素体選択部3によって選択された頭部素体アバターの基本特徴点と同じ定義によるものとする。 The feature point extraction unit 14 is for extracting feature points in the facial image of the target object by performing image analysis on the facial image of the target object. Specifically, the feature point extraction unit 14 has a function of extracting feature points from a target facial image by using two-dimensional image analysis technology such as face recognition technology. Note that the feature points extracted by the feature point extraction unit 14 have the same definition as the basic feature points of the head element avatar selected by the head element selection unit 3.
 位置関係導出部15は、特徴点抽出部14にて抽出された顔面画像の特徴点について、相互間の3次元的な位置関係を導出するためのものである。具体的な導出メカニズムとしては、顔面画像における2次元的な特徴点を入力層、特徴点の3次元的な位置関係を出力層としたものを教師データとして与えて学習させた機械学習モデルを用いて導出することとしてもよいし、それ以外のメカニズムを用いてもよい。また、顔面画像が示す平面をxy平面として2次元的な特徴点の位置座標(x、y)を導出した後、各特徴点の深度(z座標)については、各特徴点のx座標及びy座標を入力層、z座標を出力層としたものを教師データとして与えて学習させた機械学習モデルを用いて導出することとしてもよい。 The positional relationship deriving unit 15 is for deriving a three-dimensional positional relationship between the feature points of the facial image extracted by the feature point extracting unit 14. The specific derivation mechanism uses a machine learning model that is trained by giving the two-dimensional feature points in the facial image as the input layer and the three-dimensional positional relationships of the feature points as the output layer as training data. It is also possible to derive the value by using the above method, or other mechanisms may be used. In addition, after deriving the two-dimensional position coordinates (x, y) of the feature points using the plane indicated by the facial image as the xy plane, the depth (z coordinate) of each feature point is calculated using the x and y coordinates of each feature point. It may also be derived using a machine learning model trained by giving coordinates as input layer and z coordinate as output layer as teacher data.
 座標変換部16は、顔面画像から抽出された特徴点の位置を示す座標を変換し、また、頭部素体アバターにおける基本特徴点を含む各頂点の座標を変換するためのものである。顔面画像の特徴点と、頭部素体アバターにおける各頂点の座標は別個の座標系に属しているため、そのままでは両者に基づき対象物の頭部の3次元形状情報を生成することはできない。そのため、座標変換部16は、それぞれの座標系に対し所定の変換処理を施すことにより両者を同一座標空間上に展開することとしている。 The coordinate conversion unit 16 is for converting the coordinates indicating the position of the feature points extracted from the facial image, and also for converting the coordinates of each vertex including the basic feature points in the head element avatar. Since the feature points of the facial image and the coordinates of each vertex of the head body avatar belong to separate coordinate systems, it is not possible to generate three-dimensional shape information of the object's head based on the two as they are. Therefore, the coordinate transformation unit 16 performs predetermined transformation processing on each coordinate system to develop both on the same coordinate space.
 具体的には、座標変換部16は、特定の特徴点(たとえば鼻頭に対応する頂点)を原点とする相対座標系を設定し、顔面画像から抽出された特徴点及び頭部素体アバターの各頂点に対し平行移動、回転、拡大・縮小、スキュー等のアフィン変換を実施することで、それぞれ特定の特徴点が原点に位置するように座標変換処理を行う。 Specifically, the coordinate conversion unit 16 sets a relative coordinate system with a specific feature point (for example, the vertex corresponding to the tip of the nose) as the origin, and converts each of the feature points extracted from the facial image and the head body avatar. By performing affine transformation such as parallel translation, rotation, enlargement/reduction, and skew on the vertices, coordinate transformation processing is performed so that each specific feature point is located at the origin.
 また、座標変換部16は、頭部の3次元形状情報を生成した後、当該情報に関する座標系を頭部素体アバターの座標系に戻すことを内容とした座標変換処理を行う機能も有する。 The coordinate transformation unit 16 also has a function of performing a coordinate transformation process of returning the coordinate system related to the information to the coordinate system of the head body avatar after generating the three-dimensional shape information of the head.
 位置調整部17は、座標変換後の顔面画像の特徴点について、頭部素体アバターの基本特徴点との位置調整を行うためのものである。具体的には、座標変換部16の処理によって共通の座標上に移動した顔面画像から抽出された特徴点と頭部素体アバターの各頂点は、いずれも対応関係にある特定の特徴点が原点に位置するものの、他の特徴点・頂点の位置関係はばらばらであり、例えるならば、顔面画像の特徴点と頭部素体アバターの基本特徴点・頂点とでは顔の向かう方向にずれが生じた状態となっている。 The position adjustment unit 17 is for adjusting the position of the feature points of the face image after coordinate transformation with the basic feature points of the head body avatar. Specifically, the feature points extracted from the facial image and each vertex of the head body avatar that have been moved to the common coordinates by the processing of the coordinate conversion unit 16 have a specific feature point in a corresponding relationship as the origin. However, the positional relationships of other feature points and vertices are different.For example, the feature points of a face image and the basic feature points and vertices of the head body avatar are misaligned in the direction the face is facing. The situation is as follows.
 位置調整部17はこのずれを調整するためのものであり、たとえば、両頬に相当する2つの特徴点を結ぶ線分を想定し、顔面画像の特徴点における線分と、頭部素体アバターにおける線分とが互いに平行となるよう、位置調整部17は一方を回転させる等の処理を行って位置調整を行う。 The position adjustment unit 17 is for adjusting this deviation. For example, assuming a line segment connecting two feature points corresponding to both cheeks, the position adjustment unit 17 adjusts the line segment at the feature point of the facial image and the head body avatar. The position adjustment unit 17 performs a process such as rotating one side to adjust the position so that the line segments are parallel to each other.
 また、位置調整部17は、顔面画像から抽出された特徴点の座標についてせん断の修正を行う機能も有する。顔面画像の特徴点を抽出する処理態様によっては、各特徴点の位置座標にせん断が生じることがある。この場合に位置調整部17は、顔面画像の特徴点に関して3次元座標系の3軸方向の各軸についてせん断量を算出し、これを相殺する機能を有する。 The position adjustment unit 17 also has a function of performing shear correction on the coordinates of feature points extracted from the facial image. Depending on the processing method for extracting feature points of a facial image, shearing may occur in the position coordinates of each feature point. In this case, the position adjustment unit 17 has a function of calculating shear amounts for each of the three axes of the three-dimensional coordinate system regarding the feature points of the facial image, and canceling the shear amounts.
 拡大縮小部18は、顔面画像の特徴点によって形成される空間が拡大または縮小するように、顔面画像の特徴点の位置座標を変化させるためのものである。具体的には、拡大縮小部18は、顔面画像の特徴点によって形成される空間と、頭部素体アバターの基本特徴点のうち顔面画像の特徴点と対応する基本特徴点によって形成される空間とを比較し、前者の空間の広さが後者の空間の広さと同程度となるように、顔面画像の特徴点について、相互の位置関係を保持しつつ位置座標を変化させる機能を有する。 The enlarging/reducing unit 18 is for changing the position coordinates of the feature points of the facial image so that the space formed by the feature points of the facial image is expanded or reduced. Specifically, the enlarging/reducing unit 18 expands the space formed by the feature points of the facial image and the space formed by the basic feature points corresponding to the feature points of the facial image among the basic feature points of the head body avatar. It has a function of changing the positional coordinates of the feature points of the facial image while maintaining the mutual positional relationship so that the width of the former space is comparable to the width of the latter space.
 なお、「特徴点によって形成される空間」とは、顔面画像の特徴点と頭部素体アバターの基本特徴点のそれぞれについて同じ定義に基づき形成される空間であればよい。たとえば、各特徴点を内部に含み、かつ、体積が最小となるよう形成された、各辺がx軸、y軸又はz軸に平行な直線からなる直方体(いわゆる3次元バウンディングボックス)や、隣接する特徴点間を線分で結び、3本以上の線分によって形成される面からなる空間領域を「特徴点によって形成される空間」としてもよい。 Note that the "space formed by feature points" may be a space formed based on the same definition for each of the feature points of the facial image and the basic feature points of the head body avatar. For example, a rectangular parallelepiped (so-called three-dimensional bounding box) whose sides are made of straight lines parallel to the x-axis, y-axis, or z-axis, which is formed so as to contain each feature point and have a minimum volume, or The feature points may be connected by line segments, and a spatial region formed by a surface formed by three or more line segments may be defined as a "space formed by the feature points."
 また、「広さが同程度になる」とは、体積が完全に一致する場合に限定されない。好ましくは顔面画像の特徴点によって形成される空間の体積が、頭部素体アバターの基本特徴点のうち顔面画像の特徴点と対応する基本特徴点によって形成される空間の体積の0.9倍~1.1倍の範囲に収まることを指すが、これ以外の範囲を設定することとしてもよい。 Furthermore, "the widths are about the same" is not limited to the case where the volumes completely match. Preferably, the volume of the space formed by the feature points of the facial image is 0.9 times the volume of the space formed by the basic feature points corresponding to the feature points of the facial image among the basic feature points of the head body avatar. This refers to a range of ~1.1 times, but it is also possible to set a range other than this.
 素体変形部19は、汎用的な頭部素材の3次元的表面形状に対し、頭部素体アバターの基本特徴点の位置を移動することにより対象物の3次元的表面形状に近づけるよう変形させるためのものである。具体的には、素体変形部19は、頭部素体アバターの基本特徴点のうち顔面画像の特徴点に対応する基本特徴点の位置座標を、顔面画像の特徴点の位置座標に変換する機能を有する。また、素体変形部19は、対応関係にある基本特徴点の近傍に位置する頭部素体アバターの頂点についても、基本特徴点と他の頂点との連続性が維持されるよう基本特徴点との距離に応じた所定の割合にて位置を変化させる機能を有する。素体変形部19の処理により、3次元表面形状の顔領域において対象物の特徴を備え、他の領域については頭部素体アバターの形状のままとなる頭部形状が生成される。 The element body deforming unit 19 transforms the three-dimensional surface shape of a general-purpose head material so that it approaches the three-dimensional surface shape of the object by moving the position of the basic feature points of the head element avatar. It is for the purpose of Specifically, the element transformation unit 19 converts the position coordinates of the basic feature points of the head element avatar that correspond to the feature points of the face image into the position coordinates of the feature points of the face image. Has a function. In addition, the elemental body deformation unit 19 also performs basic feature points such that continuity between the basic feature points and other vertices is maintained for the vertices of the head elemental body avatar that are located in the vicinity of the corresponding basic feature points. It has a function of changing the position at a predetermined rate depending on the distance from the object. Through the processing of the elemental body deformation unit 19, a head shape is generated that has the characteristics of the object in the face region of the three-dimensional surface shape, while remaining in the shape of the head elemental body avatar in other regions.
 なお、素体変形部19は、頭部素体アバターの基本特徴点の位置を対象物の3次元的表面形状に近づけるよう移動させるだけでなく、それぞれの基本特徴点の位置の移動方向及び移動距離に関する情報を記録する。移動方向及び移動距離に関する情報は、変形前の基本状態の素体における基本特徴点の位置に対し、対象の顔面画像における特徴点がどの程度変化しているかを示す相対的な位置情報に相当することから、後述する特徴点比較部22における部分表面画像との特徴点の位置比較において、対象物の頭部の3次元形状情報における特徴点の位置を示す情報として使用される。 The elemental body deforming unit 19 not only moves the positions of the basic feature points of the head elemental body avatar to approach the three-dimensional surface shape of the object, but also changes the movement direction and movement of the position of each basic feature point. Record information about distance. The information regarding the movement direction and movement distance corresponds to relative position information indicating how much the feature points in the target facial image have changed with respect to the position of the basic feature points in the element body in the basic state before deformation. Therefore, in comparing the position of the feature point with a partial surface image in the feature point comparison unit 22, which will be described later, it is used as information indicating the position of the feature point in the three-dimensional shape information of the head of the object.
 部分表面画像データベース5は、頭部の3次元表面形状の一部領域に関する表面画像である部分表面画像を記憶するためのものである。具体的には、部分表面画像データベース5は、3次元形状情報の表面の一部、たとえば目、鼻、口、耳、眉といった顔面上に形成される器官の他、頬部、額部、顎部といった皮膚領域に関する表面画像である部分表面画像に関する情報を記憶する機能を有する。 The partial surface image database 5 is for storing partial surface images that are surface images regarding a partial region of the three-dimensional surface shape of the head. Specifically, the partial surface image database 5 includes information on a part of the surface of three-dimensional shape information, such as organs formed on the face such as eyes, nose, mouth, ears, and eyebrows, as well as cheeks, forehead, and chin. It has a function of storing information regarding a partial surface image, which is a surface image regarding a skin area such as a skin area.
 「表面画像」とは、頭部に関する3次元表面形状における色・模様と質感の少なくとも一方に関する画像とする。「質感」とは、テクスチャ等とも表現され、表面における微小な凹凸等からなる外観的な特徴をいう。一般に、3次元CGではモデリング処理によって、外表面が頂点及び頂点間の接続態様に関する情報に基づき頂点間を結ぶ辺が形成され、3本以上の辺によって囲まれた領域が面(ポリゴン)として定義され、面の集合(メッシュ)によって、表面形状が特定されることとなる。かかる手法は現実的な表面形状を近似的に表現するものであるため、表面の微小な凹凸等からなる質感に関する情報は含まれない。そのため、質感に関する情報を別途作成し、モデリング処理によって形成された外表面に付加して、リアルな外観を作出している。なお、質感に関する情報とは、具体的には外表面上に付加される2次的な画像に関する情報であり、2次元平面上に陰陽等を反映したパターン、たとえばハイトマップ、ノーマルマップを形成することによって、疑似的に凹凸等を表現している。本実施の形態1における表面画像は頭部形状の表面における色・模様及び質感の双方を含むものとするが、いずれか一方のみから構成することとしてもよい。 "Surface image" is an image related to at least one of color/pattern and texture in the three-dimensional surface shape of the head. "Texture" is also expressed as texture, and refers to external features consisting of minute irregularities on the surface. Generally, in 3D CG, through modeling processing, the outer surface forms edges that connect vertices based on information about the vertices and the connection mode between the vertices, and an area surrounded by three or more edges is defined as a surface (polygon). The surface shape is specified by a set of surfaces (mesh). Since such a method approximates a realistic surface shape, it does not include information regarding texture such as minute irregularities on the surface. Therefore, information regarding texture is created separately and added to the outer surface formed by modeling processing to create a realistic appearance. Note that the information regarding the texture is specifically information regarding the secondary image added on the outer surface, which forms a pattern that reflects Yin and Yang on a two-dimensional plane, such as a height map or a normal map. By doing so, irregularities, etc. are expressed in a pseudo manner. Although the surface image in the first embodiment includes both color/pattern and texture on the surface of the head shape, it may be composed of only one of them.
 部分表面画像データベース5に記憶される各部分表面画像の情報は、当該部分表面画像における画像情報すなわち一定領域における色・模様及び質感に関する情報と、当該領域における特徴点の3次元的な位置に関する情報と、分類に関する情報を含むものとする。「特徴点の3次元的な位置に関する情報」とは、あらかじめ定めた所定の特徴点について、これと対応関係にある汎用的な頭部素体の基本特徴点からのずれ、すなわち頭部素体における基本特徴点からの移動距離及び移動方向に関する情報であって、頭部素体の基本特徴点の位置を基準とした相対的な位置情報とする。
「分類に関する情報」とは、部分表面画像が、予め所定の条件に従って定められた複数の分類のうちどの分類に属するかを示す情報である。「所定の条件」は任意のものとすることが可能であり、たとえば、生成される3次元アバターが使用されるゲーム、SNS等のそれぞれのサービスに適合したデザインごとに異なる分類とすることとしてもよいし、部分表面画像を作成したデザイナーごとに異なる分類とすることとしてもよい。また、たとえば著名な画家、イラストレータ、アニメータ、漫画家等の作風や、特定の映画、絵画、アニメーション、漫画の世界観と整合する部分表面画像ごとに異なる分類とすることとしてもよい。作風・世界観と整合する部分表面画像ごとに分類する場合は、部分表面画像を入力層、分類を出力層としたものを教師データとして与えて学習した機械学習モデルを用いて個々の部分表面画像を分類することとしてもよいし、それ以外のメカニズムを用いて分類することとしてもよい。さらに、部分表面画像における表面画像については任意の態様とすることが可能であり、たとえば写実的なものでもよいし、漫画的あるいはイラスト的なものであってもよい。
The information on each partial surface image stored in the partial surface image database 5 includes image information on the partial surface image, that is, information on the color, pattern, and texture in a certain area, and information on the three-dimensional positions of feature points in the area. and information regarding classification. "Information regarding the three-dimensional position of a feature point" refers to the deviation of a predetermined feature point from the basic feature point of a general-purpose head element that corresponds to this, that is, the head element. This is information regarding the movement distance and movement direction from the basic feature point in , and is relative position information with the position of the basic feature point of the head body as a reference.
"Information regarding classification" is information indicating to which classification the partial surface image belongs among a plurality of classifications determined in advance according to predetermined conditions. The "predetermined conditions" can be arbitrary; for example, the generated three-dimensional avatar may be classified differently depending on the design that is suitable for each service such as a game or SNS. Alternatively, the classification may be different depending on the designer who created the partial surface image. Alternatively, different classifications may be made for partial surface images that match the style of a famous painter, illustrator, animator, manga artist, etc., or the world view of a particular movie, painting, animation, or manga. When classifying partial surface images that are consistent with the style and worldview, a machine learning model trained by giving partial surface images as input layer and classification as training data is used to classify individual partial surface images. It may be possible to classify the information, or it may be classified using other mechanisms. Furthermore, the surface image in the partial surface image can be in any form, for example, it may be realistic, cartoon-like, or illustrative.
 部分表面画像選択部6は、部分表面画像データベース5に記憶された複数の部分表面画像の中から、頭部形状生成部4によって生成された対象物の頭部の3次元表面形状と整合する表面形状からなる部分表面画像を選択するためのものである。具体的には、部分表面画像選択部6は、部分表面画像に関する複数の分類の中から特定の分類を選択するための分類選択部21と、選択された分類に属する部分表面画像の特徴点の位置情報と対象物の頭部の3次元表面形状の特徴点の位置情報とを比較する特徴点比較部22と、比較結果に基づき対象物の頭部の3次元表面形状の各部分について位置情報が整合する部分表面画像を判定する画像判定部23と、整合すると判定された部分表面画像を合成表面画像生成部7に対し出力する画像出力部24とを備える。 The partial surface image selection unit 6 selects a surface that matches the three-dimensional surface shape of the object's head generated by the head shape generation unit 4 from among the plurality of partial surface images stored in the partial surface image database 5. This is for selecting a partial surface image consisting of a shape. Specifically, the partial surface image selection unit 6 includes a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images, and a classification selection unit 21 for selecting a specific classification from among a plurality of classifications regarding partial surface images. A feature point comparison unit 22 that compares the position information with the position information of the feature points of the three-dimensional surface shape of the object's head, and calculates position information about each part of the three-dimensional surface shape of the object's head based on the comparison result. The image determination unit 23 includes an image determination unit 23 that determines which partial surface images match, and an image output unit 24 that outputs the partial surface images determined to match to the composite surface image generation unit 7.
 分類選択部21は、部分表面画像データベース5に記憶された複数の部分表面画像に関する分類項目を選択するためのものである。本実施の形態1では分類項目を選択する態様として生成する3次元アバターが使用されるサービスに適合した分類を自動的に選択する構成を採用するが、それ以外に、たとえばユーザが自ら分類項目を選択する構成としてもよいし、ウェブページの閲覧履歴等に基づき解析されたユーザの嗜好にあわせて分類項目を選択することとしてもよい。 The classification selection unit 21 is for selecting classification items regarding a plurality of partial surface images stored in the partial surface image database 5. In the first embodiment, a configuration is adopted in which a classification item is automatically selected that is suitable for the service in which the generated three-dimensional avatar is used. The configuration may be such that the classification items are selected, or the classification items may be selected according to the user's preferences analyzed based on the browsing history of web pages and the like.
 特徴点比較部22は、選択された分類項目に属するそれぞれの部分表面画像の特徴点と、対象物の頭部の3次元表面形状のうち対応する部分における特徴点の位置を比較するためのものである。具体的には、特徴点比較部22は、双方の特徴点のうち互いに対応関係にあるものの位置関係を比較し、その位置が一致するか否かの情報と、一致しない場合にどの程度離隔しているかに関する情報を、比較結果として出力する。なお、特徴点比較部22による位置関係の比較において、対象物の頭部の3次元形状情報の表面における特徴点の位置情報は、素体変形部19によって記録された、頭部素体アバターの特徴点の位置からの移動方向・移動量を内容とする相対的な位置情報とする。また、特徴点比較部22は、特徴点比較の際に対応関係にあるすべての特徴点について位置座標の比較を行ってもよいが、あらかじめ定めた一部の特徴点のみについて比較することとしてもよい。 The feature point comparison unit 22 is for comparing the feature points of each partial surface image belonging to the selected classification item with the positions of the feature points in corresponding parts of the three-dimensional surface shape of the object's head. It is. Specifically, the feature point comparison unit 22 compares the positional relationship of feature points of both sides that correspond to each other, and provides information as to whether the positions match or not, and if they do not match, how far apart they are. Outputs information regarding whether the In the comparison of the positional relationships by the feature point comparison unit 22, the position information of the feature points on the surface of the three-dimensional shape information of the object's head is based on the position information of the head element body avatar recorded by the element body transformation unit 19. Relative position information containing the direction and amount of movement from the position of the feature point. Further, the feature point comparison unit 22 may compare the position coordinates of all feature points in a correspondence relationship when comparing feature points, but may also compare only some predetermined feature points. good.
 画像判定部23は、特徴点比較部22の比較結果に基づき、複数の部分表面画像の中から、頭部の3次元表面形状の各領域について、対象物の頭部の3次元形状情報の特徴点の位置と近接する特徴点を備える部分表面画像を判定するためのものである。具体的には、画像判定部23は、特徴点比較部22によって導出された比較対象となる各特徴点間の距離の差の合計値が最も小さい部分表面画像を判定することとする。もっとも、位置が一致する特徴点の数が最も多い部分表面画像を判定することとしてもよいし、他のアルゴリズムにしたがって当該部分における表面形状と整合する表面形状を有する部分表面画像を判定することとしてもよい。 Based on the comparison result of the feature point comparison unit 22, the image determination unit 23 selects features of the three-dimensional shape information of the object's head for each region of the three-dimensional surface shape of the head from among the plurality of partial surface images. This is for determining a partial surface image that includes feature points that are close to the position of the point. Specifically, the image determination unit 23 determines the partial surface image with the smallest total value of the difference in distance between each feature point to be compared derived by the feature point comparison unit 22. However, it may be possible to determine the partial surface image with the largest number of feature points with matching positions, or it may be possible to determine the partial surface image having a surface shape that matches the surface shape of the relevant portion according to another algorithm. Good too.
 画像出力部24は、合成表面画像生成部7に対し、画像判定部23によって判定された部分表面画像を出力するためのものである。画像出力部24は、画像判定部23によって判定された部分表面画像について、色・模様と質感の少なくとも一方に関する情報、特徴点の位置座標及び頭部の3次元表面形状のどの部分に関する部分表面画像であるかを示す情報を出力する機能を有する。 The image output unit 24 is for outputting the partial surface image determined by the image determination unit 23 to the composite surface image generation unit 7. For the partial surface image determined by the image determination section 23, the image output unit 24 outputs information regarding at least one of color/pattern and texture, the position coordinates of feature points, and the partial surface image regarding which part of the three-dimensional surface shape of the head. It has a function to output information indicating whether the
 合成表面画像生成部7は、部分表面画像選択部6によって選択された各部分に関する部分表面画像を、対象物の頭部の3次元表面形状に適合する態様にて各部分に配置することで合成表面画像を生成するためのものである。具体的には、合成表面画像生成部7は、部分表面画像の3次元表面形状が対象物の頭部の3次元表面形状と一致するよう表面形状の調整を行う表面形状調整部26と、表面形状の調整を経た部分表面画像を、対象物の頭部の表面形状に配置する画像合成部27と、対象物の頭部の3次元表面形状に生じる部分表面画像間の空隙領域について補完処理を行う補完処理部28とを備える。 The composite surface image generation unit 7 synthesizes partial surface images related to each part selected by the partial surface image selection unit 6 by arranging them on each part in a manner that matches the three-dimensional surface shape of the head of the object. This is for generating surface images. Specifically, the composite surface image generation unit 7 includes a surface shape adjustment unit 26 that adjusts the surface shape so that the three-dimensional surface shape of the partial surface image matches the three-dimensional surface shape of the head of the object; An image synthesis unit 27 arranges the partial surface image whose shape has been adjusted on the surface shape of the object's head, and performs interpolation processing on the gap region between the partial surface images that occurs in the three-dimensional surface shape of the object's head. and a complement processing unit 28 that performs the processing.
 表面形状調整部26は、部分表面画像選択部6によって選択された部分表面画像に対し、特徴点が対象物の頭部の3次元表面形状における特徴点の位置と一致するよう表面形状を変化させるためのものである。表面形状調整部26は、部分表面画像における特徴点の移動にあわせて表面画像を構成する色・模様及び質感も変化させる。特徴点上における色・模様及び質感の位置を移動させることに加え、不自然な表面画像となることを抑制するため特徴点近傍における色・模様及び質感についても特徴点からの距離に応じて位置を移動させることが望ましい。 The surface shape adjustment unit 26 changes the surface shape of the partial surface image selected by the partial surface image selection unit 6 so that the feature points match the positions of the feature points in the three-dimensional surface shape of the object's head. It is for. The surface shape adjustment unit 26 also changes the color, pattern, and texture of the surface image in accordance with the movement of the feature points in the partial surface image. In addition to moving the position of the color, pattern, and texture on the feature point, the position of the color, pattern, and texture near the feature point is also changed according to the distance from the feature point to prevent an unnatural surface image. It is desirable to move the
 画像合成部27は、表面形状調整部26による調整を経た部分表面画像を、対象物の頭部の表面形状のうち対応する領域上に配置するためのものである。具体的には、画像合成部27は、表面形状調整部26による形状調整を経たそれぞれの部分表面画像の特徴点の位置が、対象物の頭部の3次元表面形状のうち対応する部分における特徴点の位置が一致するように配置する機能を有する。 The image synthesis unit 27 is for arranging the partial surface image that has been adjusted by the surface shape adjustment unit 26 on a corresponding region of the surface shape of the object's head. Specifically, the image synthesis unit 27 determines that the position of the feature point of each partial surface image that has been subjected to shape adjustment by the surface shape adjustment unit 26 is a feature in the corresponding portion of the three-dimensional surface shape of the object's head. It has the function of arranging points so that their positions match.
 補完処理部28は、画像合成部27によって対象物の頭部の3次元表面形状の一部に部分表面画像が配置された際に全表面について部分表面画像が配置されず空隙領域が生じた場合に、当該空隙領域について補完処理を行うためのものである。配置するそれぞれの部分表面画像の形状等によっては、隣接して配置された部分表面画像間に空隙部分が生じ、当該空隙部分における色・模様及び質感を確定できないこととなる。補完処理部28は、空隙部分が不自然とならぬよう、空隙部分について、近傍の部分表面画像の色・模様及び質感に合わせた色・模様及び質感を設定する補完処理を行う。 When the partial surface image is placed on a part of the three-dimensional surface shape of the object's head by the image synthesis section 27, the complementation processing section 28 detects the case where the partial surface image is not placed on the entire surface and a gap area occurs. Second, it is used to perform complementary processing on the void area. Depending on the shape of each partial surface image to be placed, a gap may occur between adjacent partial surface images, and the color, pattern, and texture of the gap cannot be determined. The complementation processing unit 28 performs complementation processing for setting the color, pattern, and texture of the gap portion to match the color, pattern, and texture of the neighboring partial surface image, so that the gap portion does not look unnatural.
 補完処理の具体的な内容としては、空隙部分の周囲に存在する部分表面画像における主要色(たとえば、皮膚の色)により構成される部分を多数の微小領域に区分し、各領域における色情報のうちいくつかの頻出色(たとえば頻出頻度が上位1位から5位の色)を合成することにより代表色(Smodel)を決定する。また、皮膚の色(S)の導出にあたっては、上述の各微小領域における輝度の最大値(Smax)、中央値(Smedian)及び平均値(Smean)を用いた以下の式により導出される。
 
S={(Smodel+Smax)/2}*Smodel/{(Smedian+Smean)/2}
 
The specific content of the interpolation process is to divide the area formed by the main color (for example, skin color) in the partial surface image that exists around the gap into a large number of minute areas, and to extract the color information in each area. A representative color (S model ) is determined by combining some of the frequently appearing colors (for example, the colors ranked 1st to 5th in frequency). In addition, in deriving the skin color (S), it is derived by the following formula using the maximum value (S max ), median value (S median ), and average value (S mean ) of the brightness in each microregion described above. Ru.

S={(S model +S max )/2}*S model /{(S median +S mean )/2}
 もっとも、空隙領域における色の導出にあたっては他の数式を利用してもよいし、代表的な色・模様及び質感を抽出しそのまま空隙部分における色・模様及び質感として用いることとしてもよい。 However, other mathematical formulas may be used to derive the color in the void area, or representative colors, patterns, and textures may be extracted and used as they are as the colors, patterns, and textures in the void area.
 また、補完処理部28は、空隙部分と隣接する部分表面画像との境界が不自然に目立つことのないように、境界を不明確化するぼかし処理を行うこととする。具体的には、境界付近において空隙部分の色・模様等及び部分表面画像の表示領域をそれぞれ拡大して双方が重なり合う領域を形成し、当該重なり合う領域において、それぞれの混合比率を10:0、9:1、・・・2:8、1:9、0:10のように段階的に変化させることにより、境界を不明確化する。 Furthermore, the complementation processing unit 28 performs a blurring process to make the boundary unclear so that the boundary between the gap and the adjacent partial surface image does not stand out unnaturally. Specifically, the display area of the color/pattern of the void part and the partial surface image are each enlarged near the boundary to form an area where both overlap, and in the overlap area, the respective mixing ratios are set to 10:0 and 9. :1, ...2:8, 1:9, 0:10 to make the boundary unclear.
 頭部アバター生成部9は、頭部形状生成部4によって形成された対象物の頭部の3次元表面形状と、合成表面画像生成部7によって生成された合成表面画像を合成することにより、頭部アバターを生成するためのものである。頭部アバター生成部9の処理によって、対象物の頭部の3次元表面形状を有しつつ、表面画像として、部分表面画像選択部6によって選択された部分表面画像の内容を反映した3次元的な頭部アバターが完成する。 The head avatar generation section 9 synthesizes the three-dimensional surface shape of the head of the object formed by the head shape generation section 4 and the composite surface image generated by the composite surface image generation section 7 to generate a head avatar. This is for generating a partial avatar. Through the processing of the head avatar generation section 9, a three-dimensional surface image that has the three-dimensional surface shape of the object's head and reflects the contents of the partial surface image selected by the partial surface image selection section 6 is created as a surface image. The head avatar is completed.
 胴体データベース10は、胴体アバターについて記憶するためのものである。記憶される胴体アバターは、様々な形状、デザインのものが存在するとともに、異なるデータ形式(フォーマット、用途、基本特徴点の定義態様等が異なるもの)のものが多数存在する。胴体アバターの各種データ形式は、頭部素体アバターにおける各種データ形式と同様であり、より具体的には、胴体アバターとして特定のものが選択された時点で、これと同一のデータ形式からなる頭部素体アバターが選択される。同一のデータ形式からなる胴体アバター及び頭部素体アバターとの関係では、胴体アバターにおいて頭部素体アバターを接続する位置に関する情報、接続時における頭部素体アバターの基準軸の方向に関する情報及び接続する頭部素体アバターの大きさに関する情報が規定されている。これらの情報を用いて、アバター合成部13による全身アバターの合成処理が行われる。胴体アバターとしては、写実的な人物の身体を模したものでもよいし、動物や想像上の生物さらにはアニメーションキャラクターのようなものでもよい。胴体アバターの形状についても、8頭身でも2頭身でもその他の頭身でもよく、また、裸体でも洋服、靴、装飾品等を身に着けた態様でもよい。 The torso database 10 is for storing torso avatars. The torso avatars that are stored have various shapes and designs, and there are many different data formats (different formats, uses, definitions of basic feature points, etc.). The various data formats of the torso avatar are the same as the various data formats of the head body avatar, and more specifically, when a specific one is selected as the torso avatar, A partial body avatar is selected. Regarding the relationship between a torso avatar and a head body avatar that have the same data format, information regarding the position where the head body avatar is connected in the torso avatar, information regarding the direction of the reference axis of the head body avatar at the time of connection, and Information regarding the size of the head element body avatar to be connected is defined. Using this information, the avatar compositing section 13 performs a whole-body avatar compositing process. The torso avatar may be an imitation of a realistic human body, an animal, an imaginary creature, or even an animated character. The shape of the torso avatar may be 8-headed, 2-headed, or any other shape, and may be nude or wearing clothes, shoes, accessories, etc.
 頭髪データベース11は、頭髪アバターについて記憶するためのものである。頭髪アバターは、頭部素体データベース2に記憶されている個々の頭部素体アバターの頭頂部付近の外表面形状と整合するよう形成されている。 The hair database 11 is for storing hair avatars. The hair avatar is formed to match the outer surface shape near the top of each head element avatar stored in the head element database 2.
 部品アバター選択部12は、胴体アバターと頭髪アバターを選択するためのものである。部品アバター選択部12による胴体アバターの選択アルゴリズムは、用途に応じて自動的に選択するものでもよいし、ユーザ(たとえば顔面画像の対象物である人物)の指示により選択するものでもよい。本実施の形態1における頭髪アバターの選択については、一方向(たとえば正面方向)に投影した頭髪領域の2次元形状に関して、対象物のものと形状が一致する頭髪アバターを選択するものとする。より具体的には、部品アバター選択部12は、頭髪アバターの2次元的なシルエットとユーザの頭髪の2次元的なシルエットをk-means法や主成分分析によりクラスタリングすることによって相互の類似度を算出し、ユーザの頭髪のシルエットと類似するシルエットを有する頭髪アバターを選択するものとする。 The part avatar selection section 12 is for selecting a torso avatar and a hair avatar. The selection algorithm of the torso avatar by the part avatar selection unit 12 may be one that is automatically selected depending on the purpose, or one that is selected based on instructions from the user (for example, the person who is the subject of the facial image). Regarding the selection of the hair avatar in the first embodiment, it is assumed that a hair avatar whose shape matches that of the object with respect to the two-dimensional shape of the hair region projected in one direction (for example, the front direction) is selected. More specifically, the component avatar selection unit 12 clusters the two-dimensional silhouette of the hair avatar and the two-dimensional silhouette of the user's hair using the k-means method or principal component analysis to determine their mutual similarity. A hair avatar having a silhouette similar to that of the user's hair is selected.
 アバター合成部13は、部品アバター選択部12によって選択された胴体アバターと頭髪アバターを、頭部アバター生成部9によって生成された頭部アバターと合成して一体的な全身アバターを生成するためのものである。 The avatar synthesis section 13 is for synthesizing the torso avatar and hair avatar selected by the parts avatar selection section 12 with the head avatar generated by the head avatar generation section 9 to generate an integrated whole body avatar. It is.
 アバター合成部13は、胴体アバターを頭部アバターに合成する際に両者の位置合わせを行う位置調整部29と、胴体アバターにて設定された接続時における頭部アバターの基準軸と頭部アバターの基準軸を同一方向となるよう調整する基準軸調整部30と、頭部アバターの大きさを調整するサイズ調整部31と、頭部アバターに胴体アバター及び頭髪アバターを合成する合成処理部32とを備える。 The avatar synthesis unit 13 includes a position adjustment unit 29 that aligns the torso avatar and the head avatar when composing them, and a position adjustment unit 29 that aligns the torso avatar with the head avatar, and a reference axis of the head avatar and the head avatar at the time of connection set in the torso avatar. A reference axis adjustment section 30 that adjusts the reference axes so that they are in the same direction, a size adjustment section 31 that adjusts the size of the head avatar, and a synthesis processing section 32 that synthesizes the head avatar with the torso avatar and the hair avatar. Be prepared.
 位置調整部29は、胴体アバターに対する頭部アバターの相対位置を、胴体アバターにおける接続箇所の位置に移動させるためのものである。あらかじめ胴体アバターには頭部アバターの接続箇所の位置情報が設定されており、位置調整部29は、かかる位置情報に基づき、胴体アバターの接続箇所と頭部アバターの接続箇所が一致するよう、相対位置を調整する。なお、胴体アバターにおける接続箇所は、通常の人物を模したアバターの場合は両肩の間の首に相当する場所となるものの、かかる場所に限定されるものではない。合成獣のような想像上の生物等を模したものであれば、そのデザインに合わせた場所に接続箇所を設けることが可能であり、たとえば右手の掌に接続箇所を設ける構成としてもよい。 The position adjustment unit 29 is for moving the relative position of the head avatar with respect to the torso avatar to the position of the connection point in the torso avatar. The positional information of the connecting point of the head avatar is set in the torso avatar in advance, and the position adjustment unit 29 adjusts the relative position so that the connecting point of the torso avatar and the connecting point of the head avatar match, based on this positional information. Adjust the position. Note that the connection point in the torso avatar is a place corresponding to the neck between both shoulders in the case of an avatar imitating a normal person, but is not limited to such a place. If it imitates an imaginary creature such as a synthetic beast, it is possible to provide a connection point at a location that matches the design; for example, a connection point may be provided in the palm of the right hand.
 位置調整部29は、頭部アバターと頭髪アバターの位置調整も行う。頭髪アバターは、頭部アバターのもととなる頭部素体アバターの頭頂部付近の外表面形状と整合するよう形成されており、位置関係についても、双方の頂点において対応関係が定められており、かかる対応関係を参照しつつ、位置調整部29は、頭髪アバターが頭部アバター上の適切な位置に配置されるよう位置調整を行う。 The position adjustment unit 29 also adjusts the positions of the head avatar and hair avatar. The hair avatar is formed to match the outer surface shape near the top of the head body avatar, which is the source of the head avatar, and the positional relationship is determined at the vertices of both. With reference to this correspondence, the position adjustment unit 29 performs position adjustment so that the hair avatar is placed at an appropriate position on the head avatar.
 基準軸調整部30は、胴体アバターにてあらかじめ設定された頭部アバター用の基準軸と、頭部アバターの基準軸を一致させるよう双方の 方向関係を調整するためのものである。基準軸調整部30による方向調整は、たとえば胴体アバター又は頭部アバターに対する回転処理によって行われる。 The reference axis adjustment unit 30 is for adjusting the directional relationship between the reference axis for the head avatar, which is set in advance for the torso avatar, and the reference axis for the head avatar, so that they match. The direction adjustment by the reference axis adjustment unit 30 is performed, for example, by rotation processing on the torso avatar or head avatar.
 サイズ調整部31は、頭部アバターの大きさについて、胴体アバターにてあらかじめ設定された大きさとなるよう調整を行うためのものである。胴体アバターにおいては、一体的な全体アバターが合成された際における頭部アバター部分の体積に関する情報が定められており、頭部アバターの体積がこれと異なる場合に、サイズ調整部31は、胴体アバターの設定情報と一致するよう頭部アバターに対し拡大処理又は縮小処理を行う。 The size adjustment unit 31 is for adjusting the size of the head avatar so that it becomes the size preset for the torso avatar. In the torso avatar, information regarding the volume of the head avatar part when an integrated whole avatar is synthesized is determined, and if the volume of the head avatar differs from this, the size adjustment unit 31 adjusts the volume of the torso avatar. The head avatar is enlarged or reduced so that it matches the setting information.
 合成処理部32は、位置調整部29により位置合わせが行われ、基準軸調整部30によって頭部アバターと胴体アバターの基準軸の方向が一致し、サイズ調整部31による頭部アバターのサイズ調整が完了した頭部アバターと、胴体アバター及び頭髪アバターを合成するためのものである。具体的には、合成処理部32は、頭部アバターと胴体アバターのそれぞれの接続箇所に形成されたメッシュループ(接続箇所の端部に位置する頂点同士で形成された閉曲線を指す)同士を結合することによって、表面形状部分において結合がなされる。より簡易に、あらかじめ対応関係が規定された頭部アバターの頂点と胴体アバターの頂点間に新たな線分を形成し、新たに形成された線分によって形成される面をポリゴンとした部分的なメッシュ構造を新たに構築することとしてもよい。また、対応関係が規定された頂点同士を1つの頂点に統合することによって、頭部アバターと胴体アバターを結合してもよい。頭部アバターと頭髪アバターの結合についても同様である。 In the synthesis processing section 32, the position adjustment section 29 performs alignment, the reference axis adjustment section 30 matches the directions of the reference axes of the head avatar and the body avatar, and the size adjustment section 31 adjusts the size of the head avatar. This is for compositing the completed head avatar, torso avatar, and hair avatar. Specifically, the synthesis processing unit 32 combines mesh loops (referring to closed curves formed by vertices located at the ends of the connection points) formed at the respective connection points of the head avatar and the body avatar. By doing so, a bond is formed in the surface shape portion. To make it easier, a new line segment is formed between the vertices of the head avatar and the vertices of the body avatar for which the correspondence relationship is defined in advance, and the surface formed by the newly formed line segment is made into a partial polygon. It is also possible to construct a new mesh structure. Further, the head avatar and the torso avatar may be combined by integrating vertices with defined correspondences into one vertex. The same applies to the combination of the head avatar and the hair avatar.
 また、合成処理部32は、頭部アバター、胴体アバターが3次元表面形状の情報だけでなく骨格情報も有する場合は、互いの骨格構造(ボーン)についても結合処理を行う。ボーンの結合処理については、頭部アバターに一般に備わるheadボーンあるいはこれに準ずるボーンと、胴体アバターに一般に備わるneckボーンあるいはこれに準ずるボーンとを、親子関係を設定する形式にて結合することが望ましい。 Furthermore, if the head avatar and torso avatar have not only three-dimensional surface shape information but also skeletal information, the compositing processing unit 32 also performs a merging process on their skeletal structures (bones). Regarding the bone joining process, it is desirable to connect the head bone or a similar bone that is generally provided in a head avatar and the neck bone or a similar bone that is generally provided in a torso avatar in a format that sets a parent-child relationship. .
 次に、図2を参照しつつ頭部形状生成部4の動作について説明する。まず、特徴点抽出部14によって入力された顔面画像から特徴点を抽出し(ステップS101)、位置関係導出部15によって特徴点の3次元的な位置関係を導出する(ステップS102)。その後、座標変換部16は、顔面画像から抽出された特徴点の位置情報と、頭部素体アバターの基本特徴点の位置情報とを、特定の特徴点(本実施の形態では鼻頭に対応する特徴点とする。)を原点とした座標系の位置座標に変換し(ステップS103)、顔面画像から抽出された各特徴点がそれぞれ対応する頭部素体アバターの基本特徴点の位置と整合するよう位置調整を行い(ステップS104)、特徴点抽出時に位置座標のせん断が生じた場合には修正を行う(ステップS105)。 Next, the operation of the head shape generation section 4 will be explained with reference to FIG. 2. First, the feature point extraction section 14 extracts feature points from the input facial image (step S101), and the positional relationship derivation section 15 derives the three-dimensional positional relationship of the feature points (step S102). Thereafter, the coordinate conversion unit 16 converts the position information of the feature points extracted from the facial image and the position information of the basic feature points of the head body avatar to a specific feature point (in this embodiment, a point corresponding to the tip of the nose). ) is converted into position coordinates of a coordinate system with the origin as the origin (step S103), and each feature point extracted from the facial image matches the position of the basic feature point of the corresponding head body avatar. The position is adjusted (step S104), and if shearing of position coordinates occurs during feature point extraction, correction is performed (step S105).
 その後、拡大縮小部18によって、顔面画像から抽出された特徴点に対し、当該特徴点によって形成される空間の広さが対応する頭部素体アバターの基本特徴点によって形成される空間と整合するよう、位置座標を変化させる(ステップS106)。そして、素体変形部19によって、特徴点によって形成される空間の広さが同程度となった状態で頭部素体アバターの基本特徴点の位置座標を、対応する顔面画像の特徴点の位置座標に移動させる処理を行い(ステップS107)、最後に頭部素体アバターの全頂点の位置座標について、座標変換部16によって元の座標系に戻す処理を行う(ステップS108)。以上で頭部形状生成部4の動作は終了し、対象物の顔面画像から導出される対象物の特徴を具備する対象物の頭部の3次元形状情報が完成する。 Thereafter, the scaling unit 18 matches the size of the space formed by the feature points extracted from the facial image with the space formed by the basic feature points of the corresponding head body avatar. Then, the position coordinates are changed (step S106). Then, the element body deforming unit 19 converts the position coordinates of the basic feature points of the head element avatar into the positions of the corresponding feature points of the face image in a state where the width of the space formed by the feature points is approximately the same. A process is performed to move the position coordinates to the coordinates (step S107), and finally, a process is performed to return the position coordinates of all vertices of the head element body avatar to the original coordinate system by the coordinate conversion unit 16 (step S108). The operation of the head shape generation unit 4 is thus completed, and the three-dimensional shape information of the head of the object, which has the characteristics of the object derived from the facial image of the object, is completed.
 次に、図3を参照しつつ部分表面画像選択部6及び合成表面画像生成部7の動作について説明する。まず、分類選択部21により分類項目を選択し(ステップS201)、選択した分類項目に属する部分表面画像の特徴点について、対象物の頭部の3次元表面形状において対応関係にある特徴点との位置関係について比較処理を行う(ステップS202)。 Next, the operations of the partial surface image selection section 6 and the composite surface image generation section 7 will be explained with reference to FIG. First, a classification item is selected by the classification selection unit 21 (step S201), and feature points of a partial surface image belonging to the selected classification item are compared with feature points that have a corresponding relationship in the three-dimensional surface shape of the object's head. A comparison process is performed regarding the positional relationship (step S202).
 その後、画像判定部23によって、対象物の頭部の3次元形状情報における特徴点と最も近接する特徴点を備えた部分表面画像が選択される(ステップS203)。選択された部分表面画像は、表面形状調整部26によって、その特徴点に対し対象物の頭部の3次元表面形状に適合するように移動させる処理が行われ(ステップS204)、部分表面画像は、画像合成部27により対象部の頭部の3次元表面形状における対応領域上に配置され(ステップS205)、必要に応じて補完処理が施された上で(ステップS206)、対象物の頭部の3次元表面形状上に選択された部分表面画像が各領域に配置された合成表面画像が生成される。 Thereafter, the image determination unit 23 selects a partial surface image that has the feature point closest to the feature point in the three-dimensional shape information of the object's head (step S203). The selected partial surface image is processed by the surface shape adjustment unit 26 to move its feature points so as to match the three-dimensional surface shape of the object's head (step S204), and the partial surface image is , is placed on the corresponding area in the three-dimensional surface shape of the head of the target part by the image synthesis unit 27 (step S205), and after performing complementation processing as necessary (step S206), the head of the target part is A composite surface image is generated in which the selected partial surface images are arranged in each region on the three-dimensional surface shape of the surface.
 次に、図4を参照しつつアバター合成部13の動作について説明する。まず、位置調整部29によって、頭部アバターに設定された接続箇所と、胴体アバターに設定された接続箇所の位置が一致するよう調整を行い(ステップS301)、頭部アバターと頭髪アバターの位置調整を行う(ステップS302)。その後、基準軸調整部30によって、あらかじめ胴体アバターに設定された頭部アバター用の基準軸の方向と、頭部アバターに設定された基準軸の方向とが一致するよう調整を行う(ステップS303)。さらに、サイズ調整部31によって、頭部アバターのサイズがあらかじめ胴体アバターにて設定されたサイズとなるよう拡大/縮小処理を行う(ステップS304)。最後に、合成処理部32によって頭部アバター、胴体アバター及び頭髪アバターを結合することで(ステップS305)、一体的な全身アバターが完成する。 Next, the operation of the avatar synthesis section 13 will be explained with reference to FIG. 4. First, the position adjustment unit 29 adjusts the positions of the connection points set for the head avatar and the connection points set for the torso avatar to match (step S301), and adjusts the positions of the head avatar and hair avatar. (Step S302). Thereafter, the reference axis adjustment unit 30 performs adjustment so that the direction of the reference axis for the head avatar set in advance for the torso avatar matches the direction of the reference axis set for the head avatar (step S303). . Further, the size adjustment unit 31 performs an enlargement/reduction process so that the size of the head avatar becomes the size set in advance for the body avatar (step S304). Finally, the head avatar, torso avatar, and hair avatar are combined by the synthesis processing unit 32 (step S305), thereby completing an integrated whole-body avatar.
 次に、実施の形態1にかかる3次元アバター生成装置の利点について説明する。まず、本実施の形態1にかかる3次元アバター生成装置は、対象物の顔面画像に基づき3次元アバターの頭部における3次元表面形状を生成する一方で、3次元表面形状上に配置する表面画像については、対象物の顔面画像そのものではなく、3次元表面形状に近接した形状からなる別途用意された部分表面画像を合成した合成表面画像を用いることとしている。すなわち、本実施の形態1にかかる3次元アバター生成装置は、対象物の形質的な特徴(目が離れている、口が大きい、鼻が高い、等)を反映しつつも具体的な色・模様において実物と異なる部分表面画像を使用しており、これによって、本人の特徴を具備しつつも本人の外観とは異なる表面画像を備えた3次元アバターを生成可能である。このような3次元アバターは、本人の外観を知る知人にとっては本人の特徴を備えた親しみやすい印象を与えられる一方で、第三者は本人の具体的な外観を推知できないため、不特定多数が参加するコンピュータゲームやSNS等にて使用した場合でも、本人のプライバシーを適切に保護できるという利点を有する。 Next, the advantages of the three-dimensional avatar generation device according to the first embodiment will be explained. First, the three-dimensional avatar generation device according to the first embodiment generates a three-dimensional surface shape of the head of a three-dimensional avatar based on a facial image of a target object, and a surface image to be placed on the three-dimensional surface shape. In this case, instead of using the facial image of the object itself, a composite surface image is used, which is obtained by combining separately prepared partial surface images having shapes close to the three-dimensional surface shape. That is, the three-dimensional avatar generation device according to the first embodiment reflects the physical characteristics of the object (separated eyes, large mouth, high nose, etc.) while also generating specific colors and A partial surface image that differs from the real thing in the pattern is used, thereby making it possible to generate a three-dimensional avatar that has the characteristics of the person but also has a surface image that differs from the appearance of the person. Such three-dimensional avatars give acquaintances who know the person's appearance a friendly impression with the person's characteristics, but third parties cannot guess the person's specific appearance, so they are used by an unspecified number of people. It has the advantage that the user's privacy can be appropriately protected even when used in participating computer games, SNS, etc.
 また、本実施の形態1にかかる3次元アバター生成装置は、所定の分類に従った多数の部分表面画像の中から、所定の分類のものを選択して合成表面画像を生成することとしている。かかる構成を採用することにより、頭部の表面画像において、視覚的に統一性のある3次元アバターを生成できるという利点を有する。また、分類項目を3次元アバターを使用するサービスの種別に応じて設定した場合には、ユーザは自らが利用するサービスに対応した3次元アバターを簡易に生成できるという利点も有する。 Further, the three-dimensional avatar generation device according to the first embodiment selects partial surface images of a predetermined classification from among a large number of partial surface images according to a predetermined classification to generate a composite surface image. By employing such a configuration, there is an advantage that a three-dimensional avatar with visual uniformity can be generated in the surface image of the head. Furthermore, when the classification items are set according to the type of service that uses the three-dimensional avatar, there is an advantage that the user can easily generate a three-dimensional avatar that corresponds to the service that he/she uses.
(実施の形態2)
 次に、実施の形態2にかかる3次元アバター生成装置について説明する。実施の形態2及びその変形例において、実施の形態1と同一名称かつ同一符号を付した構成要素に関しては、特に言及しない限り、実施の形態1における構成要素と同一の機能を発揮するものとする。
(Embodiment 2)
Next, a three-dimensional avatar generation device according to a second embodiment will be described. In Embodiment 2 and its modifications, components with the same names and the same symbols as in Embodiment 1 shall exhibit the same functions as the components in Embodiment 1, unless otherwise specified. .
 図5に示すとおり、本実施の形態2にかかる3次元アバター生成装置は、実施の形態1にて示した構成に加え、生成した3次元アバターと対応関係にある非代替性トークン及び部分表面画像データベースに記録される個々の部分表面画像と対応関係にある非代替性トークンを生成するトークン生成部33と、生成された3次元アバターにおいて、元の対象者の顔面画像に対する部分表面画像の合成比率を導出する合成比導出部34と、導出された合成比率を含む3次元アバターを構成する部分表面画像に関する情報を出力する情報出力部35と、合成比率に基づき3次元アバターによって得られる収益の分配比率を算出する分配収益算出部36とを備える。 As shown in FIG. 5, the three-dimensional avatar generation device according to the second embodiment has, in addition to the configuration shown in the first embodiment, a non-fungible token and a partial surface image that correspond to the generated three-dimensional avatar. The token generation unit 33 generates non-fungible tokens that correspond to individual partial surface images recorded in the database, and the composition ratio of the partial surface image to the original target person's facial image in the generated three-dimensional avatar. a combination ratio derivation unit 34 that derives the combination ratio, an information output unit 35 that outputs information regarding the partial surface images constituting the three-dimensional avatar including the derived combination ratio, and distribution of profits obtained by the three-dimensional avatar based on the combination ratio. It also includes a distribution revenue calculation unit 36 that calculates the ratio.
 トークン生成部33は、3次元アバター及び個々の部分表面画像に対応した非代替性トークンを生成するためのものである。「非代替性トークン」とは、いわゆるNFT(Non-Fangible-Token)、すなわち固有のデータを備えることで他のトークンと代替不能な性質を有するトークンのことであり、たとえばEthereum(登録商標)の規格であるERC721またはその他の所定の規格に基づいて発行される。発行された非代替性トークンの取引履歴等は、それぞれのトークンに対応して分散型ネットワークに保存された分散型台帳に記録される構成となっている。 The token generation unit 33 is for generating non-fungible tokens corresponding to three-dimensional avatars and individual partial surface images. A “non-fungible token” is a so-called NFT (Non-Fungible-Token), that is, a token that has unique data and cannot be replaced with other tokens, such as Ethereum (registered trademark). It is published based on the standard ERC721 or other predetermined standards. The transaction history of issued non-fungible tokens is recorded in a distributed ledger stored in a distributed network corresponding to each token.
 本実施の形態2では、分散型ネットワークを利用した分散型台帳の管理技術として、いわゆる「ブロックチェーン技術」を利用する。「ブロックチェーン技術」とは、分散型ネットワークを構成する複数のコンピュータ間において暗号技術を活用しつつデータ同期を行う技術である。「分散型台帳」は、具体的には各ブロックが複数のコンピュータ間で合意された記録の集合体と、他のブロックと接続させるための情報(前のブロックに関する情報)により構成されるものであり、当該各ブロックが複数連結されることによって分散型台帳は構成される。ブロックチェーン技術を用いた分散型台帳の管理においては、複数のコンピュータの一部でデータ改ざんが行われても他のコンピュータとの間で多数決によって正しいデータが選択されるため、台帳のデータの破壊・改ざんが極めて難しいという特徴を有している。もっとも、本実施の形態2における分散型台帳の管理技術についてブロックチェーン技術に限定する必然性はなく、同様の機能を果たすものであれば別の技術を用いることとしてもよい。 In the second embodiment, so-called "block chain technology" is used as a distributed ledger management technology using a distributed network. "Blockchain technology" is a technology that uses cryptographic technology to synchronize data between multiple computers that make up a distributed network. Specifically, a "distributed ledger" is one in which each block consists of a collection of records agreed upon among multiple computers and information for connecting with other blocks (information about the previous block). A distributed ledger is constructed by connecting multiple blocks. In the management of distributed ledgers using blockchain technology, even if data is tampered with on some of multiple computers, the correct data will be selected by majority vote among the other computers, so the data in the ledger will be destroyed.・It has the characteristic that it is extremely difficult to tamper with it. However, the distributed ledger management technology in Embodiment 2 is not necessarily limited to blockchain technology, and other technologies may be used as long as they perform similar functions.
 各トークンに関する分散型台帳には、トークンと対応関係にある3次元アバター又は部分表面画像の保有名義に関する取引履歴が記録される。かかる構成を採用することにより、本実施の形態2においては、各トークンに関する分散型台帳に記録された情報を参照することで、各トークンに対応した3次元アバター及び部分表面画像の現時点における保有名義を把握することが可能である。なお、特許請求の範囲では3次元アバターに対応した分散型台帳を第1の分散型台帳とし、第1の分散型台帳が保存される分散型ネットワークを第1の分散型ネットワークとする一方で、部分表面画像に対応した分散型台帳を第2の分散型台帳とし、第2の分散型台帳が保存される分散型ネットワークを第2の分散型ネットワークと称するところ、少なくとも第1、第2の分散型ネットワークは同一のネットワークによって構成することが可能である。また、3次元アバターの合成表面画像が複数の部分表面画像によって形成される場合、第2の分散型台帳は、部分表面画像の個数に応じて複数存在する。 The distributed ledger for each token records the transaction history regarding the ownership name of the three-dimensional avatar or partial surface image that corresponds to the token. By adopting such a configuration, in the second embodiment, by referring to the information recorded in the distributed ledger regarding each token, the current holding name of the three-dimensional avatar and partial surface image corresponding to each token can be determined. It is possible to understand the Note that in the claims, a distributed ledger corresponding to a three-dimensional avatar is referred to as a first distributed ledger, and a distributed network in which the first distributed ledger is stored is referred to as a first distributed network; The distributed ledger corresponding to the partial surface image is referred to as a second distributed ledger, and the distributed network in which the second distributed ledger is stored is referred to as a second distributed network. type networks can be composed of the same network. Further, when the composite surface image of the three-dimensional avatar is formed by a plurality of partial surface images, a plurality of second distributed ledgers exist according to the number of partial surface images.
 また、各トークンのうち3次元アバターと対応関係にあるものについては、分散型台帳において、上述した保有名義に関する取引履歴に加え、3次元アバターに使用された部分表面画像に関する情報が記録される。具体的には、どの部分表面画像がどの程度使用されているかの情報に加え、後述する合成比導出部34によって導出された、当該3次元アバターにおける対象物の頭部の3次元表面形状全体における個々の部分表面画像の合成比率が記録される。 Furthermore, for each token that has a corresponding relationship with a 3D avatar, in addition to the transaction history regarding the ownership name mentioned above, information regarding the partial surface image used for the 3D avatar is recorded in the distributed ledger. Specifically, in addition to the information on which partial surface images are used and to what extent, information on the overall three-dimensional surface shape of the object's head in the three-dimensional avatar, which is derived by the synthesis ratio deriving unit 34 described later, is provided. The composite ratio of the individual partial surface images is recorded.
 各トークンと3次元アバター及び部分表面画像との対応関係については、各トークンの識別子と、対応関係にある3次元アバター及び部分表面画像の識別情報とを1対1で紐づける形式とする。より直接的に、トークンの識別子と3次元アバター等の識別情報を同一内容としてもよいが、それぞれ異なる文字列等によって構成したものについて対応関係を設定する態様でも問題ない。 Regarding the correspondence between each token and the three-dimensional avatar and partial surface image, the identifier of each token is linked one-to-one with the identification information of the corresponding three-dimensional avatar and partial surface image. More directly, the identifier of the token and the identification information of the three-dimensional avatar or the like may have the same content, but there is no problem in an embodiment in which a correspondence relationship is set between the identifiers of the token and the identification information of the three-dimensional avatar or the like.
 合成比導出部34は、アバター合成部13によって合成された3次元アバターについて、頭部の3次元表面形状における部分表面画像の合成比率を導出するためのものである。本実施の形態2においては、合成比導出部34は、3次元アバターのうち頭髪部を除く頭部表面について、全表面積のうち特定の部分表面画像にて置換された領域の面積の割合により導出することとしている。また、合成比率についてはすべての部分表面画像について個別に導出することとしてもよいが、本実施の形態2においては、同一の保有名義に属する部分表面画像ごとにまとめて導出することとする。 The synthesis ratio derivation unit 34 is for deriving the synthesis ratio of partial surface images in the three-dimensional surface shape of the head for the three-dimensional avatar synthesized by the avatar synthesis unit 13. In the second embodiment, the composition ratio derivation unit 34 derives the composition ratio based on the ratio of the area of the area replaced by a specific partial surface image to the total surface area of the head surface excluding the hair part of the three-dimensional avatar. I am planning to do so. Furthermore, although the synthesis ratio may be derived individually for all partial surface images, in the second embodiment, it is derived collectively for each partial surface image belonging to the same ownership name.
 情報出力部35は、合成比導出部34によって導出された合成比率と合成に使用された部分表面画像の識別情報を含む、3次元アバターに使用された部分表面画像に関する情報を分散型ネットワークに対し出力するためのものである。具体的には、情報出力部35は、3次元アバターと対応関係にあるトークンに関する分散型ネットワークと直接的又は間接的に接続されており、当該分散型ネットワークに対し、合成比率等の情報を出力する機能を有する。かかる機能を実現できるものであれば情報出力部35の具体的構成は任意のものとすることが可能であるが、本実施の形態2においては、合成比率等に関する情報を内容とするトランザクション及び秘密鍵を用いて生成した電子署名を出力する態様とする。これらが出力されることにより、分散型ネットワークにて電子署名が復号され真正なトランザクションであることが確認されると、トランザクションの内容が分散型台帳に記録される。なお、各トークンに関する保有名義の変更に関する情報について、本実施の形態2では既存の外部システムを用いて分散型台帳に記録する構成を採用するものの、情報出力部35にて保有名義の変更を内容とするトランザクション及び秘密鍵を用いた電子署名により構成される保有名義変更情報を、それぞれのトークンに関する分散型台帳を保存する分散型ネットワークに対し出力することとしてもよい。 The information output unit 35 sends information regarding the partial surface image used for the three-dimensional avatar, including the combination ratio derived by the combination ratio derivation unit 34 and identification information of the partial surface image used for combination, to the distributed network. It is for output. Specifically, the information output unit 35 is directly or indirectly connected to a decentralized network regarding tokens that correspond to three-dimensional avatars, and outputs information such as the synthesis ratio to the decentralized network. It has the function of Although the specific configuration of the information output unit 35 can be arbitrary as long as it can realize such a function, in the second embodiment, the information output unit 35 is configured to handle transactions and secrets whose contents include information regarding the composition ratio, etc. The mode is such that a digital signature generated using the key is output. By outputting these, the electronic signature is decrypted in the distributed network and when it is confirmed that the transaction is genuine, the contents of the transaction are recorded in the distributed ledger. Note that although the second embodiment adopts a configuration in which information regarding the change in the holding name of each token is recorded in a distributed ledger using an existing external system, the information output unit 35 can record the change in the holding name in the information output unit 35. It is also possible to output ownership change information consisting of a transaction and an electronic signature using a private key to a distributed network that stores a distributed ledger regarding each token.
 分配収益算出部36は、3次元アバターを商用ゲームソフトのキャラクターとして使用させた場合等に発生する収益について、3次元アバターの保有名義を有する者及び当該3次元アバターを構成する部分表面画像の保有名義を有する者に対する分配比率を算出するためのものである。分配収益算出部36は、分散型ネットワークと直接的又は間接的に接続されており、3次元アバター及び個々の部分表面画像に対応した非代替性トークンに関する分散型台帳に記録された取引履歴を参照して、現時点における3次元アバター等の保有名義を有する者を把握する。 Regarding the revenue generated when a 3D avatar is used as a character in a commercial game software, the distribution revenue calculation unit 36 calculates the amount of revenue generated when a 3D avatar is used as a character in commercial game software, and is calculated by the person who owns the 3D avatar and the partial surface image that makes up the 3D avatar. This is to calculate the distribution ratio to the person who holds the name. The distribution revenue calculation unit 36 is directly or indirectly connected to the distributed network, and refers to the transaction history recorded in the distributed ledger regarding non-fungible tokens corresponding to the three-dimensional avatar and each partial surface image. Then, the person who currently owns the three-dimensional avatar, etc. is determined.
 その上で、分配収益算出部36は、合成比導出部34にて導出された合成比率、使用された部分表面画像の識別情報、3次元アバター及び部分表面画像の保有名義に基づき、3次元アバターについて生じた収益の分配比率及び分配先について算出することとしている。分配比率の具体的な算出方法としては、最も簡易な構成としては全収益に対し合成比率を乗算することにより導出することが考えられるところ、合成比率を用いて算出する方法であれば他の方法でもよい。たとえば、分配収益算出部36において、頭部表面における領域ごとに重要度に応じた重みづけを行い、たとえば目を含む部分表面画像に関して、合成比率を乗算した値に1より大きい値からなる所定の係数を乗算した結果を分配比率とすることとしてもよい。 Then, the distributed revenue calculation unit 36 calculates the 3D avatar based on the combination ratio derived by the combination ratio derivation unit 34, the identification information of the used partial surface image, the ownership name of the 3D avatar and the partial surface image, The distribution ratio and recipients of the profits generated will be calculated. As for the specific method of calculating the distribution ratio, the simplest structure is to derive it by multiplying the total revenue by the composite ratio, but other methods are possible as long as it is calculated using the composite ratio. But that's fine. For example, the distributed revenue calculation unit 36 weights each region on the head surface according to its importance, and for example, for a partial surface image including the eyes, a predetermined value that is greater than 1 is multiplied by the synthesis ratio. The distribution ratio may be the result of multiplying the coefficients.
 次に、本実施の形態2にかかる3次元アバター生成装置の利点について説明する。本実施の形態2では、合成比導出部34にて合成比率を導出し、導出した合成比率に基づき分配収益算出部36にて収益分配比率を算出する構成を採用することにより、3次元アバターによって収益を得た場合において、簡易かつ公正な収益分配を実現できるという利点を有する。特に、合成比率の導出は一定のアルゴリズム(たとえば本実施の形態2では3次元アバターのうち頭髪部を除く頭部表面における、全表面積に対する部分表面画像の占める面積の割合を算出することによって合成比率は導出される。)に従って行われ、収益分配比率の算出も同様であることから、一連の作業において恣意的判断がなされることなく公正な収益分配を、簡易に実現することが可能である。 Next, the advantages of the three-dimensional avatar generation device according to the second embodiment will be explained. In the second embodiment, by adopting a configuration in which the composition ratio derivation unit 34 derives the composition ratio, and the distribution revenue calculation unit 36 calculates the revenue distribution ratio based on the derived composition ratio, the three-dimensional avatar It has the advantage of being able to easily and fairly distribute profits when profits are earned. In particular, the synthesis ratio is derived using a certain algorithm (for example, in the second embodiment, the synthesis ratio is calculated by calculating the ratio of the area occupied by the partial surface image to the total surface area of the head surface excluding the hair part of the three-dimensional avatar). ), and the calculation of the revenue distribution ratio is also the same, so it is possible to easily realize fair revenue distribution without making arbitrary judgments in the series of operations.
 また、本実施の形態2においては、3次元アバター及び部分表面画像のそれぞれに対応した非代替性トークンを生成し、各トークンに対応して設けられた分散型台帳の記録内容に基づき分配収益算出部36が収益分配比率を採用する構成としている。非代替性トークンに関する分散型ネットワークを利用した分散型台帳にて情報を管理した場合、情報の破壊・改ざんの可能性が極めて低いことから、正確な合成比率及び収益分配時点における正確な保有名義の把握が可能となり、収益分配比率及び収益の分配先について過誤が生じることを大幅に抑制できるという利点が生じる。 In addition, in the second embodiment, non-fungible tokens are generated corresponding to each of the three-dimensional avatar and the partial surface image, and distribution revenue is calculated based on the recorded contents of the distributed ledger provided corresponding to each token. The structure is such that the division 36 adopts the revenue distribution ratio. When information is managed on a distributed ledger using a decentralized network related to non-fungible tokens, the possibility of information being destroyed or tampered with is extremely low. This has the advantage that errors in the revenue distribution ratio and revenue distribution destination can be significantly suppressed.
(変形例)
 次に、実施の形態2の変形例について説明する。本変形例にかかる3次元アバター生成装置では、実施の形態2の構成に加えて、選択した部分表面画像を単に配置することで合成表面画像を生成するのではなく、選択した部分表面画像と、対象物の顔面画像とを合成することによって合成表面画像を生成する構成を有する。具体的には、図6に示すとおり本変形例にかかる3次元アバター生成装置は、顔面画像入力部1を通じて入力された2次元の顔面画像を3次元化する3次元画像生成部37と、3次元画像化した顔面画像及び部分表面画像選択部6によって選択された部分表面画像とを重ね合わせる合成表面画像生成部38と、合成表面画像生成部38による重ね合わせ態様等を考慮した合成比率を導出する合成比導出部39とを備える。
(Modified example)
Next, a modification of the second embodiment will be described. In addition to the configuration of Embodiment 2, the three-dimensional avatar generation device according to this modification does not generate a composite surface image by simply arranging the selected partial surface images; It has a configuration that generates a composite surface image by combining the facial image of the target object. Specifically, as shown in FIG. 6, the 3D avatar generation device according to this modification includes a 3D image generation unit 37 that converts a 2D facial image input through the facial image input unit 1 into 3D; A synthetic surface image generation unit 38 superimposes the facial image converted into a dimensional image and the partial surface image selected by the partial surface image selection unit 6, and a synthesis ratio is derived in consideration of the superposition mode etc. by the synthetic surface image generation unit 38. A combination ratio deriving unit 39 is provided.
 3次元画像生成部37は、顔面画像入力部1を通じて入力された2次元的な対象物の顔面画像を3次元画像化するためのものである。具体的には、3次元画像生成部37は、顔面画像の2次元的な特徴点に関して、位置関係導出部15によって導出された3次元的な位置関係に対応するよう移動させるとともに、特徴点及びその周囲の領域における色・模様等についても特徴点の移動に応じて位置を移動させることによって、顔面画像を3次元化する機能を有する。なお、2次元の顔面画像にて表現されていない部分(たとえば側面・背面の皮膚等)については空白のままとしてもよいし、補完処理部28と同様の態様にて補完処理を行うこととしてもよい。 The three-dimensional image generation unit 37 is for converting the facial image of the two-dimensional object input through the facial image input unit 1 into a three-dimensional image. Specifically, the three-dimensional image generation unit 37 moves the two-dimensional feature points of the facial image so as to correspond to the three-dimensional positional relationship derived by the positional relationship derivation unit 15, and also moves the two-dimensional feature points of the facial image to correspond to the three-dimensional positional relationship derived by the positional relationship derivation unit 15. It has a function of making the facial image three-dimensional by moving the position of the color, pattern, etc. in the surrounding area according to the movement of the feature point. Note that parts that are not represented in the two-dimensional facial image (for example, skin on the sides and back) may be left blank, or may be supplemented in the same manner as the complementation processing section 28. good.
 合成表面画像生成部38は、部分表面画像選択部6によって選択された部分表面画像と、3次元画像生成部37によって3次元化された顔面画像とを合成することによって、合成表面画像を生成するためのものである。具体的には、合成表面画像生成部38は、実施の形態2における補完処理部28に加え、部分表面画像のみならず3次元化した顔面画像の形状をも変化させる表面形状調整部40と、表面形状の調整処理を経た部分表面画像及び顔面画像を所定の混合比率にて重ね合わせることで両者を合成する画像合成部41とを備える。 The composite surface image generation unit 38 generates a composite surface image by combining the partial surface image selected by the partial surface image selection unit 6 and the facial image made three-dimensional by the three-dimensional image generation unit 37. It is for. Specifically, in addition to the complementary processing unit 28 in the second embodiment, the composite surface image generation unit 38 includes a surface shape adjustment unit 40 that changes the shape of not only the partial surface image but also the three-dimensional facial image. The apparatus includes an image synthesis section 41 that synthesizes the partial surface image and the facial image that have undergone surface shape adjustment processing by superimposing them at a predetermined mixing ratio.
 表面形状調整部40は、実施の形態1、2における表面形状調整部26のように部分表面画像の表面形状のみを調整するのではなく、顔面画像の表面形状をも調整する機能を有する。具体的には、表面形状調整部40は、部分表面画像上の所定の特徴点と、顔面画像上のこれに対応する特徴点の位置が相違する場合に、いずれか一方の位置と一致するよう他方の位置を調整することに加え、双方の特徴点の位置を移動させることにより位置を一致させる機能を有する。表面形状調整部40における双方の特徴点の移動量の決定については、ユーザの選択によるものとしてもよいし、深層学習の結果等に基づきより自然に、あるいはより特徴的な形状となるよう双方の特徴点の移動量について決定することとしてもよい。また、双方の特徴点の移動量については、全ての部分表面画像及び顔面画像において一律の値としてもよいし、部分表面画像ごとに異なる値としてもよく、さらには部分表面画像における特徴点ごとに異なる値としてもよい。なお、双方の特徴点の移動量の決定内容には部分表面画像の表面形状の特徴点のみを移動させる場合も含まれ、この場合における表面形状調整部40は、実施の形態1、2における表面形状調整部26と同様の処理を行うこととなる。 The surface shape adjustment section 40 has a function of adjusting not only the surface shape of the partial surface image like the surface shape adjustment section 26 in the first and second embodiments, but also the surface shape of the facial image. Specifically, when the positions of a predetermined feature point on the partial surface image and the corresponding feature point on the face image are different, the surface shape adjustment unit 40 adjusts the feature point so that it matches the position of one of them. In addition to adjusting the position of the other, it has a function of moving the positions of both feature points to match their positions. The amount of movement of both feature points in the surface shape adjustment unit 40 may be determined by the user's selection, or the amount of movement of both feature points may be determined based on the results of deep learning etc. so that the shape becomes more natural or more characteristic. The amount of movement of the feature point may also be determined. Furthermore, the amount of movement of both feature points may be set to a uniform value for all partial surface images and facial images, or may be set to a different value for each partial surface image, or may be set to a different value for each feature point in the partial surface image. It may be a different value. Note that the content of determining the amount of movement of both feature points includes a case where only the feature points of the surface shape of the partial surface image are moved, and in this case, the surface shape adjustment unit 40 The same processing as that of the shape adjustment section 26 will be performed.
 表面形状調整部40は、顔面画像上の特徴点の位置を移動させた場合には、位置の変動情報(たとえば、移動方向及び移動距離)について頭部形状生成部4及び合成比導出部39に対し出力する。頭部形状生成部4は、当該変動情報に基づき調整後の特徴点の位置と一致するように、対象物の頭部の3次元形状情報における特徴点の位置を調整する。合成比導出部39は、当該変動情報をも参照しつつ合成比率を導出する。 When the surface shape adjustment unit 40 moves the position of the feature point on the facial image, the surface shape adjustment unit 40 provides the head shape generation unit 4 and the synthesis ratio derivation unit 39 with respect to the position change information (for example, movement direction and movement distance). Output against. The head shape generation unit 4 adjusts the position of the feature point in the three-dimensional shape information of the object's head so that it matches the position of the adjusted feature point based on the variation information. The combination ratio derivation unit 39 derives the combination ratio while also referring to the fluctuation information.
 画像合成部41は、実施の形態1、2における画像合成部27のように部分表面画像を対象物の3次元表面形状上に単に配置するのではなく、部分表面画像と、当該部分表面画像の領域に対応した顔面画像(3次元画像生成部37によって3次元化したもの)を所定の混合比率にて重ね合わせた状態で配置する機能を有する。具体的には、画像合成部41は、表面形状調整部40による特徴点の位置調整等により同一形状となった部分表面画像と顔面画像を所定の混合比率(たとえば部分表面画像を8、顔面画像を2)にて重ね合わせた上で、対象物の頭部の3次元表面形状上の対応領域上に配置する機能を有する。なお、画像合成部41は、部分表面画像と顔面画像の混合比率を10対0として画像合成を行うことも可能であるところ、この場合における画像合成部41は、実施の形態1、2における画像合成部27と同様の処理を行うこととなる。 The image synthesis unit 41 does not simply arrange the partial surface image on the three-dimensional surface shape of the object like the image synthesis unit 27 in the first and second embodiments, but instead arranges the partial surface image and the partial surface image. It has a function of arranging facial images (three-dimensionalized by the three-dimensional image generation unit 37) corresponding to the area in a superimposed state at a predetermined mixing ratio. Specifically, the image synthesis unit 41 mixes the partial surface image and the facial image, which have the same shape by adjusting the position of feature points by the surface shape adjustment unit 40, at a predetermined mixing ratio (for example, 8 for the partial surface image, 8 for the facial image, etc.). It has a function of superimposing the images in step 2) and placing them on the corresponding areas on the three-dimensional surface shape of the object's head. Note that the image synthesis unit 41 can also perform image synthesis with a mixing ratio of the partial surface image and the facial image of 10:0; however, in this case, the image synthesis unit 41 may The same processing as the combining section 27 will be performed.
 画像合成部41は、混合比率に関する情報を合成比導出部39に対し出力する。合成比導出部39は、当該混合比率に関する情報を考慮しつつ、合成比率を導出する。 The image synthesis section 41 outputs information regarding the mixing ratio to the synthesis ratio derivation section 39. The combination ratio derivation unit 39 derives the combination ratio while considering the information regarding the mixture ratio.
 合成比導出部39は、実施の形態1、2における合成比導出部34と同様に、生成された3次元アバターの頭部表面における部分表面画像の合成比率を導出するためのものである。本変形例においては、単に部分表面画像を配置する場合のみならず、部分表面画像の形状にあわせて特徴点を移動することにより対象者の顔面画像及び3次元表面形状を変形し、また、部分表面画像と顔面画像を所定の混合比率にて重ね合わせて表示する構成を採用することから、合成比導出部39は、顔面画像及び3次元表面形状の変形程度及び部分表面画像と顔面画像の混合比率をも参照した上で合成比率を導出することとしている。具体的には、合成比導出部39は、たとえば、3次元アバターのうち頭髪部を除く頭部表面について、全表面積のうち特定の部分表面画像にて置換された領域の面積の割合を導出した後、当該部分表面画像に関する混合比率を乗算する。たとえば混合比率が部分表面画像:顔面画像=9:1であれば0.9を乗算し、部分表面画像:顔面画像=5:5の場合には0.5を乗算する。その後、合成比導出部38は、対象となる部分表面画像の領域における3次元表面形状の変形程度、たとえば当該領域内における特徴点の平均移動距離に所定の係数を乗算したものを乗算することで、本変形例における合成比率を導出する。導出された合成比率は、実施の形態2と同様に分散型ネットワークに対し出力され、また、分配収益算出部36により合成比率に基づく収益の分配比率が算出される。 Similar to the synthesis ratio derivation unit 34 in the first and second embodiments, the synthesis ratio derivation unit 39 is for deriving the synthesis ratio of partial surface images on the head surface of the generated three-dimensional avatar. In this modification, the facial image and three-dimensional surface shape of the subject are transformed not only by simply arranging the partial surface image, but also by moving the feature points according to the shape of the partial surface image. Since a configuration is adopted in which a surface image and a facial image are superimposed and displayed at a predetermined mixing ratio, the synthesis ratio deriving unit 39 calculates the degree of deformation of the facial image and three-dimensional surface shape and the mixture of the partial surface image and the facial image. The composite ratio is derived by also referring to the ratio. Specifically, the synthesis ratio deriving unit 39 derives, for example, the ratio of the area of the area replaced by the specific partial surface image to the total surface area of the head surface excluding the hair part of the three-dimensional avatar. After that, the mixture ratio for the partial surface image is multiplied. For example, if the mixing ratio is partial surface image:facial image=9:1, it is multiplied by 0.9, and when partial surface image:facial image=5:5, it is multiplied by 0.5. After that, the combination ratio deriving unit 38 determines the degree of deformation of the three-dimensional surface shape in the region of the target partial surface image, for example, by multiplying the average moving distance of the feature points in the region by a predetermined coefficient. , derive the synthesis ratio in this modification. The derived combination ratio is output to the distributed network as in the second embodiment, and the distribution revenue calculation unit 36 calculates the revenue distribution ratio based on the combination ratio.
 本変形例のような構成を採用することにより、単に部分表面画像を配置するのみならず、必要に応じて対象物の実際の顔面画像の形状を調整したり、顔面画像と部分表面画像を混合することが可能となるため、多種多様な合成表面画像を生成できるという利点が生じる。たとえば、顔面画像の形状を部分表面画像と一致するよう変形した上で、部分表面画像の混合比率を0とすることにより、顔面画像の形状を部分表面画像の形状と一致させつつも、色・模様については顔面画像のものが維持された合成表面画像を生成することが可能である。また、合成比導出部39が面積比のみならず対象物の頭部の表面形状における特徴点の移動の程度及び部分表面画像との混合比率をも参照して合成比率を導出することにより、本変形例は、多種多様な合成表面画像において、部分表面画像の貢献度を適切に評価した合成比率の導出が可能であり、適切かつ公正な収益分配比率を算出できるという利点も有する。 By adopting a configuration like this modified example, it is possible to not only simply arrange partial surface images, but also adjust the shape of the actual facial image of the object as necessary, or mix facial images and partial surface images. This has the advantage that a wide variety of composite surface images can be generated. For example, by deforming the shape of the facial image to match that of the partial surface image and then setting the mixing ratio of the partial surface image to 0, the shape of the facial image can be made to match the shape of the partial surface image, but the color and It is possible to generate a composite surface image in which the pattern of the facial image is maintained. In addition, the synthesis ratio deriving unit 39 derives the synthesis ratio by referring not only to the area ratio but also to the degree of movement of feature points in the surface shape of the object's head and the mixing ratio with the partial surface image. The modified example also has the advantage that it is possible to derive a synthesis ratio that appropriately evaluates the degree of contribution of a partial surface image in a wide variety of synthetic surface images, and that it is possible to calculate an appropriate and fair revenue distribution ratio.
 以上、実施の形態1、2にわたり本発明の内容について説明したが、もとより本発明の技術的範囲は実施の形態に記載した具体的構成に限定して解釈されるべきではなく、本発明の機能を実現できるものであれば、上記実施の形態に対する様々な変形例、応用例についても、本発明の技術的範囲に属することはもちろんである。 Although the content of the present invention has been described above in Embodiments 1 and 2, the technical scope of the present invention should not be construed as being limited to the specific configuration described in the embodiments, and the functions of the present invention It goes without saying that various modifications and applications of the above-described embodiments fall within the technical scope of the present invention, as long as they can be realized.
 まず、実施の形態1、2及び変形例では、顔面画像入力部1を通じて入力する対象物の顔面画像を2次元画像としているところ、かかる態様に限定する必要はなく、たとえば3Dスキャナによって撮影された3次元的な顔面画像を用いることとしてもよい。3次元的な顔面画像を使用した場合には、位置関係導出部15の処理及び3次元画像生成部37による処理を省略することが可能となり、より簡易な構成にて3次元アバター生成装置を実現できるという利点も生じる。 First, in the first and second embodiments and the modified examples, the facial image of the object inputted through the facial image input section 1 is a two-dimensional image, but there is no need to limit it to such an aspect. A three-dimensional facial image may also be used. When a three-dimensional facial image is used, it becomes possible to omit the processing by the positional relationship derivation unit 15 and the processing by the three-dimensional image generation unit 37, and a three-dimensional avatar generation device is realized with a simpler configuration. There is also the advantage of being able to do so.
 また、位置調整等に用いる特徴点の設定にあたっては、より高精細な3次元アバターを生成する観点から多数設定することとしてもよいが、処理負担の軽減等を理由として表面形状情報に含まれる複数の頂点のごく限られた一部のみとしてもよい。同様の理由により、そもそも表面形状情報に含まれる頂点の数を削減することとしてもよい。さらには、特徴点の位置座標については頭部素体において対応する基本特徴点からの相対的な位置情報を用いるのではなく、頭部素体とは別の絶対的な座標系を用いて、必要に応じて適宜座標変換する構成としてもよい。 In addition, when setting feature points used for position adjustment, etc., it is possible to set a large number of feature points from the perspective of generating a higher-definition three-dimensional avatar. It may be only a limited part of the vertices of . For the same reason, the number of vertices included in the surface shape information may be reduced in the first place. Furthermore, for the position coordinates of feature points, instead of using relative position information from the corresponding basic feature points in the head element, an absolute coordinate system separate from the head element is used. It may also be configured to perform coordinate transformation as necessary.
 また、実施の形態1では頭部素体アバターとして頭部のみからなるアバターを使用しているところ、頭部のみならず胴体部分も含むアバターを頭部素体アバターとして用いることとしてもよい。この場合、アバター合成部13によるアバター合成を行うことなく、顔面画像に基づき対象物の特徴を備えた全身アバターを生成することが可能である。また、頭部のみからなる頭部素体アバターを使用した場合に、胴体アバター等と結合させることなく、頭部アバターのみで独立したアバターとなるよう構成することとしてもよい。さらには頭部アバターについて、頭髪を除去したスキンヘッド状に形成するのではなく、頭髪と一体的に形成した頭部アバターを生成することとしてもよい。 Furthermore, in the first embodiment, an avatar consisting only of the head is used as the head body avatar, but an avatar including not only the head but also a torso portion may be used as the head body avatar. In this case, it is possible to generate a full-body avatar having the characteristics of the object based on the facial image without performing avatar synthesis by the avatar synthesis unit 13. Furthermore, when a head body avatar consisting only of a head is used, it may be configured such that the head avatar becomes an independent avatar without being combined with a body avatar or the like. Furthermore, instead of forming the head avatar in the shape of a skin head with the hair removed, a head avatar formed integrally with the hair may be generated.
 また、特許請求の範囲の移動手段について、座標変換部16のような態様に限定して解釈する必要はない。顔面画像の特徴点の3次元的な位置関係について、頭部素体アバターの頂点と同じ座標系上で導出することとして、座標変換を伴うことなく、同一座標系上で特徴点及び/又は頭部素体アバターの頂点群を移動させることとしてもよい。また、顔面画像の特徴点、頭部素体アバターの基本特徴点ないし頂点群の移動・回転等に関しては、いずれか一方のみを移動等する場合のみならず、双方を移動等することとしてもよい。  Further, the moving means in the claims does not need to be interpreted as being limited to aspects such as the coordinate transformation unit 16. The three-dimensional positional relationship of the feature points of the face image is derived on the same coordinate system as the vertices of the head element avatar, without coordinate transformation. It is also possible to move the vertices of the partial element body avatar. Furthermore, regarding the movement/rotation of the feature points of the face image, the basic feature points of the head body avatar, or the group of vertices, it is possible to move not only one of them, but also both of them. . 
 本発明は、実在の人物等の特徴を反映しつつも当該人物等のプライバシーを保護し、かつ使用されるサービス等にて設定される世界観に適合した高品質な3次元アバターを容易に生成する技術として利用可能である。 The present invention easily generates a high-quality three-dimensional avatar that reflects the characteristics of a real person, protects the privacy of the person, and conforms to the worldview set by the service used. It can be used as a technology to
1 顔面画像入力部
2 頭部素体データベース
3 頭部素体選択部
4 頭部形状生成部
5 部分表面画像データベース
6 部分表面画像選択部
7、38 合成表面画像生成部
9 頭部アバター生成部
10 胴体データベース
11 頭髪データベース
12 部品アバター選択部
13 アバター合成部
14 特徴点抽出部
15 位置関係導出部
16 座標変換部
17 位置調整部
18 拡大縮小部
19 素体変形部
21 分類選択部
22 特徴点比較部
23 画像判定部
24 画像出力部
26、40 表面形状調整部
27、41 画像合成部
28 補完処理部
29 位置調整部
30 基準軸調整部
31 サイズ調整部
32 合成処理部
33 トークン生成部
34、39 合成比導出部
35 情報出力部
36 分配収益算出部
37 3次元画像生成部
1 Facial image input unit 2 Head body database 3 Head body selection unit 4 Head shape generation unit 5 Partial surface image database 6 Partial surface image selection units 7, 38 Composite surface image generation unit 9 Head avatar generation unit 10 Torso database 11 Hair database 12 Parts avatar selection section 13 Avatar composition section 14 Feature point extraction section 15 Positional relationship derivation section 16 Coordinate conversion section 17 Position adjustment section 18 Scaling section 19 Base transformation section 21 Classification selection section 22 Feature point comparison section 23 Image judgment section 24 Image output section 26, 40 Surface shape adjustment section 27, 41 Image composition section 28 Complement processing section 29 Position adjustment section 30 Reference axis adjustment section 31 Size adjustment section 32 Composition processing section 33 Token generation section 34, 39 Composition Ratio derivation unit 35 Information output unit 36 Distribution revenue calculation unit 37 Three-dimensional image generation unit

Claims (6)

  1.  3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成装置であって、
     対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成手段と、
     頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択手段と、
     前記部分表面画像選択手段によって選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成手段と、
     前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成手段と、
     を備えたことを特徴とする3次元アバター生成装置。
    A 3D avatar generation device that generates a 3D avatar including a 3D surface shape of an object expressed using a group of vertices with defined positional relationships in a 3D space and a surface image displayed on the 3D surface shape. And,
    a three-dimensional shape generating means for generating a three-dimensional surface shape of the head of the object in which the positional relationship of feature points corresponding to each element constituting the head shape among the vertices is specified based on the facial image of the object; ,
    A surface image relating to all or a part of the surface shape of the head, and a plurality of partial surfaces in which the positional relationships of feature points corresponding to each element constituting the head shape in the whole or part of the region are specified. A partial surface image in which a positional relationship between two or more feature points matches a positional relationship between two or more feature points of the corresponding three-dimensional surface shape of the head from among the images. selection means,
    synthetic surface image generating means for generating a synthetic surface image that is a surface image that matches the three-dimensional surface shape of the head by synthesizing the partial surface images selected by the partial surface image selecting means;
    head avatar generation means for generating a head avatar based on the three-dimensional surface shape of the head and the composite surface image;
    A three-dimensional avatar generation device comprising:
  2.  前記合成表面画像生成手段は、前記部分表面画像選択手段によって選択された前記部分表面画像と、前記対象物の顔面画像のうち前記部分表面画像の領域に対応した部分の画像とを合成することにより前記合成表面画像を生成することを特徴とする請求項1記載の3次元アバター生成装置。 The composite surface image generation means combines the partial surface image selected by the partial surface image selection means with an image of a portion of the facial image of the object corresponding to the area of the partial surface image. The three-dimensional avatar generation device according to claim 1, wherein the three-dimensional avatar generation device generates the composite surface image.
  3.  前記合成表面画像における前記頭部の3次元表面形状上における前記部分表面画像の合成比率を導出する合成比導出手段と、
     生成した3次元アバターと対応関係にある非代替性トークンに関して生成され前記3次元アバターの保有名義の変動情報、前記合成表面画像を構成する部分表面画像の識別情報及び前記合成比率が記録される第1の分散型台帳が保存される第1の分散型ネットワークに対し少なくとも前記合成比率を出力する情報出力手段と、
     前記第1の分散型台帳に記録された前記3次元アバターの保有名義、前記識別情報及び前記合成比率と、前記部分表面画像と対応関係にある非代替性トークンに関して生成され前記部分表面画像の保有名義の変動情報が記録される第2の分散型台帳に記録された前記部分表面画像の保有名義とに基づき、生成された前記3次元アバターについて生ずる収益に関する前記3次元アバター及び前記部分表面画像の保有名義間における分配比率を算出する分配収益算出手段と、
     を備えたことを特徴とする請求項1又は2記載の3次元アバター生成装置。
    composition ratio deriving means for deriving a composition ratio of the partial surface image on the three-dimensional surface shape of the head in the composite surface image;
    A first step in which information on fluctuations in ownership of the three-dimensional avatar, identification information of partial surface images constituting the composite surface image, and the composition ratio are recorded with respect to non-fungible tokens that correspond to the generated three-dimensional avatar. information output means for outputting at least the composite ratio to a first distributed network in which one distributed ledger is stored;
    Possession of the partial surface image generated with respect to the holding name of the three-dimensional avatar, the identification information, and the synthesis ratio recorded in the first distributed ledger, and a non-fungible token in correspondence with the partial surface image. Based on the ownership name of the partial surface image recorded in the second distributed ledger in which the change information of the name is recorded, the 3D avatar and the partial surface image regarding the revenue generated for the generated 3D avatar. a distribution revenue calculation means for calculating the distribution ratio among the holding names;
    The three-dimensional avatar generation device according to claim 1 or 2, further comprising:
  4.  3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成方法であって、
     対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成ステップと、
     頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択ステップと、
     前記部分表面画像選択ステップにおいて選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成ステップと、
     前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成ステップと、
     を含むことを特徴とする3次元アバター生成方法。
    A 3D avatar generation method for generating a 3D avatar including a 3D surface shape of an object expressed using a group of vertices with defined positional relationships in a 3D space and a surface image displayed on the 3D surface shape. And,
    a three-dimensional shape generation step of generating a three-dimensional surface shape of the head of the object in which the positional relationship of feature points corresponding to each element constituting the head shape among the vertices is specified based on the facial image of the object; ,
    A surface image relating to all or a part of the surface shape of the head, and a plurality of partial surfaces in which the positional relationships of feature points corresponding to each element constituting the head shape in the whole or part of the region are specified. A partial surface image in which a positional relationship between two or more feature points matches a positional relationship between two or more feature points of the corresponding three-dimensional surface shape of the head from among the images. a selection step;
    a synthetic surface image generation step of generating a synthetic surface image that is a surface image that conforms to the three-dimensional surface shape of the head by synthesizing the partial surface images selected in the partial surface image selection step;
    a head avatar generation step of generating a head avatar based on the three-dimensional surface shape of the head and the composite surface image;
    A three-dimensional avatar generation method comprising:
  5.  前記合成表面画像における前記頭部の3次元表面形状上における前記部分表面画像の合成比率を導出する合成比導出ステップと、
     生成した3次元アバターと対応関係にある非代替性トークンに関して生成され前記3次元アバターの保有名義の変動情報、前記合成表面画像を構成する部分表面画像の識別情報及び前記合成比率が記録される第1の分散型台帳が保存される第1の分散型ネットワークに対し少なくとも前記合成比率を出力する情報出力ステップと、
     前記第1の分散型台帳に記録された前記3次元アバターの保有名義、前記識別情報及び前記合成比率と、前記部分表面画像と対応関係にある非代替性トークンに関して生成され前記部分表面画像の保有名義の変動情報が記録される第2の分散型台帳に記録された前記部分表面画像の保有名義とに基づき、生成された前記3次元アバターについて生ずる収益に関する前記3次元アバター及び前記部分表面画像の保有名義間における分配比率を算出する分配収益算出ステップと、
     を含むことを特徴とする請求項4記載の3次元アバター生成方法。
    a composition ratio deriving step of deriving a composition ratio of the partial surface image on the three-dimensional surface shape of the head in the composite surface image;
    A first step in which information on fluctuations in ownership of the three-dimensional avatar, identification information of partial surface images constituting the composite surface image, and the composition ratio are recorded with respect to non-fungible tokens that correspond to the generated three-dimensional avatar. an information output step of outputting at least the composite ratio to a first distributed network in which one distributed ledger is stored;
    Possession of the partial surface image generated with respect to the holding name of the three-dimensional avatar, the identification information, and the synthesis ratio recorded in the first distributed ledger, and a non-fungible token that has a correspondence relationship with the partial surface image. The 3D avatar and the partial surface image regarding the revenue generated for the generated 3D avatar based on the ownership name of the partial surface image recorded in the second distributed ledger in which name change information is recorded. a distribution revenue calculation step of calculating a distribution ratio between holding names;
    5. The three-dimensional avatar generation method according to claim 4, further comprising:
  6.  対象物の顔面画像に基づき、頂点群によって表現した対象物の3次元表面形状および3次元表面形状上に表示された表面画像を含む3次元アバターをコンピュータに生成させる3次元アバター生成プログラムであって、
     前記コンピュータに対し、
     3次元空間上における位置関係を定めた頂点群を用いて表現した対象物の3次元表面形状及び当該3次元表面形状上に表示された表面画像を含む3次元アバターを生成する3次元アバター生成装置であって、
     対象物の顔面画像に基づき前記頂点のうち頭部形状を構成する各要素に対応した特徴点の位置関係が特定された対象物の頭部の3次元表面形状を生成する3次元形状生成機能と、
     頭部の表面形状における全部又は一部の領域に関する表面画像であり、当該全部又は一部の領域における頭部形状を構成する各要素に対応した特徴点の位置関係が特定された複数の部分表面画像の中から、2以上の特徴点間の位置関係が、対応関係にある前記頭部の3次元表面形状の2以上の特徴点間の位置関係と一致する部分表面画像を選択する部分表面画像選択機能と、
     前記部分表面画像選択機能によって選択された部分表面画像を合成することにより、前記頭部の3次元表面形状に適合した表面画像である合成表面画像を生成する合成表面画像生成機能と、
     前記頭部の3次元表面形状と前記合成表面画像に基づき頭部アバターを生成する頭部アバター生成機能と、
     を実現させることを特徴とする3次元アバター生成プログラム。
    A three-dimensional avatar generation program that causes a computer to generate a three-dimensional avatar including a three-dimensional surface shape of the object expressed by a group of vertices and a surface image displayed on the three-dimensional surface shape, based on a facial image of the object. ,
    to the computer,
    A 3D avatar generation device that generates a 3D avatar including a 3D surface shape of an object expressed using a group of vertices with defined positional relationships in a 3D space and a surface image displayed on the 3D surface shape. And,
    a three-dimensional shape generation function that generates a three-dimensional surface shape of the object's head in which the positional relationship of feature points corresponding to each element constituting the head shape among the vertices is specified based on the object's facial image; ,
    A surface image relating to all or a part of the surface shape of the head, and a plurality of partial surfaces in which the positional relationships of feature points corresponding to each element constituting the head shape in the whole or part of the region are specified. A partial surface image in which a positional relationship between two or more feature points matches a positional relationship between two or more feature points of the corresponding three-dimensional surface shape of the head from among the images. selection function,
    a synthetic surface image generation function that generates a synthetic surface image that is a surface image that matches the three-dimensional surface shape of the head by synthesizing the partial surface images selected by the partial surface image selection function;
    a head avatar generation function that generates a head avatar based on the three-dimensional surface shape of the head and the composite surface image;
    A three-dimensional avatar generation program that is characterized by realizing the following.
PCT/JP2023/025313 2022-09-09 2023-07-07 Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program WO2024053235A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-143740 2022-09-09
JP2022143740A JP7202045B1 (en) 2022-09-09 2022-09-09 3D avatar generation device, 3D avatar generation method and 3D avatar generation program

Publications (1)

Publication Number Publication Date
WO2024053235A1 true WO2024053235A1 (en) 2024-03-14

Family

ID=84829427

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/025313 WO2024053235A1 (en) 2022-09-09 2023-07-07 Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program

Country Status (2)

Country Link
JP (1) JP7202045B1 (en)
WO (1) WO2024053235A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003141563A (en) * 2001-10-31 2003-05-16 Nippon Telegr & Teleph Corp <Ntt> Facial three-dimensional computer graphic generation method, its program and recording medium
WO2011155068A1 (en) * 2010-06-11 2011-12-15 株式会社アルトロン Character generating system, character generating method, and program
JP2016057775A (en) * 2014-09-08 2016-04-21 オムロン株式会社 Portrait generator, portrait generation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003141563A (en) * 2001-10-31 2003-05-16 Nippon Telegr & Teleph Corp <Ntt> Facial three-dimensional computer graphic generation method, its program and recording medium
WO2011155068A1 (en) * 2010-06-11 2011-12-15 株式会社アルトロン Character generating system, character generating method, and program
JP2016057775A (en) * 2014-09-08 2016-04-21 オムロン株式会社 Portrait generator, portrait generation method

Also Published As

Publication number Publication date
JP7202045B1 (en) 2023-01-11
JP2024039293A (en) 2024-03-22

Similar Documents

Publication Publication Date Title
US11423556B2 (en) Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
KR101514327B1 (en) Method and apparatus for generating face avatar
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
US8624901B2 (en) Apparatus and method for generating facial animation
US9196074B1 (en) Refining facial animation models
CN112669447A (en) Model head portrait creating method and device, electronic equipment and storage medium
Zhang et al. Avatarverse: High-quality & stable 3d avatar creation from text and pose
JP2004506276A (en) Three-dimensional face modeling system and modeling method
Theobald et al. Real-time expression cloning using appearance models
JP6818219B1 (en) 3D avatar generator, 3D avatar generation method and 3D avatar generation program
CN107103646B (en) Expression synthesis method and device
WO2024053235A1 (en) Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
Fratarcangeli et al. Facial motion cloning with radial basis functions in MPEG-4 FBA
KR20120121034A (en) Apparatus for acquiring three-dimensional face shape by pre-input from two-dimensional image
JP7076861B1 (en) 3D avatar generator, 3D avatar generation method and 3D avatar generation program
Jiang et al. Animating arbitrary topology 3D facial model using the MPEG-4 FaceDefTables
Fratarcangeli Computational models for animating 3d virtual faces
Cosker Facial capture and animation in visual effects
de Goes et al. Sculpt Processing for Character Rigging
Zhang et al. From range data to animated anatomy-based faces: a model adaptation method
Karunaratne et al. A new efficient expression generation and automatic cloning method for multimedia actors
Cassol et al. Procedural hair generation
Fei Expressive textures
Li et al. Improved radial basis function based parameterization for facial expression animation
Orvalho et al. Character animation: Past, present and future

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862779

Country of ref document: EP

Kind code of ref document: A1