WO2021261188A1 - Avatar generation method, program, avatar generation system, and avatar display method - Google Patents
- Publication number
- WO2021261188A1 (PCT/JP2021/020993)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- avatar
- user
- generation method
- data
- target person
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/69—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- This disclosure generally relates to avatar generation methods, programs, and avatar generation systems. More specifically, the present disclosure relates to an avatar generation method for generating an avatar displayed in a virtual space, a program for executing the avatar generation method, and an avatar generation system for generating an avatar displayed in the virtual space.
- Patent Document 1 discloses a virtual space providing system in which a user-customized avatar image can be operated and moved within the virtual space provided by a virtual space providing server.
- the avatar generation method includes a generation step and a modification step.
- the generation step is a step of generating avatar data for displaying an avatar that reflects at least the physical information of the subject in the virtual space.
- the modification step is a step of modifying, in the avatar data generated in the generation step, at least one of the avatar and the accessory displayed in the virtual space, according to at least one of the display mode of the avatar and the attribute of the accessory.
- the program according to one aspect of the present disclosure causes one or more processors to execute the above-mentioned avatar generation method.
- the avatar generation system includes a generation unit and a modification unit.
- the generation unit generates avatar data for displaying, in a virtual space, an avatar that reflects at least the physical information of the subject.
- the modification unit modifies, in the avatar data generated by the generation unit, at least one of the avatar and the accessory displayed in the virtual space, according to at least one of the display mode of the avatar and the attribute of the accessory.
- This disclosure has the advantage that it is easy to generate avatar data suitable for the application.
- FIG. 1 is a block diagram of an overall configuration including an avatar generation system which is an execution subject of the avatar generation method according to the embodiment of the present disclosure.
- FIG. 2 is a flowchart showing an example of the operation of the above-mentioned avatar generation system.
- FIG. 3 is a schematic diagram showing an example of an avatar generated by the above-mentioned avatar generation system.
- FIG. 4 is a schematic diagram showing an example of an avatar modified by the same avatar generation system.
- FIG. 5 is a schematic diagram showing another example of the avatar modified by the same avatar generation system.
- FIG. 6 is a schematic diagram showing a first operation example of the above-mentioned avatar generation system.
- FIG. 7 is a schematic diagram showing a second operation example of the above-mentioned avatar generation system.
- FIG. 8 is a schematic diagram showing a second operation example of the above-mentioned avatar generation system.
- FIG. 9 is a schematic diagram showing a third operation example of the above-mentioned avatar generation system.
- the avatar generation method (avatar generation system 100) of the present embodiment is a method (system) for generating the avatar A1 of the target person T1.
- the "avatar” referred to in the present disclosure is a character displayed in the virtual space V1 as an alter ego of the target person T1 in the real space R1.
- the avatar A1 is displayed in the virtual space V1 as, for example, a three-dimensional model imitating the target person T1.
- the virtual space V1 is a three-dimensional space displayed on the display unit 51 of the information terminal 5 described later.
- the virtual space V1 may be a space expressed by VR (Virtual Reality) technology or a space expressed by AR (Augmented Reality).
- the virtual space V1 may be a space represented by MR (Mixed Reality) technology or a space represented by SR (Substitutional Reality) technology. That is, the virtual space V1 is a space expressed by xR (Cross Reality) technology.
- the avatar generation system 100 includes a generation unit 11 and a modification unit 12.
- the generation unit 11 is the execution subject of the generation step ST1 (see FIG. 2) in the avatar generation method of the present embodiment.
- the modification unit 12 is the execution subject of the modification step ST2 (see FIG. 2) in the avatar generation method of the present embodiment.
- As the generation step ST1, the generation unit 11 generates avatar data for displaying, in the virtual space V1, the avatar A1 that reflects at least the physical information of the target person T1.
- the avatar data is used, for example, in an application executed by the information terminal 5.
- the application can superimpose and display the avatar A1 on the virtual space V1 by using the avatar data provided by the avatar generation system 100.
- the data of the avatar A1 includes a three-dimensional model of the subject T1, textures, materials, a skeleton, skin weights, and the like.
- the avatar A1 included in the avatar data is not a model completely different from the target person T1 in the real space R1 (such as a stuffed bear), but a model that reflects the physical information of the target person T1, that is, information detailed enough that the target person T1 can be identified.
- the avatar A1 included in the avatar data is a model that realistically imitates the subject T1, that is, a photorealistic model.
- the avatar data may include not only the data of the avatar A1 but also the data of the accessory B1 described later. Further, the avatar data may include motion data relating to the movement of the avatar A1.
- As the modification step ST2, the modification unit 12 modifies, in the avatar data generated by the generation unit 11 (generation step ST1), at least one of the avatar A1 and the accessory B1 displayed in the virtual space V1.
- In the following, the avatar A1 and the accessory B1 before being modified by the modification unit 12 are simply referred to as "avatar A1" and "accessory B1", respectively.
- the avatar A1 and the accessory B1 after being modified by the modification unit 12 are referred to as "the modified avatar A1" and “the modified accessory B1", respectively.
- the "incidental object” referred to in the present disclosure is an object displayed together with the avatar A1 in the virtual space V1 and refers to an object related to the avatar A1 in some way.
- the accessory B1 may include an object that can be worn by the avatar A1 such as clothes, and an object used by the avatar A1 such as a car or a house.
- the clothes T11 worn by the subject T1 when the avatar A1 is generated are included in the avatar A1 instead of the accessory B1.
- the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to at least one of the display mode of the avatar A1 and the attribute of the accessory B1 in the virtual space V1. For example, the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to the partner who discloses the avatar A1 (that is, the display mode of the avatar A1) as described later. Further, for example, the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to the type and / or size of the garment (that is, the attribute of the accessory B1) as described later.
- the application can superimpose and display the modified avatar A1 and/or the modified accessory B1 on the virtual space V1.
- In this way, at least one of the avatar A1 and the accessory B1 is modified according to at least one of the display mode of the avatar A1 and the attribute of the accessory B1. Therefore, this embodiment has the advantage that avatar data suitable for the application can be easily generated.
- the avatar generation system 100 is a server.
- the avatar generation system 100 is configured to be able to communicate with each of the scanner 4 and the information terminal 5 via a network N1 such as the Internet.
- In FIG. 1, one scanner 4 and one information terminal 5 are shown, but there may be a plurality of each.
- the scanner 4 is a 3D scanner, and is installed in a remote place away from the place where the avatar generation system 100 is installed, for example.
- the scanner 4 includes a plurality of image pickup devices 41 that image the subject T1 in the real space R1. In FIG. 1, four image pickup devices 41 are installed, but in reality, it is preferable that several tens of image pickup devices 41 are installed. It should be noted that the plurality of image pickup devices 41 and their arrangement in FIG. 1 are merely conceptually expressed and do not represent an actual mode.
- Each image pickup device 41 has a solid-state image pickup element such as a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor.
- the plurality of image pickup devices 41 are installed at different positions so as to surround the subject T1 and image the subject T1 from different angles.
- the scanner 4 scans the whole body of the subject T1 by acquiring a plurality of still images of the subject T1 captured from various angles.
- the plurality of still images acquired by the scanner 4 are transmitted to the avatar generation system 100 via the network N1.
- the information terminal 5 is, for example, a smartphone. Further, the information terminal 5 may include a desktop personal computer (including a display), a laptop personal computer, a tablet terminal, or the like. That is, the information terminal 5 is a device for communicating with a server to perform information processing and information display, and may be a device including a processor, a memory, a transmitter, a receiver, and a display. In addition, the information terminal 5 may include a head-mounted display. The user of the information terminal 5 is basically the target person T1.
- the information terminal 5 receives the input from the user and executes the application pre-installed in the information terminal 5.
- the information terminal 5 receives an input from the user and acquires the execution result of the application on the web server via the browser installed in the information terminal 5 in advance.
- the virtual space V1 based on the execution result of the application is displayed on the display unit 51 of the information terminal 5.
- the avatar A1 based on the avatar data provided by the avatar generation system 100, or the modified avatar A1, is superimposed and displayed in the virtual space V1. Further, when the avatar data includes the data of the accessory B1, the accessory B1 or the modified accessory B1 is superimposed and displayed on the virtual space V1 in addition to the avatar A1 (or the modified avatar A1). Further, when motion data is included in the avatar data, the avatar A1 (or the modified avatar A1) is drawn in the virtual space V1 so as to move according to the motion data.
- the avatar generation system 100 includes a processing unit 1, a communication unit 2, and a storage unit 3.
- the processing unit 1 is configured to control the entire avatar generation system 100.
- the processing unit 1 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, one or more processors execute one or more programs (applications) stored in one or more memories, thereby functioning as the processing unit 1.
- Although the program is recorded in advance in the memory of the processing unit 1 here, it may instead be provided through a telecommunication line such as the Internet, or recorded on a non-transitory recording medium such as a memory card.
- the processing unit 1 has a generation unit 11, a modification unit 12, a presentation unit 13, and a reception unit 14. In FIG. 1, these functional units do not represent a substantive configuration, but rather the functions realized by the processing unit 1.
- the generation unit 11 generates avatar data for displaying the avatar A1 in the virtual space V1 for each target person T1 as the generation step ST1.
- the generation unit 11 generates avatar data based on a plurality of still images of the whole body of the subject T1 acquired from the scanner 4.
- Since the subject T1 wears the clothes T11 when imaged, the generation unit 11 generates avatar data including the avatar A1 of the subject T1 wearing the clothes T11.
- the generation unit 11 generates a three-dimensional model of the subject T1 based on a plurality of still images acquired from the scanner 4. Specifically, the generation unit 11 calculates the coordinates of the target points in the basic space, which is a three-dimensional space, for each of all the target points of all the still images.
- the generation unit 11 acquires the image pickup result of each image pickup apparatus 41 to acquire the distance from the image pickup apparatus 41 to the target point when projected onto the basic space. Further, the generation unit 11 acquires the position information of each image pickup device 41 to acquire the distance between the adjacent image pickup devices 41 when projected onto the basic space.
- the generation unit 11 calculates the coordinates of the target point in the basic space based on these distances by the principle of triangulation. Then, the generation unit 11 generates a three-dimensional model of the target person T1 based on the coordinates of all the target points in the basic space.
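- The triangulation described above can be made concrete with a short sketch. The following is a minimal linear (DLT) triangulation over two or more calibrated views; the projection matrices and function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def triangulate_point(projections, pixels):
    """Estimate the 3D coordinates of one target point from two or more views.

    projections: list of 3x4 camera projection matrices (numpy arrays)
    pixels: list of (u, v) observations of the same target point, one per view
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # The least-squares solution is the right singular vector of the stacked system.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z) in the basic space
```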
- the generation unit 11 generates a texture to be attached to the three-dimensional model based on a plurality of still images acquired from the scanner 4.
- the texture includes a texture corresponding to the skin of the subject T1 and a texture corresponding to the clothes T11 worn by the subject T1.
- the generation unit 11 pastes the generated texture on the three-dimensional model.
- the generation unit 11 executes rigging on the three-dimensional model.
- the generation unit 11 executes skeleton setting, IK (Inverse Kinematics) and / or FK (Forward Kinematics) setting, skinning including weight adjustment, and the like on the three-dimensional model.
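- To make the skinning step concrete, the following is a minimal sketch of linear blend skinning, a standard way in which skin weights combine joint transforms to deform a mesh. The formulation is an assumption for illustration; the patent does not specify one.

```python
import numpy as np

def skin_vertex(vertex, influences, joint_matrices):
    """Deform one vertex by blending joint transforms with its skin weights.

    vertex: (x, y, z) rest-pose position
    influences: list of (joint_index, weight) pairs; weights sum to 1.0
    joint_matrices: list of 4x4 matrices mapping the rest pose to the current pose
    """
    v_h = np.append(np.asarray(vertex, dtype=float), 1.0)  # homogeneous coordinates
    out = np.zeros(4)
    for joint_index, weight in influences:
        out += weight * joint_matrices[joint_index] @ v_h
    return out[:3]
```

- Weight adjustment in the skinning step then amounts to tuning the per-vertex (joint index, weight) pairs so that the mesh bends naturally at elbows, shoulders, and so on.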
- In this way, the generation unit 11 can automatically generate the avatar A1 of the subject T1 based on a plurality of still images obtained by capturing the whole body of the subject T1 with the scanner 4. Further, when motion data is included in the avatar data, or when motion data is applied to the avatar A1 in the application, the avatar A1 can be made to move according to the motion data in the virtual space V1.
- the motion data may be acquired as unique data for each target person T1 by performing motion capture for each target person T1, but may be general-purpose motion data.
- With general-purpose motion data, basically the same kind of movement can be applied to all avatars A1.
- the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 generated by the generation unit 11 mainly in response to a request from the application as the modification step ST2.
- the modification unit 12 may modify not only the avatar A1 but also both the avatar A1 and the accessory B1. Further, the modification unit 12 may modify not only one location in the avatar A1 but also a plurality of locations. That is, the process in the modification unit 12 (modification step ST2) may include a process of modifying a plurality of parts of the avatar data.
- the modification unit 12 need only be capable of executing at least one of the various processes listed below, and need not be capable of executing all of them.
- the process in the modification unit 12 may include a process of abstracting a part or all of the avatar A1 (hereinafter, also referred to as “abstraction process”).
- the "abstraction of an avatar” as used in the present disclosure means to replace a part or all of the avatar A1 with an abstract expression based on the avatar A1 generated by the generation unit 11.
- the abstraction process may include, for example, a mosaic process or a blur process. Further, the abstraction process may include a process of deforming the avatar A1 and / or the accessory B1.
- the "deformation” as used in the present disclosure may include modifying a part or all of the avatar A1 and / or the accessory B1 so as to be consistent with the virtual space V1 provided by the application. Further, the "deformation” referred to in the present disclosure may include deforming a part or all of the avatar A1 and / or the accessory B1 to the extent that the person who sees the avatar A1 cannot recognize the target person T1.
- FIG. 3 shows the avatar A1 of the subject T1 that has not been modified by the modification unit 12, that is, the avatar A1 generated by the generation unit 11.
- the target person T1 can understand that the avatar A1 represents the target person T1 by looking at the avatar A1.
- FIG. 4 shows the modified avatar A11 in which a part of the avatar A1 of the target person T1 is subjected to an abstraction process.
- the modified avatar A11 is subjected to an abstraction process that deforms the region A100 corresponding to the head of the subject T1.
- Since the characteristics of each part (hair, eyes, nose, mouth, skin, etc.) of the head of the subject T1 are not drawn on the modified avatar A11, it becomes difficult for a person who sees the modified avatar A11 to grasp that it represents the target person T1.
- For the abstraction process, the texture data (image data) of the target person T1 included in the avatar data can be used.
- the modification unit 12 can recognize the face in the texture data and modify the recognized face.
- an existing face recognition algorithm can be used for face recognition.
- the modification unit 12 can abstract the face by changing the size and shape of the recognized facial parts (hair, eyes, nose, mouth, skin, etc. as described above).
- the modification unit 12 may abstract the face by modifying the resolution of the texture data of the recognized face portion.
- the modification unit 12 may abstract the face of the avatar by lowering the resolution of the texture data of the face portion.
- the modified avatar A11 can be generated by pasting the abstracted face texture on the three-dimensional model of the target person T1 included in the avatar data.
- abstraction processing can be performed without changing the shape of the three-dimensional model.
- the abstraction process may be performed not only by modifying the texture data but also by modifying the shape of the three-dimensional model.
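- As a concrete illustration of texture-based abstraction, the following sketch detects a face in the texture data and pixelates it by lowering its resolution. It assumes OpenCV and its bundled Haar cascade; this is one plausible realization, not the patent's specified algorithm.

```python
import cv2

def abstract_face_texture(texture_path, out_path, block=16):
    """Pixelate detected faces in a texture image so the person is hard to identify."""
    img = cv2.imread(texture_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = img[y:y + h, x:x + w]
        # Downscale, then upscale with nearest-neighbour interpolation:
        # this lowers the resolution of the face region only.
        small = cv2.resize(face, (max(1, w // block), max(1, h // block)))
        img[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)
```

- The abstracted face texture is then pasted back onto the unchanged three-dimensional model, as described above.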
- FIG. 5 shows the modified avatar A12 in which a part of the modified avatar A11 of the subject T1 is further abstracted.
- the modified avatar A12 is subjected to an abstraction process that deforms the region A101 corresponding to the clothes of the subject T1.
- Since the characteristics of the clothes T11 of the subject T1 are not drawn on the modified avatar A12, it becomes even more difficult for a person who sees the modified avatar A12 to grasp that it represents the subject T1.
- the degree of abstraction of only a part of the avatar A1 may be increased, or the degree of abstraction of the entire avatar A1 may be increased.
- the process in the modification unit 12 may include a process of modifying the avatar A1 according to the person who discloses the avatar A1.
- When the user of the information terminal 5 is the target person T1, the person who sees the avatar A1 of the target person T1 displayed on the display unit 51 of the information terminal 5 (that is, the person to whom the avatar A1 is disclosed) is the target person T1 himself or herself. On the other hand, when the user of the information terminal 5 is a person other than the target person T1, the person to whom the avatar A1 is disclosed is another person.
- Since the avatar A1 generated by the generation unit 11 is represented by a photorealistic three-dimensional model, it can correspond to information that can identify the target person T1, that is, personal information of the target person T1. Therefore, depending on the target person T1, there may be cases where the avatar A1 (that is, personal information) should not be disclosed to others. By modifying the avatar A1 according to the person to whom it is disclosed, for example, a modified avatar A1 abstracted to the extent that the target person T1 cannot be identified can be disclosed to others. This makes it difficult for others to know that the modified avatar A1 represents the target person T1.
- the process in the modification unit 12 may include a process of modifying the avatar A1 according to the relationship between the person to whom the avatar A1 is disclosed and the target person T1.
- the relationship referred to here may include at least whether or not the person to whom the avatar A1 is disclosed is the target person T1.
- For example, when the person to whom the avatar A1 is disclosed is the target person T1 himself or herself, the modification unit 12 does not modify the avatar A1, or modifies it only to the extent that it is consistent with the virtual space V1. When the person to whom the avatar A1 is disclosed is a friend or acquaintance of the target person T1, the modification unit 12 abstracts the avatar A1 to a certain extent. Further, when the person to whom the avatar A1 is disclosed is a third party unrelated to the target person T1, the modification unit 12 abstracts the avatar A1 to the extent that the target person T1 cannot be identified.
- the abstraction process here may include a process of not displaying the avatar A1 of the target person T1 on the display unit 51 of the information terminal 5 of another person.
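- A hedged sketch of choosing an abstraction level from the viewer's relationship to the target person follows; the tier names and levels are illustrative assumptions, loosely matching the friend tiers described in the first operation example below.

```python
from enum import Enum

class Relation(Enum):
    SELF = 0          # the target person T1 viewing their own avatar
    CLOSE_FRIEND = 1  # "first friend" in the patent's example
    ACQUAINTANCE = 2  # "second friend"
    THIRD_PARTY = 3   # unrelated viewer

def abstraction_level(relation: Relation) -> str:
    """Map the viewer's relationship to an abstraction level for the avatar."""
    return {
        Relation.SELF: "none",             # or deform only to fit the world view
        Relation.CLOSE_FRIEND: "light",    # keep the person recognizable
        Relation.ACQUAINTANCE: "moderate", # blur/pixelate identifying parts
        Relation.THIRD_PARTY: "full",      # silhouette, or do not display at all
    }[relation]
```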
- the process in the modification unit 12 may include a process of modifying the avatar A1 according to the time change of the subject T1.
- the "time change of the subject” referred to in the present disclosure means a change in the appearance of the subject T1 with the passage of time from the time when the avatar A1 is generated by the generation unit 11 to a predetermined time.
- For example, the modification unit 12 estimates the three-dimensional model of the subject T1 as it would be 20 years later by an appropriate algorithm, and modifies the avatar A1 based on the estimation result.
- the modified avatar A1 reflects the change in the appearance of the subject T1 with the passage of 20 years.
- the time change of the subject T1 may include a change based on behavioral information regarding the behavior of the subject T1. That is, the time change of the subject T1 is not only the change in the appearance of the subject T1 with the passage of time when the subject T1 takes no action, but may also include the change in the appearance of the subject T1 over time when the subject T1 takes some action.
- For example, assuming that the subject T1 continues running for one year, the modification unit 12 estimates the three-dimensional model of the subject T1 one year later by an appropriate algorithm. Further, the modification unit 12 estimates the influence of running (the behavior of the subject T1) on the appearance of the subject T1 by an appropriate algorithm.
- the modification unit 12 modifies the avatar A1 based on these estimation results.
- the modified avatar A1 reflects the change in the appearance of the subject T1 with the passage of time for one year and the change in the appearance of the subject T1 due to running.
- the process in the modification unit 12 may include a process of modifying the motion of the avatar A1.
- For example, the modification unit 12 modifies the avatar A1 so that the garment T11 in the avatar A1 is changed to another garment. Then, the modification unit 12 modifies the motion of the modified avatar A1 according to the attributes of the other garment.
- As an example, the modification unit 12 modifies the motion of the modified avatar A1 from taking an upright posture to taking an elegant gesture that matches the other garment. As another example, suppose that the motion of the avatar A1 is walking and the other garment is heavier than the garment T11. In this case, the modification unit 12 modifies the motion of the modified avatar A1 from walking relatively lightly to walking relatively slowly.
- the process in the modification unit 12 may include a process of modifying the avatar A1 according to the physical characteristics of the subject T1.
- In this case, the modification unit 12 does not modify the avatar A1 to the point where its original form is lost; rather, it modifies part or all of the avatar A1 only to the extent that the expression of the physical characteristics of the subject T1, that is, the parts unique to the subject T1, remains.
- the process in the modification unit 12 may include a process of modifying the accessory B1.
- For example, the modification unit 12 modifies the avatar A1 by changing the garment T11 in the avatar A1 to another garment.
- the process of modifying the accessory B1 may include the process of imparting the accessory B1 to the avatar A1.
- the process of modifying the accessory B1 may include a process of abstracting the accessory B1 to the extent that it cannot be specified, or a process of deforming the accessory B1 so as to be consistent with the virtual space V1.
- the process in the modification unit 12 may include a process of modifying the avatar A1 according to the accessory B1.
- For example, suppose the application requests that the garment T11 of the avatar A1 be changed to another garment as the accessory B1, and that the size of the other garment is smaller or larger than the garment T11.
- the modification unit 12 modifies the size of the avatar A1 so as to be consistent with the size of other clothes.
- Alternatively, the modification unit 12 may modify the motion of the avatar A1 so as to express the emotion of the subject T1 at not being able to wear the other garment, without modifying the size of the avatar A1.
- the presentation unit 13 presents that the process in the modification unit 12 (modification step ST2) has been executed.
- the presentation unit 13 is the execution subject of the presentation step ST3.
- When the presentation unit 13 causes the display unit 51 of the information terminal 5 to display the modified avatar A1, a character string or an image indicating that the avatar A1 has been modified is transmitted to the information terminal 5 via the network N1.
- the character string or the image is displayed on the display unit 51 of the information terminal 5 together with the modified avatar A1.
- Alternatively, when the presentation unit 13 displays the modified avatar A1 on the display unit 51 of the information terminal 5, a voice message indicating that the avatar A1 has been modified may be sent to the information terminal 5 via the network N1.
- the information terminal 5 displays the modified avatar A1 on the display unit 51 and outputs the voice message from the speaker.
- the reception unit 14 receives input from the user regarding the process (modification step ST2) in the modification unit 12.
- the reception unit 14 is the execution body of the reception step ST4.
- When the display unit 51 of the information terminal 5 displays the modified avatar A1 and/or the modified accessory B1, the user (here, the target person T1) may perform an operation input on the information terminal 5 regarding the modified avatar A1. The reception unit 14 receives information about this operation input via the network N1.
- the operation input may include an input requesting change of at least a part of the parameters of the modified avatar A1.
- the parameters may include, by way of example, aspects of each part of the face of Avatar A1 (eg, nose height, etc.).
- When the reception unit 14 receives the input from the user, the modification unit 12 further modifies the modified avatar A1 and/or the modified accessory B1 according to that input. For example, when an input requesting that the modified avatar A1 be made slimmer is received, the modification unit 12 further modifies the modified avatar A1 so as to slim it down.
- the communication unit 2 has a communication interface that can be connected to the network N1.
- the communication unit 2 is configured to be able to communicate with the scanner 4 via the network N1. Further, the communication unit 2 is configured to be able to communicate with the information terminal 5 via the network N1.
- the communication protocol of the communication interface can be selected from various well-known wireless communication standards such as Wi-Fi (registered trademark).
- the communication unit 2 receives a plurality of still images transmitted from the scanner 4 via the network N1. Further, the communication unit 2 transmits the avatar data to the information terminal 5 via the network N1. Further, the communication unit 2 receives the request information regarding the request from the information terminal 5 (that is, the request from the application) via the network N1.
- the storage unit 3 includes, for example, an electrically rewritable non-volatile memory such as EEPROM (Electrically Erasable Programmable Read-Only Memory), and a volatile memory such as RAM (Random Access Memory).
- the subject T1 goes to the facility where the scanner 4 is installed and uses the scanner 4. Then, the scanner 4 scans the whole body of the subject T1 by acquiring a plurality of still images of the subject T1 captured from various angles. Then, the scanner 4 transmits the acquired plurality of still images to the avatar generation system 100 via the network N1. As a result, the avatar generation system 100 acquires a plurality of still images of the subject T1 captured by the scanner 4 (S1).
- the generation unit 11 generates avatar data including the avatar A1 of the target person T1 based on a plurality of still images of the whole body of the target person T1 acquired from the scanner 4 (S2).
- Process S2 corresponds to generation step ST1.
- the generation unit 11 associates the generated avatar data with the target person T1 and stores it in the storage unit 3 (S3).
- the avatar generation system 100 does not execute anything in particular until it receives the request information regarding the request from the information terminal 5 (that is, the request from the application) (S4: No).
- When the user of the information terminal 5 (here, the target person T1) executes the application by operating the information terminal 5, the information terminal 5 transmits the request information to the avatar generation system 100 via the network N1.
- Upon receiving the request information (S4: Yes), the modification unit 12 reads the avatar data from the storage unit 3 and modifies it according to the request information (that is, the request from the application) (S5). Process S5 corresponds to modification step ST2. Then, the modification unit 12 transmits (outputs) the avatar data including the modified avatar A1 and/or the modified accessory B1 to the information terminal 5 via the network N1 (S6).
- the presentation unit 13 transmits information (character string, image, voice message, etc.) indicating that the avatar A1 has been modified to the information terminal 5 via the network N1.
- the presentation unit 13 presents to the user of the information terminal 5 that the avatar A1 has been modified (S7).
- Process S7 corresponds to presentation step ST3.
- The information terminal 5 receives the modified avatar A1 and/or the modified accessory B1 via the network N1 and displays them. When the user performs an operation input on the information terminal 5 regarding the modified avatar A1, information regarding the operation input is transmitted to the avatar generation system 100.
- Upon receiving the information regarding the operation input (S8: Yes), the modification unit 12 reads the modified avatar data from the storage unit 3 and further modifies it according to the information regarding the operation input (that is, the input from the user) (S9).
- the process S8 corresponds to the reception step ST4. Further, the process S9 corresponds to the modification step ST2.
- the modification unit 12 transmits (outputs) the further modified avatar data to the information terminal 5 via the network N1 (S10).
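- The overall flow S1 to S10 can be summarized in code. The following is a hedged sketch of the server-side loop; every function here is an illustrative stub, since the patent defines the steps but not an API.

```python
def generate_avatar_data(still_images):
    # S2 (generation step ST1): build a 3D model, textures, and rig from the scan.
    return {"model": f"3d-model-from-{len(still_images)}-views", "modifications": []}

def modify_avatar(avatar_data, request):
    # S5/S9 (modification step ST2): modify the avatar and/or accessory on request.
    modified = dict(avatar_data)
    modified["modifications"] = avatar_data["modifications"] + [request]
    return modified

def run_flow(still_images, app_requests, user_inputs):
    storage = {}
    avatar = generate_avatar_data(still_images)           # S1 -> S2
    storage["T1"] = avatar                                # S3: store per target person
    for request in app_requests:                          # S4: wait for an app request
        modified = modify_avatar(storage["T1"], request)  # S5
        print("send avatar data to terminal:", modified)  # S6
        print("present: the avatar has been modified")    # S7 (presentation step ST3)
        for user_input in user_inputs:                    # S8 (reception step ST4)
            modified = modify_avatar(modified, user_input)    # S9
            print("send avatar data to terminal:", modified)  # S10

run_flow(["img"] * 40, ["match the world view of V1"], ["make slimmer"])
```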
- the first operation example is an example of the operation of the avatar generation system 100 at the time of executing an application in which a large number of users log in to the same server and share the same virtual space V1.
- an application may include, for example, an online game such as an MMO (Massively Multiplayer Online).
- the user first creates an account. Further, the user requests the avatar A1 generated in advance by the avatar generation system 100 from the avatar generation system 100 through the application, and associates the avatar A1 with the account.
- the user registers another person who uses the application as a friend at the stage of creating an account or while the application is running.
- the friend may be, for example, a friend already registered in SNS (Social Networking Service).
- the user may associate the SNS account with the account of this application.
- Friends are registered in n stages (“n” is a natural number) according to the intimacy with the user.
- the first friend is, for example, a relative or a close friend of the user.
- the second friend is, for example, an acquaintance of the user.
- When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. Then, the application displays the virtual space V1 on the display unit 51 of the information terminal 5, and superimposes and displays the avatars A1 of one or more users existing in the range of the display unit 51 on the virtual space V1. At this time, the application requests the avatar generation system 100 for the avatar data of the one or more users to be displayed in the virtual space V1.
- the modification unit 12 of the avatar generation system 100 modifies the avatar data of the one or more users according to the world view of the virtual space V1 in response to the request from the application. By modifying the avatar data according to the world view of the virtual space V1 in this way, the user's sense of immersion is less likely to be impaired than when the photorealistic avatar A1 is superimposed and displayed on the virtual space V1 as it is.
- the modification unit 12 modifies the avatar data of one or more users according to the relationship with the user (user himself / herself) of the requesting information terminal 5. Then, the avatar generation system 100 transmits the modified avatar data of one or more users to the information terminal 5. As a result, the modified avatar A1 of one or more users is displayed on the display unit 51 of the information terminal 5.
- FIG. 6 shows an example of the display of the virtual space V1 and the avatar A1 of one or more users.
- a closed space such as a room is displayed as a virtual space V1 on the display unit 51.
- the user's own avatar A10, the first friend's avatar A20, and the second friend's avatar A30 are displayed.
- the user's own avatar A10 is displayed on the display unit 51 in a form modified according to the world view of the virtual space V1.
- the first friend's avatar A20 is displayed on the display unit 51 in a form modified according to the world view of the virtual space V1.
- the avatar A30 of the second friend is more abstract than the avatar A10 of the user himself and the avatar A20 of the first friend and is displayed on the display unit 51.
- the avatar A30 of the second friend may be abstracted and displayed on the display unit 51 according to the degree of abstraction set by the second friend.
- the avatar A40 represented by the dotted line in FIG. 6 is a third party avatar A40 unrelated to the user, and is not actually displayed on the display unit 51.
- the third party avatar A40 may also be displayed on the display unit 51.
- the third party avatar A40 may be abstracted according to the degree of abstraction set by the third party, or may be further abstracted than the second friend avatar A30.
- the third party avatar A40 may be displayed on the display unit 51 as a silhouette.
- the first friend's avatar A20 is modified in the same manner as the user's own avatar A10, but may be more abstract than the user's own avatar A10. Further, the avatar A30 of the second friend may be further abstracted than the display mode shown in FIG.
- Similarly, on the information terminal 5 of another person, the avatar A1 of the user himself or herself is displayed in a form modified according to the relationship between that other person and the user. For example, as in the example shown in FIG. 6, the user's own avatar A1 is either not displayed, or is displayed abstracted to the extent that the user cannot be identified.
- each user can use the application in a privacy-protected form.
- the relationship between each user may be determined in consideration of the degree of relevance determined by other factors in addition to the above-mentioned intimacy.
- For example, the degree of relevance can be determined using the identity or closeness of information such as age, gender, place of origin, GPS (Global Positioning System) position, nationality, preferences, or affiliation between users.
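- A hedged sketch of such a relevance score follows; the attributes, weights, and thresholds are illustrative assumptions, not values from the patent.

```python
import math

def gps_distance_km(p, q):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def relevance(a: dict, b: dict) -> float:
    """Combine attribute identity/closeness into a single relevance score."""
    score = 0.0
    score += 1.0 if a["place_of_origin"] == b["place_of_origin"] else 0.0
    score += 1.0 if a["affiliation"] == b["affiliation"] else 0.0
    score += max(0.0, 1.0 - abs(a["age"] - b["age"]) / 20.0)  # closer ages score higher
    score += max(0.0, 1.0 - gps_distance_km(a["gps"], b["gps"]) / 100.0)
    return score  # a higher score could map to a lower degree of abstraction
```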
- the second operation example is an example of the operation of the avatar generation system 100 at the time of executing an application in which the user's avatar A1 is tried on clothes as an accessory B1 in the virtual space V1.
- a large amount of clothing data (that is, data of the accessory B1) that can be tried on by the avatar A1 is accumulated in the dedicated server of the company that operates the application. Attribute information representing the clothing category is associated with each clothing data.
- the user first creates an account. Further, the user requests the avatar A1 generated in advance by the avatar generation system 100 from the avatar generation system 100 through the application, and associates the avatar A1 with the account. In addition, the user registers another person who uses the application as a friend at the stage of creating an account or while the application is running.
- the friend may be, for example, a friend already registered on the SNS.
- When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. Then, the application displays the virtual space V1 on the display unit 51 of the information terminal 5, and superimposes and displays the user's avatar A1 on the virtual space V1. At this time, the application requests the avatar generation system 100 for the user's avatar data to be displayed in the virtual space V1.
- the avatar generation system 100 transmits the avatar data of the user of the requesting information terminal 5 to the information terminal 5 in response to the request from the application.
- the user's avatar A1 is displayed on the display unit 51 of the information terminal 5.
- the user's avatar A1 has not been modified by the modification unit 12.
- the application requests the avatar generation system 100 for the avatar A1 of the user who has tried on the selected clothes.
- the avatar generation system 100 requests the clothes data of the selected clothes from the dedicated server in response to the request from the application. Then, the modification unit 12 of the avatar generation system 100 modifies the user's avatar A1 based on the clothes data received from the dedicated server. For example, the modification unit 12 modifies the user's avatar A1 by pasting the texture of the clothing included in the clothing data on the user's avatar A1 (that is, adding the accessory B1). Further, the modification unit 12 modifies the user's avatar A1 by associating the motion data corresponding to the attribute information included in the clothing data with the user's avatar A1.
- the avatar generation system 100 transmits the modified user's avatar data to the information terminal 5.
- the user's modified avatar A1 is displayed on the display unit 51 of the information terminal 5.
- the user's avatar A1 that tries on the clothes selected by the user and takes a motion according to the attributes of the clothes is superimposed on the virtual space V1 and displayed on the display unit 51.
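- The try-on modification described above can be sketched as follows. The data layout, attribute names, and motion table are illustrative assumptions; the patent only states that a garment texture is pasted on the avatar and motion data is chosen from the garment's attribute information.

```python
# Motion chosen from the garment's attribute information (cf. FIGS. 7 and 8).
MOTION_BY_ATTRIBUTE = {
    "sports": "running",       # sports clothing B11 -> running motion
    "undersized": "confused",  # too-small clothing B12 -> confused motion
}

def try_on(avatar_data: dict, clothing_data: dict) -> dict:
    """Modify the user's avatar by adding the garment (accessory B1) and its motion."""
    modified = dict(avatar_data)
    # Paste the garment texture onto the avatar.
    modified["textures"] = avatar_data["textures"] + [clothing_data["texture"]]
    # Associate motion data corresponding to the garment's attribute information.
    modified["motion"] = MOTION_BY_ATTRIBUTE.get(clothing_data["attribute"], "idle")
    return modified
```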
- FIGS. 7 and 8 show an example of displaying the avatar A1 of the user who tried on the clothes selected by the user.
- a closed space such as a fitting room is displayed as a virtual space V1 on the display unit 51.
- the avatar A13 of the user who has tried on the sports clothing B11 is displayed on the display unit 51.
- the user's avatar A13 is drawn on the display unit 51 so as to take a motion (here, a running motion) according to the attribute information of the sports clothing B11.
- In FIG. 7, the user's avatar A13 appears stationary, but on the actual display unit 51 it is moving.
- a closed space such as a fitting room is displayed as a virtual space V1 on the display unit 51.
- the avatar A14 of the user who has tried on the clothes B12 (incidental object B1) having a size smaller than the size suitable for the user is displayed on the display unit 51.
- the user's avatar A14 is drawn on the display unit 51 so as to take a motion (here, a confused motion) according to the attribute information of the clothes B12 (here, the information that the size is small).
- In FIG. 8, the user's avatar A14 appears stationary, but on the actual display unit 51 it is moving.
- the clothes B12 may be partially torn.
- the user can publish the avatar A1 of the user who tried on the clothes on SNS or the like.
- the user's avatar A1 is displayed on the display unit 51 of the information terminal 5 of the other person in a form corresponding to the relationship between the other person and the user.
- For example, if the other person is a close friend of the user, the avatar A1 of the user trying on the clothes is displayed as it is on the display unit 51 of the other person's information terminal 5. On the other hand, if the other person is a third party, the user's avatar A1 is either not displayed on the display unit 51 of the other person's information terminal 5, or is displayed abstracted to the extent that the user cannot be identified.
- the user can try on clothes online by using the avatar A1. Therefore, in the second operation example, the manufacturer who provides the clothes does not have to provide the clothes to the actual store, so that there is an advantage that the manufacturer does not have to hold the inventory. Further, in the second operation example, there is an advantage that the user can objectively view the user himself / herself by looking at the display unit 51 as compared with the case of trying on clothes in the fitting room of the actual store.
- the virtual space V1 displayed on the display unit 51 may be appropriately changed by the user operating the information terminal 5. For example, by operating the information terminal 5 to select an urban area, the user can superimpose and display the user's own avatar A1 on the virtual space V1 simulating the urban area. In this case, the user has an advantage that it is easier to grasp whether or not the selected garment is suitable for a predetermined situation, as compared with the case where the garment is tried on in the fitting room.
- It is preferable that the avatar A1 in a state of not wearing clothes is displayed on the display unit 51 in an appropriately abstracted form, or is not displayed on the display unit 51 at all.
- the third operation example is an example of the operation of the avatar generation system 100 at the time of executing an application for fitness or health care, for example.
- the dedicated server of the company that operates the application executes a simulation that estimates the change in the user's body shape when the predetermined exercise is continued and / or when the predetermined meal is continuously eaten.
- the user first creates an account. Further, the user requests the avatar A1 generated in advance by the avatar generation system 100 from the avatar generation system 100 through the application, and associates the avatar A1 with the account. In addition, the user registers another person who uses the application as a friend at the stage of creating an account or while the application is running. The friend may be, for example, a friend already registered on the SNS. In addition, the user also associates information about his / her constitution with the account.
- When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. Then, the application displays the virtual space V1 on the display unit 51 of the information terminal 5, and superimposes and displays the user's avatar A1 on the virtual space V1. At this time, the application requests the avatar generation system 100 for the user's avatar data to be displayed in the virtual space V1.
- the avatar generation system 100 transmits the avatar data of the user of the requesting information terminal 5 to the information terminal 5 in response to the request from the application.
- the user's avatar A1 is displayed on the display unit 51 of the information terminal 5.
- the user's avatar A1 has not been modified by the modification unit 12.
- the application requests the avatar generation system 100 for the user's avatar A1 corresponding to the selected exercise and/or meal.
- the avatar generation system 100 requests a simulation result corresponding to the selected exercise and/or meal from the dedicated server in response to the request from the application. Then, the modification unit 12 of the avatar generation system 100 modifies the user's avatar A1 based on the simulation result received from the dedicated server. For example, the modification unit 12 modifies the user's avatar A1 so that the muscles corresponding to the selected exercise are enlarged, the whole body is slimmed down, or the corresponding part is slimmed.
- the avatar generation system 100 transmits the modified user's avatar data to the information terminal 5.
- the user's modified avatar A1 is displayed on the display unit 51 of the information terminal 5.
- the user's avatar A1, in which the muscles corresponding to the selected exercise are enlarged, the whole body is slimmed down, or the corresponding part is slimmed, is superimposed on the virtual space V1 and displayed on the display unit 51.
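- One plausible way to apply such a simulation result is as blend-shape style morph weights on the avatar's body-shape parameters. The parameter names and value ranges below are assumptions for illustration.

```python
def apply_simulation(avatar_params: dict, simulation_deltas: dict) -> dict:
    """Apply simulated body-shape changes, clamping each morph weight to [0, 1].

    simulation_deltas maps a body-shape parameter to its simulated change,
    e.g. {"abs_muscle": +0.3, "waist": -0.2} after a year of abdominal exercise.
    """
    modified = dict(avatar_params)
    for param, delta in simulation_deltas.items():
        modified[param] = min(1.0, max(0.0, modified.get(param, 0.0) + delta))
    return modified

# Example: the current avatar A1 morphed toward the future avatar A16 of FIG. 9.
future = apply_simulation({"abs_muscle": 0.2, "waist": 0.6},
                          {"abs_muscle": 0.3, "waist": -0.2})
```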
- FIG. 9 shows an example of the display of the user's avatar A1 when the user's selected exercise is continuously executed.
- a closed space such as a gym is displayed as a virtual space V1 on the display unit 51.
- On the display unit 51, the current avatar A1 of the user, the avatar A15 of the user taking the motion of a predetermined exercise (here, an abdominal muscle exercise), and the avatar A16 of the user after a certain period of time has elapsed since starting the predetermined exercise are displayed.
- the current avatar A1 is the avatar A1 generated by the generation unit 11.
- the arrows between the avatars A1 in FIG. 9 indicate the passage of time.
- the user's avatar A15 located in the center of FIG. 9 wears exercise clothes B13 as an accessory B1.
- the user's avatar A16 located on the right side of FIG. 9 wears sports-type underwear B14 as an accessory B1.
- the user's avatar A1 is displayed on the display unit 51 of the information terminal 5 of the other person in a form corresponding to the relationship between the other person and the user. For example, if the other person is the user's best friend, the future user's avatar A1 is displayed as it is on the display unit 51 of the other person's information terminal 5. Further, for example, if the other person is a third party, the user's avatar A1 is not displayed on the display unit 51 of the other person's information terminal 5, or is abstracted to the extent that the user cannot be identified. The user's avatar A1 is displayed with.
- In the third operation example, the avatar A1 reflecting an estimate of the user's future body shape resulting from exercise and/or meals is superimposed and displayed on the virtual space V1. Therefore, the third operation example has the advantage that the user can easily perceive the effect of exercise and/or meals. In fitness, for example, the effect of training does not appear in a short period, so it is difficult for the user to experience it; in the third operation example, by contrast, the user can easily realize the effect of the training by looking at the avatar A1. As a result, it is easy to maintain the user's motivation for training. In addition, even if the user is not yet training, presenting the avatar A1 that estimates the user's future body shape makes it easy to motivate the user to go to a gym or the like in order to approach that body shape.
- the application may acquire exercise information regarding the exercise performed by the user from a wearable terminal such as an activity meter worn by the user.
- the exercise information may include, for example, the type of exercise, the execution time of the exercise, the exercise intensity, and the like.
- Based on the exercise information, the avatar generation system 100 can modify the user's avatar A1 so as to reflect the simulation result from the dedicated server.
- the above embodiment is only one of the various embodiments of the present disclosure.
- the above-described embodiment can be variously modified depending on the design and the like as long as the object of the present disclosure can be achieved.
- the same function as the avatar generation method may be embodied in a (computer) program, a non-transitory recording medium on which the program is recorded, or the like.
- the program according to one aspect of the present disclosure causes one or more processors to execute the above-mentioned avatar generation method.
- the processing unit 1 and the like include a computer system.
- the computer system mainly consists of a processor and a memory as hardware.
- When the processor executes the program recorded in the memory of the computer system, the function as the avatar generation system 100 in the present disclosure is realized.
- the program may be pre-recorded in the memory of the computer system, may be provided through a telecommunications line, or may be provided recorded on a non-transitory recording medium such as a memory card, optical disc, or hard disk drive that can be read by the computer system.
- the processor of a computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large scale integrated circuit (LSI).
- the integrated circuit such as IC or LSI referred to here has a different name depending on the degree of integration, and includes an integrated circuit called a system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration).
- An FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured may also be adopted as the processor.
- a plurality of electronic circuits may be integrated on one chip, or may be distributed on a plurality of chips.
- a plurality of chips may be integrated in one device, or may be distributed in a plurality of devices.
- the computer system referred to here includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
- It is not essential for the avatar generation system 100 that a plurality of its functions be integrated in one housing.
- the components of the avatar generation system 100 may be distributed in a plurality of housings.
- at least a part of the functions of the avatar generation system 100 may be realized by, for example, a server and a cloud (cloud computing).
- the scanner 4 is configured by using a plurality of image pickup devices 41.
- the scanner 4 may be composed of one image pickup device 41, and one image pickup device 41 may be moved to acquire a plurality of images. That is, in the present disclosure, a plurality of images for generating the avatar A1 may be obtained by photographing the subject T1 from a plurality of different angles, and the number of image pickup devices 41 is not limited to a plurality of units.
- the scanner 4 may be configured by using a distance sensor. Further, the scanner 4 may be configured by combining a distance sensor and an image pickup device. Examples of the distance sensor include a lidar sensor. Specifically, a ToF type LIDAR sensor may be used. By using the distance sensor, a more realistic avatar can be generated.
- In the above embodiment, the modification unit 12 is included in the processing unit 1, but the present disclosure is not limited to this configuration.
- the modification unit 12 may be included in the information terminal 5, or may be included in both the processing unit 1 and the information terminal 5. In the latter case, the processing of the modification unit 12 may be shared between the processing unit 1 and the information terminal 5.
- the modification unit 12 may be included in the dedicated server of the company that operates the application, or may be included in both the processing unit 1 and the dedicated server. In the latter case, the processing of the modification unit 12 may be shared between the processing unit 1 and the dedicated server.
- the modification unit 12 may modify the avatar A1 into a state of wearing clothes when, for example, the avatar A1 generated by the generation unit 11 is in underwear.
- conversely, the modification unit 12 may modify the avatar A1 generated by the generation unit 11 into an unclothed avatar A1 when the avatar A1 is in a state of wearing clothes.
- the modification unit 12 can modify the avatar A1 by using a trained model machine-learned by, for example, deep learning. That is, the modification unit 12 estimates the body shape hidden by the clothes by using the trained model, and modifies the avatar A1 so as to reflect the estimated body shape.
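A minimal sketch of how such a trained model might be invoked follows; the network architecture, the feature encoding, and the weight file are hypothetical stand-ins, since the disclosure only states that a deep-learning model estimates the body shape hidden by the clothes.

```python
import torch
import torch.nn as nn

# Hypothetical regressor: from a clothed-avatar feature vector to body-shape
# parameters (e.g. coefficients of a parametric body model). The architecture,
# input encoding, and training data are assumptions for illustration only.
class BodyShapeEstimator(nn.Module):
    def __init__(self, in_dim=512, shape_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, shape_dim),
        )

    def forward(self, clothed_features):
        return self.net(clothed_features)

model = BodyShapeEstimator()
model.load_state_dict(torch.load("body_shape.pt"))  # assumed pre-trained weights
model.eval()
with torch.no_grad():
    shape_params = model(torch.randn(1, 512))  # stand-in for real input features
```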
- the application installed in the information terminal 5 may hold content such as moving images, still images, or audio in advance. Further, the application may acquire content such as moving images, still images, and audio from the outside via a network. Further, the application may display the avatar A1 generated by the avatar generation system 100 on the display unit 51 in combination with the above content.
- the display unit 51 of the information terminal 5 may display a general-purpose avatar prepared by the avatar generation system 100 or the dedicated server of the application superimposed on the virtual space V1 as the user's avatar A1.
- the user's avatar A1 may be displayed as a silhouette, that is, superimposed on the virtual space V1 in an abstracted form. Further, the size of the user's avatar A1 may be changed to match the size of the selected clothes. In the latter case, the presentation unit 13 may present that the selected clothing cannot be worn with the user's current body shape.
- the presentation unit 13 may not only present that the processing in the modification unit 12 (modification step ST2) has been executed, but also present a comparison of the results before and after the processing in the modification unit 12.
- the modification of the avatar may be performed on the information terminal 5. An example in which the avatar is modified and displayed on the information terminal 5 will be described below with reference to FIG.
- although an example in which the user of the information terminal 5 is a person different from the target person T1 is described here, the user and the target person T1 may be the same person.
- the user uses the information terminal 5 to execute an application for displaying the avatar of the target person T1.
- This application may be executed on the information terminal 5 or may be executed on a server (not shown) according to the instruction of the information terminal 5.
- the user logs in to the user's account in this application using the information terminal 5.
- for the login, password authentication, fingerprint authentication, face authentication, or the like can be used.
- the application is, for example, an online game or an SNS installed in the information terminal 5.
- the information terminal 5 acquires the avatar data of the target person T1 from the avatar generation system 100 (an example of a first server). Specifically, the information terminal 5 sends an instruction to the avatar generation system 100, and in response, the communication unit 2 of the avatar generation system 100 transmits the avatar data of the target person T1 stored in the storage unit 3 to the information terminal 5.
- the information terminal 5 communicates with an application server (an example of a second server, not shown) that stores data necessary for the operation of the application, and acquires information indicating the intimacy between the user and the target person T1 stored in the application server. The information indicating the intimacy is, for example, information indicating that the target person T1 is a relative, a close friend, or an acquaintance of the user in the online game or SNS, as described in the first operation example.
- the information terminal 5 modifies the avatar of the target person T1 according to the information indicating the intimacy.
- the modification method is as described above, and the lower the intimacy, the greater the degree of abstraction of the avatar.
- the modified avatar is displayed on the display unit 51 of the information terminal 5.
- the avatar of the target person T1 can be modified according to the intimacy between the user of the information terminal 5 and the target person T1. As a result, the privacy of the target person T1 can be automatically protected from the user of the information terminal 5.
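This second operation example amounts to: fetch the avatar data from the first server, fetch an intimacy indicator from the second server, and abstract the avatar on the terminal accordingly. A minimal client-side sketch follows; the endpoints, JSON fields, and the relation-to-level table are assumptions for illustration, not interfaces defined by the disclosure.

```python
import requests  # assumed HTTP transport; the endpoints below are hypothetical

ABSTRACTION = {"self": 0, "family": 0, "friend": 1, "acquaintance": 2, "stranger": 3}

def fetch_and_modify(user_id, target_id):
    # First server (avatar generation system 100) holds the avatar data;
    # second server (application server) holds the intimacy information.
    avatar = requests.get(f"https://avatar-server.example/avatars/{target_id}").json()
    rel = requests.get(
        "https://app-server.example/intimacy",
        params={"viewer": user_id, "target": target_id},
    ).json()["relation"]
    level = ABSTRACTION.get(rel, 3)  # the weaker the relation, the stronger the abstraction
    return abstract_avatar(avatar, level)  # the abstraction itself runs on the terminal

def abstract_avatar(avatar, level):
    # Placeholder: e.g. level 0 = unmodified, 3 = silhouette only.
    avatar["abstraction_level"] = level
    return avatar
```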
- the avatar generation system 100 may be a server different from the application server that stores information indicating intimacy, or may be the same server.
- the avatar generation method includes a generation step (ST1) and a modification step (ST2).
- the generation step (ST1) is a step of generating avatar data for displaying, in the virtual space (V1), an avatar (A1) that reflects at least the physical information of the target person (T1).
- the modification step (ST2) is a step of modifying at least one of the avatar (A1) and the accessory (B1) displayed in the virtual space (V1), among the avatar data generated in the generation step (ST1), according to at least one of the display mode of the avatar (A1) and the attribute of the accessory (B1).
- the modification step (ST2) includes a process of abstracting a part or all of the avatar (A1).
- the modification step (ST2) includes a process of modifying the avatar (A1) according to the party to whom the avatar (A1) is disclosed.
- the modification step (ST2) includes a process of modifying the avatar (A1) according to the relationship between the party to whom the avatar (A1) is disclosed and the target person (T1).
- this aspect has the advantage that the manner of giving information about the target person (T1) can be changed between a person who is relatively close to the target person (T1) and a person whose relationship with the target person (T1) is relatively weak.
- the modification step (ST2) includes a process of increasing the degree of abstraction of the avatar (A1) as the intimacy between the party to whom the avatar (A1) is disclosed and the target person (T1) becomes lower.
- the relationship includes at least whether or not the party to whom the avatar (A1) is disclosed is the target person (T1).
- the manner of giving information about the target person (T1) can be changed depending on whether or not the party to whom the avatar (A1) is disclosed is the target person (T1).
- the modification step (ST2) includes a process of modifying the avatar (A1) according to the time change of the target person (T1).
- the time change of the target person (T1) includes a change based on behavior information regarding the behavior of the target person (T1).
- this aspect has the advantage that a change in the physical information of the target person (T1) can be reflected in the avatar (A1).
- the modification step (ST2) includes a process of modifying the motion of the avatar (A1).
- the modification step (ST2) includes a process of modifying the avatar (A1) according to the physical characteristics of the target person (T1).
- the modification step (ST2) includes a process of modifying the accessory (B1).
- the modification step (ST2) includes a process of modifying the avatar (A1) according to the accessory (B1).
- in any one of the first to twelfth aspects, the avatar generation method further includes a presentation step (ST3) of presenting that the modification step (ST2) has been executed.
- in any one of the first to thirteenth aspects, the avatar generation method further includes a reception step (ST4) of receiving input from the user regarding the modification step (ST2).
- the modification step (ST2) includes a process of modifying a plurality of parts of the avatar data.
- the program according to the sixteenth aspect causes one or more processors to execute the avatar generation method of any one of the first to fifteenth aspects.
- the avatar generation system (100) includes a generation unit (11) and a modification unit (12).
- the generation unit (11) generates avatar data for displaying, in the virtual space (V1), an avatar (A1) that reflects at least the physical information of the target person (T1).
- the modification unit (12) modifies at least one of the avatar (A1) and the accessory (B1) displayed in the virtual space (V1), among the avatar data generated by the generation unit (11), according to at least one of the display mode of the avatar (A1) and the attribute of the accessory (B1).
- the methods according to the second to fifteenth aspects are not essential to the avatar generation method and can be omitted as appropriate.
- 100 Avatar generation system, 11 Generation unit, 12 Modification unit, A1 Avatar, B1 Accessory, ST1 Generation step, ST2 Modification step, ST3 Presentation step, ST4 Reception step, T1 Target person, V1 Virtual space
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
This avatar generation method comprises a generation step and a modification step. The generation step is a step for generating avatar-purpose data for displaying, in a virtual space, an avatar that reflects at least physical information of a target person. The modification step is a step for modifying, among the avatar-purpose data generated in the generation step, the avatar and/or an accessory that will be displayed in the virtual space, according to a display form of the avatar and/or an attribute of the accessory.
Description
This disclosure generally relates to avatar generation methods, programs, and avatar generation systems. More specifically, the present disclosure relates to an avatar generation method for generating an avatar displayed in a virtual space, a program for executing the avatar generation method, and an avatar generation system for generating an avatar displayed in the virtual space.
Patent Document 1 discloses a virtual space providing system in which an avatar image customized by a user can be operated in a virtual space provided by a virtual space providing server, and the avatar image can be moved in the virtual space.
It is an object of the present disclosure to provide an avatar generation method, a program, and an avatar generation system that can easily generate avatar data suitable for an application.
The avatar generation method according to one aspect of the present disclosure includes a generation step and a modification step. The generation step is a step of generating avatar data for displaying, in a virtual space, an avatar that reflects at least the physical information of a subject. The modification step is a step of modifying at least one of the avatar and an accessory displayed in the virtual space, among the avatar data generated in the generation step, according to at least one of the display mode of the avatar and the attribute of the accessory.
The program according to one aspect of the present disclosure causes one or more processors to execute the above-mentioned avatar generation method.
The avatar generation system according to one aspect of the present disclosure includes a generation unit and a modification unit. The generation unit generates avatar data for displaying, in a virtual space, an avatar that reflects at least the physical information of a subject. The modification unit modifies at least one of the avatar and an accessory displayed in the virtual space, among the avatar data generated by the generation unit, according to at least one of the display mode of the avatar and the attribute of the accessory.
This disclosure has the advantage that it is easy to generate avatar data suitable for the application.
(1) Outline
Hereinafter, the avatar generation method of the present embodiment and the avatar generation system 100 (see FIG. 1), which is the execution subject of the avatar generation method, will be described with reference to the drawings. However, the following embodiment is only part of the various embodiments of the present disclosure. The following embodiment can be variously modified according to the design and the like as long as the object of the present disclosure can be achieved. Further, each figure described in the following embodiment is a schematic view, and the ratios of the sizes and thicknesses of the components in the figures do not necessarily reflect the actual dimensional ratios.
The avatar generation method (avatar generation system 100) of the present embodiment is a method (system) for generating the avatar A1 of the target person T1. The "avatar" referred to in the present disclosure is a character displayed in the virtual space V1 as an alter ego of the target person T1 in the real space R1. The avatar A1 is displayed in the virtual space V1 as, for example, a three-dimensional model imitating the target person T1.
Further, in the present embodiment, the virtual space V1 is a three-dimensional space displayed on the display unit 51 of the information terminal 5 described later. The virtual space V1 may be a space expressed by VR (Virtual Reality) technology or a space expressed by AR (Augmented Reality). Further, the virtual space V1 may be a space represented by MR (Mixed Reality) technology or a space represented by SR (Substitutional Reality) technology. That is, the virtual space V1 is a space expressed by xR (Cross Reality) technology.
As shown in FIG. 1, the avatar generation system 100 includes a generation unit 11 and a modification unit 12. The generation unit 11 is the execution subject of the generation step ST1 (see FIG. 2) in the avatar generation method of the present embodiment. The modification unit 12 is the execution subject of the modification step ST2 (see FIG. 2) in the avatar generation method of the present embodiment.
As the generation step ST1, the generation unit 11 generates avatar data for displaying, in the virtual space V1, the avatar A1 that reflects at least the physical information of the target person T1. The avatar data is used, for example, in an application executed on the information terminal 5. In this case, the application can superimpose and display the avatar A1 on the virtual space V1 by using the avatar data provided by the avatar generation system 100. The avatar A1 includes a three-dimensional model of the target person T1, textures, materials, a skeleton, skin weights, and the like.
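For concreteness, one possible way to organize such avatar data is sketched below; the field names and types are illustrative assumptions, not a format specified by the disclosure.

```python
from dataclasses import dataclass, field

# Illustrative container for the "avatar data" described above; the fields
# mirror the components listed in the text (model, textures, materials,
# skeleton, skin weights, optional motion data and accessory data).
@dataclass
class AvatarData:
    mesh: bytes                 # 3D model of the subject (e.g. serialized glTF)
    textures: dict              # texture images keyed by material slot
    materials: dict             # material parameters
    skeleton: list              # joint hierarchy for rigging
    skin_weights: dict          # per-vertex bone weights
    motion: list = field(default_factory=list)       # optional motion data
    accessories: list = field(default_factory=list)  # optional accessory B1 data
```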
Here, the avatar A1 included in the avatar data is not a model completely divorced from the target person T1 in the real space R1, such as a stuffed bear, but a model in which the physical information of the target person T1, that is, information sufficient to identify the target person T1, is reflected. In the present embodiment, the avatar A1 included in the avatar data is a model that realistically imitates the target person T1, that is, a photorealistic model. Further, the avatar data may include not only the data of the avatar A1 but also the data of the accessory B1 described later. Further, the avatar data may include motion data relating to the movement of the avatar A1.
As the modification step ST2, the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 displayed in the virtual space V1, among the avatar data generated by the generation unit 11 (generation step ST1). Hereinafter, unless otherwise specified, the avatar A1 and the accessory B1 before being modified by the modification unit 12 are simply referred to as the "avatar A1" and the "accessory B1", respectively. Further, unless otherwise specified, the avatar A1 and the accessory B1 after being modified by the modification unit 12 are referred to as the "modified avatar A1" and the "modified accessory B1", respectively.
The "accessory" referred to in the present disclosure is an object displayed together with the avatar A1 in the virtual space V1 and related to the avatar A1 in some way. As an example, the accessory B1 may include an object that the avatar A1 can wear, such as clothes, and an object that the avatar A1 uses, such as a car or a house. In the present embodiment, the clothes T11 worn by the target person T1 when the avatar A1 is generated are part of the avatar A1, not of the accessory B1.
The modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to at least one of the display mode of the avatar A1 and the attribute of the accessory B1 in the virtual space V1. For example, the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to the partner who discloses the avatar A1 (that is, the display mode of the avatar A1) as described later. Further, for example, the modification unit 12 modifies at least one of the avatar A1 and the accessory B1 according to the type and / or size of the garment (that is, the attribute of the accessory B1) as described later.
Then, by using the avatar data provided by the avatar generation system 100, the application can superimpose and display the modified avatar A1 and/or the modified accessory B1 on the virtual space V1.
As described above, in the present embodiment, at least one of the avatar A1 and the accessory B1, among the avatar data generated by the generation unit 11, is modified according to at least one of the display mode of the avatar A1 and the attribute of the accessory B1. Therefore, the present embodiment has an advantage that it is easy to generate avatar data suitable for the application.
(2) Details
Hereinafter, the avatar generation system 100 of the present embodiment will be described in detail with reference to FIG. 1. In the present embodiment, it is assumed that the company that operates the avatar generation system 100 and the company that provides the application are different from each other.
In this embodiment, the avatar generation system 100 is a server. The avatar generation system 100 is configured to be able to communicate with each of the scanner 4 and the information terminal 5 via a network N1 such as the Internet. In FIG. 1, one scanner 4 and one information terminal 5 are shown, but each of them may be plural.
The scanner 4 is a 3D scanner, and is installed in a remote place away from the place where the avatar generation system 100 is installed, for example. The scanner 4 includes a plurality of image pickup devices 41 that image the subject T1 in the real space R1. In FIG. 1, four image pickup devices 41 are installed, but in reality, it is preferable that several tens of image pickup devices 41 are installed. It should be noted that the plurality of image pickup devices 41 and their arrangement in FIG. 1 are merely conceptually expressed and do not represent an actual mode.
Each image pickup device 41 has a solid-state image pickup element such as a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor. The plurality of image pickup devices 41 are installed at different positions so as to surround the subject T1 and image the subject T1 from different angles. As a result, the scanner 4 scans the whole body of the subject T1 by acquiring a plurality of still images of the subject T1 captured from various angles. The plurality of still images acquired by the scanner 4 are transmitted to the avatar generation system 100 via the network N1.
The information terminal 5 is, for example, a smartphone. Further, the information terminal 5 may include a desktop personal computer (including a display), a laptop personal computer, a tablet terminal, or the like. That is, the information terminal 5 is a device for communicating with a server to perform information processing and information display, and may be a device including a processor, a memory, a transmitter, a receiver, and a display. In addition, the information terminal 5 may include a head-mounted display. The user of the information terminal 5 is basically the target person T1.
The information terminal 5 receives the input from the user and executes the application pre-installed in the information terminal 5. Alternatively, the information terminal 5 receives an input from the user and acquires the execution result of the application on the web server via the browser installed in the information terminal 5 in advance. In either case, the virtual space V1 based on the execution result of the application is displayed on the display unit 51 of the information terminal 5.
Here, the avatar A1 based on the avatar data provided by the avatar generation system 100, or the modified avatar A1, is superimposed and displayed in the virtual space V1. Further, when the avatar data includes the data of the accessory B1, the accessory B1 or the modified accessory B1 is superimposed and displayed in the virtual space V1 in addition to the avatar A1 (or the modified avatar A1). Further, when the avatar data includes motion data, the avatar A1 (or the modified avatar A1) that moves according to the motion data is drawn in the virtual space V1.
As shown in FIG. 1, the avatar generation system 100 includes a processing unit 1, a communication unit 2, and a storage unit 3.
The processing unit 1 is configured to control the entire avatar generation system 100. The processing unit 1 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, the one or more processors function as the processing unit 1 by executing one or more programs (applications) stored in the one or more memories. Although the program is recorded in advance in the memory of the processing unit 1 here, it may be provided through a telecommunication line such as the Internet, or recorded on a non-transitory recording medium such as a memory card.
The processing unit 1 has a generation unit 11, a modification unit 12, a presentation unit 13, and a reception unit 14. In FIG. 1, these functional units do not represent substantive configurations, but the functions realized by the processing unit 1.
The generation unit 11 generates avatar data for displaying the avatar A1 in the virtual space V1 for each target person T1 as the generation step ST1. In the present embodiment, the generation unit 11 generates avatar data based on a plurality of still images of the whole body of the subject T1 acquired from the scanner 4. In the example shown in FIG. 1, since the subject T1 wears the clothes T11, the generation unit 11 generates avatar data including the avatar A1 of the subject T1 wearing the clothes T11.
Hereinafter, an example of a method of generating the avatar A1 of the target person T1 by the generation unit 11 will be described. First, the generation unit 11 generates a three-dimensional model of the subject T1 based on a plurality of still images acquired from the scanner 4. Specifically, the generation unit 11 calculates the coordinates of the target points in the basic space, which is a three-dimensional space, for each of all the target points of all the still images. Here, the generation unit 11 acquires the image pickup result of each image pickup apparatus 41 to acquire the distance from the image pickup apparatus 41 to the target point when projected onto the basic space. Further, the generation unit 11 acquires the position information of each image pickup device 41 to acquire the distance between the adjacent image pickup devices 41 when projected onto the basic space. The generation unit 11 calculates the coordinates of the target point in the basic space based on these distances by the principle of triangulation. Then, the generation unit 11 generates a three-dimensional model of the target person T1 based on the coordinates of all the target points in the basic space.
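The paragraph above describes recovering each target point by triangulation from multiple calibrated image pickup devices. A minimal two-view sketch using the standard linear (DLT) formulation is shown below; the 3x4 projection matrices P1 and P2 are assumed to be known from calibration, which the disclosure leaves unspecified.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one target point seen by two cameras.

    P1, P2: 3x4 projection matrices of two calibrated image pickup devices.
    x1, x2: (u, v) pixel coordinates of the same target point in each image.
    Returns the 3D coordinates of the point in the basic space.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Repeating this over every target point in every image pair, as the text describes, yields the coordinates from which the three-dimensional model of the subject T1 is built.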
Next, the generation unit 11 generates a texture to be attached to the three-dimensional model based on a plurality of still images acquired from the scanner 4. Here, the texture includes a texture corresponding to the skin of the subject T1 and a texture corresponding to the clothes T11 worn by the subject T1. Then, the generation unit 11 pastes the generated texture on the three-dimensional model.
Next, the generation unit 11 executes rigging on the three-dimensional model. In rigging, the generation unit 11 executes skeleton setting, IK (Inverse Kinematics) and / or FK (Forward Kinematics) setting, skinning including weight adjustment, and the like on the three-dimensional model. As a result, the avatar A1 of the subject T1 who can take various movements is generated.
As described above, in the present embodiment, the generation unit 11 can automatically generate the avatar A1 of the target person T1 based on a plurality of still images obtained by capturing the whole body of the target person T1 with the scanner 4. Further, when the avatar data includes motion data, or when motion data is applied to the avatar A1 in the application, the avatar A1 can be made to move according to the motion data in the virtual space V1.
Here, the motion data may be unique data acquired for each target person T1 by performing motion capture for each target person T1, or may be general-purpose motion data. In the latter case, by applying the general-purpose motion data, basically all the avatars A1 can be made to take the same kind of movements.
The modification unit 12 modifies at least one of the avatar A1 and the accessory B1 generated by the generation unit 11 mainly in response to a request from the application as the modification step ST2. In the present embodiment, the modification unit 12 may modify not only the avatar A1 but also both the avatar A1 and the accessory B1. Further, the modification unit 12 may modify not only one location in the avatar A1 but also a plurality of locations. That is, the process in the modification unit 12 (modification step ST2) may include a process of modifying a plurality of parts of the avatar data.
The following is a list of various processes that can be executed by the modification unit 12. The modification unit 12 may be in a mode in which at least one of the various processes listed below can be executed, and may not be in a mode in which all the processes can be executed.
The process in the modification unit 12 (modification step ST2) may include a process of abstracting a part or all of the avatar A1 (hereinafter, also referred to as “abstraction process”). The "abstraction of an avatar" as used in the present disclosure means to replace a part or all of the avatar A1 with an abstract expression based on the avatar A1 generated by the generation unit 11. The abstraction process may include, for example, a mosaic process or a blur process. Further, the abstraction process may include a process of deforming the avatar A1 and / or the accessory B1. The "deformation" as used in the present disclosure may include modifying a part or all of the avatar A1 and / or the accessory B1 so as to be consistent with the virtual space V1 provided by the application. Further, the "deformation" referred to in the present disclosure may include deforming a part or all of the avatar A1 and / or the accessory B1 to the extent that the person who sees the avatar A1 cannot recognize the target person T1.
Here, a specific example of the abstraction process for the avatar A1 will be described with reference to FIGS. 3 to 5. FIG. 3 shows the avatar A1 of the subject T1 that has not been modified by the modification unit 12, that is, the avatar A1 generated by the generation unit 11. Anyone who knows the target person T1 can understand that the avatar A1 represents the target person T1 by looking at the avatar A1.
FIG. 4 shows a modified avatar A11 in which a part of the avatar A1 of the target person T1 has been subjected to the abstraction process. In the modified avatar A11, an abstraction process that deforms the region A100 corresponding to the head of the target person T1 has been applied. As a result, the characteristics of the parts of the head of the target person T1 (hair, eyes, nose, mouth, skin, and the like) are no longer drawn on the modified avatar A11, so it becomes difficult for a person who sees the modified avatar A11 to grasp that it represents the target person T1.
In one example, the texture data (image data) of the target person T1 included in the avatar data can be used to perform this abstraction process. For example, the modification unit 12 can recognize the face in the texture data and modify the recognized face. Here, an existing face recognition algorithm can be used for the face recognition. The modification unit 12 can abstract the face by changing the size and shape of the recognized facial parts (hair, eyes, nose, mouth, skin, and the like, as described above). Alternatively, the modification unit 12 may abstract the face by modifying the resolution of the texture data of the recognized face portion. As an example, the modification unit 12 may abstract the face of the avatar by lowering the resolution of the texture data of the face portion. The modified avatar A11 can then be generated by pasting the abstracted face texture onto the three-dimensional model of the target person T1 included in the avatar data. This allows the abstraction process to be performed without changing the shape of the three-dimensional model. Furthermore, the abstraction process may be performed not only by modifying the texture data but also by modifying the shape of the three-dimensional model.
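A minimal sketch of the texture-resolution approach described above follows, using OpenCV's bundled Haar cascade as a stand-in for the unspecified "existing face recognition algorithm"; the block size and detection parameters are illustrative assumptions.

```python
import cv2

def pixelate_face(texture_bgr, block=16):
    """Abstract the face region of a texture image by lowering its resolution.

    Detects faces in the texture, downsamples each face region, and scales
    it back up, producing a mosaic effect; block controls the mosaic size.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = texture_bgr[y:y + h, x:x + w]
        small = cv2.resize(face, (max(1, w // block), max(1, h // block)),
                           interpolation=cv2.INTER_LINEAR)
        texture_bgr[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return texture_bgr
```

The pixelated texture would then be pasted back onto the unchanged three-dimensional model, as the text describes.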
FIG. 5 shows a modified avatar A12 in which a part of the modified avatar A11 of the target person T1 has been further abstracted. In the modified avatar A12, an abstraction process that deforms the region A101 corresponding to the clothes of the target person T1 has been applied. As a result, the characteristics of the clothes T11 of the target person T1 are no longer drawn on the modified avatar A12, so it becomes even more difficult for a person who sees the modified avatar A12 to grasp that it represents the target person T1. In this way, the degree of abstraction may be increased for only a part of the avatar A1 (for example, the head), or for the whole of the avatar A1. By abstracting only the head, the privacy of the target person T1 can be protected effectively with relatively simple processing.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the avatar A1 according to the party to whom the avatar A1 is disclosed. For example, when the user of the information terminal 5 is the target person T1, the person who sees the avatar A1 of the target person T1 displayed on the display unit 51 of the information terminal 5 (that is, the party to whom the avatar A1 is disclosed) is the target person T1 himself or herself. On the other hand, when the user of the information terminal 5 is a person other than the target person T1, the person who sees the avatar A1 of the target person T1 displayed on the display unit 51 of the information terminal 5 (that is, the party to whom the avatar A1 is disclosed) is another person.
Here, as already described, since the avatar A1 generated by the generation unit 11 is represented by a photorealistic three-dimensional model, it can correspond to information that can identify the target person T1, that is, to personal information of the target person T1. Therefore, some target persons T1 may not want the avatar A1 (that is, personal information) to be disclosed to others. Therefore, by modifying the avatar A1 according to the party to whom the avatar A1 is disclosed, for example, a modified avatar A1 abstracted to the extent that the target person T1 cannot be identified is disclosed to others. This makes it difficult for others to grasp that the modified avatar A1 represents the target person T1.
In particular, in the present embodiment, the process in the modification unit 12 (modification step ST2) may include a process of modifying the avatar A1 according to the relationship between the party to whom the avatar A1 is disclosed and the target person T1. The relationship referred to here may include at least whether or not the party to whom the avatar A1 is disclosed is the target person T1. For example, when the person who sees the avatar A1 of the target person T1 (the party to whom the avatar A1 is disclosed) is the target person T1 himself or herself, the modification unit 12 does not modify the avatar A1, or modifies the avatar A1 only to the extent necessary for consistency with the virtual space V1. Likewise, for example, when the person who sees the avatar A1 of the target person T1 (the party to whom the avatar A1 is disclosed) is a relative or a friend of the target person T1, the modification unit 12 does not modify the avatar A1, or modifies it only to the extent necessary for consistency with the virtual space V1.
On the other hand, when the person who sees the avatar A1 of the target person T1 (the party to whom the avatar A1 is disclosed) is a friend or acquaintance who is not particularly close to the target person T1, the modification unit 12 abstracts the avatar A1 to the extent that it becomes difficult to recognize the target person T1. Further, when the person who sees the avatar A1 of the target person T1 (the party to whom the avatar A1 is disclosed) is a third party who has no relationship with the target person T1, the modification unit 12 abstracts the avatar A1 to the extent that the target person T1 cannot be recognized at all. The abstraction process here may include a process of not displaying the avatar A1 of the target person T1 on the display unit 51 of the other person's information terminal 5. As described above, in the present embodiment, the process in the modification unit 12 (modification step ST2) may include a process of increasing the degree of abstraction of the avatar A1 as the intimacy between the party to whom the avatar A1 is disclosed and the target person T1 becomes lower.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the avatar A1 according to the time change of the target person T1. The "time change of the target person" referred to in the present disclosure means a change in the appearance of the target person T1 with the passage of time from the time when the avatar A1 is generated by the generation unit 11 to a predetermined point in time. For example, when the application requests the avatar A1 of the target person T1 as of 20 years later, the modification unit 12 estimates a three-dimensional model of the target person T1 as of 20 years later by an appropriate algorithm, and modifies the avatar A1 based on the estimation result. In this case, the modified avatar A1 reflects the change in the appearance of the target person T1 over the 20 years.
In particular, in the present embodiment, the time change of the target person T1 may include a change based on behavior information regarding the behavior of the target person T1. That is, the time change of the target person T1 may include not only the change in the appearance of the target person T1 with the passage of time when the target person T1 takes no particular action, but also the change in the appearance of the target person T1 over time when the target person T1 takes some action. As an example, when the application requests the avatar A1, as of one year later, of a target person T1 who runs for 30 minutes every day, the modification unit 12 estimates a three-dimensional model of the target person T1 as of one year later by an appropriate algorithm. The modification unit 12 also estimates, by an appropriate algorithm, the influence of the running (the behavior of the target person T1) on the appearance of the target person T1. Then, the modification unit 12 modifies the avatar A1 based on these estimation results. In this case, the modified avatar A1 reflects both the change in the appearance of the target person T1 over one year and the change in the appearance of the target person T1 due to the running.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the motion of the avatar A1. For example, when the application requests that the clothes T11 of the avatar A1 be changed to other clothes as the accessory B1, the modification unit 12 modifies the avatar A1 so that the clothes T11 of the avatar A1 are changed to the other clothes. Then, the modification unit 12 modifies the motion of the modified avatar A1 according to the attributes of the other clothes.
As an example, it is assumed that the motion of the avatar A1 takes an upright posture, and the other clothes are more elegant than the clothes T11. In this case, the modification unit 12 modifies the motion of the modified avatar A1 from taking an upright posture to taking an elegant gesture. Further, as an example, it is assumed that the motion of the avatar A1 is walking and the other clothes are heavier than the clothes T11. In this case, the modification unit 12 modifies the motion of the modified avatar A1 from a motion of walking relatively lightly to a motion of walking relatively slowly.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the avatar A1 according to the physical characteristics of the target person T1. For example, the modification unit 12 does not apply modification processing so drastic that the original form of the avatar A1 is lost, but applies modification processing to a part or all of the avatar A1 only to the extent that the physical characteristics of the target person T1, that is, the expression of the distinctive features of the target person T1, remain.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the accessory B1. For example, as described above, when the application requests that the clothes T11 of the avatar A1 be changed to other clothes as the accessory B1, the modification unit 12 modifies the avatar A1 so that the clothes T11 of the avatar A1 are changed to the other clothes. As described above, the process of modifying the accessory B1 may include a process of giving the accessory B1 to the avatar A1. Further, the process of modifying the accessory B1 may include a process of abstracting the accessory B1 to the extent that it cannot be identified, a process of deforming the accessory B1 so as to be consistent with the virtual space V1, and the like.
Further, the process in the modification unit 12 (modification step ST2) may include a process of modifying the avatar A1 according to the accessory B1. For example, suppose that the application requests that the clothes T11 of the avatar A1 be changed to other clothes as the accessory B1, and that the size of the other clothes is smaller or larger than that of the clothes T11. In this case, the modification unit 12 modifies the size of the avatar A1 so as to be consistent with the size of the other clothes. Alternatively, the modification unit 12 may modify the motion of the avatar A1 so as to express the feeling of the target person T1 about not being able to wear the other clothes, without modifying the size of the avatar A1.
The presentation unit 13 presents that the process in the modification unit 12 (modification step ST2) has been executed. The presentation unit 13 is the execution subject of the presentation step ST3. For example, when causing the display unit 51 of the information terminal 5 to display the modified avatar A1, the presentation unit 13 causes a character string or an image indicating that the avatar A1 has been modified to be transmitted to the information terminal 5 via the network N1. As a result, the character string or the image is displayed on the display unit 51 of the information terminal 5 together with the modified avatar A1. Further, for example, when causing the display unit 51 of the information terminal 5 to display the modified avatar A1, the presentation unit 13 causes a voice message indicating that the avatar A1 has been modified to be transmitted to the information terminal 5 via the network N1. As a result, the information terminal 5 displays the modified avatar A1 on the display unit 51 and outputs the voice message from its speaker.
The reception unit 14 receives input from the user regarding the process in the modification unit 12 (modification step ST2). The reception unit 14 is the execution subject of the reception step ST4. For example, when the display unit 51 of the information terminal 5 displays the modified avatar A1 and/or the modified accessory B1, the reception unit 14 receives, via the network N1, information regarding an operation input by the user (here, the target person T1) to the information terminal 5. The operation input may include, as an example, an input requesting a change of at least some parameters of the modified avatar A1. The parameters may include, as an example, the form of each part of the face of the avatar A1 (for example, the height of the nose).
When the reception unit 14 receives the input from the user, the modification unit 12 further modifies the modified avatar A1 and/or the modified accessory B1 according to the input from the user. For example, when an input requesting that the eyes of the modified avatar A1 be narrowed is received, the modification unit 12 further modifies the modified avatar A1 so as to narrow its eyes.
The communication unit 2 has a communication interface that can be connected to the network N1. The communication unit 2 is configured to be able to communicate with the scanner 4 via the network N1. Further, the communication unit 2 is configured to be able to communicate with the information terminal 5 via the network N1. The communication protocol of the communication interface can be selected from various well-known wireless communication standards such as Wi-Fi (registered trademark).
The communication unit 2 receives a plurality of still images transmitted from the scanner 4 via the network N1. Further, the communication unit 2 transmits the avatar data to the information terminal 5 via the network N1. Further, the communication unit 2 receives the request information regarding the request from the information terminal 5 (that is, the request from the application) via the network N1.
The storage unit 3 includes, for example, an electrically rewritable non-volatile memory such as EEPROM (Electrically Erasable Programmable Read-Only Memory), and a volatile memory such as RAM (Random Access Memory). The storage unit 3 stores the avatar data generated by the generation unit 11 for each target person T1. Further, the storage unit 3 stores the avatar data modified by the modification unit 12 for each target person T1 and the application.
(3) Operation
Hereinafter, an example of the basic operation of the avatar generation system 100 of the present embodiment will be described with reference to FIG. 2. First, the target person T1 goes to the facility where the scanner 4 is installed and uses the scanner 4. The scanner 4 then scans the whole body of the target person T1 by acquiring a plurality of still images of the target person T1 captured from various angles, and transmits the acquired still images to the avatar generation system 100 via the network N1. As a result, the avatar generation system 100 acquires a plurality of still images of the target person T1 captured by the scanner 4 (S1).
Next, the generation unit 11 generates avatar data including the avatar A1 of the target person T1 based on the plurality of still images capturing the whole body of the target person T1 acquired from the scanner 4 (S2). Process S2 corresponds to the generation step ST1. The generation unit 11 then associates the generated avatar data with the target person T1 and stores it in the storage unit 3 (S3).
After that, the avatar generation system 100 executes nothing in particular until it receives request information regarding a request from the information terminal 5 (that is, a request from the application) (S4: No). Of course, if another target person T1 uses the scanner 4 during this period, the above processes S1 to S3 are repeated for that person.
When the user of the information terminal 5 (here, the target person T1) operates the information terminal 5 to execute the application, the information terminal 5 transmits the request information to the avatar generation system 100 via the network N1. Upon receiving the request information (S4: Yes), the modification unit 12 reads the avatar data from the storage unit 3 and modifies it according to the request information (that is, the request from the application) (S5). Process S5 corresponds to the modification step ST2. The modification unit 12 then transmits (outputs) the avatar data including the modified avatar A1 and/or the modified accessory B1 to the information terminal 5 via the network N1 (S6).
The presentation unit 13 also transmits information indicating that the avatar A1 has been modified (a character string, an image, a voice message, or the like) to the information terminal 5 via the network N1, thereby presenting to the user of the information terminal 5 that the avatar A1 has been modified (S7). Process S7 corresponds to the presentation step ST3.
After that, when the user of the information terminal 5 operates the information terminal 5 to request further modification of the modified avatar A1 and/or the modified accessory B1, the information terminal 5 transmits information regarding the operation input to the avatar generation system 100 via the network N1. Upon receiving the information regarding the operation input (S8: Yes), the modification unit 12 reads the modified avatar data from the storage unit 3 and further modifies it according to that information (that is, the input from the user) (S9). Process S8 corresponds to the reception step ST4, and process S9 corresponds to the modification step ST2. The modification unit 12 then transmits (outputs) the further modified avatar data to the information terminal 5 via the network N1 (S10).
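The flow S1 to S10 can be read as a single message-dispatch loop on the server side. The following is a minimal sketch under assumed message shapes and stub handlers; none of these identifiers come from the patent.

```python
def generate_avatar(images):
    """Stand-in for the generation step ST1 (S2)."""
    return {"mesh": f"mesh_from_{len(images)}_images", "params": {}}

def modify_avatar(avatar, request):
    """Stand-in for the modification step ST2 (S5/S9)."""
    return {**avatar, "params": {**avatar.get("params", {}), **request}}

def handle_message(store, msg, send):
    if msg["type"] == "scan_images":                       # S1-S3
        store[msg["person_id"]] = generate_avatar(msg["images"])
    elif msg["type"] == "app_request":                     # S4-S6
        avatar = modify_avatar(store[msg["person_id"]], msg["request"])
        store[(msg["person_id"], msg["app_id"])] = avatar
        send(avatar)
        send({"notice": "avatar was modified"})            # S7 (step ST3)
    elif msg["type"] == "user_input":                      # S8-S10
        key = (msg["person_id"], msg["app_id"])
        store[key] = modify_avatar(store[key], msg["request"])
        send(store[key])

# Usage with an in-memory store and a list standing in for the network.
log, store = [], {}
handle_message(store, {"type": "scan_images", "person_id": "t1",
                       "images": ["a.png", "b.png"]}, log.append)
handle_message(store, {"type": "app_request", "person_id": "t1",
                       "app_id": "game", "request": {"style": "toon"}}, log.append)
print(log[-1])  # {'notice': 'avatar was modified'}
```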
(3.1) First Operation Example

Hereinafter, a first operation example of the avatar generation system 100 of the present embodiment will be described. The first operation example illustrates the operation of the avatar generation system 100 when executing an application in which a large number of users log in to the same server and share the same virtual space V1. Such an application may include, for example, an online game such as an MMO (Massively Multiplayer Online) game.
In the application of the first operation example, the user (target person T1) first creates an account. The user then requests, through the application, the avatar A1 generated in advance by the avatar generation system 100, and associates this avatar A1 with the account.

In addition, the user registers other people who use the application as friends, as needed, either when creating the account or while the application is running. A friend may be, for example, a friend already registered on an SNS (Social Networking Service). In this case, the user may simply associate the SNS account with the account of this application.
Friends are registered in n stages ("n" being a natural number) according to their intimacy with the user. In the following, a friend with relatively high intimacy with the user is called a "first friend", and a friend with relatively low intimacy is called a "second friend". A first friend is, for example, a relative or close friend of the user; a second friend is, for example, an acquaintance of the user.
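One way to realize this tiered registration is a lookup from viewer tier to an abstraction level, sketched below. The tier names and numeric levels are illustrative assumptions; the patent only fixes the ordering (lower intimacy, stronger abstraction).

```python
# Hypothetical mapping from the n-stage friend registration to a degree
# of abstraction in [0, 1]; 0 = unmodified, 1 = silhouette or hidden.
ABSTRACTION_BY_TIER = {
    "self": 0.0,           # the user's own avatar
    "first_friend": 0.1,   # relatives / close friends
    "second_friend": 0.5,  # acquaintances
    "third_party": 1.0,    # unrelated users
}

def abstraction_level(viewer_tier: str) -> float:
    # Unknown viewers are treated as third parties (most abstracted).
    return ABSTRACTION_BY_TIER.get(viewer_tier, 1.0)

print(abstraction_level("second_friend"))  # 0.5
```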
When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. The application then displays the virtual space V1 on the display unit 51 of the information terminal 5 and superimposes on the virtual space V1 the avatars A1 of one or more users present within the range of the display unit 51. At this time, the application requests the avatar data of these one or more users from the avatar generation system 100.
In response to the request from the application, the modification unit 12 of the avatar generation system 100 modifies the avatar data of the one or more users to match the world view of the virtual space V1. By modifying the avatar data to match the world view of the virtual space V1 in this way, the user's sense of immersion is less likely to be impaired than when a photorealistic avatar A1 is superimposed on the virtual space V1.
The modification unit 12 also modifies the avatar data of the one or more users according to their relationship with the user of the requesting information terminal 5 (the user himself or herself). The avatar generation system 100 then transmits the modified avatar data of the one or more users to the information terminal 5. As a result, the modified avatars A1 of the one or more users are displayed on the display unit 51 of the information terminal 5.
FIG. 6 shows an example of the display of the virtual space V1 and the avatars A1 of one or more users. In the example shown in FIG. 6, a closed space such as a room is displayed on the display unit 51 as the virtual space V1, and the user's own avatar A10, the first friend's avatar A20, and the second friend's avatar A30 are displayed.
The user's own avatar A10 is displayed on the display unit 51 in a form modified to match the world view of the virtual space V1. Like the user's own avatar A10, the first friend's avatar A20 is displayed in a form modified to match the world view of the virtual space V1. The second friend's avatar A30 is displayed in a more abstracted form than the user's own avatar A10 and the first friend's avatar A20. However, the second friend's avatar A30 may instead be abstracted and displayed on the display unit 51 according to a degree of abstraction set by the first friend.
The avatar A40 represented by the dotted line in FIG. 6 is the avatar of a third party unrelated to the user, and is not actually displayed on the display unit 51. Of course, the third party's avatar A40 may also be displayed on the display unit 51. In this case, the third party's avatar A40 may be abstracted according to a degree of abstraction set by that third party, or may be abstracted even further than the second friend's avatar A30. As an example, the third party's avatar A40 may be displayed on the display unit 51 as a silhouette.
In FIG. 6, the first friend's avatar A20 is modified in the same manner as the user's own avatar A10, but it may instead be more abstracted than the user's own avatar A10. Likewise, the second friend's avatar A30 may be abstracted further than the display mode shown in FIG. 6.
On the display unit 51 of another person's information terminal 5, the user's own avatar A1 is displayed in a form modified according to the relationship between that person and the user. For example, on the display unit 51 of a third party's information terminal 5, the user's own avatar A1 is either not displayed, unlike the example shown in FIG. 6, or is displayed in a form abstracted to the extent that the user cannot be identified.
As described above, in the first operation example, avatars A1 modified according to the relationships between the users are superimposed on the virtual space V1. The first operation example therefore has the advantage that each user can use the application with his or her privacy protected. The relationship between users may be determined in consideration not only of the intimacy described above but also of a degree of relatedness determined by other factors. This degree of relatedness can be determined using, for example, the identity of, or the closeness within a certain width of, information such as the users' ages, genders, hometowns, GPS (Global Positioning System) positions, nationalities, preferences, or affiliations.
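A minimal sketch of such a relatedness score follows: exact-match attributes contribute 1, and a numeric attribute (age) contributes by closeness with a width. The attribute set, weights, and the Gaussian width are assumptions, not values from the patent.

```python
import math

def relatedness(a: dict, b: dict) -> float:
    """Higher score = more related; attributes are illustrative."""
    score = 0.0
    for key in ("gender", "hometown", "nationality", "affiliation"):
        if a.get(key) is not None and a.get(key) == b.get(key):
            score += 1.0
    # "Closeness with a width": Gaussian falloff over the age difference.
    if "age" in a and "age" in b:
        score += math.exp(-((a["age"] - b["age"]) ** 2) / (2 * 5.0 ** 2))
    return score

print(relatedness({"gender": "f", "age": 30, "hometown": "Osaka"},
                  {"gender": "f", "age": 33, "hometown": "Osaka"}))
```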
(3.2) Second Operation Example

Hereinafter, a second operation example of the avatar generation system 100 of the present embodiment will be described. The second operation example illustrates the operation of the avatar generation system 100 when executing an application in which the user's avatar A1 tries on clothing as an accessory B1 in the virtual space V1.
In the second operation example, a large amount of clothing data (that is, data on accessories B1) that the avatar A1 can try on is accumulated on a dedicated server of the company operating the application. Attribute information representing the clothing category is associated with each piece of clothing data.
As in the first operation example, in the application of the second operation example, the user (target person T1) first creates an account, requests through the application the avatar A1 generated in advance by the avatar generation system 100, and associates this avatar A1 with the account. The user also registers other people who use the application as friends, as needed, either when creating the account or while the application is running. A friend may be, for example, a friend already registered on an SNS.
When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. The application then displays the virtual space V1 on the display unit 51 of the information terminal 5 and superimposes the user's avatar A1 on the virtual space V1. At this time, the application requests the user's avatar data to be displayed in the virtual space V1 from the avatar generation system 100.
In response to the request from the application, the avatar generation system 100 transmits the avatar data of the user of the requesting information terminal 5 to the information terminal 5. The user's avatar A1 is thereby displayed on the display unit 51 of the information terminal 5. At this stage, the user's avatar A1 has not yet been modified by the modification unit 12.
Next, the user selects the clothing he or she wants to try on by operating the information terminal 5. The application then requests from the avatar generation system 100 the user's avatar A1 wearing the selected clothing.
In response to the request from the application, the avatar generation system 100 requests the clothing data of the selected clothing from the dedicated server. The modification unit 12 of the avatar generation system 100 then modifies the user's avatar A1 based on the clothing data received from the dedicated server. For example, the modification unit 12 modifies the user's avatar A1 by pasting the clothing texture included in the clothing data onto the avatar (that is, by attaching the accessory B1), and by associating with the avatar motion data corresponding to the attribute information included in the clothing data.
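The two modifications named above (texture paste plus attribute-driven motion) could be combined as in the sketch below. The attribute-to-motion table and the data shapes are assumptions for illustration only.

```python
# Hypothetical table mapping clothing attributes to motion clips.
MOTION_BY_ATTRIBUTE = {
    "sports": "running_loop",
    "formal": "walking_loop",
    "undersized": "distressed_loop",
}

def try_on(avatar: dict, garment: dict) -> dict:
    """Attach the accessory B1 and bind a motion chosen by its attribute."""
    fitted = dict(avatar)
    fitted["texture"] = garment["texture"]
    fitted["motion"] = MOTION_BY_ATTRIBUTE.get(garment["attribute"],
                                               "idle_loop")
    return fitted

avatar = {"mesh": "user_mesh"}
print(try_on(avatar, {"texture": "jersey.png", "attribute": "sports"}))
```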
The avatar generation system 100 then transmits the modified user's avatar data to the information terminal 5, and the user's modified avatar A1 is displayed on the display unit 51 of the information terminal 5. Specifically, the user's avatar A1, wearing the clothing the user selected and performing a motion corresponding to the attributes of that clothing, is displayed superimposed on the virtual space V1.
FIGS. 7 and 8 show examples of the display of the user's avatar A1 trying on clothing selected by the user. In the example shown in FIG. 7, a closed space such as a fitting room is displayed on the display unit 51 as the virtual space V1, and the avatar A13 of the user trying on sports clothing B11 (an accessory B1) is displayed. The user's avatar A13 is drawn on the display unit 51 so as to perform a motion corresponding to the attribute information of the sports clothing B11 (here, a running motion). In the example shown in FIG. 7, the user's avatar A13 appears stationary, but it is moving on the actual display unit 51.
In the example shown in FIG. 8, a closed space such as a fitting room is likewise displayed on the display unit 51 as the virtual space V1, and the avatar A14 of the user trying on clothing B12 (an accessory B1) of a size smaller than the size suited to the user is displayed. The user's avatar A14 is drawn on the display unit 51 so as to perform a motion corresponding to the attribute information of the clothing B12 (here, a perplexed motion corresponding to the information that the size is too small). In the example shown in FIG. 8, the user's avatar A14 appears stationary, but it is actually moving. In the example shown in FIG. 8, the clothing B12 may also be partially torn.
Here, if the user likes the selected clothing, the user can publish the avatar A1 trying on the clothing on an SNS or the like. In this case, on the display unit 51 of another person's information terminal 5, the user's avatar A1 is displayed in a form corresponding to the relationship between that person and the user. For example, if the other person is the user's close friend, the avatar A1 trying on the clothing is displayed as is on that person's display unit 51. If the other person is a third party, the user's avatar A1 is either not displayed on that person's display unit 51 or is displayed in a form abstracted to the extent that the user cannot be identified.
As described above, in the second operation example, the user can try on clothing online using the avatar A1. The second operation example therefore has the advantage that a manufacturer providing the clothing does not have to supply it to physical stores and thus does not have to hold inventory. It also has the advantage that the user can view himself or herself objectively by looking at the display unit 51, compared with trying on clothing in the fitting room of a physical store.
The virtual space V1 displayed on the display unit 51 may be changeable as appropriate by the user operating the information terminal 5. For example, by operating the information terminal 5 to select an urban area, the user can superimpose his or her own avatar A1 on a virtual space V1 simulating that urban area. In this case, compared with trying on clothing in a fitting room, the user can more easily judge whether the selected clothing suits a given situation.
An avatar A1 not wearing clothing (that is, naked or wearing only underwear) is preferably displayed on the display unit 51 in an appropriately abstracted form, or not displayed at all.
(3.3) Third Operation Example

Hereinafter, a third operation example of the avatar generation system 100 of the present embodiment will be described. The third operation example illustrates the operation of the avatar generation system 100 when executing an application for, for example, fitness or healthcare.
In the third operation example, a dedicated server of the company operating the application runs a simulation that estimates how the user's body shape changes when the user continues a given exercise and/or keeps eating a given diet. Because the effects of exercise and/or diet on body shape vary between individuals depending on the user's constitution, the simulation refers to information about the user's constitution.
As in the first operation example, in the application of the third operation example, the user (target person T1) first creates an account, requests through the application the avatar A1 generated in advance by the avatar generation system 100, and associates this avatar A1 with the account. The user also registers other people who use the application as friends, as needed, either when creating the account or while the application is running; a friend may be, for example, a friend already registered on an SNS. Furthermore, the user associates information about his or her own constitution with the account.
When using the application, the user operates the information terminal 5 to start the application and logs in with his or her own account. The application then displays the virtual space V1 on the display unit 51 of the information terminal 5 and superimposes the user's avatar A1 on the virtual space V1. At this time, the application requests the user's avatar data to be displayed in the virtual space V1 from the avatar generation system 100.
In response to the request from the application, the avatar generation system 100 transmits the avatar data of the user of the requesting information terminal 5 to the information terminal 5. The user's avatar A1 is thereby displayed on the display unit 51 of the information terminal 5. At this stage, the user's avatar A1 has not yet been modified by the modification unit 12.
Next, the user selects, by operating the information terminal 5, the exercise he or she wants to continue and/or the diet he or she wants to keep eating. The application then requests from the avatar generation system 100 the user's avatar A1 corresponding to the selected exercise and/or diet.
In response to the request from the application, the avatar generation system 100 requests the simulation result corresponding to the selected exercise and/or diet from the dedicated server. The modification unit 12 of the avatar generation system 100 then modifies the user's avatar A1 based on the simulation result received from the dedicated server. For example, the modification unit 12 modifies the user's avatar A1 so that the muscles corresponding to the selected exercise are enlarged, the whole body slims down, or the corresponding body parts slim down.
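Applying such a simulation result could be expressed as per-region scale factors on the avatar's body-shape parameters, as in the sketch below; the region names and blend rule are illustrative assumptions, not the patent's method.

```python
def apply_simulation(avatar: dict, result: dict) -> dict:
    """result maps body regions to scale factors, e.g. {"abdomen": 0.9,
    "biceps": 1.15}; factors > 1 enlarge (muscle growth), < 1 slim."""
    shape = dict(avatar.get("shape", {}))
    for region, factor in result.items():
        shape[region] = shape.get(region, 1.0) * factor
    return {**avatar, "shape": shape}

future = apply_simulation({"shape": {}}, {"abdomen": 0.9, "biceps": 1.15})
print(future["shape"])  # {'abdomen': 0.9, 'biceps': 1.15}
```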
The avatar generation system 100 then transmits the modified user's avatar data to the information terminal 5, and the user's modified avatar A1 is displayed on the display unit 51 of the information terminal 5. Specifically, the user's avatar A1, with the muscles corresponding to the selected exercise enlarged, the whole body slimmed down, or the corresponding body parts slimmed down, is displayed superimposed on the virtual space V1.
FIG. 9 shows an example of the display of the user's avatar A1 when the user continues the selected exercise. In the example shown in FIG. 9, a closed space such as a gym is displayed on the display unit 51 as the virtual space V1. Also displayed on the display unit 51 are the user's current avatar A1, the user's avatar A15 performing the motion of a given exercise (here, a sit-up), and the user's avatar A16 as of a certain period after starting the exercise. The current avatar A1 is the avatar A1 generated by the generation unit 11. The arrows between the avatars in FIG. 9 represent the passage of time. The user's avatar A15 in the center of FIG. 9 wears exercise clothing B13 as an accessory B1, and the user's avatar A16 on the right side of FIG. 9 wears sports underwear B14 as an accessory B1.
Here, if the user likes his or her future avatar A1, the user can publish this future avatar A1 on an SNS or the like. In this case, on the display unit 51 of another person's information terminal 5, the user's avatar A1 is displayed in a form corresponding to the relationship between that person and the user. For example, if the other person is the user's close friend, the future avatar A1 is displayed as is on that person's display unit 51. If the other person is a third party, the user's avatar A1 is either not displayed on that person's display unit 51 or is displayed in a form abstracted to the extent that the user cannot be identified.
As described above, in the third operation example, an avatar A1 estimating the user's future body shape resulting from exercise and/or diet is superimposed on the virtual space V1. The third operation example therefore has the advantage that the user can readily feel the effects of exercise and/or diet. In fitness, for example, the effects of training do not appear over a short period and are difficult for the user to perceive; in the third operation example, by contrast, the user can readily sense the effects of training by looking at the avatar A1. As a result, the third operation example makes it easier to keep the user motivated to train. Moreover, even for a user who is not currently training, presenting an avatar A1 that estimates his or her future body shape makes it easier to motivate the user to go to a gym or the like to approach that body shape.
The application may also acquire exercise information about the exercise the user is performing from a wearable terminal such as an activity tracker worn by the user. The exercise information may include, for example, the type of exercise, the execution time of the exercise, and the exercise intensity. In this case, the avatar generation system 100 can modify the user's avatar A1 to reflect the simulation result computed on the dedicated server from this exercise information.
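The exercise information named above (type, execution time, intensity) could be packaged for the simulation server as sketched below; all field and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExerciseRecord:
    kind: str          # type of exercise, e.g. "sit_up"
    duration_min: int  # execution time in minutes
    intensity: float   # e.g. a 0-1 effort score (assumed scale)

def to_simulation_request(user_id: str, records: list[ExerciseRecord]) -> dict:
    """Bundle wearable records into a request for the dedicated server."""
    return {"user": user_id, "sessions": [vars(r) for r in records]}

req = to_simulation_request("u1", [ExerciseRecord("sit_up", 20, 0.7)])
print(req)
```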
(4.1) Modifications

The embodiment described above is merely one of various embodiments of the present disclosure, and can be modified in various ways depending on the design and the like as long as the object of the present disclosure is achieved. Functions equivalent to the avatar generation method (avatar generation system 100) may also be embodied as a (computer) program, a non-transitory recording medium on which the program is recorded, or the like. A program according to one aspect of the present disclosure causes one or more processors to execute the avatar generation method described above.
Hereinafter, modifications of the above embodiment are listed. The modifications described below can be applied in combination as appropriate.
In the avatar generation system 100 of the present disclosure, the processing unit 1 and the like include a computer system. The computer system mainly consists of a processor and a memory as hardware, and the functions of the avatar generation system 100 of the present disclosure are realized by the processor executing a program recorded in the memory. The program may be pre-recorded in the memory of the computer system, may be provided through a telecommunications line, or may be provided recorded on a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive. The processor of the computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large-scale integrated circuit (LSI). Integrated circuits such as ICs and LSIs are called by different names depending on the degree of integration, and include integrated circuits called system LSIs, VLSIs (Very Large Scale Integration), or ULSIs (Ultra Large Scale Integration). An FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured, or a logic device in which the junction relationships or circuit partitions inside the LSI can be reconfigured, can also be employed as the processor. The electronic circuits may be integrated on a single chip or distributed over a plurality of chips, and the chips may be integrated in a single device or distributed over a plurality of devices. The computer system referred to here includes a microcontroller having one or more processors and one or more memories; the microcontroller is therefore likewise composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
It is not essential to the avatar generation system 100 that its functions be integrated in a single housing; the components of the avatar generation system 100 may be distributed over a plurality of housings. Furthermore, at least some of the functions of the avatar generation system 100 may be realized by, for example, a server and a cloud (cloud computing).
In the above embodiment, it is not essential that the scanner 4 be configured with a plurality of image pickup devices 41. For example, the scanner 4 may be composed of a single image pickup device 41 that is moved to acquire a plurality of images. That is, in the present disclosure, the plurality of images for generating the avatar A1 need only be obtained by photographing the target person T1 from a plurality of different angles, and the number of image pickup devices 41 is not limited. The scanner 4 may also be configured using a distance sensor, or a combination of a distance sensor and an image pickup device. An example of the distance sensor is a LIDAR sensor; specifically, a ToF (time-of-flight) LIDAR sensor may be used. Using a distance sensor makes it possible to generate an even more realistic avatar.
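One standard way distance-sensor output could feed avatar reconstruction is back-projecting a depth map into a 3-D point cloud with pinhole intrinsics, sketched below. The intrinsics are made-up example values, and this is only one possible pipeline, not the patent's.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (h, w) depth map into an (h*w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

points = depth_to_points(np.full((4, 4), 1.5), fx=500, fy=500, cx=2, cy=2)
print(points.shape)  # (16, 3)
```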
In the above embodiment, the modification unit 12 is included in the processing unit 1, but this is not limiting. For example, the modification unit 12 may be included in the information terminal 5, or in both the processing unit 1 and the information terminal 5; in the latter case, the processing of the modification unit 12 may be shared between the processing unit 1 and the information terminal 5. The modification unit 12 may likewise be included in the dedicated server of the company operating the application, or in both the processing unit 1 and the dedicated server; in the latter case, the processing of the modification unit 12 may be shared between the processing unit 1 and the dedicated server.
In the above embodiment, the modification unit 12 may, for example, modify an avatar A1 generated in underwear by the generation unit 11 into an avatar A1 wearing clothing. Conversely, the modification unit 12 may modify an avatar A1 generated wearing clothing into a naked avatar A1. In the latter case, the modification unit 12 can modify the avatar A1 using a trained model machine-learned by, for example, deep learning: the modification unit 12 uses the trained model to estimate the body shape hidden by the clothing and modifies the avatar A1 to reflect the estimated body shape.
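The patent does not specify the model, so the following is only an interface sketch around a hypothetical stand-in; a real implementation would run a trained network inside `predict`.

```python
class BodyShapeModel:
    """Hypothetical placeholder for a deep-learning body-shape estimator."""
    def predict(self, clothed_avatar: dict) -> dict:
        # Stub: echoes the input shape as the "estimated" unclothed shape.
        return {"shape": clothed_avatar.get("shape", {}), "clothed": False}

def strip_clothing(avatar: dict, model: BodyShapeModel) -> dict:
    """Replace the clothed shape with the model's estimate."""
    return {**avatar, **model.predict(avatar)}

print(strip_clothing({"shape": {"chest": 1.0}, "clothed": True},
                     BodyShapeModel()))
```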
In the above embodiment, the application installed on the information terminal 5 may hold content such as videos, still images, or audio in advance, or may acquire such content from outside via a network. The application may further display the avatar A1 generated by the avatar generation system 100 on the display unit 51 in combination with that content.
In each of the first to third operation examples described above, the user does not have to log in with his or her own account when executing the application. In this case, a generic avatar prepared by the avatar generation system 100 or by the application's dedicated server may be superimposed on the virtual space V1 and displayed on the display unit 51 of the information terminal 5 as the user's avatar A1.
In the second operation example described above, when, for example, the clothing selected by the user is smaller than the size suited to the user, the user's avatar A1 may be superimposed on the virtual space V1 in an abstracted form, such as being displayed as a silhouette. Alternatively, the size of the user's avatar A1 may be changed to match the size of the selected clothing; in the latter case, the presentation unit 13 may present a notice that the selected clothing cannot be worn with the user's current body shape.
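The two alternatives just described could be expressed as a small display policy, sketched below; the policy flag and return shape are illustrative assumptions.

```python
def fit_or_abstract(avatar_size: float, garment_size: float,
                    resize_to_garment: bool = False) -> dict:
    """Decide how to display the avatar when the garment may be too small."""
    if garment_size >= avatar_size:
        return {"display": "normal"}
    if resize_to_garment:
        return {"display": "resized",
                "notice": "this garment does not fit your current body shape"}
    return {"display": "silhouette"}  # abstracted form

print(fit_or_abstract(1.0, 0.8))  # {'display': 'silhouette'}
```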
In the above embodiment, the presentation unit 13 may present not only the fact that the processing of the modification unit 12 (modification step ST2) has been executed, but also a comparison of the results before and after that processing.
(4.2) Other Modifications

As explained in the modifications above, the modification of the avatar may be performed on the information terminal 5. In this modification, an example in which the avatar is modified and displayed on the information terminal 5 will be described with reference to FIG. 1. Although an example is described here in which the user of the information terminal 5 is a person different from the target person T1, the user and the target person T1 may be the same person.
The user uses the information terminal 5 to execute an application for displaying the avatar of the target person T1. This application may run on the information terminal 5 itself or on a server (not shown) at the instruction of the information terminal 5. The user then logs in to his or her account in this application from the information terminal 5; password authentication, fingerprint authentication, face authentication, and the like can be used for login. The application here is, for example, an online game or an SNS installed on the information terminal 5.
To display the avatar of the target person T1 in this application, the information terminal 5 acquires the avatar data of the target person T1 from the avatar generation system 100 (an example of a first server). Specifically, the information terminal 5 sends an instruction to the avatar generation system 100, and in response, the communication unit 2 of the avatar generation system 100 transmits the avatar data of the target person T1 stored in the storage unit 3 to the information terminal 5.
Furthermore, the information terminal 5 communicates with an application server (an example of a second server, not shown) storing the data necessary for the operation of this application, and acquires from the application server information indicating the intimacy between the user and the target person T1. As explained in the first operation example, this intimacy information indicates, for example, whether the target person is a relative, close friend, or acquaintance of the user in the online game or SNS.
The information terminal 5 modifies the avatar of the target person T1 according to the information indicating the intimacy. The modification method is as described above: the lower the intimacy, the greater the degree of abstraction of the avatar. The modified avatar is displayed on the display unit 51 of the information terminal 5.
Through the above processing, the avatar of the target person T1 can be modified according to the intimacy between the user of the information terminal 5 and the target person T1, which automatically protects the privacy of the target person T1 from the user of the information terminal 5. In the above description, the avatar generation system 100 may be a server separate from the application server storing the intimacy information, or may be the same server.
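A minimal sketch of this terminal-side flow follows: fetch the avatar from the first server, fetch intimacy from the second (application) server, then abstract locally. The fetcher interfaces, the 0-1 intimacy scale, and the silhouette threshold are all assumptions for illustration.

```python
def abstract(avatar: dict, level: float) -> dict:
    """Higher level = stronger abstraction; >= 0.9 renders only a silhouette."""
    if level >= 0.9:
        return {"render": "silhouette"}
    return {**avatar, "blur": level}

def display_avatar(get_avatar, get_intimacy, viewer_id, target_id):
    avatar = get_avatar(target_id)                 # from the first server
    intimacy = get_intimacy(viewer_id, target_id)  # from the second server
    return abstract(avatar, 1.0 - intimacy)

# Usage with stub fetchers standing in for the two servers.
print(display_avatar(lambda t: {"mesh": t}, lambda v, t: 0.2, "u1", "t1"))
```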
(Summary)

As described above, the avatar generation method according to a first aspect includes a generation step (ST1) and a modification step (ST2). The generation step (ST1) is a step of generating avatar data for displaying, in a virtual space (V1), an avatar (A1) reflecting at least physical information of a target person (T1). The modification step (ST2) is a step of modifying at least one of the avatar (A1) and an accessory (B1) displayed in the virtual space (V1), among the avatar data generated in the generation step (ST1), according to at least one of the display mode of the avatar (A1) and an attribute of the accessory (B1).
This aspect has the advantage that avatar data suited to the application is easy to generate.

In the avatar generation method according to a second aspect, in the first aspect, the modification step (ST2) includes a process of abstracting a part or all of the avatar (A1).

This aspect has the advantage that an avatar (A1) modified to the extent that the target person (T1) cannot be identified can be displayed in the virtual space (V1).

In the avatar generation method according to a third aspect, in the first or second aspect, the modification step (ST2) includes a process of modifying the avatar (A1) according to the person to whom the avatar (A1) is disclosed.

This aspect has the advantage that no more information about the target person (T1) than necessary is given to the person to whom the avatar (A1) is disclosed.

In the avatar generation method according to a fourth aspect, in the third aspect, the modification step (ST2) includes a process of modifying the avatar (A1) according to the relationship between the person to whom the avatar (A1) is disclosed and the target person (T1).

This aspect has the advantage that the way information about the target person (T1) is given can be varied between a person relatively close to the target person (T1) and a person with a relatively weak relationship to the target person (T1).
In the avatar generation method according to a fifth aspect, in the fourth aspect, the modification step (ST2) includes a process of increasing the degree of abstraction of the avatar (A1) as the intimacy between the person to whom the avatar (A1) is disclosed and the target person (T1) decreases.

This aspect has the advantage that the privacy of the target person (T1) is easy to protect.

In the avatar generation method according to a sixth aspect, in the fourth or fifth aspect, the relationship includes at least whether the person to whom the avatar (A1) is disclosed is the target person (T1).

This aspect has the advantage that the way information about the target person (T1) is given can be varied depending on whether the person to whom the avatar (A1) is disclosed is the target person (T1) himself or herself.

In the avatar generation method according to a seventh aspect, in any one of the first to sixth aspects, the modification step (ST2) includes a process of modifying the avatar (A1) according to changes in the target person (T1) over time.

This aspect has the advantage that, for example, when the physical information of the target person (T1) changes over time, the change can be reflected in the avatar (A1).

In the avatar generation method according to an eighth aspect, in the seventh aspect, the change in the target person (T1) over time is a change based on behavior information regarding the behavior of the target person (T1).

This aspect has the advantage that, for example, when the physical information of the target person (T1) changes with the behavior of the target person (T1), the change can be reflected in the avatar (A1).
In the avatar generation method according to a ninth aspect, in any one of the first to eighth aspects, the modification step (ST2) includes a process of modifying the motion of the avatar (A1).

This aspect has the advantage that the avatar (A1) can easily be made to move in a way suited to the application.

In the avatar generation method according to a tenth aspect, in any one of the first to ninth aspects, the modification step (ST2) includes a process of modifying the avatar (A1) according to the physical characteristics of the target person (T1).

This aspect has the advantage that avatar data suited to the application is easy to generate while retaining the physical characteristics of the target person (T1).

In the avatar generation method according to an eleventh aspect, in any one of the first to tenth aspects, the modification step (ST2) includes a process of modifying the accessory (B1).

This aspect has the advantage that an accessory (B1) suited to the application is easy to generate.

In the avatar generation method according to a twelfth aspect, in any one of the first to eleventh aspects, the modification step (ST2) includes a process of modifying the avatar (A1) according to the accessory (B1).

This aspect has the advantage that avatar data suited to the accessory (B1) is easy to generate.
第13の態様に係るアバター生成方法は、第1~第12のいずれかの態様において、改変ステップ(ST2)が実行されたことを提示する提示ステップ(ST3)を更に有する。
The avatar generation method according to the thirteenth aspect further includes a presentation step (ST3) indicating that the modification step (ST2) has been executed in any one of the first to twelfth aspects.
この態様によれば、アバター(A1)が改変されたことをアバター(A1)を見る者が把握することができる、という利点がある。
According to this aspect, there is an advantage that the viewer of the avatar (A1) can grasp that the avatar (A1) has been modified.
The avatar generation method according to the fourteenth aspect, referring to any one of the first to thirteenth aspects, further includes a reception step (ST4) of receiving input from a user regarding the modification step (ST2).
This aspect has the advantage that the user's wishes can easily be reflected in the modification of the avatar (A1).
In the avatar generation method according to the fifteenth aspect, referring to any one of the first to fourteenth aspects, the modification step (ST2) includes a process of modifying a plurality of parts of the avatar data.
This aspect has the advantage that, compared with modifying only one part of the avatar data, more varied modifications of the avatar data are possible.
A program according to the sixteenth aspect causes one or more processors to execute the avatar generation method according to any one of the first to fifteenth aspects.
This aspect has the advantage that avatar data suited to the application can easily be generated.
An avatar generation system (100) according to the seventeenth aspect includes a generation unit (11) and a modification unit (12). The generation unit (11) generates avatar data for displaying, in a virtual space (V1), an avatar (A1) reflecting at least the physical information of a target person (T1). The modification unit (12) modifies, of the avatar data generated by the generation unit (11), at least one of the avatar (A1) and an accessory (B1) displayed in the virtual space (V1), according to at least one of the display mode of the avatar (A1) and an attribute of the accessory (B1).
This aspect has the advantage that avatar data suited to the application can easily be generated.
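To make this division of labor concrete, here is a minimal sketch of how a generation unit (11) and a modification unit (12) could be separated in code; the class names, the dictionary layout of the avatar data, and the resolution-scaling stand-in for abstraction are all assumptions of this example, not details of the claimed system.

```python
class GenerationUnit:
    """Plays the role of the generation unit (11): builds avatar data
    reflecting the target person's (T1) physical information."""

    def generate(self, body_info: dict) -> dict:
        # Avatar data here is a 3D model stub, its texture, and one accessory (B1).
        return {
            "model": {"height_cm": body_info["height_cm"]},
            "texture": body_info["face_texture"],
            "accessory": {"type": "shirt", "color": "white"},
        }

class ModificationUnit:
    """Plays the role of the modification unit (12): modifies the avatar (A1)
    and/or the accessory (B1) according to the display mode and the
    accessory's attributes."""

    def modify(self, avatar_data: dict, abstraction: float) -> dict:
        # Stand-in for abstraction: coarsen the texture in proportion to
        # the requested degree (0.0 = faithful, 1.0 = fully abstracted).
        modified = dict(avatar_data)
        modified["texture_resolution_scale"] = max(0.05, 1.0 - abstraction)
        return modified

# Usage: generate once, then modify per display context.
avatar = GenerationUnit().generate({"height_cm": 172.0, "face_texture": "face.png"})
public_avatar = ModificationUnit().modify(avatar, abstraction=0.8)
```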
The methods according to the second to fifteenth aspects are not essential to the avatar generation method and may be omitted as appropriate.
Reference Signs List
100 Avatar generation system
11 Generation unit
12 Modification unit
A1 Avatar
B1 Accessory
ST1 Generation step
ST2 Modification step
ST3 Presentation step
ST4 Reception step
T1 Target person
V1 Virtual space
Claims (20)
- An avatar generation method comprising:
a generation step of generating avatar data for displaying, in a virtual space, an avatar reflecting at least physical information of a target person; and
a modification step of modifying, of the avatar data generated in the generation step, at least one of the avatar and an accessory displayed in the virtual space, according to at least one of a display mode of the avatar and an attribute of the accessory.
- The avatar generation method according to claim 1, wherein the modification step includes a process of abstracting a part or all of the avatar.
- The avatar generation method according to claim 1 or 2, wherein the modification step includes a process of modifying the avatar according to a person to whom the avatar is disclosed.
- The avatar generation method according to claim 3, wherein the modification step includes a process of modifying the avatar according to a relationship between the person to whom the avatar is disclosed and the target person.
- The avatar generation method according to claim 4, wherein the modification step includes a process of increasing a degree of abstraction of the avatar as intimacy between the person to whom the avatar is disclosed and the target person decreases.
- The avatar generation method according to claim 5, wherein the avatar data includes a three-dimensional model of the avatar and texture data of the three-dimensional model, and the modification step includes a process of modifying the texture data to increase the degree of abstraction of the avatar.
- The avatar generation method according to claim 5, wherein the avatar data includes a three-dimensional model of the avatar and texture data of the three-dimensional model, and the modification step includes a process of recognizing a face in the texture data and a process of modifying the recognized face to increase a degree of abstraction of the face of the avatar.
- The avatar generation method according to any one of claims 4 to 7, wherein the relationship includes at least whether the person to whom the avatar is disclosed is the target person.
- The avatar generation method according to any one of claims 1 to 8, wherein the modification step includes a process of modifying the avatar according to a temporal change of the target person.
- The avatar generation method according to claim 9, wherein the temporal change of the target person is a change based on behavior information regarding the behavior of the target person.
- The avatar generation method according to any one of claims 1 to 10, wherein the modification step includes a process of modifying a motion of the avatar.
- The avatar generation method according to any one of claims 1 to 11, wherein the modification step includes a process of modifying the avatar according to physical characteristics of the target person.
- The avatar generation method according to any one of claims 1 to 12, wherein the modification step includes a process of modifying the accessory.
- The avatar generation method according to any one of claims 1 to 13, wherein the modification step includes a process of modifying the avatar according to the attribute of the accessory.
- The avatar generation method according to any one of claims 1 to 14, further comprising a presentation step of indicating that the modification step has been executed.
- The avatar generation method according to any one of claims 1 to 15, further comprising a reception step of receiving input from a user regarding the modification step.
- The avatar generation method according to any one of claims 1 to 16, wherein the modification step includes a process of modifying a plurality of parts of the avatar data.
- A program that causes one or more processors to execute the avatar generation method according to any one of claims 1 to 17.
- An avatar generation system comprising:
a generation unit that generates avatar data for displaying, in a virtual space, an avatar reflecting at least physical information of a target person; and
a modification unit that modifies, of the avatar data generated by the generation unit, at least one of the avatar and an accessory displayed in the virtual space, according to at least one of a display mode of the avatar and an attribute of the accessory.
- A method of displaying an avatar of a target person using an information terminal, the method comprising:
a step of executing an application;
a step of logging in to a user's account in the application;
a step of acquiring, from a first server, avatar data including an avatar reflecting physical information of the target person;
a step of acquiring, from a second server, information indicating intimacy between the user and the target person in the application;
a step of modifying the avatar according to the information indicating the intimacy; and
a step of displaying the modified avatar on the information terminal.
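Purely to illustrate the data flow of the avatar display method in the final claim, here is a minimal sketch; the server URLs, the `requests`-based HTTP calls, the response fields, and the intimacy-to-abstraction mapping are all assumptions of this example, not details of the claimed method.

```python
import requests  # third-party HTTP client, used here only for illustration

AVATAR_SERVER = "https://first-server.example/avatars"    # hypothetical first server
SOCIAL_SERVER = "https://second-server.example/intimacy"  # hypothetical second server

def fetch_and_modify_avatar(user_token: str, target_id: str) -> dict:
    """After the application has started and the user has logged in
    (user_token is the resulting credential), acquire, modify, and return
    avatar data ready for display on the information terminal."""
    headers = {"Authorization": f"Bearer {user_token}"}

    # Acquire avatar data reflecting the target person's physical information.
    avatar = requests.get(f"{AVATAR_SERVER}/{target_id}", headers=headers).json()

    # Acquire the intimacy between the user and the target person (assumed 0.0-1.0).
    intimacy = requests.get(f"{SOCIAL_SERVER}/{target_id}", headers=headers).json()["score"]

    # Modify the avatar: the lower the intimacy, the higher the abstraction.
    avatar["abstraction"] = 1.0 - intimacy
    return avatar  # the terminal's renderer would then display this avatar
```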
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020108095A (published as JP2023110113A) | 2020-06-23 | 2020-06-23 | Avatar generation method, program, and avatar generation system |
JP2020-108095 | 2020-06-23 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021261188A1 (en) | 2021-12-30 |
Family
ID=79282540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/020993 (published as WO2021261188A1) | Avatar generation method, program, avatar generation system, and avatar display method | 2020-06-23 | 2021-06-02 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2023110113A (en) |
WO (1) | WO2021261188A1 (en) |
- 2020-06-23: JP application JP2020108095A filed; published as JP2023110113A (status: active, pending)
- 2021-06-02: PCT application PCT/JP2021/020993 filed; published as WO2021261188A1 (status: active, application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004054572A (en) * | 2002-07-19 | 2004-02-19 | Minolta Co Ltd | Method and device for editing three-dimensional model |
JP2005202909A (en) * | 2003-12-16 | 2005-07-28 | Kyoto Univ | Avatar control system |
JP2008107895A (en) * | 2006-10-23 | 2008-05-08 | Nomura Research Institute Ltd | Virtual space providing server, virtual space providing system, and computer program |
WO2014002239A1 (en) * | 2012-06-28 | 2014-01-03 | 株式会社ソニー・コンピュータエンタテインメント | Information processing system, information processing device, information terminal device, information processing method, and information processing program |
WO2018216602A1 (en) * | 2017-05-26 | 2018-11-29 | 株式会社ソニー・インタラクティブエンタテインメント | Information processing device, information processing method, and program |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023132261A1 (en) * | 2022-01-06 | 2023-07-13 | 国立研究開発法人情報通信研究機構 | Information processing system, information processing method, and information processing program |
WO2024202544A1 (en) * | 2023-03-29 | 2024-10-03 | ソニーグループ株式会社 | Information processing device, information processing method, and program |
JP7466038B1 (en) | 2023-05-30 | 2024-04-11 | Kddi株式会社 | Information processing device and information processing method |
CN116452703A (en) * | 2023-06-15 | 2023-07-18 | 深圳兔展智能科技有限公司 | User head portrait generation method, device, computer equipment and storage medium |
CN116452703B (en) * | 2023-06-15 | 2023-10-27 | 深圳兔展智能科技有限公司 | User head portrait generation method, device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2023110113A (en) | 2023-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021261188A1 (en) | Avatar generation method, program, avatar generation system, and avatar display method | |
US11688120B2 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
US11909878B2 (en) | Systems and methods for authenticating a user on an augmented, mixed and/or virtual reality platform to deploy experiences | |
US10347028B2 (en) | Method for sharing emotions through the creation of three-dimensional avatars and their interaction | |
TWI708152B (en) | Image processing method, device, and storage medium | |
JP7504968B2 (en) | Avatar display device, avatar generation device and program | |
KR101907136B1 (en) | System and method for avatar service through cable and wireless web | |
Latoschik et al. | FakeMi: A fake mirror system for avatar embodiment studies | |
CN114981844A (en) | 3D body model generation | |
US9047710B2 (en) | System and method for providing an avatar service in a mobile environment | |
CN108875539B (en) | Expression matching method, device and system and storage medium | |
US20230130535A1 (en) | User Representations in Artificial Reality | |
JP7479618B2 (en) | Information processing program, information processing method, and information processing device | |
CN116437137B (en) | Live broadcast processing method and device, electronic equipment and storage medium | |
Wen et al. | A survey of facial capture for virtual reality | |
TW202123128A (en) | Virtual character live broadcast method, system thereof and computer program product | |
US20230086704A1 (en) | Augmented reality experience based on physical items | |
US20220405996A1 (en) | Program, information processing apparatus, and information processing method | |
Roth et al. | Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality. | |
Pettersson et al. | A perceptual evaluation of social interaction with emotes and real-time facial motion capture | |
US20240303926A1 (en) | Hand surface normal estimation | |
WO2024163574A1 (en) | Augmented reality try-on experience for a friend or another user |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21827864; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 21827864; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: JP |