WO2023218861A1 - Simulation device - Google Patents

Simulation device Download PDF

Info

Publication number
WO2023218861A1
Authority
WO
WIPO (PCT)
Prior art keywords
costume
data
user
image
costumes
Prior art date
Application number
PCT/JP2023/015234
Other languages
French (fr)
Japanese (ja)
Inventor
Sae Kimura (佐恵 木村)
Original Assignee
NTT DOCOMO, INC.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC.
Publication of WO2023218861A1 publication Critical patent/WO2023218861A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a simulation device.
  • simulation devices have been used that simulate the appearance of users trying on virtual costumes.
  • a simulation device is sometimes used that generates a video in which the user moves while trying on virtual costumes in accordance with the user's own movements.
  • Patent Document 1, for example, discloses a virtual try-on system that generates try-on images by generating motion body shape data in which the user's 3D body shape data is moved three-dimensionally, and dressing the 3D body shape data at each time included in the motion body shape data with the costume indicated by costume data.
  • however, the virtual try-on system of Patent Document 1 does not consider the order in which multiple costumes are layered when worn on top of each other. Specifically, Patent Document 1 describes, as an example, a case where a T-shirt and jeans are worn, but it does not consider which of the T-shirt and the jeans is closer to the user's body.
  • an object of the present invention is to provide a simulation device that can specify and simulate the order in which two or more costumes are layered when the user wears them.
  • a simulation device according to a preferred aspect includes a processing device that simulates a case where a user wears multiple costumes in layers by referring to a plurality of costume data in one-to-one correspondence with a plurality of costumes;
  • each of the plurality of costume data includes shape data indicating the three-dimensional shape of the costume, and tag data associated with the shape data and indicating the order in which the costume is worn by the user;
  • the processing device includes a reception unit that receives input of two or more costumes selected by the user from among the plurality of costumes, and a specifying unit that specifies, based on two or more costume data respectively corresponding to the two or more costumes received by the reception unit, a stacking order in which the two or more costumes are layered on each other;
  • the simulation device further includes an image generation unit that generates, based on body shape data representing the three-dimensional body shape of the user and the two or more costume data, a first composite image in which three-dimensional images of the two or more costumes are superimposed on the three-dimensional image of the user according to the specified stacking order.
  • FIG. 1 is a diagram showing the overall configuration of a simulation system 1 according to a first embodiment.
  • FIG. 2 is a block diagram showing a configuration example of a scanning device 20.
  • FIG. 3 shows a configuration example of the first data set DS1.
  • FIG. 4 shows a configuration example of costume data CD.
  • FIG. 5 is a table showing an example of the correspondence between costume IDs and stacking order indexes.
  • FIG. 6 shows an installation example of the imaging device 24.
  • FIG. 7 is a block diagram showing a configuration example of a server 30.
  • FIG. 8 shows a configuration example of the first database DB1.
  • FIG. 9 is a block diagram showing a configuration example of a terminal device 10.
  • FIG. 10 shows a configuration example of the second database DB2.
  • FIG. 11 shows a configuration example of the corresponding data RD.
  • FIG. 12 shows an example of the operation screen OM.
  • FIG. 13 is a functional block diagram of the image generation unit 116.
  • FIG. 14 is an explanatory diagram of the operation of the image generation unit 116.
  • FIG. 15 is a flowchart showing the operation of the terminal device 10.
  • FIG. 1 shows the overall configuration of a simulation system 1 according to the first embodiment.
  • the simulation system 1 includes a terminal device 10, a scanning device 20, and a server 30.
  • the terminal device 10, the scanning device 20, and the server 30 are communicably connected to each other via the communication network NET.
  • the terminal device 10 is a device that allows an end user U to simulate his or her own appearance when trying on two or more virtual costumes.
  • the terminal device 10 specifies the order in which multiple costumes are stacked on top of each other, that is, the stacking order, and then simulates a full-body image of the end user U trying on two or more virtual costumes.
  • the terminal device 10 is an example of a "simulation device.”
  • the scanning device 20 three-dimensionally scans two or more actual costumes and generates image data indicating a three-dimensional image of each costume and costume data regarding the characteristics of each costume, which are used when the terminal device 10 performs a simulation.
  • the costume data includes a stacking order index used by the terminal device 10 to specify the stacking order of two or more costumes.
  • the scanning device 20 outputs the generated image data and costume data as one data set to the server 30.
  • the server 30 acquires image data and costume data from the scanning device 20. Additionally, the server 30 outputs image data to the terminal device 10. Furthermore, the server 30 acquires a costume ID indicating the ID of the costume that the end user U tries on from the terminal device 10, and outputs costume data corresponding to the acquired costume ID to the terminal device 10.
  • FIG. 2 is a block diagram showing an example of the configuration of the scanning device 20.
  • the scanning device 20 includes a processing device 21, a storage device 22, a communication device 23, an imaging device 24, a display 25, and an input device 26.
  • the elements of scanning device 20 are interconnected using one or more buses for communicating information.
  • the processing device 21 is a processor that controls the entire scanning device 20. Further, the processing device 21 is configured using, for example, a single chip or a plurality of chips. The processing device 21 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Note that some or all of the functions of the processing device 21 may be implemented using hardware such as a DSP, ASIC, PLD, and FPGA. The processing device 21 executes various processes in parallel or sequentially.
  • the storage device 22 is a recording medium that can be read and written by the processing device 21. Furthermore, the storage device 22 stores a plurality of programs including the control program PR2 executed by the processing device 21. The storage device 22 also stores a first data set DS1.
  • the first data set DS1 is a data set corresponding to a three-dimensional model of each costume.
  • FIG. 3 shows an example of the configuration of the first data set DS1.
  • the first data set DS1 is a set of costume ID, image data PD, and costume data CD.
  • the costume ID indicates the ID of the costume scanned by the scanning device 20.
  • the first letter of the costume ID corresponds to the type of costume, for example, a T-shirt, a long-sleeved shirt, or pants. The numerical value following the letter indicates the item number of the costume.
  • the image data PD indicates a three-dimensional image obtained as a result of three-dimensional scanning of the costume.
  • the costume data CD is data regarding the characteristics of the costume.
  • the costume data CD includes data obtained by analyzing the three-dimensional image and data input by the user of the scanning device 20.
  • FIG. 4 shows an example of the configuration of the costume data CD.
  • the costume data CD includes shape data FD and tag data GD.
  • Shape data FD is data indicating the three-dimensional shape of the costume.
  • the shape data FD includes structure data SD, texture data TD, and joint data JD.
  • the structure data SD is data regarding the structure of the costume. Specifically, the structure data SD includes data indicating what shape and thickness of cloth is placed at what position in one costume, and how the cloths are joined to each other.
  • the texture data TD includes data indicating the material, stiffness, color, pattern, gloss, and texture of each cloth.
  • the joint data JD includes data indicating the positions in the costume that correspond to the joints of a typical wearer when the typical wearer wears the costume.
  • the joint data JD also includes data indicating the relative position of the typical wearer's skeleton with respect to the costume.
  • the joint data JD indicates each such position in the costume, corresponding to the position of the wearer's skeleton and joints, as a predetermined range.
  • the tag data GD is data used to classify costumes.
  • the tag data GD includes, for example, a stacking order index used by the terminal device 10 to specify the stacking order in which a plurality of costumes are layered on each other.
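  • For concreteness, the following is a minimal Python sketch of how one costume data CD record could be organized; every class and field name here is an illustrative assumption rather than an identifier from the patent.

```python
from dataclasses import dataclass

@dataclass
class StructureData:
    """Structure data SD: cloth pieces and how they are joined."""
    panels: list   # shape, thickness, and position of each piece of cloth
    seams: list    # how the pieces of cloth are joined to each other

@dataclass
class TextureData:
    """Texture data TD for one piece of cloth."""
    material: str
    stiffness: float
    color: str
    pattern: str
    gloss: float

@dataclass
class JointData:
    """Joint data JD: ranges in the costume corresponding to a typical
    wearer's joints and the relative position of the wearer's skeleton."""
    joint_ranges: dict

@dataclass
class ShapeData:
    """Shape data FD: the three-dimensional shape of the costume."""
    structure: StructureData
    textures: list          # one TextureData per piece of cloth
    joints: JointData

@dataclass
class TagData:
    """Tag data GD used to classify costumes."""
    stacking_order_index: int

@dataclass
class CostumeData:
    """Costume data CD as described above."""
    shape: ShapeData
    tag: TagData
```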
  • FIG. 5 is a table showing an example of the correspondence between costume IDs and stacking order indexes. Note that in FIG. 5, for convenience of explanation, examples of three-dimensional images of each costume indicated by image data PD corresponding to each costume ID are also shown.
  • the stacking order index corresponding to a costume whose costume ID begins with "A", that is, a T-shirt, is "1".
  • the stacking order index corresponding to a costume whose costume ID begins with "B", that is, a long-sleeved shirt, is "3".
  • the stacking order index corresponding to a costume whose costume ID begins with "C", that is, a sweater or a hoodie, is "4".
  • the stacking order index corresponding to a costume whose costume ID begins with "D", that is, a jacket, is "5".
  • the stacking order index corresponding to a costume whose costume ID begins with "E", that is, pants, is "2".
  • the stacking order index corresponding to a costume whose costume ID begins with "F", that is, a coat, is "6".
  • when the terminal device 10, described later, uses three-dimensional images of two or more costumes to simulate the appearance of the end user U trying them on, the terminal device 10 specifies the stacking order of the two or more costumes using the stacking order index. Specifically, the terminal device 10 overlays the three-dimensional image of a costume with a relatively large stacking order index on top of the three-dimensional image of a costume with a relatively small stacking order index, as sketched below.
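  • The sorting rule just described can be illustrated with a short Python sketch; the prefix-to-index table mirrors FIG. 5, while the function names are assumptions made for illustration.

```python
# Stacking order indexes keyed by the first letter of the costume ID (FIG. 5).
STACKING_ORDER_INDEX = {"A": 1, "E": 2, "B": 3, "C": 4, "D": 5, "F": 6}

def stacking_index(costume_id: str) -> int:
    """Return the stacking order index for a costume ID such as 'A001'."""
    return STACKING_ORDER_INDEX[costume_id[0]]

def stacking_order(selected_ids: list[str]) -> list[str]:
    """Sort selected costume IDs from innermost to outermost layer, so a
    costume with a larger index is rendered later, i.e. further outward."""
    return sorted(selected_ids, key=stacking_index)

# A coat (F), a long-sleeved shirt (B), and pants (E) are layered as
# pants -> long-sleeved shirt -> coat.
print(stacking_order(["F010", "B003", "E021"]))  # ['E021', 'B003', 'F010']
```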
  • part or all of the costume data CD may be automatically generated by the generation unit 212 using the analysis results of the analysis unit 213, which will be described later.
  • alternatively, some or all of the costume data CD may be generated based on input information entered by the user of the scanning device 20 via the input device 26.
  • the communication device 23 is hardware as a transmitting and receiving device for communicating with other devices.
  • the communication device 23 is also called, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the communication device 23 may include a connector for wired connection and an interface circuit corresponding to the connector.
  • the communication device 23 may include a wireless communication interface. Examples of connectors and interface circuits for wired connections include products compliant with wired LAN, IEEE1394, and USB.
  • examples of the wireless communication interface include products compliant with wireless LAN, Bluetooth (registered trademark), and the like.
  • the imaging device 24 images the outside world where the object exists.
  • the imaging device 24 images the costume.
  • the imaging device 24 outputs imaging information indicating an image obtained by imaging the costume.
  • the imaging device 24 includes, for example, a lens, an imaging element, an amplifier, and an AD converter.
  • the light collected through the lens is converted by the image sensor into an imaging signal, which is an analog signal.
  • the amplifier amplifies the imaging signal and outputs it to the AD converter.
  • the AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal.
  • the converted imaging information is output to the processing device 21.
  • FIG. 6 shows an example of the installation of the imaging device 24 in this embodiment.
  • the scanning device 20 includes eight imaging devices 24-1 to 24-8. Note that eight is only an example; the scanning device 20 can include any number of imaging devices 24.
  • the imaging devices 24-1 to 24-8 are fixed to the frame F and image the costume C, placed in the hollow inside the frame F, from 360° around all three axes: top and bottom, left and right, front and back.
  • a generation unit 212 which will be described later, generates a three-dimensional image of the costume C based on imaging information indicating a plurality of images captured by the imaging devices 24-1 to 24-8.
  • the display 25 is a device that displays images and text information.
  • the display 25 displays various images under the control of the processing device 21.
  • various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are suitably used as the display 25.
  • the processing device 21 reads the control program PR2 from the storage device 22 and executes it. As a result, the processing device 21 functions as an acquisition section 211, a generation section 212, an analysis section 213, and a communication control section 214.
  • the acquisition unit 211 acquires imaging information indicating a captured image of the costume C from the imaging device 24.
  • the acquisition unit 211 also acquires input information input by the user of the scanning device 20 via the input device 26.
  • the input information includes, for example, a stacking order index.
  • the generation unit 212 generates image data PD representing a three-dimensional image of the costume C based on the imaging information acquired by the acquisition unit 211 from the imaging device 24. Further, the generation unit 212 generates costume data CD using the input information acquired by the acquisition unit 211 from the input device 26. Furthermore, the generation unit 212 generates the costume data CD using also analysis information indicating the analysis result obtained by analyzing the three-dimensional image of the costume C by the analysis unit 213, which will be described later. The generation unit 212 also generates a first data set DS1, which is a data set including a costume ID and a set of image data PD and costume data CD corresponding to the costume ID. The generation unit 212 stores the generated first data set DS1 in the storage device 22.
  • the analysis unit 213 analyzes the three-dimensional image of the costume C generated by the generation unit 212.
  • the analysis unit 213 extracts, for example, features related to the shape of the costume C as a result of analyzing the three-dimensional image. Analysis information indicating the extracted features is output to the generation unit 212.
  • the generation unit 212 generates costume data CD using the analysis information acquired from the analysis unit 213.
  • the communication control unit 214 causes the communication device 23 to transmit the first data set DS1 stored in the storage device 22 to the server 30.
  • the user of the scanning device 20 can easily create image data PD and costume data CD without having to manually input all of the data elements that make up image data PD and costume data CD one by one. Further, the simulation system 1 can simulate virtual try-on using the image data PD and costume data CD that are simply produced.
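  • The division of labor among the acquisition unit 211, generation unit 212, and analysis unit 213 could be summarized as the sketch below; the helper functions are placeholders standing in for the actual reconstruction and analysis steps, not APIs from the patent.

```python
def reconstruct_3d(images: list) -> dict:
    """Placeholder for photogrammetric reconstruction of the costume mesh."""
    return {"mesh": f"reconstructed from {len(images)} views"}

def extract_shape_features(image_pd: dict) -> dict:
    """Placeholder for the analysis that yields shape-related features."""
    return {"structure": "...", "texture": "...", "joints": "..."}

def scan_costume(costume_id: str, captured_images: list, operator_input: dict) -> dict:
    """Assemble one first data set DS1 entry from a 3D scan.

    captured_images: images from the imaging devices (acquisition unit 211)
    operator_input:  data typed in via the input device 26, e.g. the
                     stacking order index for the tag data GD
    """
    image_pd = reconstruct_3d(captured_images)            # generation unit 212
    analysis = extract_shape_features(image_pd)           # analysis unit 213
    costume_cd = {"shape": analysis, "tag": operator_input}
    return {"costume_id": costume_id,
            "image_data_pd": image_pd,
            "costume_data_cd": costume_cd}                # stored in DS1, sent to server 30

print(scan_costume("A001", ["img"] * 8, {"stacking_order_index": 1}))
```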
  • FIG. 7 is a block diagram showing an example of the configuration of the server 30.
  • the server 30 includes a processing device 31, a storage device 32, a communication device 33, a display 34, and an input device 35.
  • Each element included in the server 30 is interconnected using one or more buses for communicating information.
  • the processing device 31 is a processor that controls the entire server 30. Further, the processing device 31 is configured using, for example, a single chip or a plurality of chips. The processing device 31 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Note that some or all of the functions of the processing device 31 may be implemented using hardware such as a DSP, ASIC, PLD, and FPGA. The processing device 31 executes various processes in parallel or sequentially.
  • the storage device 32 is a recording medium that can be read and written by the processing device 31. Furthermore, the storage device 32 stores a plurality of programs including the control program PR3 executed by the processing device 31. The storage device 32 also stores a first database DB1.
  • FIG. 8 shows an example of the configuration of the first database DB1.
  • the first database DB1 is a database in which a first data set DS1 acquired from the scanning device 20 via the communication device 33 by the acquisition unit 311, which will be described later, is accumulated.
  • the communication device 33 is hardware as a transmitting/receiving device for communicating with other devices.
  • the communication device 33 is also called, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector.
  • the communication device 33 may include a wireless communication interface. Examples of connectors and interface circuits for wired connections include products compliant with wired LAN, IEEE1394, and USB.
  • examples of the wireless communication interface include products compliant with wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 34 is a device that displays images and text information.
  • the display 34 displays various images under the control of the processing device 31.
  • various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are suitably used as the display 34.
  • the input device 35 accepts operations from the user of the server 30.
  • the input device 35 includes a keyboard, a touch pad, a touch panel, or a pointing device such as a mouse.
  • the input device 35 may also serve as the display 34.
  • the acquisition unit 311 acquires the first data set DS1 from the scanning device 20 via the communication device 33.
  • the acquisition unit 311 stores the acquired first data set DS1 in the first database DB1.
  • the acquisition unit 311 also acquires selection information indicating the selection result of the costume selected by the end user U from the terminal device 10 via the communication device 33, as described later.
  • the extraction unit 312 extracts costume data CD from the first database DB1 based on the selection information acquired by the acquisition unit 311. More specifically, the extraction unit 312 uses the costume ID included in the selection information to extract costume data CD linked to the costume ID from the first database DB1.
  • the communication control unit 313 causes the communication device 33 to transmit the sets of costume ID and image data PD stored in the first database DB1 to the terminal device 10. As an example, the communication control unit 313 causes the communication device 33 to transmit all pairs of costume ID and image data PD stored in the first database DB1 to the terminal device 10. Furthermore, the communication control unit 313 outputs the costume data CD extracted by the extraction unit 312 to the terminal device 10 via the communication device 33 as corresponding data RD, paired with the costume ID to which the costume data CD corresponds.
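  • A minimal sketch of how the extraction unit 312's lookup could work, assuming the first database DB1 is modeled as a dictionary keyed by costume ID; the first entry's file names follow the FIG. 3 example, and the second entry is invented for illustration.

```python
# First database DB1 modeled as: costume ID -> (image data PD, costume data CD).
db1 = {
    "A001": {"image_data_pd": "H001.dae", "costume_data_cd": "O001.dst"},
    "E021": {"image_data_pd": "H021.dae", "costume_data_cd": "O021.dst"},
}

def extract_corresponding_data(selection_info: list[str]) -> list[dict]:
    """Return corresponding data RD: pairs of costume ID and the costume
    data CD linked to that ID (extraction unit 312)."""
    return [{"costume_id": cid, "costume_data_cd": db1[cid]["costume_data_cd"]}
            for cid in selection_info]

# The terminal device sends selection information; the server answers with RD.
print(extract_corresponding_data(["A001", "E021"]))
```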
  • FIG. 9 is a block diagram showing an example of the configuration of the terminal device 10.
  • the terminal device 10 includes a processing device 11, a storage device 12, a communication device 13, an imaging device 14, a display 15, and an input device 16.
  • Each element included in the terminal device 10 is interconnected using a single bus or multiple buses for communicating information.
  • the processing device 11 is a processor that controls the entire terminal device 10. Further, the processing device 11 is configured using, for example, a single chip or a plurality of chips. The processing device 11 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Note that some or all of the functions of the processing device 11 may be implemented using hardware such as a DSP, an ASIC, a PLD, and an FPGA. The processing device 11 executes various processes in parallel or sequentially.
  • the storage device 12 is a recording medium that can be read and written by the processing device 11. Furthermore, the storage device 12 stores a plurality of programs including the control program PR1 executed by the processing device 11. The storage device 12 also stores a second database DB2, corresponding data RD, body shape data BD, and learning model LM.
  • FIG. 10 shows an example of the configuration of the second database DB2.
  • the second database DB2 is a database that stores a set of costume ID and image data PD acquired from the server 30 via the communication device 13 by the acquisition unit 111, which will be described later.
  • the second database DB2 stores all pairs of costume IDs and image data PD stored in the server 30.
  • FIG. 11 shows an example of the configuration of the corresponding data RD.
  • the corresponding data RD is data that the acquisition unit 111, described later, acquires from the server 30 via the communication device 13.
  • the corresponding data RD is a set of the costume ID included in the selection information output from the terminal device 10 to the server 30 and the costume data CD extracted by the server 30 using the costume ID.
  • the body shape data BD is data representing the body shape of the end user U in three dimensions. Specifically, the body shape data BD is data indicating a three-dimensional image representing the body shape of the end user U.
  • the body shape data BD includes skeleton data KD representing the skeleton of the end user U. More specifically, the skeleton data KD represents a change in the posture of the end user's U skeleton in accordance with the end user's U motion.
  • the skeleton data KD includes, for example, temporal and discrete data indicating the posture of the skeleton of the end user U at a plurality of points in time during the period in which the end user U uses the terminal device 10. Furthermore, the skeleton data KD includes data regarding the joints of the end user U.
  • the trained model LM is generated outside the terminal device 10.
  • as an example, the trained model LM may be generated in a server (not shown).
  • in this case, the terminal device 10 acquires the trained model LM from the server (not shown) via the communication network NET.
  • the imaging device 14 images the outside world where the object exists.
  • the imaging device 14 captures a full-body image of the end user U.
  • the imaging device 14 captures a full-body image of the end user U who has visited the store in the state of clothing at the time of the visit.
  • the imaging device 14 outputs imaging information indicating a captured image obtained by imaging the end user U.
  • the imaging device 14 includes, for example, a lens, an imaging element, an amplifier, and an AD converter.
  • the light collected through the lens is converted by the image sensor into an imaging signal, which is an analog signal.
  • the amplifier amplifies the imaging signal and outputs it to the AD converter.
  • the AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal.
  • the converted imaging information is output to the processing device 11.
  • FIG. 12 is an example of the operation screen OM displayed on the display 15.
  • a list of three-dimensional images of costumes that are candidates for costumes to be tried on by the end user U is displayed in the leftmost column.
  • three-dimensional images of candidates for a T-shirt, long-sleeved shirt, sweater or hoodie, jacket, pants, and coat are displayed in order from the top.
  • these three-dimensional images are indicated by the image data PD linked, in the second database DB2, to the costume ID of each costume.
  • the end user U selects an outfit to try on from among these outfit candidates using the input device 16, which will be described later.
  • the acquisition unit 111 acquires the set of costume ID and image data PD and the corresponding data RD from the server 30 via the communication device 13.
  • the acquisition unit 111 stores the set of costume ID and image data PD acquired from the server 30 in the second database DB2.
  • the acquisition unit 111 also stores the corresponding data RD acquired from the server 30 in the storage device 12.
  • the acquisition unit 111 acquires imaging information indicating a captured image of the end user U's whole body from the imaging device 14.
  • the stacking order specifying unit 115 specifies the stacking order of the two or more costumes selected by the end user U, based on the tag data GD included in the costume data CD acquired by the acquisition unit 111. As described above, the stacking order specifying unit 115 specifies that the larger a costume's stacking order index, the further outward the costume is layered relative to the three-dimensional image showing the body shape of the end user U.
  • the stacking order specifying unit 115 is an example of a "specifying unit".
  • the image generation unit 116 generates the first composite image SP1 based on the body shape data BD and the two or more costume data CDs included in the corresponding data RD. Specifically, the image generation unit 116 superimposes the three-dimensional image of the two or more costumes on the three-dimensional image of the body shape of the end user U according to the stacking order of the two or more costumes specified by the stacking order identifying unit 115. A first composite image SP1 is generated. More specifically, the image generation unit 116 reads the costume ID and costume data CD from the corresponding data RD. Next, the image generation unit 116 refers to the second database DB2 using the read costume ID, thereby reading out the image data PD linked to the costume ID.
  • the image generation unit 116 then generates the first composite image SP1 by superimposing the three-dimensional images of the costumes indicated by the image data PD on the three-dimensional image of the body shape of the end user U, in the stacking order specified by the stacking order specifying unit 115.
  • when the image generation unit 116 superimposes the 3D images of the costumes on the 3D image of the body shape of the end user U, a more natural first composite image SP1 is generated by using the shape data FD included in the costume data CD.
  • a three-dimensional image of the body shape of the end user U is represented by body shape data BD.
  • a part of the three-dimensional image CM of the first costume enters inside the three-dimensional image BM of the body shape of the end user U.
  • the area specifying unit 116-1 specifies the first area AR1.
  • costume data CD is linked to the costume ID corresponding to each of the first costume and the second costume.
  • the costume data CD includes joint data JD.
  • the joint data JD includes data indicating the range of positions in the costume that correspond to the joints of a typical wearer when the typical wearer wears the costume.
  • the terminal device 10 can simulate, as a moving image, a full-body image of the end user U while trying on a plurality of costumes.
  • the end user U can view a video in which a virtual full-body image of the end user U who has tried on a plurality of costumes moves in accordance with the user's own movements.
  • because the terminal device 10 moves the joint data JD included in the shape data FD, which indicates the three-dimensional shape of the costume, in accordance with the movement of the skeleton data KD that deforms following the end user U's motion, the terminal device 10 can move the virtual full-body image of the end user U more naturally, as illustrated in the sketch below.
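  • As a simplified illustration of how joint data JD can follow the deforming skeleton data KD, the sketch below moves each costume vertex rigidly with one joint; real garment animation would use blended skinning weights and cloth simulation, and every name here is an assumption.

```python
import numpy as np

def animate_costume(vertices, joint_of_vertex, rest_joints, posed_joints):
    """Move costume mesh vertices with the end user's skeleton.

    vertices:        (N, 3) costume vertices in the rest pose (from FD)
    joint_of_vertex: (N,) index of the joint each vertex follows (from JD)
    rest_joints:     (J, 3) joint positions in the rest pose
    posed_joints:    (J, 3) joint positions for the current frame (from KD)
    """
    offsets = posed_joints[joint_of_vertex] - rest_joints[joint_of_vertex]
    return vertices + offsets  # rigid per-joint translation, no blending

# One frame: joint 1 (say, a wrist) rises by 0.1, so its sleeve vertex follows.
verts = np.array([[0.0, 1.0, 0.0], [0.2, 1.1, 0.0]])
owner = np.array([0, 1])
rest = np.array([[0.0, 1.0, 0.0], [0.2, 1.1, 0.0]])
posed = np.array([[0.0, 1.0, 0.0], [0.2, 1.2, 0.0]])
print(animate_costume(verts, owner, rest, posed))
```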
  • the display control unit 117 causes the display 15 to display the operation screen OM shown in FIG. 12. In particular, the display control unit 117 causes the display 15 to display the first composite image SP1 generated by the image generation unit 116.
  • FIG. 15 is a flowchart showing the operation of the terminal device 10 according to the first embodiment.
  • in step S1, the processing device 11 functions as the acquisition unit 111.
  • the processing device 11 acquires imaging information indicating a captured image of the end user U's whole body from the imaging device 14 .
  • in step S4, the processing device 11 functions as the communication control unit 114.
  • the processing device 11 causes the communication device 13 to transmit selection information indicating the selection result received in step S3 to the server 30.
  • in step S6, the processing device 11 functions as the stacking order specifying unit 115.
  • the processing device 11 specifies the stacking order of two or more costumes based on the costume data CD included in the corresponding data RD acquired in step S5.
  • next, the processing device 11 functions as the image generation unit 116.
  • the processing device 11 generates the first composite image SP1 based on the body shape data BD representing the body shape of the end user U in three dimensions and the two or more costume data CD. Specifically, the processing device 11 generates the first composite image SP1 by superimposing the three-dimensional images of the two or more costumes on the three-dimensional image of the end user U, referring to the shape data FD included in the costume data CD, according to the stacking order of the two or more costumes specified in step S6. The overall flow is sketched below.
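  • The flow of FIG. 15 could be summarized as the runnable sketch below; each stub stands in for one of the functional units described above, and any step correspondence beyond those named in the text is an assumption.

```python
STACKING_ORDER_INDEX = {"A": 1, "E": 2, "B": 3, "C": 4, "D": 5, "F": 6}

def capture_full_body_image():                 # S1: acquisition unit 111
    return "captured full-body image of end user U"

def accept_costume_selection():                # reception unit 113
    return ["F010", "B003", "E021"]

def fetch_corresponding_data(selection):       # S4/S5: send selection, receive RD
    return {cid: f"costume data CD for {cid}" for cid in selection}

def specify_stacking_order(selection):         # S6: stacking order specifying unit 115
    return sorted(selection, key=lambda cid: STACKING_ORDER_INDEX[cid[0]])

def run_simulation():
    body = capture_full_body_image()
    selection = accept_costume_selection()
    rd = fetch_corresponding_data(selection)
    order = specify_stacking_order(selection)
    # image generation unit 116: overlay costume images innermost-first
    print("compositing", body, "with layers", [rd[cid] for cid in order])

run_simulation()
```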
  • as described above, the terminal device 10 as a simulation device simulates a case in which the end user U wears costumes in layers by referring to a plurality of costume data CD in one-to-one correspondence with a plurality of costumes.
  • each of the plurality of costume data CD includes shape data FD indicating the three-dimensional shape of the costume, and tag data GD associated with the shape data FD and indicating the order in which the costume is worn by the end user U.
  • the terminal device 10 includes a receiving section 113, a stacking order specifying section 115, and an image generating section 116.
  • the reception unit 113 receives two or more costumes selected by the end user U from among the plurality of costumes.
  • the two or more costumes include the first costume worn by the end user U without any other costume interposed between the first costume and the body of the end user U.
  • the image generation section 116 includes an area identification section 116-1 and a modification section 116-2.
  • the area specifying unit 116-1 specifies the first area AR1.
  • the first area AR1 is an area where, when the three-dimensional image of the first costume is superimposed on the three-dimensional image of the end user U, a part of the three-dimensional image of the first costume enters inside the three-dimensional image of the end user U.
  • the modification unit 116-2 performs modification to push out a part of the three-dimensional image of the first costume included in the first region AR1 from the three-dimensional image of the end user U.
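  • A toy version of this "push out" step is sketched below, treating the body as a sphere so that containment is easy to test; a real implementation would need signed distance queries against the body mesh, and all names here are illustrative.

```python
import numpy as np

def push_out_of_body(costume_vertices, body_center, body_radius, margin=0.005):
    """Push costume vertices that fall inside the body surface back outside.

    Vertices inside the body (the first area AR1) are projected radially
    onto the body surface plus a small margin; the rest are left alone.
    """
    out = costume_vertices.copy()
    offsets = out - body_center
    dist = np.linalg.norm(offsets, axis=1)
    inside = dist < body_radius                  # vertices in the first area AR1
    scale = (body_radius + margin) / dist[inside]
    out[inside] = body_center + offsets[inside] * scale[:, None]
    return out

# One vertex has sunk into the body; it is pushed back onto the surface.
verts = np.array([[0.0, 0.9, 0.0], [0.0, 1.2, 0.0]])
print(push_out_of_body(verts, body_center=np.array([0.0, 0.0, 0.0]), body_radius=1.0))
```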
  • since the terminal device 10 has the above configuration, when the end user U virtually tries on underwear, for example, the terminal device 10 can eliminate the 3D image of the underwear sinking into the 3D image showing the body shape of the end user U. As a result, the end user U can see a full-body image of himself or herself wearing the underwear in a more natural state.
  • the two or more costumes include the second costume that the end user U wears over the first costume.
  • the area specifying unit 116-1 specifies the second area.
  • the second area is an area where, when the 3D image of the second costume is superimposed on the second composite image SP2, which corresponds to the set of the 3D image of the end user U and the 3D image of the first costume, a part of the 3D image of the second costume enters inside the second composite image SP2.
  • the modification unit 116-2 performs modification to push out a part of the three-dimensional image of the second costume included in the second region from the second composite image SP2.
  • the body shape data BD includes the skeleton data KD representing the skeleton of the end user U.
  • the shape data FD includes joint data JD corresponding to the relative position of the end user U's skeleton with respect to the costume and to the joints of the end user U in the costume, assuming that the end user U wears the costume.
  • the image generation unit 116 collates the skeleton data KD, which deforms according to the motion of the end user U, with the joint data JD, and generates a moving image as the first composite image SP1.
  • since the terminal device 10 has the above configuration, the end user U can view a video in which a virtual full-body image of the end user U trying on two or more costumes moves in accordance with the end user U's own movements.
  • because the terminal device 10 moves the joint data JD included in the shape data FD, which indicates the three-dimensional shape of the costume, in accordance with the movement of the skeleton data KD that deforms following the end user U's motion, the terminal device 10 can move the virtual full-body image of the end user U more naturally.
  • the simulation system 1 includes the scanning device 20 and the terminal device 10 described above.
  • the scanning device 20 three-dimensionally scans the costume described above to generate costume data CD.
  • the tag data GD includes a stacking order index used by the terminal device 10 to specify the stacking order of two or more costumes.
  • the tag data GD may include type data indicating the type of costume instead of the stacking order index.
  • the storage device 12 provided in the terminal device 10 stores a correspondence table that describes the correspondence between types of clothing such as T-shirts, long-sleeved shirts, and sweaters, and stacking order indices.
  • the stacking order specifying unit 115 may then specify the stacking order of the costumes by checking the costume types indicated in the costume data CD obtained from the server 30 against the correspondence table, as in the sketch below.
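  • In this variation, the terminal-side lookup reduces to a table consultation; the table contents below are assumed to mirror the FIG. 5 ordering.

```python
# Correspondence table held in the storage device 12 (contents assumed).
TYPE_TO_INDEX = {
    "t-shirt": 1, "pants": 2, "long-sleeved shirt": 3,
    "sweater": 4, "hoodie": 4, "jacket": 5, "coat": 6,
}

def stacking_order_from_types(costume_types: list[str]) -> list[str]:
    """Order costume types from innermost to outermost via the table."""
    return sorted(costume_types, key=TYPE_TO_INDEX.__getitem__)

print(stacking_order_from_types(["coat", "t-shirt", "pants"]))
# ['t-shirt', 'pants', 'coat']
```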
  • the user of the scanning device 20 inputs a stacking order index indicating the stacking order of the costumes from the input device 26.
  • alternatively, the scanning device 20 may generate the stacking order index of the costume using a trained model. More specifically, the generation unit 212 included in the scanning device 20 may input the image data PD representing the three-dimensional image of the costume C that it has generated into the trained model, and obtain the stacking order index of the costume C as the output of the trained model.
  • in this case, the trained model is generated by machine learning using training data that includes a plurality of pairs of image data PD indicating three-dimensional images of costumes and the stacking order indexes of those costumes.
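  • As a hedged sketch of this variation using scikit-learn, a small classifier maps hand-crafted features of the scanned 3D image to a stacking order index; the feature choices, labels, and model family are all assumptions, since the patent does not specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in features derived from each costume's 3D image (e.g. normalized
# garment length and sleeve coverage); real inputs would come from image data PD.
X_train = np.array([[0.6, 0.3], [0.9, 0.8], [1.0, 0.9], [0.7, 0.1], [0.8, 0.7]])
y_train = np.array([1, 4, 6, 2, 3])  # stacking order indexes labeled by operators

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Inference in the generation unit 212: predict the index for a new scan.
print(model.predict(np.array([[0.85, 0.75]])))  # e.g. [4]
```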
  • similarly, the scanning device 20 may generate the type of the costume using a trained model. More specifically, the generation unit 212 included in the scanning device 20 may input the image data PD representing the three-dimensional image of the costume C that it has generated into the trained model, and obtain the type of the costume C as the output of the trained model.
  • in this case, the trained model is generated by machine learning using training data that includes a plurality of pairs of image data PD indicating three-dimensional images of costumes and the types of those costumes.
  • in the embodiment described above, the terminal device 10 outputs selection information indicating the selection results of the two or more costumes received by the reception unit 113 to the server 30, and acquires the corresponding data RD corresponding to the selection information from the server 30. However, the terminal device 10 may instead acquire the corresponding data RD for all costumes from the server 30 unconditionally, without outputting the selection information to the server 30.
  • in the embodiment described above, the terminal device 10, the scanning device 20, and the server 30 are separate devices. However, two or more of the terminal device 10, the scanning device 20, and the server 30 may be housed in the same housing. That is, two or more of the devices shown in the overall configuration of FIG. 1 may be realized as a single device.
  • the information, signals, etc. described may be represented using any of a variety of different technologies.
  • data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these.
  • the determination may be made using a value expressed using 1 bit (0 or 1) or a truth value (Boolean: true or false).
  • the comparison may be performed by comparing numerical values (for example, comparing with a predetermined value).
  • each of the functions illustrated in FIGS. 1 to 15 is realized by an arbitrary combination of at least one of hardware and software.
  • the method for realizing each functional block is not particularly limited. That is, each functional block may be realized using one physically or logically coupled device, or may be realized using a plurality of devices formed by connecting two or more physically or logically separated devices directly or indirectly (e.g., by wire or wirelessly).
  • the functional block may be realized by combining software with the one device or the plurality of devices.
  • the programs exemplified in the above-described embodiments should be broadly construed to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like, regardless of whether they are called software, firmware, middleware, microcode, hardware description language, or by other names.
  • software, instructions, information, etc. may be sent and received via a transmission medium.
  • for example, when software is transmitted from a website, server, or other remote source using wired technology (coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), etc.) and/or wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included within the definition of a transmission medium.
  • the information, parameters, etc. described in this disclosure may be expressed using absolute values, relative values from a predetermined value, or other corresponding information.
  • the terminal device 10, the scanning device 20, and the server 30 may be mobile stations (MS).
  • a mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. Further, in the present disclosure, terms such as "mobile station," "user terminal," "user equipment (UE)," and "terminal" may be used interchangeably.

Abstract

A simulation device in which each of a plurality of clothing data includes shape data which expresses the three-dimensional shape of clothing and tag data which expresses what number in a sequence said clothing is to be put on by the user, said simulation device having a processing device equipped with: a receiving unit for receiving two or more pieces of clothing selected by a user from among a plurality of pieces of clothing; an identification unit for identifying the overlap order of the two or more pieces of clothing on the basis of the two or more clothing data which correspond to the two or more pieces of clothing received by the receiving unit; and an image generation unit for generating a first synthesized image in which a three-dimensional image of two or more pieces of clothing overlaps a three-dimensional image of the user according to the identified overlap order, on the basis of body shape data which expresses the three-dimensional body shape of the user, and the two or more clothing data.

Description

Simulation device

The present invention relates to a simulation device.

In recent years, simulation devices have been used that simulate the appearance of a user trying on virtual costumes. In particular, a simulation device is sometimes used that generates a video in which the user moves while wearing virtual costumes, in accordance with the user's own movements.

For example, Patent Document 1 discloses a virtual try-on system that generates try-on images by generating motion body shape data in which the user's three-dimensional body shape data is moved three-dimensionally, and dressing the three-dimensional body shape data at each time included in the motion body shape data with the costume indicated by costume data.

Patent Document 1: Japanese Patent No. 5605885

However, the virtual try-on system according to Patent Document 1 does not consider the order in which multiple costumes are layered when they are worn on top of each other. Specifically, Patent Document 1 describes, as an example, a case where a T-shirt and jeans are worn. However, the technology according to Patent Document 1 does not consider which of the T-shirt and the jeans is closer to the user's body.

Therefore, an object of the present invention is to provide a simulation device that can specify and simulate the order in which two or more costumes are layered when the user wears them.

A simulation device according to a preferred aspect of the present invention includes a processing device that simulates a case where a user wears multiple costumes in layers by referring to a plurality of costume data in one-to-one correspondence with a plurality of costumes. Each of the plurality of costume data includes shape data indicating the three-dimensional shape of the costume, and tag data associated with the shape data and indicating the order in which the costume is worn by the user. The processing device includes: a reception unit that receives input of two or more costumes selected by the user from among the plurality of costumes; a specifying unit that specifies, based on two or more costume data respectively corresponding to the two or more costumes received by the reception unit, a stacking order in which the two or more costumes are layered on each other; and an image generation unit that generates, based on body shape data representing the three-dimensional body shape of the user and the two or more costume data, a first composite image in which three-dimensional images of the two or more costumes are superimposed on a three-dimensional image of the user according to the specified stacking order.

According to the present invention, when a user wears two or more costumes in layers, it is possible to provide a simulation device that can specify and simulate the order in which they are layered.

FIG. 1 is a diagram showing the overall configuration of a simulation system 1 according to a first embodiment. FIG. 2 is a block diagram showing a configuration example of a scanning device 20. FIG. 3 shows a configuration example of the first data set DS1. FIG. 4 shows a configuration example of costume data CD. FIG. 5 is a table showing an example of the correspondence between costume IDs and stacking order indexes. FIG. 6 shows an installation example of the imaging device 24. FIG. 7 is a block diagram showing a configuration example of a server 30. FIG. 8 shows a configuration example of the first database DB1. FIG. 9 is a block diagram showing a configuration example of a terminal device 10. FIG. 10 shows a configuration example of the second database DB2. FIG. 11 shows a configuration example of the corresponding data RD. FIG. 12 shows an example of the operation screen OM. FIG. 13 is a functional block diagram of the image generation unit 116. FIG. 14 is an explanatory diagram of the operation of the image generation unit 116. FIG. 15 is a flowchart showing the operation of the terminal device 10.
1: First Embodiment

The configuration of a simulation system 1 including a terminal device 10 as a simulation device according to a first embodiment of the present invention will be described below with reference to FIGS. 1 to 15.

1-1: Configuration of the First Embodiment

1-1-1: Overall Configuration

FIG. 1 shows the overall configuration of the simulation system 1 according to the first embodiment. As shown in FIG. 1, the simulation system 1 includes a terminal device 10, a scanning device 20, and a server 30. In the simulation system 1, the terminal device 10, the scanning device 20, and the server 30 are communicably connected to each other via the communication network NET. Note that in FIG. 1, it is assumed that an end user U uses the terminal device 10. The end user U is an example of a "user."
The terminal device 10 is a device with which the end user U simulates his or her own appearance when trying on two or more virtual costumes. In particular, the terminal device 10 specifies the order in which multiple costumes are layered on each other, that is, the stacking order, and then simulates a full-body image of the end user U trying on two or more virtual costumes. The terminal device 10 is an example of a "simulation device."

The scanning device 20 three-dimensionally scans two or more actual costumes and generates image data indicating a three-dimensional image of each costume and costume data regarding the characteristics of each costume, which are used when the terminal device 10 performs a simulation. The costume data includes a stacking order index used by the terminal device 10 to specify the stacking order of two or more costumes. The scanning device 20 outputs the generated image data and costume data to the server 30 as one data set.

The server 30 acquires the image data and the costume data from the scanning device 20. The server 30 also outputs the image data to the terminal device 10. Furthermore, the server 30 acquires from the terminal device 10 a costume ID indicating the ID of a costume that the end user U tries on, and outputs the costume data corresponding to the acquired costume ID to the terminal device 10.
1-1-2:走査装置の構成
 図2は、走査装置20の構成例を示すブロック図である。走査装置20は、処理装置21、記憶装置22、通信装置23、撮像装置24、ディスプレイ25、及び入力装置26を備える。走査装置20が有する各要素は、情報を通信するための単体又は複数のバスを用いて相互に接続される。
1-1-2: Configuration of Scanning Device FIG. 2 is a block diagram showing an example of the configuration of the scanning device 20. The scanning device 20 includes a processing device 21 , a storage device 22 , a communication device 23 , an imaging device 24 , a display 25 , and an input device 26 . The elements of scanning device 20 are interconnected using one or more buses for communicating information.
 処理装置21は、走査装置20の全体を制御するプロセッサである。また、処理装置21は、例えば、単数又は複数のチップを用いて構成される。処理装置21は、例えば、周辺装置とのインタフェース、演算装置及びレジスタ等を含む中央処理装置(CPU)を用いて構成される。なお、処理装置21が有する機能の一部又は全部を、DSP、ASIC、PLD、及びFPGA等のハードウェアを用いて実現してもよい。処理装置21は、各種の処理を並列的又は逐次的に実行する。 The processing device 21 is a processor that controls the entire scanning device 20. Further, the processing device 21 is configured using, for example, a single chip or a plurality of chips. The processing device 21 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Note that some or all of the functions of the processing device 21 may be implemented using hardware such as a DSP, ASIC, PLD, and FPGA. The processing device 21 executes various processes in parallel or sequentially.
 記憶装置22は、処理装置21による読取及び書込が可能な記録媒体である。また、記憶装置22は、処理装置21が実行する制御プログラムPR2を含む複数のプログラムを記憶する。また、記憶装置22は、第1データセットDS1を記憶する。第1データセットDS1は各衣装の3次元モデルに対応するデータセットである。 The storage device 22 is a recording medium that can be read and written by the processing device 21. Furthermore, the storage device 22 stores a plurality of programs including the control program PR2 executed by the processing device 21. The storage device 22 also stores a first data set DS1. The first data set DS1 is a data set corresponding to a three-dimensional model of each costume.
 図3は、第1データセットDS1の構成例を示す。図3に示されるように、第1データセットDS1は、衣装IDと、画像データPDと、衣装データCDとの組である。衣装IDは、走査装置20によって走査される衣装のIDを示す。一例として、衣装IDの先頭のアルファベットは、衣装の種類、例えばTシャツ、長袖シャツ、及びパンツといった衣装の種類に対応する。また、当該アルファベットに続く数値は、衣装の品番を示す。画像データPDは、当該衣装を3次元スキャンした結果得られる3次元画像を示す。衣装データCDは、当該衣装の特徴に関するデータである。衣装データCDは、当該3次元画像を解析したデータと走査装置20の使用者によって入力されたデータとを含む。図3に示される例においては、衣装ID=A001のデータセットは、画像データPDとして“H001.dae”というデータと、衣装データCDとして“O001.dst”というデータとを含む。 FIG. 3 shows an example of the configuration of the first data set DS1. As shown in FIG. 3, the first data set DS1 is a set of costume ID, image data PD, and costume data CD. The costume ID indicates the ID of the costume scanned by the scanning device 20. As an example, the first alphabet of the costume ID corresponds to the type of costume, for example, a T-shirt, a long-sleeved shirt, and pants. Further, the numerical value following the alphabet indicates the item number of the costume. The image data PD indicates a three-dimensional image obtained as a result of three-dimensional scanning of the costume. The costume data CD is data regarding the characteristics of the costume. The costume data CD includes data obtained by analyzing the three-dimensional image and data input by the user of the scanning device 20. In the example shown in FIG. 3, the data set with costume ID=A001 includes data "H001.dae" as image data PD and data "O001.dst" as costume data CD.
 図4は、衣装データCDの構成例を示す。衣装データCDは、形状データFDとタグデータGDとを含む。形状データFDは衣装の3次元形状を示すデータである。形状データFDは、構造データSD、テクスチャーデータTD、及び関節データJDを有する。構造データSDは、衣装の構造に関するデータである。具体的には、構造データSDは、一着の衣装において、どの形状でどの厚さの布が、どの位置に配置され、布同士が互いにどのように接合されるかを示すデータを含む。テクスチャーデータTDは、各々の布の素材、剛性、色、模様、光沢、及び質感を示すデータを含む。関節データJDは、衣装の一般的な着用者が当該衣装を着用する場合、当該衣装における、当該一般的な着用者の関節に対応する位置を示すデータを含む。更に関節データJDは、衣装の一般的な着用者が当該衣装を着用する場合、当該衣装に対する当該一般的な着用者の骨格の相対的な位置を示すデータを含む。なお、衣装の着用者の骨格及び関節の位置に対応する、一着の衣装における位置として、当該関節データJDは所定の範囲を示す。 FIG. 4 shows an example of the configuration of the costume data CD. The costume data CD includes shape data FD and tag data GD. Shape data FD is data indicating the three-dimensional shape of the costume. The shape data FD includes structure data SD, texture data TD, and joint data JD. The structure data SD is data regarding the structure of the costume. Specifically, the structure data SD includes data indicating what shape and thickness of cloth is placed at what position in one costume, and how the cloths are joined to each other. The texture data TD includes data indicating the material, stiffness, color, pattern, gloss, and texture of each cloth. When a general wearer of the costume wears the costume, the joint data JD includes data indicating positions corresponding to the joints of the general wearer in the costume. Furthermore, when a general wearer of the costume wears the costume, the joint data JD includes data indicating the relative position of the skeleton of the general wearer with respect to the costume. Note that the joint data JD indicates a predetermined range as a position in a costume that corresponds to the position of the skeleton and joints of the wearer of the costume.
 タグデータGDは、衣装を分類するために用いるデータである。タグデータGDは、例えば、端末装置10が複数の衣装が互いに重ねられることを表す重ね順を特定するために用いる重ね順指数を含む。 The tag data GD is data used to classify costumes. The tag data GD includes, for example, a stacking order index used by the terminal device 10 to specify a stacking order indicating that a plurality of costumes are stacked on top of each other.
 図5は、衣装IDと重ね順指数との対応関係の例を示す表である。なお図5においては、説明の便宜上、各衣装IDに対応する画像データPDによって示される各衣装の3次元画像の例を合わせて示す。 FIG. 5 is a table showing an example of the correspondence between costume IDs and stacking order indexes. Note that in FIG. 5, for convenience of explanation, examples of three-dimensional images of each costume indicated by image data PD corresponding to each costume ID are also shown.
In the example shown in FIG. 5, the stacking order index is "1" for costumes whose costume ID begins with "A", i.e., T-shirts; "3" for "B", i.e., long-sleeved shirts; "4" for "C", i.e., sweaters or hoodies; "5" for "D", i.e., jackets; "2" for "E", i.e., pants; and "6" for "F", i.e., coats.
When the terminal device 10, described later, uses three-dimensional images of two or more costumes to simulate the end user U trying those costumes on, the terminal device 10 specifies the stacking order of the costumes using the stacking order index illustrated in FIG. 5. Specifically, the terminal device 10 overlays the three-dimensional image of a costume with a relatively large stacking order index on top of the three-dimensional image of a costume with a relatively small stacking order index. In the example of FIG. 5, the costume types are therefore positioned, from the inside out relative to the three-dimensional image representing the body shape of the end user U, in the order T-shirt → pants → long-sleeved shirt → sweater or hoodie → jacket → coat.
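As a minimal sketch of this ordering rule, assuming the index assignment of FIG. 5 and a hypothetical function name, the selected costumes can simply be sorted by stacking order index, smallest first, so that the first element is rendered closest to the body:

```python
# Stacking order indices keyed by the leading letter of the costume ID (FIG. 5).
STACKING_INDEX = {"A": 1, "E": 2, "B": 3, "C": 4, "D": 5, "F": 6}

def stacking_order(costume_ids: list) -> list:
    """Sort costume IDs from innermost (smallest index) to outermost."""
    return sorted(costume_ids, key=lambda cid: STACKING_INDEX[cid[0]])

# A coat, a T-shirt, and pants are layered T-shirt -> pants -> coat:
print(stacking_order(["F003", "A001", "E002"]))  # ['A001', 'E002', 'F003']
```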
Returning to FIG. 4, part or all of the costume data CD may be generated automatically by the generation unit 212 using the analysis results of the analysis unit 213, which will be described later. Alternatively, part or all of the costume data CD may be generated based on input information that the user of the scanning device 20 enters via the input device 26.
Returning to FIG. 2, the communication device 23 is hardware serving as a transmitting and receiving device for communicating with other devices. The communication device 23 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 23 may include a connector for wired connection and an interface circuit corresponding to the connector, and may also include a wireless communication interface. Examples of connectors and interface circuits for wired connection include products compliant with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products compliant with wireless LAN, Bluetooth (registered trademark), and the like.
The imaging device 24 images the outside world in which an object exists. In particular, in this embodiment, the imaging device 24 images a costume and outputs imaging information indicating the captured image obtained by imaging the costume. The imaging device 24 includes, for example, a lens, an image sensor, an amplifier, and an AD converter. The light collected through the lens is converted by the image sensor into an imaging signal, which is an analog signal. The amplifier amplifies the imaging signal and outputs it to the AD converter. The AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal. The converted imaging information is output to the processing device 21.
FIG. 6 shows an example of the installation of the imaging devices 24 in this embodiment. In the example shown in FIG. 6, the scanning device 20 includes eight imaging devices 24-1 to 24-8. Note that eight imaging devices 24 is merely an example; the scanning device 20 may include any number of imaging devices 24. In the example shown in FIG. 6, the imaging devices 24-1 to 24-8 are fixed to a frame F and image the costume C, placed in the hollow interior of the frame F, from all directions along the three axes of up-down, left-right, and front-back. The generation unit 212, described later, generates a three-dimensional image of the costume C based on imaging information indicating the plurality of images captured by the imaging devices 24-1 to 24-8.
Returning to FIG. 2, the display 25 is a device that displays images and text information. The display 25 displays various images under the control of the processing device 21. For example, various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are suitably used as the display 25.
The input device 26 accepts operations from the user of the scanning device 20. For example, the input device 26 includes a keyboard, a touch pad, a touch panel, or a pointing device such as a mouse. When the input device 26 includes a touch panel, it may also serve as the display 25.
The processing device 21 reads the control program PR2 from the storage device 22 and executes it. As a result, the processing device 21 functions as an acquisition unit 211, a generation unit 212, an analysis unit 213, and a communication control unit 214.
The acquisition unit 211 acquires, from the imaging device 24, imaging information indicating captured images of the costume C. The acquisition unit 211 also acquires, from the input device 26, input information that the user of the scanning device 20 has entered via the input device 26. The input information includes, for example, the stacking order index.
The generation unit 212 generates image data PD representing a three-dimensional image of the costume C based on the imaging information that the acquisition unit 211 acquired from the imaging device 24. The generation unit 212 also generates costume data CD using the input information that the acquisition unit 211 acquired from the input device 26, together with analysis information indicating the result of the analysis unit 213, described later, analyzing the three-dimensional image of the costume C. Furthermore, the generation unit 212 generates the first data set DS1, which comprises a costume ID and the pair of image data PD and costume data CD corresponding to that costume ID, and stores the generated first data set DS1 in the storage device 22.
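Building on the dataclass sketch above, the flow of the generation unit 212 might be summarized as follows; the dictionary keys and the division of labor between the analysis output and the operator's input are assumptions made for illustration.

```python
def make_first_data_set(costume_id: str, image_data_path: str,
                        analysis_info: dict, input_info: dict) -> FirstDataSet:
    """Assemble a first data set DS1 from the scan output, the analysis
    result of the analysis unit 213, and the operator's input (assumed
    here to carry the stacking order index)."""
    shape = ShapeData(structure=analysis_info.get("structure", {}),
                      texture=analysis_info.get("texture", {}),
                      joints=analysis_info.get("joints", {}))
    tag = TagData(stacking_index=input_info["stacking_index"])
    return FirstDataSet(costume_id, image_data_path, CostumeData(shape, tag))
```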
The analysis unit 213 analyzes the three-dimensional image of the costume C generated by the generation unit 212 and extracts, for example, features related to the shape of the costume C. Analysis information indicating the extracted features is output to the generation unit 212. As described above, the generation unit 212 generates the costume data CD using the analysis information acquired from the analysis unit 213.
The communication control unit 214 causes the communication device 23 to transmit the first data set DS1 stored in the storage device 22 to the server 30.
As a result, the user of the scanning device 20 can easily create the image data PD and the costume data CD without manually entering every data element of the image data PD and the costume data CD one by one. In addition, the simulation system 1 can simulate virtual try-on using the image data PD and costume data CD created in this simple manner.
1-1-3: Server Configuration
FIG. 7 is a block diagram showing an example of the configuration of the server 30. The server 30 includes a processing device 31, a storage device 32, a communication device 33, a display 34, and an input device 35. The elements of the server 30 are interconnected by one or more buses for communicating information.
The processing device 31 is a processor that controls the entire server 30 and is configured using, for example, one or more chips. The processing device 31 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Some or all of the functions of the processing device 31 may be implemented using hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 31 executes various processes in parallel or sequentially.
The storage device 32 is a recording medium that the processing device 31 can read from and write to. The storage device 32 stores a plurality of programs, including the control program PR3 executed by the processing device 31, and also stores the first database DB1.
FIG. 8 shows an example of the configuration of the first database DB1. The first database DB1 is a database in which the first data sets DS1 that the acquisition unit 311, described later, acquires from the scanning device 20 via the communication device 33 are accumulated.
Returning to FIG. 7, the communication device 33 is hardware serving as a transmitting and receiving device for communicating with other devices. The communication device 33 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector, and may also include a wireless communication interface. Examples of connectors and interface circuits for wired connection include products compliant with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products compliant with wireless LAN, Bluetooth (registered trademark), and the like.
The display 34 is a device that displays images and text information. The display 34 displays various images under the control of the processing device 31. For example, various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are suitably used as the display 34.
The input device 35 accepts operations from the user of the server 30. For example, the input device 35 includes a keyboard, a touch pad, a touch panel, or a pointing device such as a mouse. When the input device 35 includes a touch panel, it may also serve as the display 34.
The processing device 31 reads the control program PR3 from the storage device 32 and executes it. As a result, the processing device 31 functions as an acquisition unit 311, an extraction unit 312, and a communication control unit 313.
The acquisition unit 311 acquires the first data set DS1 from the scanning device 20 via the communication device 33 and stores the acquired first data set DS1 in the first database DB1. The acquisition unit 311 also acquires, from the terminal device 10 via the communication device 33, selection information indicating the result of the costume selection made by the end user U, as described later.
The extraction unit 312 extracts costume data CD from the first database DB1 based on the selection information acquired by the acquisition unit 311. More specifically, the extraction unit 312 uses the costume IDs included in the selection information to extract, from the first database DB1, the costume data CD linked to each costume ID.
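Modeling the first database DB1 as a dictionary keyed by costume ID, the extraction step reduces to one lookup per selected ID, as in this hedged sketch; the in-memory representation is an assumption.

```python
def extract_costume_data(db1: dict, selected_ids: list) -> dict:
    """Extract the costume data CD linked to each selected costume ID.

    `db1` maps each costume ID to an (image data PD, costume data CD) pair;
    the returned mapping of costume ID to costume data CD corresponds to the
    correspondence data RD sent back to the terminal device 10.
    """
    return {cid: db1[cid][1] for cid in selected_ids if cid in db1}
```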
The communication control unit 313 causes the communication device 33 to transmit the pairs of costume ID and image data PD stored in the first database DB1 to the terminal device 10. As an example, the communication control unit 313 causes the communication device 33 to transmit all pairs of costume ID and image data PD stored in the first database DB1 to the terminal device 10. The communication control unit 313 also outputs the costume data CD extracted by the extraction unit 312 to the terminal device 10 via the communication device 33, as correspondence data RD in which each costume data CD is paired with its corresponding costume ID.
1-1-4: Configuration of Terminal Device
FIG. 9 is a block diagram showing an example of the configuration of the terminal device 10. The terminal device 10 includes a processing device 11, a storage device 12, a communication device 13, an imaging device 14, a display 15, and an input device 16. The elements of the terminal device 10 are interconnected by one or more buses for communicating information.
The processing device 11 is a processor that controls the entire terminal device 10 and is configured using, for example, one or more chips. The processing device 11 is configured using, for example, a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, and the like. Some or all of the functions of the processing device 11 may be implemented using hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 11 executes various processes in parallel or sequentially.
The storage device 12 is a recording medium that the processing device 11 can read from and write to. The storage device 12 stores a plurality of programs, including the control program PR1 executed by the processing device 11, and also stores the second database DB2, the correspondence data RD, the body shape data BD, and the trained model LM.
FIG. 10 shows an example of the configuration of the second database DB2. The second database DB2 stores the pairs of costume ID and image data PD that the acquisition unit 111, described later, acquires from the server 30 via the communication device 13. As an example, the second database DB2 stores all pairs of costume ID and image data PD stored in the server 30.
FIG. 11 shows an example of the configuration of the correspondence data RD. The correspondence data RD is the data that the acquisition unit 111, described later, acquires from the server 30 via the communication device 13. As described above, the correspondence data RD pairs each costume ID included in the selection information output from the terminal device 10 to the server 30 with the costume data CD that the server 30 extracted using that costume ID.
Returning to FIG. 9, the body shape data BD represents the body shape of the end user U in three dimensions. Specifically, the body shape data BD indicates a three-dimensional image representing the body shape of the end user U. The body shape data BD includes skeleton data KD representing the skeleton of the end user U. More specifically, the skeleton data KD represents changes in the posture of the end user U's skeleton that follow the end user U's movements. As an example, the skeleton data KD includes temporal, discrete data indicating the posture of the end user U's skeleton at a plurality of points in time during the period in which the end user U uses the terminal device 10. The skeleton data KD further includes data regarding the joints of the end user U.
The trained model LM is used when the data generation unit 112, described later, generates the body shape data BD of the end user U based on a captured image of the end user U captured by the imaging device 14, described later. That is, the trained model LM takes a captured image of the end user U as input and outputs body shape data BD. The trained model LM has learned the relationship between captured images of a plurality of mutually different end users and a plurality of pieces of body shape data corresponding one-to-one to those end users. The trained model LM is generated in a learning phase by learning from teacher data, which comprises a plurality of pairs of data representing a captured image of an end user and the corresponding body shape data. The trained model LM is generated outside the terminal device 10, preferably on a server (not shown). In this case, the terminal device 10 acquires the trained model LM from the server (not shown) via the communication network NET.
The communication device 13 is hardware serving as a transmitting and receiving device for communicating with other devices. The communication device 13 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector, and may also include a wireless communication interface. Examples of connectors and interface circuits for wired connection include products compliant with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products compliant with wireless LAN, Bluetooth (registered trademark), and the like.
The imaging device 14 images the outside world in which an object exists. In particular, in this embodiment, the imaging device 14 captures a full-body image of the end user U. For example, when the terminal device 10 is installed in a store that sells costumes, the imaging device 14 captures a full-body image of the end user U who has visited the store, in the clothes worn at the time of the visit. The imaging device 14 outputs imaging information indicating the captured image obtained by imaging the end user U. The imaging device 14 includes, for example, a lens, an image sensor, an amplifier, and an AD converter. The light collected through the lens is converted by the image sensor into an imaging signal, which is an analog signal. The amplifier amplifies the imaging signal and outputs it to the AD converter. The AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal. The converted imaging information is output to the processing device 11.
The display 15 is a device that displays images and text information. The display 15 displays various images under the control of the processing device 11. For example, various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are suitably used as the display 15.
FIG. 12 shows an example of the operation screen OM displayed on the display 15. In the operation screen OM illustrated in FIG. 12, the leftmost column displays a list of three-dimensional images of candidate costumes for the end user U to try on. In the example of FIG. 12, three-dimensional images of candidate T-shirts, long-sleeved shirts, sweaters or hoodies, jackets, pants, and coats are displayed in order from the top. These three-dimensional images are represented by the image data PD linked to the costume ID of each costume in the second database DB2. Using the input device 16, described later, the end user U selects the costumes to try on from among these candidates. In the operation screen OM illustrated in FIG. 12, three-dimensional images of the costumes selected by the end user U are displayed to the right of the candidate column. Furthermore, to the right of the column showing the end user U's selection results, the first composite image SP1, a three-dimensional full-body image of the end user U, is displayed as the virtual try-on result. This three-dimensional full-body image of the end user U moves in accordance with the actual movements of the end user U.
Returning to FIG. 9, the input device 16 accepts operations from the end user U. For example, the input device 16 includes a keyboard, a touch pad, a touch panel, or a pointing device such as a mouse. When the input device 16 includes a touch panel, it may also serve as the display 15.
The processing device 11 reads the control program PR1 from the storage device 12 and executes it. As a result, the processing device 11 functions as an acquisition unit 111, a data generation unit 112, a reception unit 113, a communication control unit 114, a stacking order specifying unit 115, an image generation unit 116, and a display control unit 117.
The acquisition unit 111 acquires the pairs of costume ID and image data PD and the correspondence data RD from the server 30 via the communication device 13. The acquisition unit 111 stores the pairs of costume ID and image data PD acquired from the server 30 in the second database DB2, and stores the correspondence data RD acquired from the server 30 in the storage device 12. The acquisition unit 111 also acquires, from the imaging device 14, imaging information indicating a captured full-body image of the end user U.
The data generation unit 112 generates body shape data BD representing the body shape of the end user U in three dimensions by inputting the imaging information acquired by the acquisition unit 111 into the trained model LM. The body shape data BD represents the body shape of the end user U in an unclothed state. As described above, the body shape data BD includes the skeleton data KD, which represents the skeleton of the end user U.
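The inference step of the data generation unit 112 can be sketched as a single forward pass through the trained model LM; the callable and the shape of its output are assumptions, since the disclosure does not fix a model architecture.

```python
from typing import Callable, List, NamedTuple

class BodyShape(NamedTuple):
    mesh: object          # three-dimensional body mesh of the unclothed end user
    skeleton: List[dict]  # skeleton data KD: joint positions per time step

def generate_body_shape(trained_model: Callable, captured_image) -> BodyShape:
    """Apply the trained model LM (a hypothetical callable returning a mesh
    and skeleton keyframes) to the captured full-body image to obtain the
    body shape data BD."""
    mesh, skeleton = trained_model(captured_image)
    return BodyShape(mesh=mesh, skeleton=skeleton)
```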
The reception unit 113 receives the two or more costumes that the end user U selects from among the plurality of costumes. Specifically, the reception unit 113 receives selection information indicating the selection result of two or more costumes that the end user U inputs using the input device 16 while viewing the operation screen OM.
The communication control unit 114 causes the communication device 13 to transmit the selection information received by the reception unit 113 to the server 30. As described above, in response to the output of the selection information to the server 30, the acquisition unit 111 acquires the correspondence data RD via the communication device 13. The correspondence data RD includes the costume data CD corresponding to the costumes selected by the end user U. As shown in FIG. 4, the costume data CD includes the tag data GD, and the tag data GD includes the stacking order index.
The stacking order specifying unit 115 specifies the stacking order of the two or more costumes selected by the end user U, based on the tag data GD included in the costume data CD acquired by the acquisition unit 111. As described above, the stacking order specifying unit 115 specifies that the larger a costume's stacking order index, the farther outward the costume is layered as seen from the three-dimensional image representing the body shape of the end user U. The stacking order specifying unit 115 is an example of a "specifying unit".
The image generation unit 116 generates the first composite image SP1 based on the body shape data BD and the two or more pieces of costume data CD included in the correspondence data RD. Specifically, the image generation unit 116 superimposes the three-dimensional images of the two or more costumes on the three-dimensional image of the body shape of the end user U according to the stacking order specified by the stacking order specifying unit 115, thereby generating the first composite image SP1. More specifically, the image generation unit 116 reads the costume IDs and the costume data CD from the correspondence data RD, then refers to the second database DB2 using each read costume ID to read out the image data PD linked to that costume ID. The image generation unit 116 then superimposes the three-dimensional images of the costumes indicated by the image data PD on the three-dimensional image of the body shape of the end user U in the stacking order specified by the stacking order specifying unit 115, thereby generating the first composite image SP1. When the image generation unit 116 superimposes a three-dimensional image of a costume on the three-dimensional image of the body shape of the end user U, using the shape data FD included in the costume data CD produces a more natural first composite image SP1. Here, the three-dimensional image of the body shape of the end user U is represented by the body shape data BD.
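The layering loop of the image generation unit 116 might look like the following sketch; `fit_garment` is a hypothetical draping routine standing in for the fitting that uses the shape data FD, and the costume list is assumed to be pre-sorted by the stacking order specifying unit 115.

```python
def generate_first_composite(body_mesh, ordered_costumes, fit_garment):
    """Build the first composite image SP1 by layering costume meshes.

    `ordered_costumes` is a list of (costume_mesh, costume_data) pairs sorted
    innermost first; `fit_garment` drapes one costume mesh over the composite
    built so far, using its shape data FD for a natural fit.
    """
    composite = body_mesh
    for costume_mesh, costume_data in ordered_costumes:
        composite = fit_garment(composite, costume_mesh, costume_data.shape)
    return composite  # the first composite image SP1
```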
As a result, when the end user U layers two or more costumes, the terminal device 10 can specify the stacking order and simulate the result. In particular, the end user U using the terminal device 10 does not need to input, via the terminal device 10, any information about the stacking order of the two or more costumes to be layered. With the stacking order specified automatically, the end user U can view a full-body image of himself or herself virtually trying on the plurality of costumes.
FIG. 13 shows a functional block diagram of the image generation unit 116. The image generation unit 116 includes an area specifying unit 116-1 and a modification unit 116-2.
The area specifying unit 116-1 specifies a first region. The first region is a region in which, when the three-dimensional image of the first costume, which the end user U wears without any other costume between it and the end user U's body, is superimposed on the three-dimensional image representing the body shape of the end user U, a part of the three-dimensional image of the first costume falls inside the three-dimensional image of the end user U's body shape.
The modification unit 116-2 performs a modification that pushes the part of the three-dimensional image of the first costume contained in the first region out of the three-dimensional image of the end user U.
FIGS. 14A and 14B are explanatory diagrams of the operation of the image generation unit 116, including the area specifying unit 116-1 and the modification unit 116-2. As shown in FIG. 14A, the image generation unit 116 overlays the three-dimensional image CM of the first costume on the three-dimensional image BM of the body shape of the end user U. As a result, a second composite image SP2 corresponding to the pair of the three-dimensional image BM of the end user U's body shape and the three-dimensional image CM of the first costume is generated. For convenience of explanation, only the outlines of the three-dimensional image BM of the end user U and the three-dimensional image CM of the first costume are shown in FIG. 14A.
As shown in FIG. 14A, in a first region AR1 that includes the part corresponding to the shoulders of the three-dimensional image BM of the end user U's body shape, a part of the three-dimensional image CM of the first costume has penetrated into the interior of the three-dimensional image BM. The area specifying unit 116-1 specifies this first region AR1.
FIG. 14B is an enlarged view of the vicinity of the first region AR1 shown in FIG. 14A. As indicated by the arrow in FIG. 14B, the modification unit 116-2 performs a modification that pushes the part of the three-dimensional image CM of the first costume contained in the first region AR1 out of the three-dimensional image BM of the end user U.
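One way to realize this push-out, sketched under the assumption that the body (or the composite built so far) is available as a signed distance field with outward surface normals, is to move every penetrating garment vertex just outside the surface; both helper functions are hypothetical, not the disclosed implementation.

```python
import numpy as np

def push_out(garment_vertices: np.ndarray, signed_distance, surface_normal,
             margin: float = 1e-3) -> np.ndarray:
    """Push garment vertices that penetrate the body mesh back outside it.

    `signed_distance(p)` is assumed negative when point `p` lies inside the
    body; `surface_normal(p)` returns the outward unit normal at the closest
    surface point. Vertices in the penetrating region (e.g. AR1) are moved
    slightly outside the surface; all other vertices are left unchanged.
    """
    corrected = garment_vertices.copy()
    for i, p in enumerate(garment_vertices):
        d = signed_distance(p)
        if d < 0:  # the vertex lies inside the body
            corrected[i] = p + (margin - d) * surface_normal(p)
    return corrected
```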
With this modification, when the end user U virtually tries on, for example, underwear, the three-dimensional image of the underwear is prevented from cutting into the three-dimensional image BM representing the end user U's body shape. As a result, the end user U can view a more natural full-body image of himself or herself wearing the underwear.
Although not illustrated, the area specifying unit 116-1 also specifies a second region, in the same manner as the example shown in FIGS. 14A and 14B. The second region is a region in which, when the three-dimensional image of a second costume that the end user U wears over the first costume is superimposed on the second composite image SP2 modified by the modification unit 116-2, a part of the three-dimensional image of the second costume falls inside the second composite image SP2. The modification unit 116-2 performs a modification that pushes the part of the three-dimensional image of the second costume contained in the second region out of the second composite image SP2.
With this modification, when the end user U who has virtually tried on, for example, underwear goes on to virtually try on an outer garment, the three-dimensional image of the outer garment is prevented from cutting into the composite image of the three-dimensional image BM representing the end user U's body shape and the three-dimensional image of the underwear. As a result, the end user U can view a more natural full-body image of himself or herself wearing the outer garment.
In the correspondence data RD, costume data CD is linked to the costume ID of each of the first costume and the second costume. As described above, the costume data CD includes joint data JD, and the joint data JD includes data indicating the range, within a garment, to which the joint positions of a typical wearer correspond when such a wearer puts the garment on.
Meanwhile, the skeleton data KD included in the body shape data BD includes data indicating the positions of the joints of the end user U. The modification unit 116-2 may further stretch or shrink the first costume and the second costume so that the joint positions of the end user U included in the skeleton data KD fall within the ranges of the garment to which a typical wearer's joint positions correspond.
Returning to FIG. 9, the image generation unit 116 further collates the skeleton data KD, which represents changes in the posture of the end user U following the end user U's movements, with the joint data JD, and generates a moving image as the first composite image SP1. As described above, the joint data JD includes data indicating where a typical wearer's skeleton is positioned relative to the costume when such a wearer is assumed to put it on. The image generation unit 116 deforms the three-dimensional image of the costume to follow the movements of the end user U while keeping the range of the costume in which a typical wearer's skeleton would be located containing the position of the end user U's skeleton indicated by the skeleton data KD. In deforming the three-dimensional image of the costume, the structure data SD and the texture data TD included in the costume data CD are used.
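As a hedged sketch of this animation step, each skeleton keyframe of the skeleton data KD can drive a per-frame deformation of every garment, innermost first; `deform_garment` is a hypothetical routine standing in for the cloth deformation that would consult the structure data SD, texture data TD, and joint data JD.

```python
def animate_composite(body_frames, garments, deform_garment):
    """Generate the frames of the first composite image SP1 as a moving image.

    `body_frames` is a time series of (posed body mesh, joint positions)
    pairs derived from the skeleton data KD; `garments` lists the costume
    meshes with their costume data CD, innermost first; `deform_garment`
    deforms one garment so the joint ranges of its joint data JD keep
    containing the given joint positions.
    """
    frames = []
    for body_mesh, joint_positions in body_frames:
        composite = body_mesh
        for garment_mesh, costume_data in garments:
            composite = deform_garment(composite, garment_mesh,
                                       costume_data, joint_positions)
        frames.append(composite)
    return frames
```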
As a result, the terminal device 10 can simulate, as a moving image, the full-body image of the end user U moving while trying on a plurality of costumes. The end user U can view a video in which a virtual full-body image of himself or herself trying on the plurality of costumes moves in accordance with the user's own movements. In particular, by moving the joint data JD included in the shape data FD, which indicates the three-dimensional shape of each costume, in accordance with the movement of the skeleton data KD that deforms to follow the end user U's movements, the terminal device 10 can make the virtual full-body image of the end user U move more naturally.
The display control unit 117 causes the display 15 to display the operation screen OM shown in FIG. 12. In particular, the display control unit 117 causes the display 15 to display the first composite image SP1 generated by the image generation unit 116.
1-2: Operation of the First Embodiment
FIG. 15 is a flowchart showing the operation of the terminal device 10 according to the first embodiment.
In step S1, the processing device 11 functions as the acquisition unit 111 and acquires, from the imaging device 14, imaging information indicating a captured full-body image of the end user U.
In step S2, the processing device 11 functions as the data generation unit 112 and uses the imaging information acquired in step S1 to generate body shape data BD representing the body shape of the end user U in three dimensions.
In step S3, the processing device 11 functions as the reception unit 113 and receives selection information indicating the selection result of two or more costumes, input by the end user U using the input device 16.
In step S4, the processing device 11 functions as the communication control unit 114 and causes the communication device 13 to transmit the selection information received in step S3 to the server 30.
In step S5, the processing device 11 functions as the acquisition unit 111 and acquires the correspondence data RD from the server 30 via the communication device 13.
In step S6, the processing device 11 functions as the stacking order specifying unit 115 and specifies the stacking order of the two or more costumes based on the costume data CD included in the correspondence data RD acquired in step S5.
In step S7, the processing device 11 functions as the image generation unit 116 and generates the first composite image SP1 based on the body shape data BD, which represents the body shape of the end user U in three dimensions, and the two or more pieces of costume data CD. Specifically, the processing device 11 generates the first composite image SP1 by superimposing the three-dimensional images of the two or more costumes on the three-dimensional image of the end user U in the stacking order specified in step S6, while referring to the shape data FD included in each costume data CD.
In step S8, the processing device 11 functions as the display control unit 117 and causes the display 15 to display the first composite image SP1 generated in step S7.
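Read end to end, steps S1 to S8 chain together as in the following hedged sketch; every object passed in is a hypothetical stand-in for the corresponding device or functional unit, and the attribute access on the costume data assumes the dataclass sketch given earlier.

```python
def try_on_session(camera, trained_model, ui, server, renderer, display):
    """One simulation pass following steps S1-S8 of FIG. 15."""
    image = camera.capture()                  # S1: acquire the full-body image
    body = trained_model(image)               # S2: generate body shape data BD
    selected_ids = ui.select_costumes()       # S3: accept the costume selection
    server.send_selection(selected_ids)       # S4: send the selection information
    rd = server.fetch_corresponding_data()    # S5: acquire correspondence data RD
    ordered = sorted(rd.items(),              # S6: specify the stacking order
                     key=lambda item: item[1].tag.stacking_index)
    sp1 = renderer.compose(body, ordered)     # S7: generate composite image SP1
    display.show(sp1)                         # S8: display the result
```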
1-3: Effects of the First Embodiment
According to the above description, the terminal device 10 as a simulation device simulates the case where the end user U wears costumes in layers, by referring to a plurality of pieces of costume data CD corresponding one-to-one to a plurality of costumes. Each piece of costume data CD includes shape data FD indicating the three-dimensional shape of a costume, and tag data GD that is associated with the shape data FD and indicates the position of that costume in the order in which the end user U puts costumes on. The terminal device 10 includes the reception unit 113, the stacking order specifying unit 115, and the image generation unit 116. The reception unit 113 receives the two or more costumes that the end user U selects from among the plurality of costumes. The stacking order specifying unit 115 specifies the stacking order of the two or more costumes based on the two or more pieces of costume data CD corresponding to the two or more costumes received by the reception unit 113. The image generation unit 116 generates, based on the body shape data BD representing the body shape of the end user U in three dimensions and the two or more pieces of costume data CD, the first composite image SP1 in which the three-dimensional images of the two or more costumes are superimposed on the three-dimensional image of the end user U according to the specified stacking order.
Because the terminal device 10 has the above configuration, when the end user U layers two or more costumes, the terminal device 10 can specify the stacking order and simulate the result. In particular, when layering two or more costumes, the end user U using the terminal device 10 can view a full-body image of himself or herself virtually trying on the costumes with the stacking order specified automatically, without entering any information about the stacking order into the terminal device 10.
Also according to the above description, the two or more costumes include the first costume, which the end user U wears without any other costume between it and the end user U's body. The image generation unit 116 includes the area specifying unit 116-1 and the modification unit 116-2. The area specifying unit 116-1 specifies the first region AR1, the region in which, when the three-dimensional image of the first costume is superimposed on the three-dimensional image of the end user U, a part of the three-dimensional image of the first costume penetrates into the interior of the three-dimensional image of the end user U. The modification unit 116-2 performs a modification that pushes the part of the three-dimensional image of the first costume contained in the first region AR1 out of the three-dimensional image of the end user U.
Because the terminal device 10 has the above configuration, when the end user U virtually tries on, for example, underwear, the three-dimensional image of the underwear is prevented from cutting into the three-dimensional image representing the end user U's body shape. As a result, the end user U can view a more natural full-body image of himself or herself wearing the underwear.
Also according to the above description, the two or more costumes include the second costume, which the end user U wears over the first costume. The area specifying unit 116-1 specifies the second region, the region in which, when the three-dimensional image of the second costume is superimposed on the second composite image SP2 corresponding to the pair of the three-dimensional image of the end user U and the three-dimensional image of the first costume, a part of the three-dimensional image of the second costume penetrates into the interior of the second composite image SP2. The modification unit 116-2 performs a modification that pushes the part of the three-dimensional image of the second costume contained in the second region out of the second composite image SP2.
Because the terminal device 10 has the above configuration, when the end user U who has virtually tried on, for example, underwear goes on to virtually try on an outer garment, the three-dimensional image of the outer garment is prevented from cutting into the composite image of the three-dimensional image representing the end user U's body shape and the three-dimensional image of the underwear. As a result, the end user U can view a more natural full-body image of himself or herself wearing the outer garment.
Also according to the above description, the body shape data BD includes the skeleton data KD representing the skeleton of the end user U. The shape data FD includes the joint data JD, which indicates the position of the end user U's skeleton relative to the costume and the positions in the costume corresponding to the end user U's joints, assuming that the end user U wears the costume. The image generation unit 116 collates the skeleton data KD, which deforms in accordance with the end user U's movements, with the joint data JD, and generates a moving image as the first composite image SP1.
Because the terminal device 10 has the above configuration, the end user U can view a video in which a virtual full-body image of himself or herself trying on two or more costumes moves in accordance with the user's own movements. In particular, the terminal device 10 moves the joint data JD included in the shape data FD, which indicates the three-dimensional shape of each costume, in accordance with the movement of the skeleton data KD that deforms to follow the end user U's movements. As a result, the terminal device 10 can make the virtual full-body image of the end user U move more naturally.
Also according to the above description, the simulation system 1 includes the scanning device 20 and the terminal device 10 described above. The scanning device 20 three-dimensionally scans a costume to generate the costume data CD.
Because the simulation system 1 has the above configuration, the administrator of the simulation system 1 can easily create the costume image data PD and the costume data CD without manually entering every data element of the image data PD and the costume data CD one by one. Furthermore, the simulation system 1 can simulate virtual try-on using the image data PD and costume data CD created in this simple manner.
2: Modifications
The present disclosure is not limited to the embodiments illustrated above. Specific modifications are exemplified below. Two or more aspects arbitrarily selected from the following examples may be combined.
2-1: Modification 1
In the above embodiment, the tag data GD includes the stacking order index that the terminal device 10 uses to specify the stacking order of two or more costumes. However, the tag data GD may instead include type data indicating the type of costume. In this case, the storage device 12 of the terminal device 10 stores a correspondence table that records the correspondence between costume types, such as T-shirts, long-sleeved shirts, and sweaters, and stacking order indices. The stacking order specifying unit 115 may then specify the stacking order of the costumes by looking up, in this correspondence table, the costume type designated in the costume data CD acquired from the server 30.
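Under this modification, the correspondence table held in the storage device 12 could be as small as the following sketch; the type names are purely illustrative assumptions.

```python
# Hypothetical correspondence table: costume type -> stacking order index.
TYPE_TO_INDEX = {
    "t_shirt": 1, "pants": 2, "long_sleeved_shirt": 3,
    "sweater": 4, "hoodie": 4, "jacket": 5, "coat": 6,
}

def index_from_type(costume_type: str) -> int:
    """Resolve a costume's stacking order index from the type data carried
    in its tag data GD, by consulting the stored correspondence table."""
    return TYPE_TO_INDEX[costume_type]
```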
2-2: Modification 2
In the above embodiment, the user of the scanning device 20 inputs the stacking order index, indicating the stacking order of the costume, via the input device 26. However, the scanning device 20 may generate the stacking order index of a costume using a trained model. More specifically, the generation unit 212 of the scanning device 20 may input the image data PD representing the three-dimensional image of the costume C that it has generated into the trained model and cause the trained model to output the stacking order index of the costume C. This trained model is generated by machine learning using teacher data that includes a plurality of pairs of image data PD representing a three-dimensional image of a costume and the stacking order index of that costume.
 あるいは、変形例1に記載のように、タグデータGDが、重ね順指数の代わりに、衣装の品種を示す品種データを含む場合、走査装置20は、学習済みモデルを用いて、衣装の品種を生成してもよい。より具体的には、走査装置20に備わる生成部212は、自身が生成した衣装Cの3次元画像を示す画像データPDを、当該学習済みモデルに入力することで、当該衣装Cの品種を学習モデルから出力させてもよい。当該学習済みモデルは、衣装の3次元画像を示す画像データPDと衣装の品種との複数の組を含む教師データを用いて機械学習することにより生成される。 Alternatively, as described in Modification 1, if the tag data GD includes type data indicating the type of costume instead of the stacking order index, the scanning device 20 uses the learned model to identify the type of costume. May be generated. More specifically, the generation unit 212 included in the scanning device 20 learns the type of costume C by inputting image data PD representing a three-dimensional image of the costume C generated by itself into the learned model. It may also be output from the model. The learned model is generated by machine learning using teacher data including a plurality of sets of image data PD indicating three-dimensional images of costumes and types of costumes.
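 As one possible realization of this modification, the sketch below assumes a hypothetical PyTorch image classifier whose class indices coincide with stacking order indices; the patent fixes neither the framework nor the architecture of the trained model, only the mapping from image data PD to an index.

```python
import torch

# Hypothetical classifier standing in for the trained model of
# Modification 2; architecture and framework are assumptions.

def predict_stacking_index(model: torch.nn.Module,
                           image_pd: torch.Tensor) -> int:
    """Predict the stacking order index for one costume image.

    image_pd: a rendered view of the costume's 3D image, shaped (3, H, W).
    """
    model.eval()                                 # inference mode
    with torch.no_grad():
        logits = model(image_pd.unsqueeze(0))    # add batch dimension
    return int(logits.argmax(dim=1).item())      # class id = stacking index
```

 The costume-type variant would be identical except that the output classes are type labels, which the terminal device 10 could then resolve through the correspondence table of Modification 1.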
2-3: Modification 3
 In the above embodiment, the terminal device 10 outputs, to the server 30, selection information indicating the selection result of the two or more costumes received by the reception unit 113, and acquires from the server 30 the correspondence data RD corresponding to that selection information. However, the terminal device 10 may instead refrain from outputting the selection information to the server 30 and unconditionally acquire the correspondence data RD for all costumes from the server 30.
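 The difference between the two behaviors can be sketched as follows, using hypothetical endpoints on the server 30; the actual protocol between the devices is not specified in the patent.

```python
import requests

SERVER = "https://server30.example"  # placeholder URL, an assumption

def fetch_selected(costume_ids: list[str]) -> dict:
    """Embodiment: send the selection information, receive matching RD."""
    resp = requests.post(f"{SERVER}/correspondence-data",
                         json={"costume_ids": costume_ids})
    return resp.json()

def fetch_all() -> dict:
    """Modification 3: fetch RD for every costume unconditionally, so no
    selection information ever leaves the terminal device 10."""
    resp = requests.get(f"{SERVER}/correspondence-data/all")
    return resp.json()
```

 The modification trades higher transfer volume for keeping the user's selections local to the terminal device 10.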
2-4: Modification 4
 In the overall configuration shown in FIG. 1, the terminal device 10, the scanning device 20, and the server 30 are separate units. However, two or more of the terminal device 10, the scanning device 20, and the server 30 may be housed in the same housing. That is, two or more of the devices shown in the overall configuration of FIG. 1 may be realized as a single device.
3: Others
(1) In the embodiments described above, the storage device 12, the storage device 22, and the storage device 32 were exemplified as a ROM, a RAM, and the like, but each may instead be a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory device (for example, a card, a stick, or a key drive), a CD-ROM (Compact Disc-ROM), a register, a removable disk, a hard disk, a floppy (registered trademark) disk, a magnetic strip, a database, a server, or any other suitable storage medium. The program may also be transmitted from a network via a telecommunications line; for example, the program may be transmitted from the communication network NET via a telecommunications line.
(2) In the embodiments described above, the information, signals, and the like described may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
(3) In the embodiments described above, input and output information and the like may be stored in a specific location (for example, a memory) or may be managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
(4) In the embodiments described above, a determination may be made based on a value expressed by one bit (0 or 1), based on a Boolean value (true or false), or based on a comparison of numerical values (for example, a comparison with a predetermined value).
(5) The order of the processing procedures, sequences, flowcharts, and the like illustrated in the embodiments described above may be rearranged as long as no contradiction arises. For example, the methods described in the present disclosure present the elements of the various steps in an exemplary order and are not limited to the specific order presented.
(6) Each of the functions illustrated in FIGS. 1 to 15 is realized by any combination of at least one of hardware and software. The method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one physically or logically coupled device, or using two or more physically or logically separated devices connected directly or indirectly (for example, by wire or wirelessly). A functional block may also be realized by combining software with the one device or with the plurality of devices.
(7) The programs exemplified in the embodiments described above should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like, regardless of whether they are called software, firmware, middleware, microcode, or a hardware description language, or by any other name.
 Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using at least one of a wired technology (such as a coaxial cable, an optical fiber cable, a twisted pair, or a digital subscriber line (DSL)) and a wireless technology (such as infrared or microwave), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
(8) In each of the above embodiments, the terms "system" and "network" are used interchangeably.
(9) The information, parameters, and the like described in the present disclosure may be expressed using absolute values, using relative values from a predetermined value, or using other corresponding information.
(10) The embodiments described above include cases where the terminal device 10, the scanning device 20, and the server 30 are mobile stations (MS: Mobile Station). A mobile station may also be referred to by those skilled in the art as a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable term. In the present disclosure, terms such as "mobile station," "user terminal," "user equipment (UE)," and "terminal" may be used interchangeably.
(11) In the embodiments described above, the terms "connected" and "coupled," and all variations thereof, mean any direct or indirect connection or coupling between two or more elements, and include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connection" may be read as "access." As used in the present disclosure, two elements may be considered to be "connected" or "coupled" to each other by using at least one of one or more electrical wires, cables, and printed electrical connections, as well as, as some non-limiting and non-exhaustive examples, electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and the optical (both visible and invisible) region.
(12) In the embodiments described above, the phrase "based on" does not mean "based only on" unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on."
(13) As used in the present disclosure, the terms "judging" and "determining" may encompass a wide variety of operations. "Judging" and "determining" may include, for example, regarding the act of judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (for example, looking up a table, a database, or another data structure), or of ascertaining, as having "judged" or "determined." "Judging" and "determining" may also include regarding the act of receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, or accessing (for example, accessing data in a memory) as having "judged" or "determined." Furthermore, "judging" and "determining" may include regarding the act of resolving, selecting, choosing, establishing, or comparing as having "judged" or "determined." In other words, "judging" and "determining" may include regarding some operation as having been "judged" or "determined." "Judging (determining)" may also be read as "assuming," "expecting," "considering," and the like.
(14) In the embodiments described above, where "include," "including," and variations thereof are used, these terms, like the term "comprising," are intended to be inclusive. Furthermore, the term "or" as used in the present disclosure is not intended to be an exclusive OR.
(15) In the present disclosure, where articles such as a, an, and the in English are added by translation, the present disclosure includes cases where the nouns following these articles are plural.
(16) In the present disclosure, the phrase "A and B are different" may mean "A and B are different from each other." The phrase may also mean "A and B are each different from C." Terms such as "separated" and "coupled" may be interpreted in the same manner as "different."
(17) Each aspect and embodiment described in the present disclosure may be used alone, may be used in combination, or may be switched between in accordance with execution. Notification of predetermined information (for example, notification that "it is X") is not limited to explicit notification, and may be performed implicitly (for example, by not notifying the predetermined information).
 Although the present disclosure has been described in detail above, it is clear to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented with modifications and variations without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description of the present disclosure is intended to be illustrative and has no limiting meaning with respect to the present disclosure.
DESCRIPTION OF REFERENCE SIGNS: 1…simulation system, 10…terminal device, 11…processing device, 12…storage device, 13…communication device, 14…imaging device, 15…display, 16…input device, 20…scanning device, 21…processing device, 22…storage device, 23…communication device, 24…imaging device, 25…display, 26…input device, 30…server, 31…processing device, 32…storage device, 33…communication device, 34…display, 35…input device, 111…acquisition unit, 112…data generation unit, 113…reception unit, 114…output unit, 115…stacking order specifying unit, 116…image generation unit, 116-1…area specifying unit, 116-2…modification unit, 117…display control unit, 211…acquisition unit, 212…generation unit, 213…analysis unit, 214…output unit, 311…acquisition unit, 312…extraction unit, 313…output unit, AR1…first area, DB1…first database, DB2…second database, DS1…first data set, PR1, PR2, PR3…control programs, SP1…first composite image, SP2…second composite image

Claims (4)

  1.  A simulation device comprising a processing device that simulates a case in which a user layers a plurality of costumes by referring to a plurality of pieces of costume data in one-to-one correspondence with the plurality of costumes, wherein
     each of the plurality of pieces of costume data includes:
      shape data indicating a three-dimensional shape of a costume; and
      tag data that is associated with the shape data and that indicates in what order the costume is worn by the user, and
     the processing device comprises:
     a reception unit that receives input of two or more costumes selected by the user from among the plurality of costumes;
     a specifying unit that specifies, based on two or more pieces of costume data respectively corresponding to the two or more costumes received by the reception unit, a stacking order in which the two or more costumes are layered on each other; and
     an image generation unit that generates, based on body shape data representing a three-dimensional body shape of the user and the two or more pieces of costume data, a first composite image in which three-dimensional images of the two or more costumes are superimposed on a three-dimensional image of the user in accordance with the specified stacking order.
  2.  The simulation device according to claim 1, wherein
     the two or more costumes include a first costume that the user wears with no other costume interposed between that costume and the user's body, and
     the image generation unit comprises:
      an area specifying unit that, when the three-dimensional image of the first costume is superimposed on the three-dimensional image of the user, specifies a first area in which a part of the three-dimensional image of the first costume lies inside the three-dimensional image of the user; and
      a modification unit that pushes the part of the three-dimensional image of the first costume contained in the first area out of the three-dimensional image of the user.
  3.  The simulation device according to claim 2, wherein
     the two or more costumes include a second costume that the user wears over the first costume,
     the area specifying unit specifies, when the three-dimensional image of the second costume is superimposed on a second composite image corresponding to the pair of the three-dimensional image of the user as modified by the modification unit and the three-dimensional image of the first costume as modified by the modification unit, a second area in which a part of the three-dimensional image of the second costume lies inside the second composite image, and
     the modification unit pushes the part of the three-dimensional image of the second costume contained in the second area out of the second composite image.
  4.  The simulation device according to claim 1, wherein
     the body shape data includes skeleton data representing changes in the posture of the user's skeleton in accordance with the user's movements,
     the shape data includes joint data relating to the user's joints when the user wears the costume indicated by the shape data,
     the joint data indicates the position of the user's skeleton relative to the costume and the positions in the costume that correspond to the user's joints, and
     the image generation unit generates a moving image as the first composite image by matching the skeleton data against the joint data.
PCT/JP2023/015234 2022-05-12 2023-04-14 Simulation device WO2023218861A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-078989 2022-05-12
JP2022078989 2022-05-12

Publications (1)

Publication Number Publication Date
WO2023218861A1 true WO2023218861A1 (en) 2023-11-16

Family

ID=88730182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/015234 WO2023218861A1 (en) 2022-05-12 2023-04-14 Simulation device

Country Status (1)

Country Link
WO (1) WO2023218861A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002117414A (en) * 2000-10-11 2002-04-19 Toyobo Co Ltd Clothes collision processing method and computer- readable storage medium with clothes collision processing program stored therein
JP2013190974A (en) * 2012-03-13 2013-09-26 Satoru Ichimura Information processing apparatus, information processing method, and program
JP2016110652A (en) * 2014-12-05 2016-06-20 ダッソー システムズDassault Systemes Computer-implemented method for designing avatar with at least one garment
JP2016532197A (en) * 2013-08-04 2016-10-13 アイズマッチ エルティーディー.EyesMatch Ltd. Virtualization device, system and method in mirror
JP2020119156A (en) * 2019-01-22 2020-08-06 日本電気株式会社 Avatar creating system, avatar creating device, server device, avatar creating method and program
JP2020170394A (en) * 2019-04-04 2020-10-15 株式会社Sapeet Clothing-wearing visualization system and clothing-wearing visualization method


Similar Documents

Publication Publication Date Title
Jiang et al. Seeing invisible poses: Estimating 3d body pose from egocentric video
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN106055710A (en) Video-based commodity recommendation method and device
JP2022510712A (en) Neural network training method and image matching method, as well as equipment
KR20190000397A (en) Fashion preference analysis
Vitali et al. Acquisition of customer’s tailor measurements for 3D clothing design using virtual reality devices
CN106202304A (en) Method of Commodity Recommendation based on video and device
US20140168111A1 (en) System and method of dynamically generating a frequency pattern to realize the sense of touch in a computing device
JP2014089665A (en) Image processor, image processing method, and image processing program
JP7318321B2 (en) Information processing device, information processing method, person search system, and person search method
JPWO2018142756A1 (en) Information processing apparatus and information processing method
CN109906457A (en) Data identification model constructs equipment and its constructs the method for data identification model and the method for data discrimination apparatus and its identification data
Goldsmith et al. Augmented reality environmental monitoring using wireless sensor networks
US11509712B2 (en) Fashion item analysis based on user ensembles in online fashion community
Vitali et al. A virtual environment to emulate tailor’s work
Colombo et al. Mixed reality to design lower limb prosthesis
WO2023218861A1 (en) Simulation device
CN110009446A (en) A kind of display methods and terminal
CN112925941A (en) Data processing method and device, electronic equipment and computer readable storage medium
Yu et al. Interactive Context-Aware Furniture Recommendation using Mixed Reality
WO2023120472A1 (en) Avatar generation system
WO2023162499A1 (en) Display control device
WO2023139961A1 (en) Information processing device
WO2023079875A1 (en) Information processing device
KR102619462B1 (en) Mataverse-based children clothing second-hand transaction relay system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23803343

Country of ref document: EP

Kind code of ref document: A1