KR101767144B1 - Apparatus, method and computer program for generating 3-dimensional model of clothes - Google Patents

Apparatus, method and computer program for generating 3-dimensional model of clothes

Info

Publication number
KR101767144B1
KR101767144B1 (Application KR1020150114435A)
Authority
KR
South Korea
Prior art keywords
image
model
dimensional model
dimensional
garment
Prior art date
Application number
KR1020150114435A
Other languages
Korean (ko)
Other versions
KR20170019917A (en)
Inventor
탁세윤
조준구
김혜주
Original Assignee
(주)에프엑스기어
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)에프엑스기어 filed Critical (주)에프엑스기어
Priority to KR1020150114435A priority Critical patent/KR101767144B1/en
Publication of KR20170019917A publication Critical patent/KR20170019917A/en
Application granted granted Critical
Publication of KR101767144B1 publication Critical patent/KR101767144B1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/16 - Cloth

Abstract

An apparatus for generating a three-dimensional model of a garment according to an embodiment includes an image receiving unit configured to receive a two-dimensional image of a garment; a model generating unit configured to generate a three-dimensional model corresponding to the image using a pre-stored base model; and a matching unit configured to transform the image so that the image matches the surface of the three-dimensional model, thereby combining the image with the three-dimensional model. The apparatus can easily create a three-dimensional model of a garment in a short time (for example, several minutes) from an ordinary two-dimensional image of the garment, and can be applied wherever garments are sold, such as an online shopping mall, thereby facilitating virtual fitting by the user and promoting the purchase of garments.

Description

APPARATUS, METHOD AND COMPUTER PROGRAM FOR GENERATING 3-DIMENSIONAL MODEL OF CLOTHES

Embodiments relate to an apparatus, method and computer program for generating a 3D model of a garment.

Recently, as related technologies such as depth sensors have developed, virtual fitting services have emerged in which virtual clothing is worn by an avatar or a three-dimensional human body model without the user putting the garment on directly. To provide a virtual fitting service, a three-dimensional model corresponding to the user is created, for example by scanning the user's contour with a depth sensor, and a three-dimensional model of a garment prepared in advance is registered on it.

Until now, however, virtual fitting services have only been applied experimentally, and little research has been conducted on technologies that can be applied to garments sold in actual clothing shops and lead to purchases. In particular, to provide a virtual fitting service, a three-dimensional model of the garment to be fitted must be created in advance. Generating such a three-dimensional model is a time-consuming and laborious operation, which limits the application of virtual fitting services to real-world garment sales.

Korean Patent Application Laid-Open No. 10-2014-0116585
Korean Patent Application Laid-Open No. 10-2014-0077820

According to an aspect of the present invention, there are provided an apparatus and method for generating a three-dimensional model of a garment, and a computer program therefor, which can easily generate a three-dimensional model of a garment for virtual fitting from an existing two-dimensional image used as it is, without extra effort being spent on three-dimensional modeling of the garment.

An apparatus for generating a three-dimensional model of a garment according to an embodiment includes an image receiving unit configured to receive a two-dimensional image of a garment; a model generating unit configured to generate a three-dimensional model corresponding to the image using a pre-stored base model; and a matching unit configured to transform the image so that the image matches the surface of the three-dimensional model, thereby combining the image with the three-dimensional model.

In one embodiment, the model generation unit includes: a database storing one or more base models; and a model transformation unit configured to generate the three-dimensional model by transforming a selected one of the one or more base models to correspond to the image.

In one embodiment, the model generation unit further includes a user input unit configured to receive a user input associated with the base model.

In one embodiment, the user input includes information on the type of garment, and the selected one base model is determined based on information on the type of the garment.

In one embodiment, the user input comprises one or more feature points designated on the image, and the model transformation unit is configured to generate the three-dimensional model by modifying the base model based on the positions of the one or more feature points.

In one embodiment, the model transformation unit is further configured to modify the base model to correct an error due to a gap between the set of line segments connecting the one or more feature points and the image.

In one embodiment, the user input includes dimensional information associated with one or more details of the three-dimensional model, and the model transformation unit is further configured to transform the generated three-dimensional model based on the dimensional information.

In one embodiment, the matching unit is further configured to combine the image with the three-dimensional model by extending the image such that the image covers all surfaces of the three-dimensional model.

In one embodiment, the apparatus for generating a three-dimensional model of a garment further comprises a transmitter configured to transmit the three-dimensional model combined with the image to a seller apparatus or a user apparatus.

A method for generating a three-dimensional model of a garment according to an embodiment may be performed using a computing device.

A method for generating a three-dimensional model of a garment according to an embodiment includes: receiving a two-dimensional image of a garment; generating a three-dimensional model corresponding to the image using a pre-stored base model; and transforming the image so that it matches the surface of the three-dimensional model, thereby combining the image with the three-dimensional model.

In one embodiment, generating the three-dimensional model comprises: selecting one of one or more pre-stored base models; and transforming the selected base model to correspond to the image.

In one embodiment, the step of generating the three-dimensional model further comprises the step of receiving information on the type of garment from the user prior to the selecting step. At this time, the selecting step includes a step of determining a base model to be selected based on the information on the type of the clothes.

In one embodiment, generating the three-dimensional model includes receiving information on at least one feature point designated on the image, and modifying the base model based on the position of the at least one feature point.

In one embodiment, modifying the base model includes modifying the base model to correct an error due to a gap between the set of line segments connecting the at least one feature point and the image.

A method of generating a three-dimensional model of a garment according to one embodiment further includes receiving dimensional information associated with one or more details contained in the three-dimensional model, and deforming the generated three-dimensional model based on the dimensional information.

In one embodiment, combining the image with the three-dimensional model includes extending the image such that the image covers all the surfaces of the three-dimensional model.

In one embodiment, the method for generating a three-dimensional model of the garment further comprises transmitting the three-dimensional model combined with the image to a user device or a seller device.

In the above embodiments, the image of the garment may include a front image of the garment and a rear image of the garment.

A computer program according to embodiments is stored on a medium in combination with hardware to execute a method for generating a three-dimensional model of the garment.

An apparatus, method, and computer program for creating a three-dimensional model of a garment according to one aspect of the present invention can easily generate a three-dimensional model of a garment within a short time (e.g., minutes) using an ordinary two-dimensional image. Such a technique for generating three-dimensional models of garments can be applied wherever garments are sold, such as an online shopping mall, thereby facilitating the user's virtual fitting and promoting garment purchases.

FIG. 1 is a schematic block diagram of an apparatus for generating a 3D model of a garment according to an embodiment.
FIG. 2 is a conceptual diagram illustrating an exemplary operation of an apparatus for generating a three-dimensional model of a garment according to an embodiment.
FIG. 3 is a flowchart of a method for generating a three-dimensional model of a garment according to an embodiment.
FIGS. 4 to 12 are images representing an exemplary user interface for a method of generating a three-dimensional model of a garment according to an embodiment.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

FIG. 1 is a schematic block diagram of an apparatus for generating a 3D model of a garment according to an embodiment.

Referring to FIG. 1, an apparatus for generating a 3D model of a garment according to the present embodiment includes an image receiving unit 1, a model generating unit 2, and a matching unit 3. In one embodiment, the apparatus further comprises a transmitter 4. The apparatus according to embodiments may be entirely hardware, entirely software, or partly hardware and partly software. For example, the apparatus may collectively refer to hardware equipped with data processing capability and the operating software that drives it. The terms "unit," "system," and "device" are used herein to refer to a combination of hardware and the software driven by that hardware. For example, the hardware may be a data processing device including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or another processor, and the software may refer to a running process, an object, an executable, a thread of execution, a program, and the like.

Each of the units 1-4 constituting the apparatus for generating the three-dimensional model of the garment according to the embodiments is not necessarily intended to refer to a physically separate component. Although the image receiving unit 1, the model generating unit 2, the matching unit 3, and the transmitting unit 4 are shown in FIG. 1 as separate blocks, some or all of them may be integrated into one and the same device (e.g., a computing device or a server). Each of the units 1-4 is distinguished functionally according to the operations it performs in the computing device in which it is implemented, and does not necessarily denote a separate physical device. However, this is exemplary; in another embodiment, at least one of the image receiving unit 1, the model generating unit 2, the matching unit 3, and the transmitting unit 4 may be implemented as a separate device. For example, the units 1-4 may be communicatively coupled components in a distributed computing environment.

The image receiving unit 1 is a unit for receiving a two-dimensional image of a garment to be produced as a three-dimensional model. The two-dimensional image may be a photograph, preferably two photographs taken on the front and back sides of the garment, respectively.

The model generation unit 2 generates a three-dimensional model corresponding to the two-dimensional image of the garment. In one embodiment, the model generation unit 2 includes a database (DB) 21 and a model transformation unit 23. One or more preset base models are prepared in the DB 21, and the model transformation unit 23 can generate a three-dimensional model corresponding to the image by transforming a base model according to the image of the garment. Further, in one embodiment, the model generating section 2 further includes a user input receiving section 22. The user input receiving unit 22 may receive, as user input, information on one or more feature points designated on the garment image and/or dimensional information related to specific details constituting the three-dimensional model. The model transformation unit 23 then generates a three-dimensional model corresponding to the image by transforming the base model based on the feature point information, and allows the user to fine-tune the three-dimensional model into a desired shape by modifying the generated model based on the dimensional information.
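As a rough sketch of the flow inside the model generation unit 2 (select a pre-stored base model, then deform it to fit the garment image), consider the following. Everything here is an illustrative assumption, not the patent's implementation: the database contents, the garment-type keys, the 2D vertex format, and the bounding-box scaling used as a stand-in for the actual mesh deformation.

```python
# Hypothetical base-model database; keys and vertex format are assumed.
BASE_MODEL_DB = {
    "long_sleeve_tshirt": {"vertices": [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (0.0, 2.0)]},
    "long_pants":         {"vertices": [(0.0, 0.0), (0.5, 0.0), (0.5, 3.0), (0.0, 3.0)]},
}

def select_base_model(garment_type):
    """Load the pre-stored base model matching the user-supplied garment type."""
    if garment_type not in BASE_MODEL_DB:
        raise KeyError(f"no base model for garment type: {garment_type}")
    return BASE_MODEL_DB[garment_type]

def deform_to_feature_points(base_model, feature_points):
    """Scale the base model so its bounding box matches the feature points'
    bounding box -- a simplified stand-in for outline-fitting deformation."""
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    bxs = [v[0] for v in base_model["vertices"]]
    bys = [v[1] for v in base_model["vertices"]]
    sx = (max(xs) - min(xs)) / (max(bxs) - min(bxs))
    sy = (max(ys) - min(ys)) / (max(bys) - min(bys))
    return {"vertices": [(v[0] * sx, v[1] * sy) for v in base_model["vertices"]]}
```

A caller would pass the garment type chosen in step S2 and the feature points designated in step S4; a real implementation would deform a full 3D mesh per feature point rather than uniformly scaling.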

The matching unit 3 combines the two-dimensional image of the garment with the three-dimensional model created by the model generation unit 2. For example, the image can be combined with the three-dimensional model by mapping it onto the model's surface so that the garment image covers the surface of the three-dimensional model. In one embodiment, the matching unit 3 may perform the combining after extending the image so that it covers the entire surface of the three-dimensional model.
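The matching unit 3's mapping of the image onto the model surface can be sketched as sampling the 2D image at each surface point's normalized (u, v) coordinate. The function name and the nested-list image representation below are assumptions of this sketch, not the actual texture-mapping implementation.

```python
def match_image_to_surface(surface_uv, image, width, height):
    """For each surface point's (u, v) in [0, 1), sample the corresponding
    pixel of the 2D garment image, yielding a per-point colour."""
    texture = []
    for u, v in surface_uv:
        x = min(int(u * width), width - 1)   # clamp to image bounds
        y = min(int(v * height), height - 1)
        texture.append(image[y][x])
    return texture
```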

The transmitting unit 4 transmits the three-dimensional model combined with the garment image by the matching unit 3 to another apparatus. For example, the transmitting unit 4 may transmit the three-dimensional model to a seller's server (not shown) that requested three-dimensional conversion of the garment image, so that the seller can provide a virtual fitting service to users. Alternatively, in one embodiment, the apparatus for generating a three-dimensional model of the garment performs the function of an application server, providing a user device with a three-dimensional model of the garment in response to a request made by an application running on the user device.

FIG. 2 is a conceptual diagram illustrating an exemplary operation of an apparatus for generating a three-dimensional model of a garment according to an embodiment.

Referring to FIG. 2, a seller selling a garment can send a garment image to the apparatus 300 for generating a three-dimensional model of the garment using the seller apparatus 100. The seller apparatus 100 may be a server that provides the seller's online shopping mall, a mobile device such as a smartphone used by the seller, a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a tablet computer, a set-top box for Internet Protocol Television (IPTV), and the like, but is not limited thereto.

In one embodiment, the seller apparatus 100 transmits the image to the apparatus 300 for generating a three-dimensional model of the garment via a wired and/or wireless network 200. The communication methods supported by the wired and/or wireless network 200 may include any method by which networked objects can communicate, and are not limited to particular wired or wireless schemes such as 3G or 4G. For example, the wired and/or wireless network 200 may provide communication by one or more methods selected from the group consisting of Local Area Network (LAN), Metropolitan Area Network (MAN), Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Zigbee, Wi-Fi, Voice over Internet Protocol (VoIP), LTE Advanced, IEEE 802.16m, WirelessMAN-Advanced, HSPA+, 3GPP Long Term Evolution (LTE), Mobile WiMAX (IEEE 802.16e), UMB, OFDM, iBurst and MBWA (IEEE 802.20) systems, HIPERMAN, Beam-Division Multiple Access (BDMA), and World Interoperability for Microwave Access (Wi-MAX), but is not limited thereto.

The three-dimensional model generation apparatus 300 generates a three-dimensional model corresponding to the garment using the image received from the seller apparatus 100. However, this is exemplary; in other embodiments, the apparatus 300 for generating a three-dimensional model of the garment may be implemented as a software module integrated into the seller apparatus 100, in which case the seller apparatus 100 can directly generate the three-dimensional model using the image.

The generated three-dimensional model may be transmitted to one or more of the seller apparatus 100 and the user apparatus 400 via the wired and/or wireless network 200. Similar to the seller device 100, the user device 400 may be, but is not limited to, a mobile device such as a smartphone used by the user, a PC, a notebook computer, a PDA, a tablet computer, or an IPTV set-top box.

For example, the user device 400 may be configured to execute a predetermined application (or app), and the apparatus 300 for generating a three-dimensional model of the garment may serve as an application server for that application. In addition, the apparatus 300 may be incorporated into the seller apparatus 100 so as to provide one function of the online shopping mall application offered by the seller.

The user executes a dedicated virtual-fitting application or an online shopping mall application on the user device 400 and uses it to obtain a three-dimensional model of the garment from the seller device 100 or from the apparatus 300 for generating a three-dimensional model of the garment. The user can then apply the three-dimensional model of the garment to his or her own three-dimensional body model, created or received using the application. In addition, the user can check how the fit changes with body shape, or examine the material and texture of the garment by zooming in and out. Furthermore, when the user wishes to purchase the garment, the user can access the seller's online shopping mall through the application or share garment-related information with other services, such as a Social Networking Service (SNS).

FIG. 3 is a flow chart of a method for generating a three-dimensional model of a garment according to an embodiment, and FIGS. 4-12 are images representing an exemplary user interface for a method of generating a three-dimensional model of a garment according to an embodiment.

Referring to FIG. 3, first, a two-dimensional image of the garment to be modeled is received (S1). The two-dimensional image may be a photograph, preferably two photographs taken of the front and back of the garment, respectively. Preferably, the image is photographed so that the garment is well represented. For example, for a top, a photograph in which the wearer lightly raises the arms is preferred, while for a bottom, a photograph in which the entire garment is visible, for example taken with the wearer standing on a chair, is preferred. However, the present invention is not limited thereto. The image in this specification need not be taken for the purpose of creating a three-dimensional model; it is sufficient if the front and back of the garment are clearly visible. For example, the image may be a photograph used on an online shopping mall or a brand website, or a photograph taken during the same photo shoot.

Next, the user input for the type of garment is received (S2), and the selected base model is loaded based on the type of the garment among the previously stored three-dimensional base models (S3).

FIG. 4 illustrates an exemplary user interface for receiving user input on the type of garment; the user may select any of the various garment types shown. As a non-limiting example, the user can choose whether the garment is for men or women, and select from types such as a long-sleeve t-shirt, turtleneck shirt, long-sleeve shirt, short-sleeve t-shirt, pique shirt, long pants, mini skirt, short-sleeve medium-length dress, short-sleeve mini dress, long-sleeve dress, jacket, coat, and the like. A base model corresponding to each type of garment is stored in the DB, and the base model corresponding to the type selected by the user can be loaded.

However, in a method of generating a three-dimensional model of a garment according to another embodiment, only one base model may be used regardless of the type of garment. In this case, step S2 for selecting the type of garment may be omitted.

In one embodiment, the user is allowed to select an image of the garment, as shown in FIG. 5. For example, a user interface can be displayed on the computing device through which the front image and the back image of the garment can each be selected. However, this is exemplary; in another embodiment, the process of selecting the garment image may be performed together with the image reception step S1 described above, and the related user interface may differ from that shown.

Next, in one embodiment, user input for feature points is received (S4). Feature points are points on the image that can be referenced so that the characteristics of the garment are appropriately reflected when creating its three-dimensional model. For example, the feature points may be located at the ends of the garment (e.g., the sleeve ends, the neckline, the hem, etc.), at portions of the garment associated with body joints (e.g., the shoulders, underarms, waist), at intermediate points, or at any other suitable points. The user can select points suitable for representing the characteristics of the garment in the garment image. The feature points may be designated entirely by user input, may be extracted automatically, or preset reference points (e.g., the garment ends, joint portions, and midpoints described above) may first be loaded and then adjusted by the user as necessary.

Next, a three-dimensional model of the garment can be created by transforming the base model based on the feature points specified by the user (S5). Specifically, the base model can be transformed so that, when the three-dimensional base model is projected onto a plane, its outline is as close as possible to the set of line segments connecting the feature points described above. This operation can be performed by comparing each of the front image and the back image of the garment with the base model.
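The outline-fitting deformation of step S5 can be approximated, in a deliberately simplified 2D form, by moving each vertex of the projected outline toward its nearest feature point. A real implementation would deform the full three-dimensional mesh and fit to line segments rather than points; the function below is only a hypothetical stand-in.

```python
def fit_outline_to_feature_points(outline, targets, step=1.0):
    """Move each projected outline vertex toward its nearest feature point.
    step=1.0 snaps exactly onto the target; smaller values deform gradually."""
    fitted = []
    for (ox, oy) in outline:
        # Nearest feature point by squared Euclidean distance.
        tx, ty = min(targets, key=lambda t: (t[0] - ox) ** 2 + (t[1] - oy) ** 2)
        fitted.append((ox + step * (tx - ox), oy + step * (ty - oy)))
    return fitted
```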

FIG. 6 is an image showing a user interface for designating feature points on the front image of the garment, and FIGS. 7A to 7C are images showing a user interface for transforming the base model based on the feature points of FIG. 6. Similarly, FIG. 8 is an image showing a user interface for designating feature points on the back image of the garment, and FIGS. 9A to 9C are images showing a user interface for transforming the base model based on the feature points of FIG. 8.

Referring to FIGS. 6 and 8, the user can designate one or more feature points, indicated by circles, on the garment image; each feature point may lie at a garment edge (feature points 0, 3, 4, 8, 9, 13, 14, 16, 11, 12, 15), at a midpoint between them on the outline of the garment (feature points 1, 7, 10), and so on. Next, as shown in FIGS. 7A and 9A, the base model can be transformed so that the line segments connecting these feature points are as close as possible to the two-dimensional projection of the base model. In FIGS. 7A and 9A, the two-dimensional projection of the base model is shown in white.

Referring back to FIG. 3, the transformed base model and the garment image can be matched (S6). Since the garment image is two-dimensional and the base model is three-dimensional, even if the three-dimensional model is generated using the feature points, the two-dimensional image may not match the surface of the model perfectly, and an error may occur. To solve this problem, in the present embodiment the garment image is expanded so that it covers all the surfaces of the three-dimensional model (S7). Expanding the garment image means using the color and pattern of the existing image to give a continuous color and pattern to areas adjacent to the image; such image processing is well known in the technical field of the present invention.
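The expansion of step S7 can be sketched as padding the image by replicating its edge pixels, so that areas adjacent to the image receive a continuous color. This is an assumed, minimal stand-in: real auto filling would continue patterns as well as flat colors.

```python
def expand_image(image, pad):
    """Pad a nested-list image on every side by replicating its edge pixels,
    carrying the existing colour outward into the adjacent areas."""
    # Extend each row to the left and right with its edge pixels.
    rows = [[row[0]] * pad + row + [row[-1]] * pad for row in image]
    # Extend the top and bottom with copies of the widened edge rows.
    return [rows[0][:]] * pad + rows + [rows[-1][:]] * pad
```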

For example, the error between the three-dimensional base model and the two-dimensional garment image appears in FIGS. 7A and 9A as the area in which the white outline of the two-dimensionally projected three-dimensional model extends beyond the garment image. FIGS. 7B and 9B illustrate an intermediate step of matching the three-dimensional model and the image by the expansion of the garment image described above (also referred to as auto filling). FIGS. 7C and 9C show the final state, in which the white outline of the projected three-dimensional model no longer extends beyond the expanded garment image.

FIGS. 10 and 11 are images showing the above process from another viewpoint. First, FIGS. 10A and 10B show that the two-dimensional garment image does not completely match the three-dimensional model, so that an uncovered area remains. By performing the expansion (or auto-filling) process S7 of the garment image described above, the two-dimensional garment image can be matched so as to completely cover all the surfaces of the three-dimensional model, as shown in FIGS. 11A and 11B. The user can inspect the finally generated three-dimensional model through a user interface such as that shown in FIGS. 11A and 11B and, if there is no error, confirm it as the three-dimensional model of the garment.

Meanwhile, in one embodiment, the user may provide dimensional information associated with one or more details contained in the three-dimensional model, and the model may be further transformed based on it. For example, the user can perform operations such as adjusting the length of the hem of the garment or the depth and position of the neckline. Fine adjustment using the dimensional information may be performed on the final three-dimensional model combined with the image, or in the step of transforming the base model before it is combined with the image.
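Fine-tuning from dimensional information can be sketched as a simple per-vertex adjustment. The hem-length rule and vertex format below are assumptions chosen for illustration, not the patent's actual adjustment operations.

```python
def adjust_hem_length(vertices, hem_y, delta):
    """Shift every vertex at or below the hem line by delta in y, as a
    hypothetical example of applying one piece of dimensional information."""
    return [(x, y + delta) if y <= hem_y else (x, y) for (x, y) in vertices]
```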

Meanwhile, in one embodiment, the 3D model combined with the garment image may be transmitted to an external device, for example an online shopping mall server or an application service server (S8). FIG. 12 shows a user interface for transmitting the three-dimensional model to an external device; it allows information related to the three-dimensional model of the garment, such as an identification code, icon, description, price and currency, name, color, and sex, to be uploaded to the external device. The user browses the garments provided by the seller through the online shopping mall website or an application running on the user device, downloads the three-dimensional model of any garment of interest, and puts it on his or her avatar, thereby obtaining a virtual fitting effect.

An apparatus and method for generating a three-dimensional model of a garment according to the embodiments described above can be at least partially implemented as a computer program and recorded on a computer-readable recording medium. The recording medium according to the embodiments records a program for implementing the apparatus and method, and includes any kind of recording device capable of storing data that can be read by a computer. For example, computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage, and the like. The computer-readable recording medium may also be distributed over a networked computer system so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present embodiments may be easily devised by programmers skilled in the art to which the embodiments belong.

While the invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes and modifications may be made therein, and such modifications fall within the technical scope of the present invention. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

Claims (20)

An image receiving unit configured to receive a two-dimensional image of a garment;
A model generating unit configured to generate a three-dimensional model corresponding to the image using a pre-stored base model; And
and a matching unit configured to transform the image so that the image matches the surface of the three-dimensional model, thereby combining the image with the three-dimensional model,
wherein the model generation unit comprises:
a database in which one or more base models are stored; and
a model transformation unit configured to generate the three-dimensional model by transforming a selected one of the one or more base models to correspond to the image,
and wherein the matching unit generates an area having a color and pattern continuous with the color and pattern of the image, so as to expand the image such that the image covers all the surfaces of the three-dimensional model of the garment.
(deleted)

The apparatus according to claim 1,
Wherein the model generation unit further comprises a user input unit configured to receive a user input associated with the base model.
The apparatus of claim 3,
wherein the user input includes information on the type of garment,
and the selected base model is determined based on the information on the type of the garment.
The apparatus of claim 3,
wherein the user input comprises a plurality of feature points designated on the image,
and wherein the model transformation unit is configured to generate the three-dimensional model by transforming the base model based on the positions of the plurality of feature points.
The apparatus of claim 5, wherein the model transformation unit is further configured to transform the base model so as to correct an error due to a gap between the image and a set of line segments connecting the feature points.
The apparatus of claim 3, wherein the user input comprises dimension information associated with one or more details contained in the three-dimensional model, and the model transformation unit further modifies the generated three-dimensional model based on the dimension information.
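As one hedged illustration of the feature-point-driven deformation recited in the claims above (not the patent's actual algorithm, which is not disclosed here), a base-model vertex can be displaced by inverse-distance-weighted interpolation of the user-designated feature points' displacements. All names are illustrative:

```python
# Hypothetical sketch: move a 2D vertex by inverse-distance-weighted
# interpolation of feature-point displacements (source -> target pairs).
import math

def deform_vertex(vertex, src_points, dst_points):
    weights = []
    dx = dy = 0.0
    for (sx, sy), (tx, ty) in zip(src_points, dst_points):
        d = math.hypot(vertex[0] - sx, vertex[1] - sy)
        if d < 1e-9:              # vertex coincides with a feature point:
            return (tx, ty)       # snap it to that point's target position
        w = 1.0 / d               # closer feature points weigh more
        weights.append(w)
        dx += w * (tx - sx)
        dy += w * (ty - sy)
    total = sum(weights)
    return (vertex[0] + dx / total, vertex[1] + dy / total)
```

For example, with two feature points both displaced by (0.1, 0.0), a vertex equidistant from them moves by the same (0.1, 0.0), which matches the intuition that the mesh follows the designated points.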
delete

The apparatus according to claim 1, further comprising a transmitter configured to transmit the three-dimensional model combined with the image to a seller device or a user device.
The apparatus according to claim 1, wherein the image comprises a front image of the garment and a rear image of the garment.
A method for generating a three-dimensional model of a garment, comprising:
receiving, by a computing device, a two-dimensional image of a garment;
generating, by the computing device, a three-dimensional model corresponding to the image using a pre-stored base model; and
transforming, by the computing device, the image so that the image matches the surface of the three-dimensional model, and combining the image with the three-dimensional model,
wherein generating the three-dimensional model comprises:
selecting one of one or more pre-stored base models; and
transforming the selected base model to correspond to the image,
and wherein combining the image with the three-dimensional model comprises generating an area whose color and pattern are continuous with the color and pattern of the image, so as to expand the image to cover all surfaces of the three-dimensional model, and combining the expanded image with the three-dimensional model.
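The image-expansion step recited above can be hedged into a toy sketch: extend a small color grid by replicating its edge rows and columns, so the extended region continues the border color/pattern of the source photograph and can wrap surfaces the photograph never showed. The function name and data layout are invented for illustration:

```python
# Hypothetical sketch: pad a 2D color grid by edge replication so the
# added border continues the color/pattern of the source image.

def expand_image(grid, pad):
    # replicate the left/right edge cell of each row
    rows = [[row[0]] * pad + list(row) + [row[-1]] * pad for row in grid]
    # replicate the top/bottom rows
    return [list(rows[0])] * pad + rows + [list(rows[-1])] * pad

src = [["red", "blue"],
       ["red", "blue"]]
out = expand_image(src, 1)
# out is a 4x4 grid; each new border cell repeats its nearest source color
```

A production system would presumably synthesize the pattern (not just replicate edges), but edge replication already demonstrates the "continuous color and pattern" property the claim requires.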
delete

The method of claim 11, wherein generating the three-dimensional model further comprises receiving information on the type of the garment from a user before the selecting step, and the selecting comprises determining the base model to be selected based on the information on the type of the garment.
The method of claim 11, wherein generating the three-dimensional model further comprises:
receiving a plurality of feature points designated on the image; and
transforming the base model based on the positions of the plurality of feature points.
The method of claim 14, wherein transforming the base model comprises transforming the base model so as to correct an error due to a gap between the image and a set of line segments connecting the feature points.
The method of claim 11, wherein generating the three-dimensional model further comprises:
receiving dimension information associated with one or more details contained in the three-dimensional model; and
modifying the generated three-dimensional model based on the dimension information.
delete

The method of claim 11, further comprising transmitting the three-dimensional model combined with the image to a seller device or a user device.
The method of claim 11, wherein the image comprises a front image of the garment and a rear image of the garment.
A computer program stored on a medium, combined with hardware, for executing the steps of:
receiving a two-dimensional image of a garment;
generating a three-dimensional model corresponding to the image using a pre-stored base model; and
transforming the image so that the image matches the surface of the three-dimensional model, and combining the image with the three-dimensional model,
wherein generating the three-dimensional model comprises:
selecting one of one or more pre-stored base models; and
transforming the selected base model to correspond to the image,
and wherein combining the image with the three-dimensional model comprises generating an area whose color and pattern are continuous with the color and pattern of the image, so as to expand the image to cover all surfaces of the three-dimensional model, and combining the expanded image with the three-dimensional model.
KR1020150114435A 2015-08-13 2015-08-13 Apparatus, method and computer program for generating 3-dimensional model of clothes KR101767144B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150114435A KR101767144B1 (en) 2015-08-13 2015-08-13 Apparatus, method and computer program for generating 3-dimensional model of clothes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150114435A KR101767144B1 (en) 2015-08-13 2015-08-13 Apparatus, method and computer program for generating 3-dimensional model of clothes

Publications (2)

Publication Number Publication Date
KR20170019917A KR20170019917A (en) 2017-02-22
KR101767144B1 true KR101767144B1 (en) 2017-08-11

Family

ID=58315203

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150114435A KR101767144B1 (en) 2015-08-13 2015-08-13 Apparatus, method and computer program for generating 3-dimensional model of clothes

Country Status (1)

Country Link
KR (1) KR101767144B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210056595A (en) * 2019-11-11 2021-05-20 서울과학기술대학교 산학협력단 Method for virtual try-on system using human pose estimation and re-posing, recording medium and device for performing the method
WO2021107203A1 (en) * 2019-11-28 2021-06-03 주식회사 지이모션 Clothes three-dimensional modeling method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108346174B (en) * 2017-12-31 2020-11-24 广州都市圈网络科技有限公司 Three-dimensional model merging method supporting single model interaction
KR102370255B1 (en) * 2021-05-28 2022-03-04 주식회사 문화새움 An exhibition kiosk that provides design contents for clothing and textile structure objects, which are the target items for exhibition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101116838B1 (en) * 2009-12-28 2012-03-06 성결대학교 산학협력단 Generating Method for exaggerated 3D facial expressions with personal styles
KR101508161B1 (en) * 2013-04-19 2015-04-07 주식회사 버추어패브릭스 Virtual fitting apparatus and method using digital surrogate

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101707707B1 (en) 2012-12-14 2017-02-16 한국전자통신연구원 Method for fiiting virtual items using human body model and system for providing fitting service of virtual items
KR102059356B1 (en) 2013-03-25 2020-02-11 삼성전자주식회사 Virtual fitting device of providing virtual fitting service using motion recognition and method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101116838B1 (en) * 2009-12-28 2012-03-06 성결대학교 산학협력단 Generating Method for exaggerated 3D facial expressions with personal styles
KR101508161B1 (en) * 2013-04-19 2015-04-07 주식회사 버추어패브릭스 Virtual fitting apparatus and method using digital surrogate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Clothing Simulation System for a Web-based 3D Fashion Mall, Journal of the Korea Institute of Information and Communication Engineering, Vol. 13, No. 5 (2009)*

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210056595A (en) * 2019-11-11 2021-05-20 서울과학기술대학교 산학협력단 Method for virtual try-on system using human pose estimation and re-posing, recording medium and device for performing the method
KR102365750B1 (en) * 2019-11-11 2022-02-22 서울과학기술대학교 산학협력단 Method for virtual try-on system using human pose estimation and re-posing, recording medium and device for performing the method
WO2021107203A1 (en) * 2019-11-28 2021-06-03 주식회사 지이모션 Clothes three-dimensional modeling method
US11100725B2 (en) 2019-11-28 2021-08-24 Z-Emotion Co., Ltd. Three-dimensional (3D) modeling method of clothing

Also Published As

Publication number Publication date
KR20170019917A (en) 2017-02-22

Similar Documents

Publication Publication Date Title
CN110609617B (en) Apparatus, system and method for virtual mirror
US10991067B2 (en) Virtual presentations without transformation-induced distortion of shape-sensitive areas
EP3479296A1 (en) System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
WO2015129353A1 (en) Virtual trial-fitting system, virtual trial-fitting program, virtual trial-fitting method, and storage medium in which virtual trial-fitting program is stored
JP2019503906A (en) 3D printed custom wear generation
KR101767144B1 (en) Apparatus, method and computer program for generating 3-dimensional model of clothes
JP6242768B2 (en) Virtual try-on device, virtual try-on method, and program
JP6320237B2 (en) Virtual try-on device, virtual try-on method, and program
GB2526978A (en) Computer implemented methods and systems for generating virtual body models for garment fit visualisation
US11663792B2 (en) Body fitted accessory with physics simulation
US11836862B2 (en) External mesh with vertex attributes
JP6338966B2 (en) Virtual try-on device, virtual try-on system, virtual try-on method, and program
US20160267576A1 (en) System and Method for Controlling and Sharing Online Images of Merchandise
JP2016038811A (en) Virtual try-on apparatus, virtual try-on method and program
US20150269759A1 (en) Image processing apparatus, image processing system, and image processing method
WO2023034832A1 (en) Controlling interactive fashion based on body gestures
US20160071321A1 (en) Image processing device, image processing system and storage medium
CN107609946B (en) Display control method and computing device
WO2023034831A1 (en) Deforming custom mesh based on body mesh
US11854069B2 (en) Personalized try-on ads
KR101556158B1 (en) The social service system based on real image using smart fitting apparatus
CN106773050B (en) A kind of intelligent AR glasses virtually integrated based on two dimensional image
US20240013463A1 (en) Applying animated 3d avatar in ar experiences
KR101277553B1 (en) Method for providing fashion coordination image in online shopping mall using avatar and system therefor
WO2022137307A1 (en) Virtual fitting method, virtual fitting system, and program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant