CN114723517A - Virtual fitting method, device and storage medium - Google Patents

Virtual fitting method, device and storage medium

Info

Publication number
CN114723517A
CN114723517A (application number CN202210271970.XA)
Authority
CN
China
Prior art keywords
clothing
model
picture
fitting
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210271970.XA
Other languages
Chinese (zh)
Inventor
朱蓓蓓
蒋爱玲
陈佳腾
吴佳
贺华勇
许东盈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vipshop Guangzhou Software Co Ltd
Original Assignee
Vipshop Guangzhou Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vipshop Guangzhou Software Co Ltd filed Critical Vipshop Guangzhou Software Co Ltd
Priority claimed from application CN202210271970.XA
Publication of CN114723517A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G06Q 30/0601 - Electronic shopping [e-shopping]
    • G06Q 30/0641 - Shopping interfaces
    • G06Q 30/0643 - Graphical representation of items or shoppers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual fitting method, apparatus and storage medium, relating to the field of clothing technology. The method comprises the following steps: collecting a model image, performing region segmentation on it, and detecting first key points of the model image; acquiring a clothing image, segmenting it, and classifying it to obtain a first-category clothing image; detecting second key points on the first-category clothing image; and fitting the model image to the first-category clothing image by matching the first key points with the second key points to obtain a fitting effect image. The invention can create an accurate and realistic fitting effect and, through fine-grained classification of garment types, is applicable to different types of clothing; it achieves rapid recognition, generation, matching and garment synthesis, giving users a good fitting experience.

Description

Virtual fitting method, device and storage medium
Technical Field
The present invention relates to the field of clothing technology, and in particular to a virtual fitting method, device and storage medium.
Background
At present, virtual fitting is a research hotspot in the industry, and solutions to the online garment-fitting requirement are increasingly sought after. By combining leading-edge technologies such as 3D modeling, augmented reality, virtual reality and artificial intelligence, large online and offline retail and e-commerce companies are constructing immersive shopping environments for their users.
Early fitting workflows were usually addressed with three-dimensional measurement and modeling methods. In recent years, with the continuous development of computer vision, research institutes and industry at home and abroad have carried out various research on fitting algorithms, but the virtual fitting apparatuses actually studied all have the following problems: the two-dimensional fitting effect is flat and lacks the three-dimensional effect of real fitting; three-dimensional fitting methods require shooting and modeling, with high cost and poor results; the simulation process adopts only a segmentation scheme, so the clothing information is not clean enough and the fitting effect is poor; and the publicly available key point and segmentation training data are limited in scale and low in quality, making it difficult to cover the variety of garments in real scenes.
Therefore, on the premise of massive data accumulation and accurate image algorithms, a virtual fitting technique with low computational cost and suitable for everyday use needs to be established.
Disclosure of Invention
In order to solve at least one of the problems mentioned in the background art, the present invention provides a virtual fitting method, device and storage medium, which can create an accurate and realistic fitting effect and, based on fine-grained classification of garment types, is applicable to different types of clothing; it achieves rapid recognition, generation, matching and garment synthesis, giving users a good fitting experience.
The embodiment of the invention provides the following specific technical scheme:
in a first aspect, a virtual fitting method is provided, comprising:
collecting a model image, performing region segmentation on the model image, and detecting first key points of the model image;
acquiring a clothing image, segmenting the clothing image, and classifying it to obtain a first-category clothing image;
detecting second key points on the first-category clothing image;
and fitting the model image to the first-category clothing image by matching the first key points with the second key points to obtain a fitting effect image.
Further, the method further comprises: after the model image is fitted to the first-category clothing image, judging whether an exposed skin area exists in the fitted image;
if so, cropping out the exposed skin area to obtain the fitting effect image;
if not, directly obtaining the fitting effect image.
Further, performing region segmentation on the model image and detecting the first key points of the model image specifically comprises:
segmenting the model image into at least background, hair, arm, upper-body, waist and leg regions through an hrnet segmentation model, and detecting the first key points of key parts on the model image based on an hrnet key point detection model.
Further, the method further comprises: after the fitting effect image is obtained, optimizing it based on the structural and material characteristics of the first-category clothing image.
Further, acquiring the clothing image comprises: acquiring a picture from a database through an efficientNet recognition model, extracting the features of the picture, and recognizing those features to obtain a clothing image with a 3D effect.
Further, segmenting the clothing image and classifying it to obtain a first-category clothing image specifically comprises:
dividing the clothing image into a background region and a clothing region through an hrnet segmentation model;
and assigning the clothing image, according to its clothing category, to the first-category clothing image of the corresponding category through an efficientNet classification model.
Further, the background region is set as a transparent channel.
Further, the second key points comprise one or more of a neckline key point, a sleeve key point, a shoulder key point, a hem key point, a waist key point and a trouser-leg key point.
In a second aspect, a virtual fitting apparatus is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the virtual fitting method described above.
In a third aspect, a computer-readable storage medium is provided, storing computer-executable instructions for performing the virtual fitting method described above.
The embodiment of the invention has the following beneficial effects:
1. With the method in the embodiments of the invention, a model image is obtained by photography, segmented with the relevant segmentation model, and its first key points are detected; a clothing image with a 3D effect is acquired and classified through segmentation and classification models to obtain a first-category clothing image; second key points on the first-category clothing image are detected; the first and second key points are matched and aligned; after alignment the model image is fitted to the first-category clothing image, and the fitting effect image is obtained;
2. After the first-category clothing image is fitted to the model image, any exposed skin area that appears is cropped before the fitting effect image is obtained; moreover, the fitting effect image is optimized based on the structural and material characteristics of the first-category clothing image, making it more realistic and natural and improving the user's shopping experience;
3. In the classification step, an efficientNet classification model divides clothing images into 13 categories such as short-sleeve tops, long-sleeve tops, deep-V tops and waistcoats; second key points are then detected with a deep-learning hrnet key point detection model according to the structural characteristics of each category, and this detailed categorization further optimizes the fitting effect at the key parts of garments with different structures.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a virtual fitting method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an exemplary system that may be used to implement the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, virtual fitting is a research hotspot in the industry, and solutions to the online garment-fitting requirement are increasingly sought after. By combining leading-edge technologies such as 3D modeling, augmented reality, virtual reality and artificial intelligence, large online and offline retail and e-commerce companies are constructing immersive shopping environments for their users. Early fitting workflows were usually addressed with three-dimensional measurement and modeling methods. In recent years, with the continuous development of computer vision, research institutes and industry at home and abroad have carried out various research on fitting algorithms, but the virtual fitting apparatuses actually studied all have the following problems: the two-dimensional fitting effect is flat and lacks the three-dimensional effect of real fitting; three-dimensional fitting methods require shooting and modeling, with high cost and poor results; the simulation process adopts only a segmentation scheme, so the clothing information is not clean enough and the fitting effect is poor; and the publicly available key point and segmentation training data are limited in scale and low in quality, making it difficult to cover the variety of garments in real scenes. In view of these problems, the present application provides a virtual fitting method, device and storage medium that can create an accurate and realistic fitting effect and, based on fine-grained classification of garment types, is applicable to different types of clothing; it achieves rapid recognition, generation, matching and garment synthesis, giving users a good fitting experience.
Example one
Provided is a virtual fitting method, comprising the following steps:
step S1: collecting a model image, carrying out region segmentation on the model image, and detecting a first key point of the model image.
Specifically, the model image is photographed by an operator and contains a whole-body view of the model. A number of model data samples are collected to build a data set and train an hrnet segmentation model; after the model converges, it is used to segment the model image into background, hair, arm, upper-body, waist and leg regions.
First key points of key parts on the model image are detected based on an hrnet key point detection model. The first key points comprise two points on the left shoulder, two on the right shoulder, two on the abdomen boundary and two on the waist boundary, eight points in total. Specifically, image data on the model image are collected, the eight key points are annotated, rotation-translation vector labels of the model are computed, and training and test data sets are established; the resulting data sets are fed into an improved hrnet model for training, and the trained hrnet model is used to detect the key points in the model image.
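The eight-point annotation described above can be represented as a simple name-to-coordinate mapping. The following sketch is illustrative only: the key point names are assumptions, since the patent fixes the eight locations but not a schema.

```python
# Hypothetical annotation structure for the eight model key points described
# above (names are illustrative; the patent does not define an exact schema).
# Each key point is an (x, y) pixel coordinate on the model image.

MODEL_KEYPOINT_NAMES = [
    "left_shoulder_outer", "left_shoulder_inner",
    "right_shoulder_inner", "right_shoulder_outer",
    "abdomen_left", "abdomen_right",
    "waist_left", "waist_right",
]

def make_annotation(points):
    """Pair the eight detected (x, y) points with their names."""
    if len(points) != len(MODEL_KEYPOINT_NAMES):
        raise ValueError("expected exactly eight key points")
    return dict(zip(MODEL_KEYPOINT_NAMES, points))

ann = make_annotation([(40, 60), (55, 62), (95, 62), (110, 60),
                       (50, 140), (100, 140), (52, 170), (98, 170)])
print(ann["waist_left"])  # (52, 170)
```

A downstream matcher can then look up points by name rather than by index, which makes per-category key point sets easier to handle.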
In the present application, a deep-learning hrnet model is used to detect the key points; by fusing the extraction of multiple features, the positioning accuracy of key points on the model image is improved.
Step S2: acquiring a clothing image, segmenting it, and classifying it to obtain a first-category clothing image.
Specifically, the clothing image is divided into a background region and a clothing region through an hrnet segmentation model, and the background region is set as a transparent channel. According to clothing category, the clothing images are assigned to first-category clothing images of the corresponding category through an efficientNet classification model. Second key points on the first-category clothing image are then detected.
Specifically, a picture is obtained from a database through an efficientNet recognition model, its features are extracted, and those features are recognized to obtain a clothing image with a 3D effect. Because the database contains both 2D and 3D pictures, a picture database is first built by collecting images from Vipshop's massive product catalogue. The collected product images are then preprocessed with methods such as grayscale conversion, median filtering and image enhancement, and the preprocessed picture data set is partitioned. Using a transfer learning method, the original efficientNet weights are retained on the data set and new weights are trained by fine-tuning; the trained efficientNet recognition model is saved and then used to classify and recognize the preprocessed picture data set, yielding the required clothing images with a 3D effect.
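The preprocessing mentioned above (grayscale conversion followed by median filtering) can be sketched in pure Python. This is a minimal illustration, not the patent's implementation: a real pipeline would use an image library, and pixel values here are assumed to be 0-255 RGB triples.

```python
# Minimal sketch of the preprocessing step: luminance grayscale conversion
# followed by a 3x3 median filter (border pixels are left unchanged).

def to_gray(img):
    """Luminance grayscale for a 2-D grid of (r, g, b) pixels."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

def median_filter3(gray):
    """3x3 median filter; suppresses isolated noise pixels."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(gray[yy][xx]
                            for yy in (y - 1, y, y + 1)
                            for xx in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the nine window values
    return out

# A lone bright speck is removed by the median filter.
gray = [[10] * 3 for _ in range(3)]
gray[1][1] = 255
print(median_filter3(gray)[1][1])  # 10
```

The median filter is what makes this step robust to salt-and-pepper noise in scraped product photos, which plain averaging would smear instead of removing.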
Further, the clothing image with the 3D effect is divided into two regions, background and clothing, through a deep-learning hrnet segmentation model, and the background is set as a transparent channel. The clothing images are then further divided into 13 categories such as short-sleeve tops, long-sleeve tops, deep-V tops and waistcoats, and the key features of each garment are acquired separately. Using a transfer learning method, the original efficientNet weights are retained on the data set and new weights are trained by fine-tuning; the trained efficientNet classification model is saved and then used to classify the clothing images according to the features of the 13 different garment categories.
Based on the deep-learning hrnet key point detection model and the key parts and key point numbers of the different garment types, the key points of each classified clothing image are acquired as the second key points. The second key points include neckline, sleeve, shoulder, hem, waist and trouser-leg key points, among others. For example: for a short-sleeved top, the key points to acquire are the two neckline key points, the sleeve key points, the left and right shoulder key points, and so on; for trousers, the waist and trouser-leg key points are required; for a skirt, the waistband and hem key points are required, among others. The number of key points for each garment type can be adjusted according to actual needs.
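The per-category key point sets in the examples above can be captured in a lookup table. This sketch is illustrative: only the categories the text names as examples are listed (the full 13-category list is not enumerated), and the key point names are assumptions.

```python
# Illustrative mapping from garment category to the key points detected on it,
# following the worked examples above. Names and categories are hypothetical.

CATEGORY_KEYPOINTS = {
    "short_sleeve_top": ["neckline_left", "neckline_right",
                         "sleeve_left", "sleeve_right",
                         "shoulder_left", "shoulder_right"],
    "trousers": ["waist_left", "waist_right",
                 "trouser_leg_left", "trouser_leg_right"],
    "skirt": ["waistband", "hem_left", "hem_right"],
}

def required_keypoints(category):
    """Key points the detector must output for a given garment category."""
    return CATEGORY_KEYPOINTS.get(category, [])

print(len(required_keypoints("trousers")))  # 4
```

Keeping the table data-driven matches the text's note that the number of key points per garment type can be adjusted to actual needs: adding a category is a table edit, not a code change.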
Step S3: fitting the model image to the first-category clothing image by matching the first key points with the second key points to obtain a fitting effect image.
Specifically, suppose the model image has a size of 102 × 102 mm. When the clothing image to be fitted is a short-sleeved top, the upper-body region of the model image is fixed, and the clothing image is scaled to an appropriate size according to the coordinate information of the first key points on the model image and of the second key points on the clothing image. The coordinates of the first key points and the second key points are then aligned. Because the background region is set as an alpha transparent channel, when the model image and the clothing image are pasted together, the alpha channel distinguishes which areas of the composite effect image use the model image, which use the clothing image, and which directly use the background of the model image.
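The scaling and compositing step above can be sketched numerically: derive a scale factor from one matched key point pair on each image, then blend garment pixels over model pixels through the garment's alpha channel. This is a hedged illustration; coordinates, pixel values and function names are assumptions, not the patent's implementation.

```python
import math

# Sketch of the alignment step: shoulder-span ratio gives the garment scale,
# and per-pixel alpha blending realizes the transparent-channel compositing.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def garment_scale(model_left, model_right, cloth_left, cloth_right):
    """Scale the garment so its shoulder span matches the model's."""
    return distance(model_left, model_right) / distance(cloth_left, cloth_right)

def composite(model_px, cloth_px, alpha):
    """Per-channel alpha blend; alpha=0 keeps the model pixel."""
    return tuple(round(alpha * c + (1 - alpha) * m)
                 for c, m in zip(cloth_px, model_px))

s = garment_scale((40, 60), (110, 60), (0, 0), (140, 0))
print(s)  # 0.5
print(composite((200, 180, 170), (30, 30, 120), 1.0))  # (30, 30, 120)
```

In the transparent background region the garment alpha is 0, so the model image (or its background) shows through unchanged, which is exactly the role the text assigns to the alpha channel.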
After the model image is fitted to the first-category clothing image, the fitted image is checked for an exposed skin area; if one exists, it is cropped out to obtain the fitting effect image; if not, the fitting effect image is obtained directly.
Specifically, for the shoulder key points on a short-sleeved top, the arm, upper-body and waist regions of the pasted effect image are checked for exposed areas, and any exposed area is cropped out. For skirts and trousers, the two waist key points of the model image are aligned with the trouser-waist key points of the 3D clothing image; after alignment the clothing image is pasted onto the model image, the waist, hip and leg regions of the pasted effect image are checked for exposed skin areas, and any such areas are cropped.
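The exposed-skin check above amounts to a mask difference: pixels inside the relevant body regions that the garment mask does not cover. A minimal sketch, assuming boolean masks from the segmentation step (function names are illustrative):

```python
# Hedged sketch of the exposed-skin check: any pixel in the checked body
# region (arm / upper body / waist, per the text) that the garment mask does
# not cover is treated as exposed and cropped out of the effect image.

def exposed_skin_mask(body_mask, garment_mask):
    """Pixels that belong to the checked body region but are not covered."""
    return [[b and not g for b, g in zip(brow, grow)]
            for brow, grow in zip(body_mask, garment_mask)]

def crop_exposed(image, exposed, background_px=(0, 0, 0)):
    """Replace exposed-skin pixels with a background value."""
    return [[background_px if e else px for px, e in zip(irow, erow)]
            for irow, erow in zip(image, exposed)]

body = [[True, True], [False, True]]
garment = [[True, False], [False, False]]
exposed = exposed_skin_mask(body, garment)
print(exposed)  # [[False, True], [False, True]]
```

Whether the cropped pixels are filled with background or inpainted is a design choice the text leaves open; this sketch uses a flat background fill for simplicity.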
Step S4: after the fitting effect image is obtained, optimizing it based on the structural and material characteristics of the first-category clothing image.
Specifically, starting from the garment's fabric, the deformability of the cloth is considered: each fixed point of the cloth moves under the action of external force, the cloth's internal elasticity, and friction. Tight-fitting parts of the body where the cloth must stretch, such as the hips, are detected, and a fitting method continuously adjusts the shape of the grid points of the clothing image until the target cloth deformation is reached. The cloth state at the current moment is then analysed and computed from the cloth particles at the previous moment, the stretch distribution of the particles at the current moment is analysed, the wrinkle curves at the current moment are computed by implicit integration, and finally the shape of the cloth after self-collision is obtained, yielding a more realistic and natural fitting effect image.
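The particle-based cloth model above can be illustrated at its smallest scale. The text describes implicit integration over a full particle grid; the sketch below deliberately simplifies to a single spring-damper between two particles with semi-implicit (symplectic) Euler steps, and all constants are illustrative assumptions, not values from the patent.

```python
# Greatly simplified cloth-particle sketch: one 1-D spring-damper between two
# particles, stepped with semi-implicit Euler. The patent's method uses
# implicit integration over a grid; this only shows the underlying idea that
# elasticity pulls stretched cloth back toward its rest shape.

def spring_step(p1, p2, v1, v2, rest_len, k=10.0, damping=0.5, dt=0.01):
    """One step: a stretched spring pulls both particles together."""
    d = p2 - p1
    force = k * (d - rest_len) + damping * (v2 - v1)  # spring + damper
    v1_new = v1 + force * dt
    v2_new = v2 - force * dt
    return p1 + v1_new * dt, p2 + v2_new * dt, v1_new, v2_new

# A spring stretched past its rest length contracts toward it.
p1, p2, v1, v2 = 0.0, 2.0, 0.0, 0.0
for _ in range(200):
    p1, p2, v1, v2 = spring_step(p1, p2, v1, v2, rest_len=1.0)
print(p2 - p1 < 2.0)  # True
```

A production cloth solver adds gravity, friction, self-collision handling and an implicit solve per step for stability at large time steps, as the text indicates.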
The order of steps S1-S4 is not fixed and may be adjusted according to actual conditions.
Example two
Corresponding to the above embodiments, the present application provides a virtual fitting apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the virtual fitting method described above. The processor comprises a model image processing module, a clothing image processing module, a matching module and an optimization module.
The model image processing module is used to collect a model image, perform region segmentation on it, and detect the first key points of the model image.
The clothing image processing module is used to acquire a clothing image, segment it, and classify it to obtain a first-category clothing image; it also detects the second key points on the first-category clothing image.
The matching module is used to fit the model image to the first-category clothing image by matching the first key points with the second key points, obtaining a fitting effect image.
The optimization module is used to optimize the fitting effect image, after it is obtained, based on the structural and material characteristics of the first-category clothing image.
Specifically, a model image is obtained by photography, segmented with the relevant segmentation model, and its first key points are detected; a clothing image with a 3D effect is acquired and classified through segmentation and classification models to obtain a first-category clothing image; the second key points on the first-category clothing image are detected; the first and second key points are matched and aligned; after alignment the model image is fitted to the first-category clothing image and the fitting effect image is obtained.
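The data flow among the four modules described above can be sketched as a small pipeline. The class and method names are illustrative, not taken from the patent; the stand-in callables only demonstrate how results pass between modules.

```python
# Hypothetical orchestration of the four modules: model image processing,
# clothing image processing, key point matching, and optimization.

class VirtualFittingPipeline:
    def __init__(self, model_proc, clothing_proc, matcher, optimizer):
        self.model_proc = model_proc        # segments model image, finds first key points
        self.clothing_proc = clothing_proc  # segments/classifies clothing, finds second key points
        self.matcher = matcher              # aligns key points and composites the images
        self.optimizer = optimizer          # refines the fitted effect image

    def run(self, model_img, clothing_img):
        regions, kp1 = self.model_proc(model_img)
        garment, kp2 = self.clothing_proc(clothing_img)
        fitted = self.matcher(regions, kp1, garment, kp2)
        return self.optimizer(fitted)

# Tiny stand-in callables to show the data flow end to end.
pipe = VirtualFittingPipeline(
    model_proc=lambda img: ("regions", ["kp1"]),
    clothing_proc=lambda img: ("garment", ["kp2"]),
    matcher=lambda r, k1, g, k2: f"fitted({r}+{g})",
    optimizer=lambda f: f + ":optimized",
)
print(pipe.run("model.png", "shirt.png"))  # fitted(regions+garment):optimized
```

Wiring the modules through constructor injection mirrors the apparatus claim: each module can be replaced (e.g., a different segmentation backbone) without changing the pipeline itself.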
Fig. 2 shows an exemplary system in the present embodiment.
As shown in fig. 2, the system can serve as any of the virtual fitting apparatuses in the above embodiments. In particular, the system may include one or more computer-readable media (e.g., system memory or NVM/storage) having instructions, and one or more processors coupled with the computer-readable media and configured to execute the instructions to implement the modules performing the actions described herein.
For one embodiment, the system control module may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) and/or any suitable device or component in communication with the system control module.
The system control module may include a memory controller module to provide an interface to the system memory. The memory controller module may be a hardware module, a software module, and/or a firmware module.
System memory may be used, for example, to load and store data and/or instructions for the system. For one embodiment, the system memory may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module may include one or more input/output (I/O) controllers to provide an interface to the NVM/storage and communication interface(s).
For example, the NVM/storage may be used to store data and/or instructions. The NVM/storage may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s), e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives.
The NVM/storage may include storage resources that are physically part of the device on which the system is installed, or it may be accessible by the device and not necessarily part of the device. For example, the NVM/storage may be accessible over a network via the communication interface(s).
The communication interface(s) may provide an interface for the system to communicate over one or more networks and/or with any other suitable device. The system may wirelessly communicate with one or more components of the wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) may be packaged together with logic for one or more controllers (e.g., memory controller modules) of the system control module. For one embodiment, at least one of the processor(s) may be packaged together with logic for one or more controllers of the system control module to form a System In Package (SiP). For one embodiment, at least one of the processor(s) may be integrated on the same die with logic for one or more controllers of the system control module. For one embodiment, at least one of the processor(s) may be integrated on the same die with logic of one or more controllers of a system control module to form a system on a chip (SoC).
In various embodiments, the system may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, the system may have more or fewer components and/or different architectures. For example, in some embodiments, the system includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
EXAMPLE III
Corresponding to the above embodiments, there is provided a computer-readable storage medium storing computer-executable instructions for performing the virtual fitting method as described above. In the present embodiment, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory and various read-only memories (ROM, PROM, EPROM, EEPROM); magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, magnetic tape, CD, DVD); or any other medium, now known or later developed, that can store computer-readable information/data for use by a computer system.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A virtual fitting method, comprising:
collecting a model image, performing region segmentation on the model image, and detecting first key points of the model image;
acquiring a clothing image, segmenting the clothing image, and classifying it to obtain a first-category clothing image;
detecting second key points on the first-category clothing image;
and fitting the model image to the first-category clothing image by matching the first key points with the second key points to obtain a fitting effect image.
2. The method of claim 1, further comprising: after the model image is fitted to the first-class clothing image, determining whether an exposed skin area exists in the fitted image;
if so, cutting out the exposed skin area to obtain the fitting effect image;
if not, directly obtaining the fitting effect image.
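The "if so" branch of claim 2 amounts to masking out skin pixels that the pasted garment failed to cover. A toy NumPy sketch (the label id, array sizes, and fill value are illustrative assumptions):

```python
import numpy as np

SKIN = 2  # hypothetical label id for exposed skin in the parsing mask

def remove_exposed_skin(composite, parse_mask, garment_mask, fill=255):
    """Cut away skin pixels still exposed outside the garment region
    after the garment has been pasted onto the model."""
    out = composite.copy()
    exposed = (parse_mask == SKIN) & ~garment_mask
    out[exposed] = fill
    return out

# Toy 3x3 example: one skin pixel is left uncovered by the garment.
composite = np.zeros((3, 3), dtype=np.uint8)
parse_mask = np.array([[0, 0, 0],
                       [0, SKIN, SKIN],
                       [0, 0, 0]])
garment_mask = np.array([[False, False, False],
                         [False, True, False],
                         [False, False, False]])
result = remove_exposed_skin(composite, parse_mask, garment_mask)
```

Only the skin pixel outside the garment mask is filled; skin already covered by the garment is left untouched.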
3. The method according to claim 1 or 2, wherein performing region segmentation on the model image and detecting the first key points of the model image specifically comprises:
segmenting the model image into at least background, hair, arm, upper-body, waist, and leg regions through an HRNet segmentation model, and detecting the first key points of key parts on the model image based on an HRNet key point detection model.
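The two HRNet outputs in claim 3 are typically decoded as follows: the segmentation head's per-region score maps collapse to a label map by arg-max, and each key point is read off as the peak of its heatmap. A minimal NumPy sketch of that post-processing (the region list and synthetic inputs are illustrative, not the patent's):

```python
import numpy as np

REGIONS = ["background", "hair", "arms", "upper_body", "waist", "legs"]

def logits_to_label_map(logits):
    """Collapse per-region score maps (R, H, W) into one label map,
    each pixel holding the index of its winning REGIONS entry."""
    return logits.argmax(axis=0)

def keypoints_from_heatmaps(heatmaps):
    """Read each key point as the arg-max location of its heatmap:
    (K, H, W) -> (K, 2) array of (x, y) pixel coordinates."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1).argmax(axis=1)
    return np.stack([flat % W, flat // W], axis=1)

# Synthetic score maps: "upper_body" (index 3) wins at pixel (0, 0).
logits = np.zeros((len(REGIONS), 2, 2))
logits[3, 0, 0] = 1.0
labels = logits_to_label_map(logits)

# Synthetic heatmaps with peaks at known locations.
hm = np.zeros((2, 8, 8))
hm[0, 2, 5] = 1.0   # key point 0 at (x=5, y=2)
hm[1, 6, 1] = 1.0   # key point 1 at (x=1, y=6)
kps = keypoints_from_heatmaps(hm)
```

A real pipeline would run the HRNet networks to produce `logits` and `hm`; only the decoding step is shown here.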
4. The method according to claim 1 or 2, further comprising: after the fitting effect image is obtained, optimizing the fitting effect image based on structural features and material features of the first-class clothing image.
5. The method of claim 1, wherein acquiring the clothing image comprises: acquiring a picture from a database, extracting features of the picture through an EfficientNet recognition model, and recognizing the features to obtain a clothing image with a 3D effect.
6. The method according to claim 1, wherein segmenting the clothing image and classifying the clothing image to obtain the first-class clothing image specifically comprises:
segmenting the clothing image into a background region and a clothing region through an HRNet segmentation model;
and classifying the clothing image into a first-class clothing image of the corresponding category through an EfficientNet classification model according to the clothing category.
7. The method of claim 6, wherein the background region is set as a transparent channel.
8. The method of claim 6 or 7, wherein the second key points comprise one or more of a neckline key point, a sleeve key point, a shoulder key point, a hem key point, a waist key point, and a trouser-leg key point.
9. A virtual fitting apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the virtual fitting method of any one of claims 1-8.
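Setting the background region as a transparent channel, as in claim 7, is equivalent to turning the garment/background mask into an alpha plane so that background pixels vanish when the garment is pasted onto the model. A small NumPy sketch (function name and array sizes are illustrative):

```python
import numpy as np

def background_to_alpha(rgb, clothing_mask):
    """Attach an alpha channel built from the segmentation mask:
    clothing pixels get alpha 255, background pixels alpha 0 (RGBA out)."""
    alpha = np.where(clothing_mask, 255, 0).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Toy 2x2 garment image with a checkerboard clothing mask.
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False],
                 [False, True]])
rgba = background_to_alpha(rgb, mask)
```

The resulting RGBA image can be alpha-composited onto the model image so only the clothing region is visible.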
10. A computer-readable storage medium storing computer-executable instructions for performing the virtual fitting method of any one of claims 1-8.
CN202210271970.XA 2022-03-18 2022-03-18 Virtual fitting method, device and storage medium Pending CN114723517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210271970.XA CN114723517A (en) 2022-03-18 2022-03-18 Virtual fitting method, device and storage medium


Publications (1)

Publication Number Publication Date
CN114723517A true CN114723517A (en) 2022-07-08

Family

ID=82238264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210271970.XA Pending CN114723517A (en) 2022-03-18 2022-03-18 Virtual fitting method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114723517A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105006014A (en) * 2015-02-12 2015-10-28 上海交通大学 Method and system for realizing fast fitting simulation of virtual clothing
CN107578323A (en) * 2017-10-10 2018-01-12 中国科学院合肥物质科学研究院 The three-dimensional online virtual fitting system of real human body
CN110349201A (en) * 2019-07-07 2019-10-18 创新奇智(合肥)科技有限公司 A kind of suit length measurement method, system and electronic equipment neural network based
CN110363867A (en) * 2019-07-16 2019-10-22 芋头科技(杭州)有限公司 Virtual dress up system, method, equipment and medium
KR20190123255A (en) * 2019-10-24 2019-10-31 주식회사 자이언소프트 Virtual fitting system
CN111508079A (en) * 2020-04-22 2020-08-07 深圳追一科技有限公司 Virtual clothing fitting method and device, terminal equipment and storage medium
CN111667479A (en) * 2020-06-10 2020-09-15 创新奇智(成都)科技有限公司 Pattern verification method and device for target image, electronic device and storage medium
WO2021057810A1 (en) * 2019-09-29 2021-04-01 深圳数字生命研究院 Data processing method, data training method, data identifying method and device, and storage medium
CN113191843A (en) * 2021-04-28 2021-07-30 北京市商汤科技开发有限公司 Simulation clothing fitting method and device, electronic equipment and storage medium
KR20220000123A (en) * 2020-06-25 2022-01-03 주식회사 큐브릭디지털 Method for 2d virtual fitting based key-point
CN114092572A (en) * 2021-11-03 2022-02-25 奇酷软件(深圳)有限公司 Clothing color analysis method, system, storage medium and computer equipment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745990A (en) * 2024-02-21 2024-03-22 虹软科技股份有限公司 Virtual fitting method, device and storage medium
CN117745990B (en) * 2024-02-21 2024-05-07 虹软科技股份有限公司 Virtual fitting method, device and storage medium

Similar Documents

Publication Publication Date Title
US11321769B2 (en) System and method for automatically generating three-dimensional virtual garment model using product description
Yang et al. Physics-inspired garment recovery from a single-view image
CN108229559B (en) Clothing detection method, clothing detection device, electronic device, program, and medium
CN107111833B (en) Fast 3D model adaptation and anthropometry
CN104978762B (en) Clothes threedimensional model generation method and system
Yamaguchi et al. Parsing clothing in fashion photographs
TWI559242B (en) Visual clothing retrieval
CN108229496B (en) Clothing key point detection method and device, electronic device, storage medium, and program
Triantafyllou et al. A geometric approach to robotic unfolding of garments
CN108229324A (en) Gesture method for tracing and device, electronic equipment, computer storage media
CN109614925A (en) Dress ornament attribute recognition approach and device, electronic equipment, storage medium
CN110647906A (en) Clothing target detection method based on fast R-CNN method
CN108021847B (en) Apparatus and method for recognizing facial expression, image processing apparatus and system
CN111445426B (en) Target clothing image processing method based on generation of countermeasure network model
CN112330383A (en) Apparatus and method for visual element-based item recommendation
CN112905889A (en) Clothing searching method and device, electronic equipment and medium
CN111160225A (en) Human body analysis method and device based on deep learning
KR20170016578A (en) Clothes Fitting System And Operation Method of Threof
CN114375463A (en) Method for estimating nude body shape from hidden scan of body
CN114723517A (en) Virtual fitting method, device and storage medium
Hu et al. Recovery of upper body poses in static images based on joints detection
US11869152B2 (en) Generation of product mesh and product dimensions from user image data using deep learning networks
Zhang et al. On the correlation among edge, pose and parsing
Rogez et al. Exploiting projective geometry for view-invariant monocular human motion analysis in man-made environments
Ileperuma et al. An enhanced virtual fitting room using deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination