CN112070572A - Virtual fitting method, device, storage medium and computer equipment - Google Patents


Info

Publication number
CN112070572A
CN112070572A
Authority
CN
China
Prior art keywords
information
user
current user
human body
clothes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010745698.5A
Other languages
Chinese (zh)
Inventor
黄柏衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xizhi Electronics Co ltd
Original Assignee
Shenzhen Xizhi Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xizhi Electronics Co ltd filed Critical Shenzhen Xizhi Electronics Co ltd
Priority to CN202010745698.5A
Publication of CN112070572A
Legal status: Pending

Classifications

    • G06Q30/0643 Graphical representation of items or shoppers (Electronic shopping; Shopping interfaces)
    • G06Q20/325 Payment architectures, schemes or protocols using wireless networks
    • G06Q30/0631 Item recommendations (Electronic shopping)
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06V40/171 Local features and components; Facial parts, e.g. glasses; Geometrical relationships


Abstract

The invention discloses a virtual fitting method, a virtual fitting device, a storage medium and computer equipment. The method comprises the following steps: acquiring body information of the current user and generating a human body model; acquiring the user's head portrait and body data, and importing the head portrait onto the head of the human body model; recommending clothes to the current user according to season, style, user age, price, brand, discount, skin color and posture information; and acquiring clothes-selection instruction information, whereupon the human body model completes the virtual fitting. Compared with the prior art, this scheme lets the user select suitable clothes in a short time, improving the shopping experience.

Description

Virtual fitting method, device, storage medium and computer equipment
Technical Field
The invention relates to the field of mobile internet, in particular to a virtual fitting method, a virtual fitting device, a storage medium and computer equipment.
Background
With the development of mobile internet technology, virtual fitting systems combining natural-interaction technology and graphics technology have emerged. In the traditional shopping process, a user shopping online can only see the style of the clothes, not the effect of wearing them; when the user visits a physical clothing shop, particularly on holidays, the user must wait a long time to try on clothing, and if a garment does not fit, the user must queue again, so the fitting experience is poor.
The traditional fitting system focuses only on the fitting experience and neglects the need to select clothes that suit the user. A traditional fitting system holds a large number of clothes, but the user cannot pick out suitable ones in a short time.
Disclosure of Invention
The main aim of the invention is to provide a virtual fitting method that solves the technical problem of enabling a user to quickly screen out suitable clothes in a fitting system.
The invention provides a virtual fitting method, which comprises the following steps:
acquiring body information of a current user, and generating a human body model;
acquiring a user head portrait and body data, and importing the user head portrait into the head of the human body model;
recommending clothes to the current user according to the season, style, user age, price, brand, discount, skin color and posture information;
and acquiring clothes-selection instruction information, whereupon the human body model completes the virtual fitting.
Preferably, the step of acquiring the head portrait and the body data of the user and importing the head portrait of the user into the head of the human body model includes:
acquiring a face picture of a current user by photographing;
identifying the positions of a nose, a chin and a cheek of the face picture, and detecting the bright areas of the nose, the chin and the cheek to obtain the skin information of the current user;
according to the skin information, beautifying the face picture, and simultaneously presenting the face pictures before and after beautifying to the current user;
judging whether the beauty instruction information of the current user is received or not;
and if so, importing the head portrait of the user after beautifying into the head of the preset human body model.
Preferably, the step of obtaining the face picture of the current user by taking a picture includes:
acquiring a photo of a current user;
preprocessing the photo, cropping out the environmental image information, and keeping the face and hair;
and matting out the face of the current user according to preset specified feature points to generate a face picture.
Preferably, the step of obtaining the physical information of the current user and generating the human body model includes:
generating trousers and clothes size options for the current user to select;
generating first body information according to the trousers size and the clothes size selected by the current user, wherein the body information comprises height information and three-dimensional information;
scanning to obtain the body of the current user and generating second body information;
judging whether the error between the first body information and the second body information is within a preset range or not;
if so, integrating the first body information data and the second body information data according to the preset weight to generate body information;
and generating a human body model according to the human body information.
Preferably, after the step of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
recording the expression of the current user within a preset time period;
when the current user tries on clothes, judging whether the expression of the current user is positive;
if yes, acquiring the type of the current clothes, and increasing the weight of the type;
and integrating the weight of the type, and selecting the clothes type with the highest weight value to recommend to the current user.
Preferably, after the step of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
taking the front view of the human body model as a first direction, and judging whether first confirmation instruction information is acquired or not;
if so, rotating the human body model by ninety degrees towards the preset direction, and judging whether second confirmation instruction information is acquired;
if so, continuing to rotate the human body model by ninety degrees in the preset direction, and judging whether third confirmation instruction information is acquired;
if so, continuing to rotate the human body model by ninety degrees in the preset direction, and judging whether fourth confirmation instruction information is acquired;
and if so, the human body model continues to rotate ninety degrees in the preset direction, and a payment QR code is generated.
Preferably, after the step of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
obtaining the style and color of the current clothes;
and generating a related clothing recommendation according to the style and the color of the current clothing and clothing matching information prestored in the database.
The present invention also provides a virtual fitting apparatus, comprising:
the first acquisition module is used for acquiring the body information of the current user and generating a human body model;
the second acquisition module is used for acquiring the head portrait and the body data of the user and guiding the head portrait of the user into the head of the human body model;
the recommendation module is used for recommending clothes to the current user according to the season, style, user age, price, brand, discount, skin color and posture information;
and the fitting module is used for acquiring the instruction information of selecting clothes and finishing virtual fitting by the human body model.
The present invention also provides a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which, when executed, implements the virtual fitting method as described above.
The invention also provides a computer device, which is characterized by comprising a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the virtual fitting method when executing the computer program.
The invention has the beneficial effect that, through this scheme, the user can select suitable clothes in a short time, improving the shopping experience.
Drawings
FIG. 1 is a schematic flow chart of a virtual fitting method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the virtual fitting system of FIG. 1;
FIG. 3 is a schematic diagram of a work of acquiring a face picture according to a virtual fitting method of the present invention;
FIG. 4 is a schematic structural diagram of a virtual fitting apparatus according to a first embodiment of the present invention;
FIG. 5 is a block diagram of an embodiment of a storage medium provided in the present application;
fig. 6 is a block diagram of an embodiment of a computer device provided in the present application.
Description of reference numerals:
1. a first acquisition module; 2. a second acquisition module; 3. a recommendation module; 4. a fitting module;
100. a storage medium; 200. a computer program; 300. a computer device; 400. a processor.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the present invention provides a virtual fitting method, including:
s1: acquiring body information of a current user, and generating a human body model;
s2: acquiring a user head portrait and body data, and importing the user head portrait into the head of the human body model;
s3: recommending clothes to the current user according to the season, style, user age, price, brand, discount, skin color and posture information;
s4: and acquiring instruction information of clothes selection, and finishing virtual fitting by the human body model.
In the embodiment of the invention, the method is applied to a virtual fitting system comprising a mobile terminal, an intelligent fitting mirror and a server. The mobile terminal is used for payment and uploading data; the intelligent fitting mirror is used for fitting; the server processes the data received from the mobile terminal and the intelligent fitting mirror and transmits the related data back to them. The intelligent fitting mirror acquires the body information of the current user to generate a human body model, where the body information comprises face information, height information, stature information, three-dimensional (bust, waist and hip) information and the like. The intelligent fitting mirror recommends clothes to the current user in list form according to season, style, user age, price, brand, discount, skin color and posture information. The current user screens the clothes and clicks to try on, generating clothes-selection instruction information; the intelligent fitting mirror acquires this instruction information from the current user, and the human body model completes the virtual fitting. Through this arrangement, the user can select suitable clothes in a short time, improving the shopping experience.
Further, the step S2 of acquiring the user's head portrait and body data, and importing the user's head portrait onto the head of the human body model, comprises:
s21: acquiring a face picture of a current user by photographing;
s22: identifying the positions of a nose, a chin and a cheek of a face picture, and detecting the bright areas of the nose, the chin and the cheek to obtain the skin information of the current user;
s23: according to the skin information, beautifying the face picture, and simultaneously presenting the face pictures before and after beautifying to the current user;
s24: judging whether the beauty instruction information of the current user is received or not;
s25: and if so, importing the head portrait of the user after beautifying into the head of the preset human body model.
In the embodiment of the invention, the intelligent fitting mirror acquires the face picture of the current user by photographing and uploads it to the server. The server identifies the positions of the nose, chin and cheeks in the face picture, detects their shiny regions, and calculates the ratio of the shiny area to the area of the corresponding feature. If the shiny (i.e. oily) regions cover more than 70% of the nose, chin and cheeks, the current user has oily skin; between 40% and 70%, mixed skin; between 20% and 40%, neutral skin; below 20%, dry skin. The face picture is then beautified according to the skin information of the current user, and the pictures before and after beautification are presented to the user side by side. Specifically: oily skin tends to have a coarse texture, large pores and strong greasy highlights, so beautification focuses on refining the texture, shrinking the pores and reducing the greasy highlights so the skin looks smooth and fine; mixed skin is oily in the "T" zone of the face (forehead, nose, perioral area and lower jaw) and dry on the cheeks, so beautification focuses on reducing the oily highlights in the "T" zone; neutral skin is the ideal state, so only the darker areas need modest brightening; dry skin tends to look dull, so beautification focuses on brightening the dark areas as appropriate.
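The skin-type thresholds above map directly to a small classifier. The following sketch assumes the shiny-area ratio has already been computed from the detected bright regions of the nose, chin and cheeks; the function name and boundary handling at exact threshold values are illustrative choices, not specified by the patent.

```python
def classify_skin_type(shiny_ratio: float) -> str:
    """Classify skin type from the fraction of the nose/chin/cheek area
    covered by bright (oily) regions, per the thresholds in the text."""
    if shiny_ratio > 0.70:
        return "oily"       # over 70% covered by oil-out regions
    elif shiny_ratio > 0.40:
        return "mixed"      # between 40% and 70%
    elif shiny_ratio > 0.20:
        return "neutral"    # between 20% and 40%
    else:
        return "dry"        # below 20%
```

The returned label would then select the corresponding beautification preset (texture refinement, T-zone highlight reduction, or brightening).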
The server sends the face pictures before and after beautification to the intelligent fitting mirror, which presents both to the current user at the same time. The user can choose whether to apply the beautification according to personal preference. If the user clicks the corresponding button on the intelligent fitting mirror, the mirror receives the beautification instruction information and imports the beautified head portrait onto the head of the preset human body model. Through this setup, the skin of the current user is beautified in a targeted manner, making the user's head portrait more attractive and improving the fitting experience.
Referring to fig. 3, the step S21 of obtaining a picture of a face of a current user by taking a picture includes:
S21A: acquiring a photo of a current user;
S21B: preprocessing the photo, cropping out the environmental image information, and keeping the face and hair;
S21C: matting out the face of the current user according to preset specified feature points to generate a face picture.
In the embodiment of the invention, the server acquires the photo of the current user, preprocesses it, retains the face and hair through face recognition, and crops out the environmental image information (such as the photo background). The face of the current user is then matted out according to preset specified feature points to generate a face picture. In this embodiment, the facial features are located through 68 facial landmark points, after which the face picture is extracted. Preprocessing removes the noise in the photo; landmark positioning then locates the facial features; finally the face picture is accurately matted out.
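A minimal sketch of the landmark-based cropping step, assuming the 68 landmark points have already been detected (for example by a dlib-style shape predictor). The `hair_margin` heuristic for expanding the box upward to keep the hair is an invented parameter, not part of the patent.

```python
def face_crop_box(landmarks, hair_margin=0.6):
    """Given facial landmark points (x, y), return a crop box
    (left, top, right, bottom) expanded upward so the hair is kept.
    `hair_margin` is the fraction of the face height added above."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    face_h = bottom - top
    # Expand upward beyond the topmost landmark to include the hair.
    return (left, int(top - hair_margin * face_h), right, bottom)
```

The actual matting would then run inside this box, using the landmarks to separate face and hair from any remaining background.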
Further, the step S1 of obtaining the physical information of the current user and generating a human body model includes:
s11: generating trousers and clothes size options for the current user to select;
s12: generating first body information according to the size of trousers and the size of clothes selected by a current user, wherein the body information comprises height and three-dimensional information;
s13: scanning to obtain the body of the current user and generating second body information;
s14: judging whether the error between the first body information and the second body information is within a preset range or not;
s15: if so, integrating the first body information data and the second body information data according to the preset weight to generate body information;
s16: and generating a human body model according to the human body information.
In the embodiment of the invention, the intelligent fitting mirror offers trousers and clothes size options for the current user to select, and the system generates first body information from the selected trousers size and clothes size, where the body information comprises height information and three-dimensional information. The intelligent fitting mirror is equipped with a scanner; the user turns in a circle so the scanner can capture the user's body and generate second body information, which likewise comprises height and three-dimensional information. The system judges whether the error between the first and second body information is within a preset range; if so, the body information of the current user is taken to be correct, and the first and second body information data are integrated according to preset weights to generate the body information, for example with the first body information weighted 40% and the second weighted 60%. The system then generates the human body model from this body information. Through this arrangement, the body information derived from the user's clothing sizes and the body information obtained by scanning cross-validate each other and are blended with the corresponding weights, yielding an accurate human body model.
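The cross-validation and weighted blend above can be sketched as follows. The field names, the 5% relative tolerance, and returning `None` on disagreement are illustrative assumptions; the patent only states that the error must fall within a preset range and that the two sources are combined with preset weights (the 40%/60% split is the example given).

```python
def merge_body_info(first, second, tolerance=0.05, w_first=0.4, w_second=0.6):
    """Cross-validate size-derived (first) and scanned (second) body
    measurements, then blend them with the preset weights.
    Returns None when any measurement disagrees beyond the tolerance."""
    merged = {}
    for key in first:
        a, b = first[key], second[key]
        if abs(a - b) / max(a, b) > tolerance:
            return None  # error outside the preset range: re-measure
        merged[key] = w_first * a + w_second * b
    return merged
```

A `None` result would prompt the user to rescan or re-enter sizes before the human body model is generated.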
Further, after the step S4 of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
s41: recording the expression of the current user within a preset time period;
s42: when the current user tries on clothes, judging whether the expression of the current user is positive;
s43: if yes, acquiring the type of the current clothes, and increasing the weight of the type;
s44: and integrating the weight of the type, and selecting the clothes type with the highest weight value to recommend to the current user.
In the embodiment of the invention, the expression of the current user within a preset time period is recorded, for example within 15 minutes. While the current user tries on clothes, whether the expression is positive is judged. Specifically, this proceeds in three steps: first, the facial feature region is located; second, facial expression features are extracted; third, the expression is classified. For locating the feature region, the geometric features of the face and the color and gray-level characteristics of the face image are combined with filtering, which removes interference and is invariant to size, rotation and displacement; hair and clothing are effectively excluded, and the region positions of the key feature points (eyes, nose and mouth) are determined. The input expression image is then normalized, converted to grayscale, binarized and projected along gray levels to locate the mouth corners; "positive" and "negative" expressions are distinguished from the gradient components at the mouth corners, whether the mouth opens beyond a threshold, and the change in the binarized shadow. If the expression of the current user is positive, the type of the current clothes is acquired and the weight of that type is increased. The weights are then integrated, and the clothes type with the highest weight is recommended to the current user. Examples of clothing types include T-shirts, skirts and jackets.
When the user tries on the jacket, the expression is positive, the weight of the jacket is increased, and when the user tries on the skirt, the expression is positive, the weight of the skirt is increased. And after the preset time period is finished, the weights of all types of clothes are integrated, and a clothes type with the highest weight value is selected and recommended to the current user. If the weight of the skirt is the highest at the end of 15 minutes, the system recommends more skirts to the current user in the clothes list. Through the setting, the system can intelligently identify the preference of the user and recommend the favorite clothes of the user.
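The weight-accumulation logic described above is essentially a vote count per clothing type. In this sketch the unit increment per positive try-on is an assumption; the patent only says the type's weight is increased.

```python
from collections import defaultdict

def recommend_by_expression(fitting_log):
    """Accumulate a weight per clothing type for every try-on whose
    recorded expression was positive, then return the top type.
    `fitting_log` is a list of (clothing_type, expression) pairs."""
    weights = defaultdict(int)
    for clothing_type, expression in fitting_log:
        if expression == "positive":
            weights[clothing_type] += 1
    return max(weights, key=weights.get) if weights else None
```

At the end of the preset period (e.g. 15 minutes), the returned type would be given more slots in the recommendation list.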
Further, after the step S4 of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
S4A: taking the front view of the human body model as a first direction, and judging whether first confirmation instruction information is acquired or not;
S4B: if so, rotating the human body model by ninety degrees towards the preset direction, and judging whether second confirmation instruction information is acquired;
S4C: if so, continuing to rotate the human body model by ninety degrees in the preset direction, and judging whether third confirmation instruction information is acquired;
S4D: if so, continuing to rotate the human body model by ninety degrees in the preset direction, and judging whether fourth confirmation instruction information is acquired;
S4E: and if so, the human body model continues to rotate ninety degrees in the preset direction, and a payment QR code is generated.
In the embodiment of the invention, the front view of the human body model is taken as the first direction, and the system displays a "confirm" button and a "give up payment" button. When the user clicks "confirm", the system acquires the first confirmation instruction information and rotates the model ninety degrees clockwise or anticlockwise so the user can view the fitting effect from the first side; clicking "confirm" again rotates another ninety degrees to the back view; a third click rotates to the second side; a fourth click rotates the model back to the front and generates the payment QR code. At any point in this process the user can click "give up payment". Through this setup, the user is sure to have viewed the fitting effect from every angle before confirming payment, which effectively prevents impulse purchases.
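The five-view confirmation flow is a small state machine. The sketch below tracks the model's rotation angle through the four confirmations; the return value and the omission of the actual QR-code generation (which is device-specific) are illustrative choices.

```python
def fitting_review(confirmations):
    """Walk the view-confirmation flow: front, first side, back,
    second side, then back to front. Each True rotates the model
    ninety degrees; the payment QR code is generated only after the
    fourth confirmation. Returns (final_angle, qr_generated)."""
    angle = 0
    for confirmed in confirmations[:4]:
        if not confirmed:            # user clicked "give up payment"
            return angle, False
        angle = (angle + 90) % 360   # rotate to the next view
    return angle, len(confirmations) >= 4
```

After four confirmations the model has turned a full circle (angle 0) and payment proceeds; abandoning at any view stops the flow there.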
Further, after the step S4 of acquiring the clothes-selection instruction information and the human body model completing the virtual fitting, the method comprises:
s4 a: obtaining the style and color of the current clothes;
s4 b: and generating a related clothing recommendation according to the style and the color of the current clothing and clothing matching information prestored in the database.
In the embodiment of the invention, if the clothes the user is currently trying on are a "black jacket", the style and color of the black jacket are obtained, and a "blue jeans" recommendation is generated according to the clothing matching information in the database. Through this arrangement, aesthetically matched clothing recommendations are generated for the user.
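The matching step is a lookup keyed by the current garment's style and color. The table contents below are invented for illustration; the patent only specifies that the pairings are pre-stored in a database.

```python
def recommend_outfit(style, color, matching_db):
    """Look up the pre-stored matching table by the (style, color)
    of the garment being tried on; return matched items, if any."""
    return matching_db.get((style, color), [])

# Example pairing table (assumed contents, not from the patent).
MATCHING_DB = {
    ("jacket", "black"): ["blue jeans"],
    ("T-shirt", "white"): ["khaki shorts"],
}
```

A black jacket would thus yield the "blue jeans" suggestion from the example table, matching the scenario in the text.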
Referring to fig. 4, the present invention provides a virtual fitting apparatus, including:
the first acquisition module 1 is used for acquiring the body information of the current user and generating a human body model;
the second acquisition module 2 is used for acquiring the head portrait and the body data of the user and guiding the head portrait of the user into the head of the human body model;
the recommending module 3 is used for recommending clothes to the current user according to the season, style, user age, price, brand, discount, skin color and posture information;
and the fitting module 4 is used for acquiring the instruction information of selecting clothes and finishing virtual fitting by the human body model.
In the embodiment of the invention, the apparatus is applied to a virtual fitting system comprising a mobile terminal, an intelligent fitting mirror and a server. The mobile terminal is used for payment and uploading data; the intelligent fitting mirror is used for fitting; the server processes the data received from the mobile terminal and the intelligent fitting mirror and transmits the related data back to them. The intelligent fitting mirror acquires the body information of the current user to generate a human body model, where the body information comprises face information, height information, stature information, three-dimensional (bust, waist and hip) information and the like. The intelligent fitting mirror recommends clothes to the current user in list form according to season, style, user age, price, brand, discount, skin color and posture information. The current user screens the clothes and clicks to try on, generating clothes-selection instruction information; the intelligent fitting mirror acquires this instruction information from the current user, and the human body model completes the virtual fitting. Through this arrangement, the user can select suitable clothes in a short time, improving the shopping experience.
Further, the second obtaining module 2 includes:
the photographing submodule is used for acquiring a face picture of a current user through photographing;
the recognition submodule is used for recognizing the positions of the nose, the chin and the cheeks of the face picture, detecting the bright areas of the nose, the chin and the cheeks and obtaining the skin information of the current user;
the comparison sub-module is used for beautifying the face picture according to the skin information and simultaneously presenting the face picture before and after beautifying to the current user;
the first judgment submodule is used for judging whether the beautifying instruction information of the current user is received;
and the execution submodule is used for importing the beautified user head portrait into the head of the preset human body model if the beautification instruction information is received.
In the embodiment of the invention, the intelligent fitting mirror acquires a face picture of the current user by photographing and uploads it to the server. The server identifies the positions of the nose, chin and cheeks in the picture, detects the shiny (oily) regions of those areas, and calculates the ratio of the shiny area to the area of the corresponding region: if the shiny regions (i.e. the oil-producing regions) cover more than 70% of the area of the nose, chin and cheeks, the current user has oily skin; if they cover between 40% and 70%, the current user has mixed skin; between 20% and 40%, neutral skin; and below 20%, dry skin. The face picture is then beautified according to the current user's skin information, and the pictures before and after beautification are presented to the current user side by side. Specifically, oily skin tends to have coarse texture, enlarged pores and a greasy sheen, so beautification focuses on refining the texture, shrinking the pores and reducing the shine so the skin looks smooth and fine. Mixed skin is oily in the "T" zone of the face (forehead, nose, perioral area and lower jaw) and dry on the cheeks, so beautification focuses on reducing the shine in the "T" zone. Neutral skin is the ideal state, so only darker areas of the skin need a moderate amount of brightening. Dry skin tends to look dull, so beautification focuses on brightening dark areas appropriately.
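The skin-type thresholds above can be sketched as a small classifier. The function name and its input (a shiny-area-to-region ratio assumed to be computed by the detection step) are illustrative assumptions, not part of the patent; the boundary handling at exactly 70%, 40% and 20% is also an assumption, since the text leaves it unspecified.

```python
def classify_skin(shiny_ratio):
    """Map the fraction of the nose/chin/cheek area covered by shiny
    (oil-producing) regions to a skin type, per the thresholds above.

    shiny_ratio: float in [0, 1], e.g. 0.55 means 55% of the region
    is covered by shiny areas.
    """
    if shiny_ratio > 0.70:
        return "oily"      # >70% shiny: oily skin
    elif shiny_ratio > 0.40:
        return "mixed"     # 40-70%: mixed skin
    elif shiny_ratio > 0.20:
        return "neutral"   # 20-40%: neutral skin
    else:
        return "dry"       # <20%: dry skin
```

Each skin type then selects a different beautification strategy, as described above.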
The server sends the face pictures before and after beautification to the intelligent fitting mirror, which presents both to the current user at the same time. The user can then choose, according to personal preference, whether to apply the beautification. If so, the user clicks a button on the intelligent fitting mirror; the mirror receives the beautification instruction information and imports the beautified user head portrait into the head of the preset human body model. With this arrangement, the current user's skin is beautified in a targeted manner, making the user's head portrait more attractive and improving the fitting experience.
Further, the photographing sub-module includes:
the acquisition unit is used for acquiring a photo of a current user;
the cutting unit is used for preprocessing the photo, cropping away the environment image information and retaining the face and hair;
and the generating unit is used for matting the face of the current user according to the preset specified characteristic points to generate a face picture.
In the embodiment of the invention, the server acquires a photo of the current user, preprocesses it, retains the face and hair through face recognition, and crops away the environment image information (such as the photo background). The face of the current user is then matted out according to preset specified feature points to generate the face picture. In this embodiment, the facial features are located through 68 facial feature points, after which the face picture is extracted. Preprocessing removes the noise in the photo; feature-point positioning then locates the facial features, so that the face picture can finally be matted out accurately.
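One way the landmark-based crop could work is sketched below. The dlib-style 68-point layout, the function name and the relative margin (added so the crop keeps hair and chin context) are all illustrative assumptions; the patent only specifies "preset specified feature points".

```python
def face_bounding_box(landmarks, margin=0.2):
    """Compute a crop rectangle around a set of (x, y) facial landmarks,
    expanded by a relative margin on each side so the crop keeps some
    hair/chin context around the detected features.

    landmarks: iterable of (x, y) points, e.g. 68 dlib-style landmarks.
    Returns (left, top, right, bottom) of the expanded box.
    """
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    dx = (x1 - x0) * margin  # horizontal padding
    dy = (y1 - y0) * margin  # vertical padding
    return (x0 - dx, y0 - dy, x1 + dx, y1 + dy)
```

The resulting rectangle would then be used to crop the preprocessed photo and produce the face picture.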
Further, the first obtaining module 1 includes:
the selection submodule is used for generating trousers and clothes size options for the current user to select;
the first generation submodule is used for generating first body information according to the size of trousers and the size of clothes selected by a current user, and the body information comprises height and three-dimensional information;
the second generation submodule is used for scanning and acquiring the body of the current user and generating second body information;
the second judgment submodule is used for judging whether the error between the first body information and the second body information is within a preset range or not;
the third generation submodule is used for integrating the first body information data and the second body information data according to the preset weights to generate the body information if the error is within the preset range;
and the fourth generation submodule is used for generating the human body model according to the human body information.
In the embodiment of the invention, the intelligent fitting mirror offers trousers-size and garment-size options for the current user to select, and the system generates first body information from the selected sizes; this body information includes height and three-dimensional (bust-waist-hip) measurements. The intelligent fitting mirror is also provided with a scanner: by turning around in front of it, the user's body is scanned to generate second body information, which likewise includes height and three-dimensional measurements. The system then judges whether the error between the first body information and the second body information is within a preset range. If so, the current user's body information is considered correct, and the first and second body information data are integrated according to preset weights to generate the body information; for example, the first body information data may be weighted at 40% and the second at 60%. The system generates the human body model from this body information. With this arrangement, the body information derived from the user's clothing sizes and the body information obtained by scanning cross-validate each other and are merged with the corresponding weights, yielding an accurate human body model.
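The cross-validation and weighted merge above can be sketched as follows. The function name, the dictionary representation of the measurements, and the relative-error tolerance are illustrative assumptions; the 40%/60% weights follow the embodiment's example.

```python
def merge_body_info(first, second, tolerance=0.05, w_first=0.4, w_second=0.6):
    """Cross-check size-derived measurements against scanned ones and,
    if every field agrees within the tolerance, blend them with the
    preset weights (40% size-derived / 60% scanned in the example above).

    first, second: dicts mapping measurement names to values, e.g.
    {"height": 170.0, "bust": 88.0}.
    """
    merged = {}
    for key in first:
        a, b = first[key], second[key]
        # Reject the pair if the two sources disagree beyond the tolerance.
        if abs(a - b) / max(a, b) > tolerance:
            raise ValueError(f"measurement '{key}' disagrees beyond tolerance")
        merged[key] = w_first * a + w_second * b
    return merged
```

The merged measurements would then drive the generation of the human body model.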
Further, the virtual fitting apparatus further includes:
the recording module is used for recording the expression of the current user within a preset time period;
the first judgment module is used for judging, when the current user tries on clothes, whether the expression of the current user is positive;
the first execution module is used for acquiring the type of the current clothes and increasing the weight of that type if the expression is positive;
and the second execution module is used for integrating the weight of the type, selecting the clothes type with the highest weight value and recommending the clothes type to the current user.
In the embodiment of the invention, the expression of the current user is recorded within a preset time period, for example 15 minutes. While the current user tries on clothes, the system judges whether the expression is positive. Specifically, this involves three steps: first, locating the facial feature regions; second, extracting the facial expression features; and third, classifying the expression. For the localization step, the geometric features of the face and the color and gray-level characteristics of the face image are combined with filtering to remove interference; the localization is invariant to scale, rotation and displacement, effectively excludes hair and clothing, and determines the positions of the key feature regions of the eyes, nose and mouth. The facial expression image is then normalized, converted to grayscale, binarized and projected along the gray levels to locate the mouth corners, and the expression is classified as "positive" or "negative" according to the gradient components at the mouth corners, whether the mouth opens beyond a threshold, and the change in the binarized shadow. If the current user's expression is positive, the type of the current garment is acquired and the weight of that type is increased. The weights of all types are then integrated, and the clothes type with the highest weight is recommended to the current user. Examples of clothing types include T-shirts, skirts and jackets.
When the user tries on a jacket with a positive expression, the weight of jackets is increased; when the user tries on a skirt with a positive expression, the weight of skirts is increased. When the preset time period ends, the weights of all clothing types are integrated and the type with the highest weight is recommended to the current user. If, at the end of the 15 minutes, skirts have the highest weight, the system recommends more skirts to the current user in the clothes list. With this arrangement, the system can intelligently identify the user's preferences and recommend clothes the user likes.
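The preference-weighting scheme above can be sketched as a simple tally. The log format (a list of garment-type/expression pairs) and the unit weight increment are illustrative assumptions; the patent only states that positive expressions increase the weight of the corresponding type.

```python
from collections import defaultdict

def recommend_from_expressions(fitting_log):
    """Tally positive expressions per garment type over a session and
    return the type with the highest weight, or None if no positive
    expression was recorded.

    fitting_log: list of (garment_type, expression) pairs, where
    expression is "positive" or "negative".
    """
    weights = defaultdict(int)
    for garment_type, expression in fitting_log:
        if expression == "positive":
            weights[garment_type] += 1  # each positive reaction adds weight
    return max(weights, key=weights.get) if weights else None
```

At the end of the recording period, the returned type would be over-represented in the recommendation list.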
Further, the virtual fitting apparatus further includes:
the second judgment module is used for taking the front view of the human body model as a first direction and judging whether first confirmation instruction information is acquired;
the third judgment module is used for rotating the human body model by ninety degrees in a preset direction if the first confirmation instruction information is acquired, and judging whether second confirmation instruction information is acquired;
the fourth judgment module is used for continuing to rotate the human body model by ninety degrees in the preset direction if the second confirmation instruction information is acquired, and judging whether third confirmation instruction information is acquired;
the fifth judgment module is used for continuing to rotate the human body model by ninety degrees in the preset direction if the third confirmation instruction information is acquired, and judging whether fourth confirmation instruction information is acquired;
and the sixth judgment module is used for continuing to rotate the human body model by ninety degrees in the preset direction if the fourth confirmation instruction information is acquired, and generating the payment two-dimensional code.
In the embodiment of the invention, the front view of the human body model is taken as the first direction, and the system displays a "confirm" button and a "cancel payment" button. When the user clicks "confirm", the system acquires the first confirmation instruction information and rotates the model ninety degrees clockwise or counterclockwise so that the user can view the fitting effect from the first side. The user clicks "confirm" again, the model rotates another ninety degrees, and the user views the fitting effect from the back; a further click and rotation show the second side. When the user clicks "confirm" a fourth time, the model rotates back to the front and the payment two-dimensional code is generated. Throughout this process, the user can click "cancel payment" at any time. With this arrangement, the user is sure to have viewed the fitting effect from every angle before confirming payment, which effectively prevents impulse purchases.
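The four-view confirmation flow above can be sketched as a small state machine. The function name, the view labels and the string statuses are illustrative assumptions; the real system would rotate a rendered model and display a payment QR code instead of returning strings.

```python
def fitting_review(confirmations):
    """Walk the four-view review: front, first side, back, second side.
    Each view requires one confirmation; any refusal (False) cancels
    the flow. After all four confirmations the model rotates back to
    the front and the payment QR code is generated.

    confirmations: list of booleans, one per "confirm" click so far.
    """
    views = ["front", "side-1", "back", "side-2"]
    for view, confirmed in zip(views, confirmations):
        if not confirmed:
            return f"cancelled at {view}"  # user clicked "cancel payment"
    if len(confirmations) >= len(views):
        return "payment QR code generated"
    return "awaiting confirmation"  # fewer than four views confirmed so far
```

Requiring a confirmation at every angle is what the embodiment relies on to prevent impulse purchases.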
Further, the virtual fitting apparatus further includes:
the third acquisition module is used for acquiring the style and the color of the current clothes;
and the fourth execution module is used for generating related clothing recommendation according to the style and the color of the current clothing and clothing matching information prestored in the database.
In the embodiment of the invention, if the user's current garment is a black jacket, its style and color are obtained, and a recommendation of blue jeans is generated according to the clothing-matching information in the database. With this arrangement, aesthetically matching clothing recommendations are generated for the user.
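The lookup could be as simple as the sketch below. The table contents and the key structure are hypothetical stand-ins for the "clothing matching information prestored in the database"; only the black jacket → blue jeans pairing comes from the embodiment's example.

```python
# Hypothetical matching table; the real system would query this from a database.
MATCH_TABLE = {
    ("jacket", "black"): [("jeans", "blue")],          # pairing from the example above
    ("t-shirt", "white"): [("skirt", "denim")],        # assumed extra entry
}

def recommend_matches(style, color):
    """Look up pre-stored pairings for the current garment's style and color.
    Returns a (possibly empty) list of (style, color) recommendations."""
    return MATCH_TABLE.get((style, color), [])
```

An unmatched style/color simply yields no related recommendation.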
Referring to fig. 5, the present application further provides a storage medium 100 storing a computer program 200 which, when run on a computer, causes the computer to execute the virtual fitting method described in the above embodiments.
Referring to fig. 6, the present application further provides a computer device 300 containing instructions which, when run on the computer device 300, cause the computer device 300 to execute, by means of a processor 400 disposed therein, the virtual fitting method described in the above embodiments.
Those skilled in the art will appreciate that the virtual fitting apparatus of the present invention and the devices referred to above may be used to perform one or more of the methods described in this application. These devices may be specially designed and manufactured for the required purposes, or they may comprise general-purpose computers selectively activated or reconfigured by computer programs stored in them. Such a computer program may be stored in a device-readable (e.g., computer-readable) medium, including, but not limited to, any type of disk (floppy disks, hard disks, optical disks, CD-ROMs, magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memory, magnetic cards or optical cards, or any other type of medium suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A virtual fitting method, comprising:
acquiring body information of a current user, and generating a human body model;
acquiring a user head portrait and body data, and importing the user head portrait into the head of the human body model;
recommending clothes to the current user according to season, style, user age, price, brand, discount, skin color and posture information;
and acquiring instruction information of clothes selection, and finishing virtual fitting by the human body model.
2. The virtual fitting method according to claim 1, wherein the step of acquiring the head portrait and the body data of the user and importing the head portrait of the user into the head of the human body model comprises:
acquiring a face picture of the current user by photographing;
identifying the positions of a nose, a chin and a cheek of a face picture, and detecting the bright areas of the nose, the chin and the cheek to obtain the skin information of the current user;
according to the skin information, performing facial beautification on the face picture, and simultaneously presenting the face picture before and after facial beautification to the current user;
judging whether the beautification instruction information of the current user is received;
and if so, importing the beautified user head portrait into the head of a preset human body model.
3. The virtual fitting method according to claim 2, wherein the step of obtaining the face picture of the current user by taking a picture comprises:
acquiring a photo of the current user;
preprocessing the photo, cutting environment image information, and keeping a human face and hair;
and matting out the face of the current user according to preset specified feature points to generate the face picture.
4. The virtual fitting method according to claim 1, wherein the step of obtaining the physical information of the current user and generating the human body model comprises:
generating pants and garment size options for selection by the current user;
generating first body information according to the trousers size and the clothes size selected by the current user, wherein the body information comprises height information and three-dimensional information;
scanning and acquiring the body of the current user to generate second body information;
judging whether the error between the first body information and the second body information is within a preset range or not;
if so, integrating the first body information data and the second body information data according to preset weight to generate the body information;
and generating the human body model according to the human body information.
5. The virtual fitting method according to claim 1, wherein the step of acquiring the clothes-selection instruction information and completing the virtual fitting by the human body model comprises:
recording the expression of the current user within a preset time period;
when the current user tries on clothes, judging whether the expression of the current user is positive;
if yes, acquiring the type of the current clothes, and increasing the weight of the type;
and integrating the weight of the type, and selecting the clothes type with the highest weight value to recommend to the current user.
6. The virtual fitting method according to claim 1, wherein the step of acquiring the clothes-selection instruction information and completing the virtual fitting by the human body model comprises:
taking the front view of the human body model as a first direction, and judging whether first confirmation instruction information is acquired or not;
if so, rotating the human body model by ninety degrees in a preset direction, and judging whether second confirmation instruction information is acquired;
if so, continuing to rotate the human body model by ninety degrees towards the preset direction, and judging whether third confirmation instruction information is acquired;
if so, continuing to rotate the human body model by ninety degrees towards the preset direction, and judging whether fourth confirmation instruction information is acquired;
if yes, the human body model continues to rotate ninety degrees towards the preset direction, and a payment two-dimensional code is generated.
7. The virtual fitting method according to claim 1, wherein, after the step of acquiring the instruction information of selecting clothes and completing the virtual fitting by the human body model, the method comprises:
obtaining the style and color of the current clothes;
and generating a related clothing recommendation according to the style and the color of the current clothing and clothing matching information prestored in a database.
8. A virtual fitting apparatus, comprising:
the first acquisition module is used for acquiring the body information of the current user and generating a human body model;
the second acquisition module is used for acquiring a user head portrait and body data and guiding the user head portrait into the head of the human body model;
the recommendation module is used for recommending clothes to the current user according to the season, style, user age, price, brand, discount, skin color and posture information;
and the fitting module is used for acquiring the instruction information of selecting clothes, and the human body model completes virtual fitting.
9. A storage medium, characterized in that it is a computer-readable storage medium, on which a computer program is stored, which computer program, when executed, implements a virtual fitting method according to any of claims 1 to 7.
10. Computer device, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said processor implementing, when executing said computer program, a virtual fitting method according to any one of claims 1 to 7.
CN202010745698.5A 2020-07-29 2020-07-29 Virtual fitting method, device, storage medium and computer equipment Pending CN112070572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745698.5A CN112070572A (en) 2020-07-29 2020-07-29 Virtual fitting method, device, storage medium and computer equipment


Publications (1)

Publication Number Publication Date
CN112070572A true CN112070572A (en) 2020-12-11

Family

ID=73656576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010745698.5A Pending CN112070572A (en) 2020-07-29 2020-07-29 Virtual fitting method, device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112070572A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344672A (en) * 2021-06-25 2021-09-03 钟明国 3D virtual fitting method and system for shopping webpage browsing interface
CN113674053A (en) * 2021-08-13 2021-11-19 沈阳化工大学 Virtual intelligent display system of try-on mask

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402765A (en) * 2011-12-27 2012-04-04 纽海信息技术(上海)有限公司 Electronic-commerce recommendation method based on user expression analysis
CN102968555A (en) * 2012-11-01 2013-03-13 江苏物联网研究发展中心 Lazy Convenient clothes matching advice system based on electronic commerce
CN103886026A (en) * 2014-02-25 2014-06-25 刘强 Personal feature based clothing matching method
CN104992464A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Virtual garment try-on system and garment try-on method
CN106326511A (en) * 2015-06-30 2017-01-11 上海卓易科技股份有限公司 Intelligent fitting method for mobile terminal, mobile terminal and intelligent fitting system
CN106339929A (en) * 2016-08-31 2017-01-18 潘剑锋 3D fitting system
CN106504060A (en) * 2016-10-22 2017-03-15 肇庆市联高电子商务有限公司 Human body system for trying based on ecommerce
CN106651433A (en) * 2016-11-01 2017-05-10 杭州店湾科技有限公司 Interactive intelligent fitting mirror system and fitting method
CN109360037A (en) * 2018-08-17 2019-02-19 深圳市赛亿科技开发有限公司 Method, Intelligent fitting mirror and the computer readable storage medium that clothing is recommended



Similar Documents

Publication Publication Date Title
JP7075085B2 (en) Systems and methods for whole body measurement extraction
CN109815850B (en) Iris image segmentation and positioning method, system and device based on deep learning
USRE42205E1 (en) Method and system for real-time facial image enhancement
JP3984191B2 (en) Virtual makeup apparatus and method
CN110363867B (en) Virtual decorating system, method, device and medium
CN105843386B (en) A kind of market virtual fitting system
CN107533642B (en) Apparatus, method and system for biometric user identification using neural networks
JP4435809B2 (en) Virtual makeup apparatus and method
CN110046546B (en) Adaptive sight tracking method, device and system and storage medium
WO2018225061A1 (en) System and method for image de-identification
EP3745352B1 (en) Methods and systems for determining body measurements and providing clothing size recommendations
CN108875452A (en) Face identification method, device, system and computer-readable medium
WO2016109884A1 (en) Automated recommendation and virtualization systems and methods for e-commerce
CN107820591A (en) Control method, controller, Intelligent mirror and computer-readable recording medium
US20220044311A1 (en) Method for enhancing a user's image while e-commerce shopping for the purpose of enhancing the item that is for sale
CN109785228B (en) Image processing method, image processing apparatus, storage medium, and server
CN112070572A (en) Virtual fitting method, device, storage medium and computer equipment
Emery et al. OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments
JP6656572B1 (en) Information processing apparatus, display control method, and display control program
CN111008935A (en) Face image enhancement method, device, system and storage medium
CN111028354A (en) Image sequence-based model deformation human face three-dimensional reconstruction scheme
Botezatu et al. Fun selfie filters in face recognition: Impact assessment and removal
KR20210000044A (en) System and method for virtual fitting based on augument reality
CN109299645A (en) Method, apparatus, system and storage medium for sight protectio prompt
CN113240819A (en) Wearing effect determination method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination