CN112242007A - Virtual fitting method - Google Patents

Virtual fitting method

Info

Publication number
CN112242007A
Authority
CN
China
Prior art keywords
human body
target
fitting
virtual fitting
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011086427.XA
Other languages
Chinese (zh)
Other versions
CN112242007B (en)
Inventor
李锋
周有喜
乔国坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang Aiwinn Information Technology Co Ltd
Original Assignee
Xinjiang Aiwinn Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang Aiwinn Information Technology Co Ltd filed Critical Xinjiang Aiwinn Information Technology Co Ltd
Priority to CN202011086427.XA priority Critical patent/CN112242007B/en
Publication of CN112242007A publication Critical patent/CN112242007A/en
Application granted granted Critical
Publication of CN112242007B publication Critical patent/CN112242007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a virtual fitting method: after it is determined that a target human body in a preset area satisfies a preset condition, a human body model is constructed based on the target human body, a target fitting garment is determined, and the human body model and the target fitting garment are synthesized to present a fitting result. Compared with traditional fitting, the method provided by the application occupies no fitting room and needs no manual changing of clothes, so fitting is efficient and gives customers a good experience.

Description

Virtual fitting method
Technical Field
The application relates to the field of virtual fitting, in particular to a virtual fitting method.
Background
In the traditional fitting mode, a customer in a shopping mall or store tries clothes on in the merchant's fitting room to see how a garment looks on his or her own body, and then spends time judging whether it fits and looks good. Because the fitting rooms in a mall are limited, when there are many customers and each wants to try on several garments, customers must queue for a long time and fitting efficiency is low, which harms the customer experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of this, the present application provides a virtual fitting method to solve the problem that the traditional fitting method is inefficient and degrades the customer experience.
The application provides a virtual fitting method, which comprises the following steps:
determining that a target human body in a preset area meets a preset condition;
constructing a human body model based on the target human body;
determining a target fitting garment;
and synthesizing the human body model and the target fitting garment and presenting a fitting result.
Optionally, determining that the target human body in the preset area meets the preset condition includes:
determining that the attribute of the target human body in the preset area meets a preset human body attribute condition;
and/or determining that the time the target human body has been in the preset area meets a preset time threshold.
Optionally, after synthesizing the human body model and the target fitting garment and presenting the fitting result, the virtual fitting method further includes:
determining whether to continue the virtual fitting;
if yes, re-determining the target fitting garment, synthesizing the human body model and the target fitting garment, and presenting the fitting result.
Optionally, after synthesizing the human body model and the target fitting garment and presenting the fitting result, the virtual fitting method further includes:
judging whether the virtual fitting is finished;
if yes, storing the target fitting garment and synthesized result of each virtual fitting together with the duration of each virtual fitting, counting the number of virtual fittings of the target human body, and outputting a fitting report.
Optionally, determining whether the virtual fitting is finished includes:
when the number of the target human bodies in the preset area is zero, judging that the virtual fitting is finished;
or, when the number of the target human bodies in the preset area is zero and the zero state lasts for a preset time length, judging that the virtual fitting is finished;
or judging that the virtual fitting is finished based on the finishing instruction.
Optionally, before determining that the target human body in the preset area meets the preset condition, the virtual fitting method includes:
detecting a human body in a preset area;
and judging whether the detected human body is a mirror-image human body, and determining a non-mirror-image human body as the target human body in the preset area.
Optionally, constructing a human body model based on the target human body includes:
acquiring an image of the target human body;
analyzing the image of the target human body to obtain model parameters of the target human body, wherein the model parameters comprise height, chest circumference, waist circumference and hip circumference;
and establishing a human body model according to the model parameters of the target human body.
Optionally, determining the target fitting apparel includes:
determining the selected virtual fitting clothes as target fitting clothes based on the selection instruction;
and/or determining the virtual fitting clothes with the matching degree larger than a preset threshold value as the target fitting clothes based on the model parameters of the target human body.
Optionally, constructing a human body model based on the target human body includes:
acquiring an image of the human body;
and determining a human body model with the matching degree meeting a preset threshold value from a plurality of preset human body models based on the image of the human body.
Optionally, determining, from a plurality of preset human body models, a human body model with a matching degree meeting a preset threshold based on the image of the human body includes:
using a spatial set $L = (l_1, l_2, \ldots, l_n)$ to represent the spatial positional relationship between the individual body parts in the image of the human body, where $l_i$ represents the spatial position information of the i-th body part;
representing the human body model by an undirected graph model $G = (V, E)$, where the vertex set $V = \{v_1, v_2, \ldots, v_n\}$ represents n body parts of the human body and each edge $(v_i, v_j) \in E$ represents the connection relationship between body parts $v_i$ and $v_j$;
taking the undirected graph model that satisfies

$$L^* = \arg\min_{L}\left(\sum_{i=1}^{n} m_i(l_i) + \sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j)\right)$$

as the human body model whose matching degree with the image of the human body meets the preset threshold;
where $m_i(l_i)$ denotes the matching degree of part $v_i$ at position $l_i$ in the input image;
and $d_{ij}(l_i, l_j)$ denotes the degree of deformation of the model when part $v_i$ is at position $l_i$ and part $v_j$ is at position $l_j$.
After determining that the target human body in the preset area meets the preset condition, the virtual fitting method provided by the application constructs a human body model based on the target human body, determines the target fitting garment, and synthesizes the human body model with the target fitting garment to present the fitting result; no fitting room is occupied and no manual changing of clothes is needed, so fitting is efficient and the customer experience is good.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a virtual fitting method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a synthesis calculation provided by an embodiment of the present application;
fig. 3 is a schematic view of a fitting scene provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a virtual fitting device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising an … …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, similarly named elements or features in different embodiments of the disclosure may have the same meaning or different meanings; the particular meaning is determined by its interpretation in the embodiment or by the context of the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps have no strict order restriction and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times and in different orders, alternately or in turn with other steps or with sub-steps or stages of other steps.
It should be noted that step numbers such as S11 and S12 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S12 first and then S11 in specific implementation, which should be within the scope of the present application.
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. The following embodiments and their technical features may be combined with each other without conflict.
An embodiment of the present application provides a virtual fitting method, please refer to fig. 1, which includes steps S11 to S14:
and S11, determining that the target human body in the preset area meets the preset condition.
When a user or customer is in the preset area, i.e., at the preset position, a camera at that position is triggered and detects the human body entering the preset area; detection modes include photographing, video recording, and the like.
Determining that the target human body in the preset area meets the preset condition may mean determining that the attribute of the target human body meets a preset human body attribute condition, or determining that the time the target human body has been in the preset area meets a preset time threshold. The human body attributes include one or more of height, age, sex, and body type.
Of course, in some examples, it may be determined that the target human body in the preset region meets the preset condition only when it is determined that the attribute of the target human body in the preset region meets the preset human body attribute condition and the time when the target human body is in the preset region meets the preset time threshold.
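A minimal sketch of this gating logic is shown below; the attribute fields, threshold values, and function names are illustrative assumptions, since the patent leaves the concrete values to the implementer.

```python
import time
from dataclasses import dataclass

@dataclass
class DetectedBody:
    height_cm: float
    entered_at: float  # timestamp (seconds) when the body entered the preset area

# Hypothetical thresholds; the patent does not fix concrete values.
MIN_HEIGHT_CM = 120.0
MIN_DWELL_S = 2.0

def attribute_ok(body: DetectedBody) -> bool:
    """Preset human-body attribute condition (here a simple height gate;
    the text lists height, age, sex and body type as usable attributes)."""
    return body.height_cm >= MIN_HEIGHT_CM

def dwell_ok(body: DetectedBody, now: float) -> bool:
    """The time spent in the preset area meets the preset time threshold."""
    return now - body.entered_at >= MIN_DWELL_S

def meets_preset_condition(body: DetectedBody, require_both: bool = True) -> bool:
    # The text allows either check alone ("and/or") or both combined.
    now = time.time()
    if require_both:
        return attribute_ok(body) and dwell_ok(body, now)
    return attribute_ok(body) or dwell_ok(body, now)
```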
In the embodiment of the application, once the target human body is determined and further determined to meet the preset condition, the subsequent steps are triggered.
S12. Constructing a human body model based on the target human body.
In some examples, constructing the body model based on the target body may include obtaining an image of the target body, parsing the image to obtain model parameters of the target body, and building the body model according to those parameters. In the embodiment of the present application, the model parameters may include height, chest circumference, waist circumference, and hip circumference, and may also include arm length, leg length, and the like. The human body model may be seated, standing, squatting, lying, and so on, and may be placed in many different environments.
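A minimal sketch of this step, under the assumption that a hypothetical helper performs the image analysis (the patent states only that the image is parsed into these measurements, not how):

```python
from dataclasses import dataclass

@dataclass
class ModelParameters:
    height_cm: float
    chest_cm: float
    waist_cm: float
    hip_cm: float
    arm_length_cm: float = 0.0   # optional extras mentioned in the text
    leg_length_cm: float = 0.0

def parse_body_image(image) -> ModelParameters:
    """Hypothetical stand-in for the analysis step; a real system would run
    pose/shape estimation here to measure the body from the image."""
    return ModelParameters(height_cm=170.0, chest_cm=90.0,
                           waist_cm=75.0, hip_cm=95.0)

def build_body_model(params: ModelParameters) -> dict:
    """Toy 'model'; in practice the parameters would drive a parametric 3D mesh."""
    return {"pose": "standing", **params.__dict__}
```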
In other examples, after the image of the target human body is obtained, a human body model with a matching degree meeting a preset threshold value is determined from a plurality of preset human body models based on the image of the target human body.
S13. Determining the target fitting garment.
After the body model is constructed, a garment for virtual fitting may be determined; this garment is synthesized with the body model to obtain the virtual fitting result for the target body. Apparel includes tops, pants, shoes, hats, glasses, and the like, each of which may also come in different sizes. It will be appreciated that there may be many garments available for virtual fitting (collectively referred to as virtual fitting garments), but not every one of them can be synthesized with the human body model, so a suitable virtual fitting garment must be identified for synthesis. In this embodiment of the application, since the model parameters of the target human body were already acquired in step S12, the matching degree of each virtual fitting garment can be calculated from those parameters, and the virtual fitting garment whose matching degree exceeds a preset threshold is determined as the target fitting garment.
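A sketch of this threshold-based selection follows; the scoring formula and the 0.9 threshold are illustrative assumptions, since the patent specifies only that a matching degree is computed and compared against a preset threshold.

```python
from dataclasses import dataclass

@dataclass
class VirtualGarment:
    name: str
    fit_height_cm: float   # body measurements the garment is cut for
    fit_chest_cm: float
    fit_waist_cm: float

def matching_degree(g: VirtualGarment, body: dict) -> float:
    """Illustrative score in [0, 1]: the closer the garment's intended
    measurements are to the body's, the higher the matching degree."""
    rel_err = (
        abs(g.fit_height_cm - body["height_cm"]) / body["height_cm"]
        + abs(g.fit_chest_cm - body["chest_cm"]) / body["chest_cm"]
        + abs(g.fit_waist_cm - body["waist_cm"]) / body["waist_cm"]
    ) / 3.0
    return max(0.0, 1.0 - rel_err)

def target_garments(garments, body, threshold=0.9):
    # Keep the garments whose matching degree exceeds the preset threshold.
    return [g for g in garments if matching_degree(g, body) > threshold]

body = {"height_cm": 170.0, "chest_cm": 90.0, "waist_cm": 75.0}
stock = [VirtualGarment("shirt-M", 170, 92, 76),
         VirtualGarment("shirt-XS", 155, 80, 62)]
print([g.name for g in target_garments(stock, body)])   # ['shirt-M']
```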
In some other examples, the user may select the virtual fitting garment: a display screen may be provided on which multiple virtual fitting garments are shown, and the user selects one by manipulation, touch, somatosensory control, or the like. In this example, the selected virtual fitting garment is determined as the target fitting garment based on the user's selection instruction.
S14. Synthesizing the human body model and the target fitting garment and presenting the fitting result.
In the embodiment of the application, after the human body model is constructed and the target fitting garment determined, the two can be synthesized to obtain the model wearing the virtual garment. The synthesized result is then presented to the user, who can judge whether the virtual garment looks good and fits well on the model.
First, a pure-white image of the same size as the clothing data is created. The background image and the clothing image are converted to grayscale and binarized, and the difference of the two images yields a binary mask of the clothing image: clothing pixels are 1 and background pixels are 0. Because unavoidable errors in this series of operations make the mask imprecise, mean filtering is applied to it. A pixel region of the same size as the clothing data is then divided in the scene image so that the clothing overlaps the person, achieving the virtual "try-on" effect. The most critical step is computing the "synthesis point" coordinate of the clothing data in the scene image.
As shown in fig. 2, let the coordinate system of the scene graph be O and that of the clothing graph be O', and let the clothing picture have height H and width W. The coordinates of the person's left shoulder are (x1, y1) in the scene and (x1', y1') in the clothing coordinate system. If A is the position of O' in the scene coordinate system after image synthesis, then A = (x2, y2) can be calculated as x2 = x1 - x1' and y2 = y1 - y1'. A pixel region of height H and width W is then divided with A as the origin, and the clothing is copied into the scene image by an OpenCV function using the previously computed image mask; because of the mask, only points whose pixel in the clothing-data mask is 1 are copied, which completes the image synthesis.
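A minimal OpenCV/NumPy sketch of this pipeline follows, assuming the left-shoulder coordinates have already been located in both images and that the H × W region fits inside the scene; the threshold and kernel values are illustrative, not taken from the patent.

```python
import cv2

def composite(scene, garment, garment_bg, shoulder_scene, shoulder_garment):
    """Paste the clothing into the scene at the computed synthesis point A.

    scene            -- BGR scene image containing the person
    garment          -- BGR clothing image (height H, width W)
    garment_bg       -- background of the clothing shot, same size as garment
    shoulder_scene   -- (x1, y1): left shoulder in scene coordinates
    shoulder_garment -- (x1', y1'): left shoulder in clothing coordinates
    """
    H, W = garment.shape[:2]

    # Binary mask: clothing pixels 255, background pixels 0.
    g1 = cv2.cvtColor(garment, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(garment_bg, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(cv2.absdiff(g1, g2), 10, 255, cv2.THRESH_BINARY)

    # Mean filtering smooths the inevitable mask errors; re-binarize afterwards.
    mask = cv2.blur(mask, (5, 5))
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Synthesis point A: x2 = x1 - x1', y2 = y1 - y1'.
    x2 = shoulder_scene[0] - shoulder_garment[0]
    y2 = shoulder_scene[1] - shoulder_garment[1]

    # Divide the H x W region with A as origin; copy only mask==255 pixels.
    roi = scene[y2:y2 + H, x2:x2 + W]
    roi[mask > 0] = garment[mask > 0]   # masked copy into the scene view
    return scene
```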
Referring to fig. 3, when user A enters the preset area, camera 2 collects the human body data of user A, and after the human body model and the target fitting garment are synthesized, the final fitting result B is shown on display screen 1. In some examples, user A may switch the target fitting garment by somatosensory control, and the new virtual fitting garment is displayed on display screen 1 after switching.
After determining that the target human body in the preset area meets the preset condition, the virtual fitting method provided by the embodiment of the application constructs a human body model based on the target human body, determines the target fitting garment, and synthesizes the two to present the fitting result; fitting is therefore efficient and pleasant for the customer.
Further possible embodiments of the virtual fitting method provided by the present application will be described below based on the above steps of the virtual fitting method.
The application also provides a virtual fitting method, which comprises the following steps:
S201. Detecting a human body in the preset area.
When a user or customer is in the preset area, i.e., at the preset position, a camera at that position is triggered and detects the human body entering the preset area; detection modes include photographing, video recording, and the like. The trigger may be infrared, pressure, temperature, or a similar mechanism. The human body here refers to the body of the user or customer entering the preset area, in postures such as standing or sitting.
S202. Judging whether the detected human body is a mirror-image human body, and determining a non-mirror-image human body as the target human body in the preset area.
In some scenes, a mirror is arranged at a position corresponding to the preset area, and a user in the preset area can see himself or herself in the mirror.
The target human body is the body on which the virtual fitting method of the present application operates. It is understood that not all human bodies in the preset area are target human bodies, for example mirror-image bodies or children carried by adults.
S203. Determining that the target human body in the preset area meets the preset condition.
In the embodiment of the application, once the target human body is determined and further determined to meet the preset condition, the subsequent steps are triggered. The determination may be that the attribute of the target human body meets a preset human body attribute condition, or that the time the target human body has been in the preset area meets a preset time threshold. The human body attributes include one or more of height, age, sex, and body type.
Of course, in some examples, it may be determined that the target human body in the preset region meets the preset condition only when it is determined that the attribute of the target human body in the preset region meets the preset human body attribute condition and the time when the target human body is in the preset region meets the preset time threshold.
S204. Constructing a human body model based on the target human body.
In the embodiment of the application, the human body model constructed based on the target human body can be obtained by obtaining the image of the target human body, analyzing the image of the target human body to obtain the model parameters of the target human body, and establishing the human body model according to the model parameters of the target human body. In the embodiment of the present application, the model parameters may include data of height, chest circumference, waist circumference, hip circumference, etc., and may also include data of arm length, leg length, etc.
S205. Determining the target fitting garment.
After the body model is constructed, a garment for virtual fitting may be determined, and the garment may be used to synthesize with the body model to obtain a virtual fitting result corresponding to the target body.
Apparel includes clothing, pants, shoes, hats, glasses, and the like, although each may also have a different size. It will be appreciated that there may be many garments for virtual fitting (collectively referred to as virtual fitting garments), but not every virtual fitting garment may be synthesized with a human body model, so it is desirable to identify a virtual fitting garment for synthesis with a human body model. In this embodiment of the application, since the model parameters of the target human body are already acquired in step S204, the matching degree of each virtual fitting garment may be calculated based on the acquired model parameters of the target human body, and then the virtual fitting garment with the matching degree greater than the preset threshold value is determined as the target fitting garment.
In some other examples, the user may select the virtual fitting garment, for example, a display screen may be provided, on which multiple pieces of virtual fitting garments may be displayed, and the user may select the virtual fitting garment in manners of manipulating, touching, somatosensory controlling, and the like, and then in this example, the selected virtual fitting garment may be determined as the target fitting garment based on the selection instruction of the user.
S206. Synthesizing the human body model and the target fitting garment and presenting the fitting result.
In the embodiment of the application, after the human body model is constructed and the target fitting garment determined, the two can be synthesized to obtain the model wearing the virtual garment. The synthesized result is then presented to the user, who can judge whether the virtual garment looks good and fits well on the model.
S207. Determining whether to continue the virtual fitting.
It will be appreciated that a user who wants to try another virtual fitting garment will continue the virtual fitting. In that case, the user remains in the preset area for longer than the preset time length after the fitting result is presented. Of course, in other examples, the user may operate the display to continue the virtual fitting.
S208. If it is determined to continue the virtual fitting, re-determining the target fitting garment, synthesizing the human body model and the target fitting garment, and presenting the fitting result.
After determining to continue the virtual fitting, the target fitting garment must be determined again; this may be done in the same way as in step S205.
S209. Judging whether the virtual fitting is finished.
It can be understood that a user who no longer wants to perform virtual fitting will leave the preset area, so whether the virtual fitting is finished can be judged by counting the target human bodies in the preset area, in two ways: when the number of target human bodies in the preset area is zero, or when that number is zero and the zero state lasts for a preset time length, the virtual fitting is judged to be finished. Of course, in some examples, the user may directly control the display screen to issue an end instruction that stops the virtual fitting.
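A small sketch of this end-of-session logic follows; the class, the grace period, and the polling interface are illustrative assumptions layered on the two criteria the text names.

```python
import time

class FittingSession:
    """Ends when the target-body count stays at zero for `grace_s` seconds,
    or when an explicit end instruction arrives (hypothetical helper)."""

    def __init__(self, grace_s: float = 5.0):
        self.grace_s = grace_s
        self.zero_since = None          # when the area last became empty
        self.end_requested = False      # set by the display's end instruction

    def update(self, target_body_count: int) -> bool:
        """Call periodically; returns True once the fitting is finished."""
        now = time.time()
        if self.end_requested:
            return True
        if target_body_count == 0:
            if self.zero_since is None:
                self.zero_since = now
            return now - self.zero_since >= self.grace_s
        self.zero_since = None          # someone is back; reset the timer
        return False
```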
It should be noted that there may be several human bodies in the preset area, including people who know each other: acquaintances may enter together, one of them being the main virtual fitting subject while the others stand nearby and look on, helping the main subject decide.
S210. If the virtual fitting is finished, storing the target fitting garment and synthesized result of each virtual fitting together with the duration of each fitting, counting the number of virtual fittings of the target human body, and outputting a fitting report.
If the virtual fitting is determined to be finished, the target fitting garment and synthesized result of each virtual fitting can be stored together with the duration of each fitting, the number of virtual fittings of the target human body counted, and a fitting report output; a merchant can extract key information from the report to adjust its operating strategy.
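A minimal sketch of such a report accumulator follows; the record fields and the summary layout are illustrative assumptions, since the patent names only the stored items (garments, synthesized results, durations, fitting count).

```python
from dataclasses import dataclass, field

@dataclass
class FittingRecord:
    garment: str
    duration_s: float
    composite_path: str                  # where the synthesized image was stored

@dataclass
class FittingReport:
    records: list = field(default_factory=list)

    def add(self, garment: str, duration_s: float, composite_path: str) -> None:
        self.records.append(FittingRecord(garment, duration_s, composite_path))

    def summary(self) -> dict:
        # Fitting count, total time, and the garments tried, for the merchant.
        return {
            "fitting_count": len(self.records),
            "total_seconds": sum(r.duration_s for r in self.records),
            "garments": [r.garment for r in self.records],
        }
```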
It should be noted that, in step S204, besides building the model from the acquired parameters (height, chest circumference, waist circumference, hip circumference, etc.) of the target human body, one may instead obtain an image of the target human body and then determine, from a plurality of preset human body models, the model whose matching degree with the image meets a preset threshold. It can be understood that several human body models can be constructed in advance; the matching degree of each preset model is then calculated against the image of the target body, and the model meeting the threshold is chosen. In some examples, this determination can be realized as follows:
a spatial set $L = (l_1, l_2, \ldots, l_n)$ represents the spatial positional relationship between the individual body parts in the image of the target human body, where $l_i$ is the spatial position information of the i-th body part, including its position in the image and its attribute information (e.g., part orientation, part name).
The body model is represented by an undirected graph model $G = (V, E)$, where the vertex set $V = \{v_1, v_2, \ldots, v_n\}$ denotes n body parts of the human body and each edge $(v_i, v_j) \in E$ denotes the connection relationship between body parts $v_i$ and $v_j$.
Solving the matching relationship between the human body model and the image of the target body can be converted into finding the extremum of an energy function on the graph structure model, computed by formula one:

$$E(L) = \sum_{i=1}^{n} m_i(l_i) + \sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j) \qquad \text{(formula one)}$$

where $m_i(l_i)$ denotes the matching degree of part $v_i$ at position $l_i$ in the input image, and $d_{ij}(l_i, l_j)$ denotes the degree of deformation of the model when part $v_i$ is at position $l_i$ and part $v_j$ is at position $l_j$.
The problem is thus converted to solving for the part positions at which the energy function reaches its extremum, i.e., finding the $L$ that makes formula one reach its minimum, as shown in formula two:

$$L^* = \arg\min_{L}\left(\sum_{i=1}^{n} m_i(l_i) + \sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j)\right) \qquad \text{(formula two)}$$

Formula two expresses that the human body structure model best matching the input image is found exactly when the sum of the part matching differences $m_i$ and of the deformation degrees $d_{ij}$ of the connections between all body parts reaches its minimum; the undirected graph model satisfying formula two is taken as the human body model whose matching degree with the image of the human body meets the preset threshold. The deformation degree refers to the deviation of the relative positions of the body parts. In the formula, no parameters are set for the initial positions of the parts; the whole input image serves as the reference value range, so the human body structure model is invariant to the spatial positions of the parts.
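A brute-force sketch of this minimization over a tiny candidate set is shown below; the parts, candidate positions, and cost functions are toy assumptions (the patent does not fix how $m_i$ and $d_{ij}$ are computed, and practical solvers exploit the tree structure of the graph rather than enumerating).

```python
from itertools import product

# Toy instance: 3 parts, each with 2 candidate positions in the image.
parts = ["head", "torso", "legs"]
candidates = {
    "head":  [(50, 10), (52, 12)],
    "torso": [(50, 40), (55, 42)],
    "legs":  [(50, 80), (48, 78)],
}
edges = [("head", "torso"), ("torso", "legs")]

# Illustrative appearance cost m_i(l_i): lower = better local match.
m_scores = {(50, 10): 0.1, (52, 12): 0.4, (50, 40): 0.2,
            (55, 42): 0.3, (50, 80): 0.2, (48, 78): 0.5}

def d(pi, pj):
    """Illustrative deformation cost d_ij(l_i, l_j): penalize deviation from
    an expected ~35 px vertical offset between connected parts."""
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    return 0.01 * dx * dx + 0.01 * abs(dy - 35)

best_L, best_E = None, float("inf")
for assignment in product(*(candidates[p] for p in parts)):
    L = dict(zip(parts, assignment))
    E = sum(m_scores[L[p]] for p in parts) \
        + sum(d(L[a], L[b]) for a, b in edges)
    if E < best_E:
        best_L, best_E = L, E

print(best_L, best_E)   # the configuration minimizing formula two
```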
In other embodiments, after the cameras collect images of the target human body in the preset area (the RGB face image to be recognized), the face in the preset area may be recognized. When the virtual fitting subject is determined to be an important customer (VIP customer) of the shopping mall or store, a shopping guide can be notified to go to the virtual fitting area to help the customer choose clothes. Of course, in some examples, after the subject is determined to be an important customer, more virtual fitting garments can be provided, along with recommendations matching that customer's preferences. Specifically, recognizing the face in the preset area includes constructing a face detection and recognition model as follows:
an MTCNN model comprising 4 stages is established as the face detection and recognition model. The sub-networks of the first three stages are P-Net, R-Net, and O-Net; they detect faces in RGB images and output RGB face images. The sub-network of the last stage, F-Net, performs face recognition on the RGB face images. The output of the third-stage O-Net serves as the input of the fourth-stage sub-network.
The fourth-stage sub-network F-Net is structured as follows: the input image size is set to 224 × 224; the image passes first through a Conv2D convolution layer with 3 × 3 kernels, then an MBConv convolution layer with 3 × 3 kernels, and finally a fully connected layer. The output of the fully connected layer is the probability that the face belongs to the face feature management library (i.e., the database of important virtual fitting subjects). If the probability is greater than a set threshold TH, the face feature value corresponding to the input image is judged to be stored in the face feature management library, i.e., the customer/user corresponding to the face is an important virtual fitting subject (a VIP of the mall or store).
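A minimal PyTorch sketch of such an F-Net head follows, under the assumption that "MBConv" means the standard inverted-residual block; the channel widths and strides are illustrative, since the patent fixes only the 224 × 224 input, the 3 × 3 Conv2D and MBConv layers, and the final fully connected layer.

```python
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Inverted-residual block (assumed meaning of 'MBConv'):
    1x1 expand -> 3x3 depthwise -> 1x1 project, residual when shapes match."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

class FNet(nn.Module):
    """Sketch of the fourth-stage head: 3x3 Conv2D -> 3x3 MBConv -> FC."""
    def __init__(self, num_classes):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.SiLU())
        self.mbconv = MBConv(32, 64, stride=2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):                        # x: (B, 3, 224, 224)
        logits = self.head(self.mbconv(self.stem(x)))
        return torch.softmax(logits, dim=1)      # class probabilities p_ij

# Usage: probability that the face is in the feature library vs. threshold TH.
probs = FNet(num_classes=8)(torch.randn(1, 3, 224, 224))
is_vip = probs.max().item() > 0.9                # TH = 0.9, illustrative
```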
In one embodiment, the loss function used to train the fourth-stage model is a cross-entropy over the class probabilities (formula three):

$$\mathcal{L} = -\sum_{i} \sum_{j} s_{ij} \log p_{ij} \qquad \text{(formula three)}$$

where N is the total number of face classes in the training sample library and M is the total number of classes in the face feature management library, generally M < N; $p_{ij}$ is the output-layer probability that RGB face image i to be recognized belongs to class j of the face feature management library; and $s_{ij}$ is the image label, with $s_{ij} = 1$ if the RGB face image belongs to that class of the face feature management library and 0 otherwise. The total number of classes in the face feature management library equals the number of people in it, and the total number of face classes in the training sample library equals the number of people in the face feature management library plus the number of people outside it.
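Formula three in code, as a sketch consistent with the definitions above (the tensor shapes are assumptions):

```python
import torch

def fnet_loss(p: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Formula three as a masked cross-entropy: p holds the output-layer
    probabilities p_ij with shape (batch, classes); s holds the one-hot
    labels s_ij (1 iff image i belongs to class j of the library)."""
    eps = 1e-9                                   # avoid log(0)
    return -(s * torch.log(p + eps)).sum(dim=1).mean()
```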
An embodiment of the present application further provides a terminal, please refer to fig. 4, where the terminal includes: a processor, a memory;
the memory is for storing at least one program instruction and the processor is for implementing the method as described in the various possible embodiments above by loading and executing the at least one program instruction.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
Although the application has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings, and the present application includes all such modifications and alterations. In particular regard to the various functions performed by the above-described components, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations illustrated herein.
That is, the above embodiments are only examples of the present application and are not intended to limit its scope; all equivalent structural changes made using the contents of this specification and the drawings, such as combining technical features between embodiments or applying them directly or indirectly in other related technical fields, are included in the protection scope of the present application.

Claims (10)

1. A virtual fitting method, the method comprising:
determining that a target human body in a preset area meets a preset condition;
constructing a human body model based on the target human body;
determining a target fitting garment;
and synthesizing the human body model and the target fitting garment and presenting a fitting result.
2. The virtual fitting method of claim 1, wherein the determining that the target human body of the preset area satisfies the preset condition comprises:
determining that the attribute of the target human body in the preset area meets a preset human body attribute condition;
and/or determining that the time when the target human body is in the preset area meets a preset time threshold.
3. The virtual fitting method of claim 1, wherein after synthesizing the human body model and the target fitting garment and presenting fitting results, the virtual fitting method further comprises:
determining whether to continue virtual fitting;
if yes, re-determining the target fitting clothes, synthesizing the human body model and the target fitting clothes and presenting fitting results.
4. The virtual fitting method according to any one of claims 1 to 3, wherein after synthesizing the human body model and the target fitting garment and presenting fitting results, the virtual fitting method further comprises:
judging whether the virtual fitting is finished or not;
if yes, storing the target fitting garment and synthesized result of each virtual fitting together with the duration of each virtual fitting, counting the number of virtual fittings of the target human body, and outputting a fitting report.
5. The virtual fitting method according to claim 4, wherein the determining whether the virtual fitting is finished comprises:
when the number of the target human bodies in the preset area is zero, judging that the virtual fitting is finished;
or, when the number of the target human bodies in the preset area is zero and the zero state lasts for a preset time length, judging that the virtual fitting is finished;
or judging that the virtual fitting is finished based on the finishing instruction.
6. The virtual fitting method according to claim 1, wherein before the target human body in the predetermined area is determined to satisfy the predetermined condition, the virtual fitting method comprises:
detecting a human body in a preset area;
and judging whether the detected human body is a mirror-image human body, and determining a non-mirror-image human body as the target human body in the preset area.
7. The virtual fitting method of claim 1, wherein said constructing a human body model based on the target human body comprises:
acquiring an image of the target human body;
analyzing the image of the target human body to obtain model parameters of the target human body, wherein the model parameters comprise height, chest circumference, waist circumference and hip circumference;
and establishing a human body model according to the model parameters of the target human body.
8. The virtual fitting method of claim 7, wherein said determining a target fitting garment comprises:
determining the selected virtual fitting clothes as target fitting clothes based on the selection instruction;
and/or determining the virtual fitting clothes with the matching degree larger than a preset threshold value as the target fitting clothes based on the model parameters of the target human body.
9. The virtual fitting method of claim 1, wherein said constructing a human body model based on the target human body comprises:
acquiring an image of the human body;
and determining a human body model with the matching degree meeting a preset threshold value from a plurality of preset human body models based on the image of the human body.
10. The virtual fitting method according to claim 9, wherein the determining, from a plurality of preset human body models, a human body model having a matching degree satisfying a preset threshold based on the image of the human body comprises:
using a spatial set $L = (l_1, l_2, \ldots, l_n)$ to represent the spatial positional relationship between the individual body parts in the image of the human body, where $l_i$ represents the spatial position information of the i-th body part;
representing the human body model by an undirected graph model $G = (V, E)$, where the vertex set $V = \{v_1, v_2, \ldots, v_n\}$ represents n body parts of the human body and each edge $(v_i, v_j) \in E$ represents the connection relationship between body parts $v_i$ and $v_j$;
taking the undirected graph model that satisfies

$$L^* = \arg\min_{L}\left(\sum_{i=1}^{n} m_i(l_i) + \sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j)\right)$$

as the human body model whose matching degree with the image of the human body meets the preset threshold;
where $m_i(l_i)$ denotes the matching degree of part $v_i$ at position $l_i$ in the input image;
and $d_{ij}(l_i, l_j)$ denotes the degree of deformation of the model when part $v_i$ is at position $l_i$ and part $v_j$ is at position $l_j$.
CN202011086427.XA 2020-10-12 2020-10-12 Virtual fitting method Active CN112242007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011086427.XA CN112242007B (en) 2020-10-12 2020-10-12 Virtual fitting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011086427.XA CN112242007B (en) 2020-10-12 2020-10-12 Virtual fitting method

Publications (2)

Publication Number Publication Date
CN112242007A true CN112242007A (en) 2021-01-19
CN112242007B CN112242007B (en) 2023-06-20

Family

ID=74168845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011086427.XA Active CN112242007B (en) 2020-10-12 2020-10-12 Virtual fitting method

Country Status (1)

Country Link
CN (1) CN112242007B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004079652A1 (en) * 2003-03-07 2004-09-16 Digital Fashion Ltd. Virtual try-on display apparatus, virtual try-on display method, virtual try-on display program, and computer readable recording medium in which that program has been recorded
CN101895583A (en) * 2010-07-23 2010-11-24 西安工程大学 Internet two-dimensional fitting cooperative evaluation system and evaluation method
CN104346827A (en) * 2013-07-24 2015-02-11 深圳市华创振新科技发展有限公司 Rapid 3D clothes modeling method for common users
CN105825407A (en) * 2016-03-31 2016-08-03 上海晋荣智能科技有限公司 Virtual fitting mirror system
CN107633440A (en) * 2017-08-11 2018-01-26 捷开通讯(深圳)有限公司 The method of virtual fitting, mobile terminal and storage device for virtual fitting
CN111742350A (en) * 2018-02-21 2020-10-02 株式会社东芝 Virtual fitting system, virtual fitting method, virtual fitting program, information processing device, and learning data
CN109345337A (en) * 2018-09-14 2019-02-15 广州多维魔镜高新科技有限公司 A kind of online shopping examination method of wearing, virtual mirror, system and storage medium
CN109934613A (en) * 2019-01-16 2019-06-25 中德(珠海)人工智能研究院有限公司 A kind of virtual costume system for trying
CN109919727A (en) * 2019-03-12 2019-06-21 深圳市广德教育科技股份有限公司 A kind of 3D garment virtual ready-made clothes system
CN110991249A (en) * 2019-11-04 2020-04-10 支付宝(杭州)信息技术有限公司 Face detection method, face detection device, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴岩 (Wu Yan): "Research on Key Technologies of a Web-Based Virtual Fitting System"

Also Published As

Publication number Publication date
CN112242007B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN105447529B (en) Method and system for detecting clothes and identifying attribute value thereof
JP2020194602A (en) Search system, search method, and program
TWI559242B (en) Visual clothing retrieval
CA2734143C (en) Method and apparatus for estimating body shape
CN111787242B (en) Method and apparatus for virtual fitting
JP5439787B2 (en) Camera device
CN114663199A (en) Dynamic display real-time three-dimensional virtual fitting system and method
JP6069565B1 (en) RECOMMENDATION DEVICE, RECOMMENDATION METHOD, AND PROGRAM
CN111784845A (en) Virtual fitting method and device based on artificial intelligence, server and storage medium
JP2023095908A (en) Information processing system, information processing method, and program
CN107610239A (en) The virtual try-in method and device of a kind of types of facial makeup in Beijing operas
Shadrach et al. Smart virtual trial room for apparel industry
Hashmi et al. An augmented reality based Virtual dressing room using Haarcascades Classifier
JP2018112777A (en) Recommendation item output program, output control program, recommendation item output apparatus, recommendation item output method and output control method
JPH1185988A (en) Face image recognition system
CN112242007A (en) Virtual fitting method
JP2016062542A (en) Position conversion program and information processing device
CN114219578A (en) Unmanned garment selling method and device, terminal and storage medium
CN110620877B (en) Position information generation method, device, terminal and computer readable storage medium
CN116266408A (en) Body type estimating method, body type estimating device, storage medium and electronic equipment
CN111627118A (en) Scene portrait showing method and device, electronic equipment and storage medium
CN109393614B (en) System for measuring size of fit-measuring and clothes-cutting
Botre et al. Virtual Trial Room
CN112102018A (en) Intelligent fitting mirror implementation method and related device
JP6928984B1 (en) Product proposal system, product proposal method and product proposal program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant