CN112700306A - Virtual modeling generation method for electronic commerce - Google Patents

Virtual modeling generation method for electronic commerce

Info

Publication number
CN112700306A
CN112700306A (application number CN202110015172.6A)
Authority
CN
China
Prior art keywords
model
makeup
virtual
human body
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110015172.6A
Other languages
Chinese (zh)
Other versions
CN112700306B (en)
Inventor
黎阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dingqu Shanghai Technology Co ltd
Original Assignee
Chengdu Gaoqiao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Gaoqiao Technology Co ltd filed Critical Chengdu Gaoqiao Technology Co ltd
Priority to CN202110015172.6A priority Critical patent/CN112700306B/en
Publication of CN112700306A publication Critical patent/CN112700306A/en
Application granted granted Critical
Publication of CN112700306B publication Critical patent/CN112700306B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The invention relates to the field of big data, and discloses a virtual modeling generation method for electronic commerce, which comprises the following steps: a user terminal sends a virtual modeling request to a virtual modeling management platform. A human body construction module constructs a human body model according to the user human body image set and the human body size data in the virtual modeling request to generate a user human body model. A virtual makeup module performs virtual makeup according to the user human body model and a makeup model to obtain a first virtual modeling model; a virtual hair style module performs virtual haircutting according to the first virtual modeling model and a hair style model to obtain a second virtual modeling model; and a virtual clothes module performs virtual fitting according to the second virtual modeling model and a clothes model to obtain a user virtual modeling model, and then sends the user virtual modeling model to the user terminal.

Description

Virtual modeling generation method for electronic commerce
Technical Field
The invention relates to the fields of electronic commerce and big data, in particular to a virtual modeling generation method for electronic commerce.
Background
Big data refers to a collection of data whose content cannot be captured, managed and processed within an acceptable time using conventional software tools. Big data technology refers to the ability to quickly obtain valuable information from a wide variety of types of data. Technologies suited to big data include massively parallel processing databases, distributed file systems, distributed databases, cloud computing platforms, the Internet, and scalable storage systems.
Electronic commerce generally refers to a novel business operation mode in which, within worldwide commercial and trade activities and in an open Internet environment, buyers and sellers conduct commercial and trade activities without meeting in person, based on the browser/server application mode, realizing online shopping by consumers, online transactions and online electronic payment among merchants, as well as various commercial activities, trading activities, financial activities and related comprehensive service activities.
In the prior art, however, a user generally observes a commodity only by browsing its product display pictures, and therefore does not gain a more intuitive feeling of and experience with the commodity.
Disclosure of Invention
In view of the above, the present invention provides a virtual modeling generation method for electronic commerce, comprising:
a user terminal sends a virtual modeling request to a virtual modeling management platform; the virtual modeling request includes: a user number, a user human body image set, human body size data and modeling selection data;
a human body construction module of the virtual modeling management platform acquires a first model point set according to a user human body image set, and denoises all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point, obtains a radiation model point according to the central model point, and then performs plane fitting on the central model point and the radiation model point to obtain a model sub-plane;
the human body construction module projects the central model point in the model sub-plane to the tangent plane of the central model point, and projects the radiation model point to the tangent plane of the radiation model point to obtain the connection relation between each radiation model point and the central model point in the model sub-plane;
the human body construction module maps the model sub-plane to a three-dimensional space according to the connection relation between each radial model point and the central model point in the model sub-plane to obtain a human body sub-model corresponding to the model sub-plane;
the human body construction module carries out model merging treatment on each human body sub-model to obtain an initial human body model, and carries out size proportion correction on the initial human body model according to human body size data to obtain a user human body model;
the virtual makeup module acquires a corresponding makeup model from the database according to the makeup selection data, and performs virtual makeup according to the human body model and the makeup model of the user to obtain a first virtual modeling model;
the virtual hair style module acquires a corresponding hair style model from the database according to the hair style selection data, and performs virtual haircutting according to the first virtual modeling model and the hair style model to obtain a second virtual modeling model;
the virtual clothes module obtains a corresponding clothes model from the database according to the clothes selection data, performs virtual fitting according to the second virtual modeling model and the clothes model to obtain a user virtual modeling model, and then sends the user virtual modeling model to the user terminal.
According to a preferred embodiment, the virtual modeling request comprises: a user number, a user human body image set, human body size data, and modeling selection data. The user human body image set comprises a plurality of user human body images shot from different angles. The user number is used for uniquely identifying the user. The human body size data includes: height, weight, waist, hip and chest. The modeling selection data includes: makeup selection data, hair style selection data and clothes selection data, and is generated according to the makeup, hair style and clothes which the user selects to try on through the user terminal.
According to a preferred embodiment, the user terminal is a device with data transmission and image acquisition functions, and comprises a smart phone, a tablet computer, a notebook computer and a desktop computer. The user human body model is used for accurately displaying the current appearance and figure of the user. The first virtual modeling model is the user human body model after virtual makeup; the second virtual modeling model is the user human body model after virtual makeup and virtual haircut; the user virtual modeling model is the user human body model after virtual makeup, virtual haircut and virtual fitting, and is used for displaying the overall modeling effect of the user after virtual makeup, virtual haircut and virtual fitting.
According to a preferred embodiment, the human body construction module acquiring the radiation model points comprises:
the human body construction module acquires all first model points according to the user human body image set and acquires a first model point set according to all the first model points;
the human body construction module carries out denoising processing on all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point and obtains the distance between the central model point and the first number of surrounding second model points;
and the human body construction module sorts the distances between the first number of second model points and the central model point in ascending order, and selects the second number of second model points with the smallest distances as the radiation model points.
According to a preferred embodiment, the step of performing plane fitting on the central model point and the radial model point by the human body construction module to obtain a model sub-plane comprises the following steps:
the human body construction module connects the central model point with each radial model point to obtain a model subregion taking the central model point as a center;
the human body construction module performs plane fitting on the central model point and the radiation model point in the model sub-region to obtain a model sub-plane;
the body construction module maps the central model point and the radial model point to corresponding model sub-planes.
According to a preferred embodiment, the human body model building module performing size proportion correction on the initial human body model according to the human body size data to obtain the user human body model comprises:
the human body model building module obtains size matching points in the initial human body model and determines the projection direction of each size matching point;
the human body model building module traverses all size matching points, takes the size matching points which are being traversed as central matching points, and then connects the central matching points with a plurality of size matching points around the central matching points respectively to generate a size matching plane of the central matching points;
the human body model building module obtains the normal vector projection direction of each size matching plane, and generates a size correction matrix according to the normal vector projection direction of each size matching plane and corresponding human body size data, wherein the size matching plane comprises: a body length plane, a waist plane, a hip plane and a chest plane.
And the human body model building module corrects the size proportion of the initial human body model according to the size correction matrix to obtain the user human body model.
According to a preferred embodiment, the virtual makeup module performing virtual makeup according to the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires all first makeup key points of the human body model of the user and obtains a first makeup key point set according to all the first makeup key points;
the virtual makeup module acquires all second makeup key points of the makeup model, and obtains a second makeup key point set according to all the second makeup key points;
the virtual makeup module acquires the coordinate average value of all the first makeup key points in the first makeup key point set under the first coordinate system, and takes the coordinate average value as the first coordinate average value;
and the virtual makeup module acquires the coordinate average value of all the second makeup key points in the second makeup key point set in the second coordinate system, and takes the coordinate average value as the second coordinate average value.
According to a preferred embodiment, the virtual makeup module performing virtual makeup according to the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires a difference value between a coordinate value of each first makeup key point in the user human body model under the first coordinate system and a first coordinate average value;
the virtual makeup module acquires a difference value between a coordinate value of each second makeup key point in the makeup model in a second coordinate system and a second coordinate average value;
the virtual makeup module obtains a first makeup matrix according to the difference value between the coordinate value of each first makeup key point in the user human body model in the first coordinate system and the average value of the first coordinates and the difference value between the coordinate value of each second makeup key point in the makeup model in the second coordinate system and the average value of the second coordinates;
and the virtual makeup module maps the human body model and the makeup model of the user into a standard coordinate system according to the first makeup matrix, and rotates, translates and scales the human body model and the makeup model to obtain an initial matching model.
According to a preferred embodiment, the virtual makeup module performing virtual makeup according to the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires all first makeup core points of the user human body model, and obtains a first makeup core point set according to all the first makeup core points;
the virtual makeup module acquires an average value of all the first makeup core points in the first makeup core point set and takes the average value as a first core average value;
the virtual makeup module acquires all second makeup core points of the makeup model, and obtains a second makeup core point set according to all the second makeup core points;
the virtual makeup module obtains an average value of all the second cosmetic core points in the second cosmetic core point set and takes it as a second core average value.
According to a preferred embodiment, the virtual makeup module performing virtual makeup according to the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module constructs a second makeup matrix according to the first core average value, the second core average value, the first makeup core point set and the second makeup core point set;
and the virtual makeup module performs rotation, translation and scaling on the initial matching model according to the second makeup matrix to obtain the first virtual modeling model.
The invention has the following beneficial effects:
according to the invention, the user human body model is created through the user human body image set and the human body size data, so that the constructed user human body model is more fit with the figure and the appearance of the user. In addition, the invention respectively fuses the user human body model with the makeup model, the hair style model and the clothing model selected by the user so as to visually see the whole effect of the makeup, the hair style and the clothing selected by the user on the user, thereby improving the user experience.
Drawings
FIG. 1 is a flowchart illustrating a virtual modeling generation method for electronic commerce, according to an exemplary embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention.
As shown in FIG. 1, in one embodiment, the virtual modeling generation method for electronic commerce of the present invention may comprise:
S1, the user terminal collects a plurality of user human body images of the user to generate a user human body image set, and generates a virtual modeling request according to the user human body image set, the human body size data and the modeling selection data. The user terminal then sends the virtual modeling request to the virtual modeling management platform.
Specifically, the virtual modeling request includes: a user number, a user human body image set, human body size data, and modeling selection data. The user human body image set comprises a plurality of user human body images shot from different angles.
The user number is used to uniquely identify the user. The human body size data is human body figure information and includes: height, weight, waist, hip and chest. The modeling selection data is generated according to the makeup, hair style and clothes which the user selects to try on through the user terminal, and comprises makeup selection data, hair style selection data and clothes selection data.
S2, a human body construction module of the virtual modeling management platform acquires a first model point set according to the user human body image set, and denoises all first model points in the first model point set to obtain a second model point set; and randomly selecting a second model point from the second model point set as a central model point, obtaining a radiation model point according to the central model point, and performing plane fitting on the central model point and the radiation model point to obtain a model sub-plane.
In one embodiment, the denoising all the first model points in the first model point set by the human body construction module to obtain the second model point set includes:
the human body construction module obtains the average distance between each first model point and a third number of first model points closest to the first model point so as to obtain the standard deviation of the average distance of each first model point;
the human body construction module calculates a distance threshold value of each first model point according to the standard deviation and the average distance of each first model point;
and the human body construction module compares the average distance of each first model point with the distance threshold corresponding to the first model point, and deletes the first model point when the average distance of the first model point is greater than the corresponding distance threshold.
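The denoising described above follows the pattern of statistical outlier removal on a point cloud. Below is a minimal sketch of that idea in Python/NumPy; the neighbourhood size (the third number) and the standard-deviation multiplier are assumed tunable parameters, and the per-point threshold of the embodiment is approximated here by the common global-threshold variant:

```python
import numpy as np

def remove_outlier_points(points, k=8, std_ratio=1.0):
    """Drop first model points whose mean distance to their k nearest
    neighbours is larger than (mean + std_ratio * std) over the cloud."""
    # Pairwise distances; fine for small clouds, use a KD-tree for large ones.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)

    # Average distance of each point to its k nearest neighbours.
    mean_knn = np.sort(dists, axis=1)[:, :k].mean(axis=1)

    # Threshold derived from the average distances and their standard deviation.
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]   # the second model point set
```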
In one embodiment, the human body construction module acquiring the radiation model points comprises:
the human body construction module acquires all first model points according to the user human body image set and acquires a first model point set according to all the first model points;
the human body construction module carries out denoising processing on all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point and obtains the distance between the central model point and the first number of surrounding second model points;
and the human body construction module sorts the distances between the first number of second model points and the central model point in ascending order, and selects the second number of second model points with the smallest distances as the radiation model points.
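One plausible reading of this radiation-point selection, with the first number (size of the surrounding candidate set) and the second number (points kept) treated as preset integers; the function name and default values are illustrative only:

```python
import numpy as np

def select_radiation_points(points, center_idx, first_number=20, second_number=8):
    """Return the radiation model points for the chosen central model point."""
    dists = np.linalg.norm(points - points[center_idx], axis=1)
    dists[center_idx] = np.inf                # exclude the centre itself
    order = np.argsort(dists)[:first_number]  # first_number surrounding points, nearest first
    return points[order[:second_number]]      # keep the second_number closest
```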
In one embodiment, the plane fitting the central model point and the radial model point by the human body construction module to obtain a model sub-plane comprises:
the human body construction module connects the central model point with each radial model point to obtain a model subregion taking the central model point as a center;
the human body construction module performs plane fitting on the central model point and the radiation model point in the model sub-region to obtain a model sub-plane;
the body construction module maps the central model point and the radial model point to corresponding model sub-planes.
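The patent does not say which fitting method is used; a common choice is a least-squares plane obtained from the singular value decomposition of the centred points, sketched below under that assumption:

```python
import numpy as np

def fit_model_subplane(center_point, radiation_points):
    """Least-squares plane through the central model point and its radiation
    model points; returns the plane centroid and unit normal."""
    pts = np.vstack([center_point[None, :], radiation_points])
    centroid = pts.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    return centroid, normal
```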
The first number, the second number and the third number are preset according to actual conditions, and the user human body model is used for accurately displaying the current appearance and figure of the user.
S3, the human body construction module projects the central model point in the model sub-plane to the tangent plane of the central model point, and projects the radiation model point to the tangent plane of the radiation model point to obtain the connection relation between each radiation model point and the central model point in the model sub-plane. And mapping the model sub-plane to a three-dimensional space according to the connection relation between each radial model point and the central model point in the model sub-plane to obtain the human body sub-model corresponding to the model sub-plane. And carrying out model combination treatment on each human body submodel to obtain an initial human body model, and carrying out size proportion correction on the initial human body model according to human body size data to obtain the user human body model.
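Projecting a model point onto a tangent plane is the standard orthogonal projection; a small sketch, assuming the tangent plane is given by a point and a unit normal (for instance the fitted sub-plane normal, which the text does not specify explicitly):

```python
import numpy as np

def project_to_tangent_plane(point, plane_point, normal):
    """Orthogonally project `point` onto the plane through `plane_point`
    with unit normal `normal`."""
    normal = normal / np.linalg.norm(normal)
    return point - np.dot(point - plane_point, normal) * normal
```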
In another embodiment, the obtaining the user human body model by the human body model building module performing size scale correction on the initial human body model according to the human body size data comprises:
the human body model building module obtains size matching points in the initial human body model and determines the projection direction of each size matching point;
the human body model building module traverses all size matching points, takes the size matching points which are being traversed as central matching points, and then connects the central matching points with a plurality of size matching points around the central matching points respectively to generate a size matching plane of the central matching points;
the human body model building module obtains the normal vector projection direction of each size matching plane, and generates a size correction matrix according to the normal vector projection direction of each size matching plane and corresponding human body size data, wherein the size matching plane comprises: a body length plane, a waist plane, a hip plane and a chest plane.
And the human body model building module corrects the size proportion of the initial human body model according to the size correction matrix to obtain the user human body model.
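The exact form of the size correction matrix is not given. One hedged reading is a diagonal scale matrix built from the ratios of the user-supplied dimensions to the dimensions measured on the initial model for each size matching plane; the dictionary keys, the omission of weight, and the averaging of the girths below are assumptions for illustration:

```python
import numpy as np

def size_correction_matrix(measured, target):
    """Diagonal 4x4 scale matrix from measured vs. target body dimensions,
    e.g. {'height': ..., 'waist': ..., 'hip': ..., 'chest': ...}."""
    s_y = target["height"] / measured["height"]                       # vertical scale
    s_xz = np.mean([target[k] / measured[k] for k in ("waist", "hip", "chest")])
    return np.diag([s_xz, s_y, s_xz, 1.0])   # applied to homogeneous vertices

# e.g. corrected = (size_correction_matrix(m, t) @ verts_homogeneous.T).T
```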
S4, the virtual makeup module obtains a corresponding makeup model from the database according to the makeup selection data, and performs virtual makeup according to the human body model and the makeup model of the user to obtain a first virtual modeling model.
In one embodiment, the virtual makeup module performing virtual makeup based on the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires all first makeup key points of the human body model of the user and obtains a first makeup key point set according to all the first makeup key points;
the virtual makeup module acquires all second makeup key points of the makeup model, and obtains a second makeup key point set according to all the second makeup key points;
the virtual makeup module acquires the coordinate average value of all the first makeup key points in the first makeup key point set under the first coordinate system, and takes the coordinate average value as the first coordinate average value;
and the virtual makeup module acquires the coordinate average value of all the second makeup key points in the second makeup key point set in the second coordinate system, and takes the coordinate average value as the second coordinate average value.
In another embodiment, the virtual makeup module performing virtual makeup based on the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires a difference value between a coordinate value of each first makeup key point in the user human body model under the first coordinate system and a first coordinate average value;
the virtual makeup module acquires a difference value between a coordinate value of each second makeup key point in the makeup model in a second coordinate system and a second coordinate average value;
the virtual makeup module obtains a first makeup matrix according to the difference value between the coordinate value of each first makeup key point in the user human body model in the first coordinate system and the average value of the first coordinates and the difference value between the coordinate value of each second makeup key point in the makeup model in the second coordinate system and the average value of the second coordinates;
and the virtual makeup module maps the human body model and the makeup model of the user into a standard coordinate system according to the first makeup matrix, and rotates, translates and scales the human body model and the makeup model to obtain an initial matching model.
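Taken together, the steps above amount to centring both key-point sets on their coordinate averages and bringing them into one coordinate system. A minimal sketch of that coarse alignment follows; the scale estimate from the point spread is one common choice and is not prescribed by the text. In this reading, the rotation component of the first makeup matrix would be recovered from the centred differences by the same kind of decomposition shown later for the second makeup matrix.

```python
import numpy as np

def coarse_align(body_keypoints, makeup_keypoints):
    """Centre both key-point sets on their means (the first and second
    coordinate averages) and roughly match their scales."""
    body_c = body_keypoints - body_keypoints.mean(axis=0)
    makeup_c = makeup_keypoints - makeup_keypoints.mean(axis=0)
    scale = np.linalg.norm(body_c) / np.linalg.norm(makeup_c)
    return body_c, makeup_c * scale           # initial matching in a common frame
```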
In one embodiment, the virtual makeup module performing virtual makeup based on the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module acquires all first makeup core points of the user human body model, and obtains a first makeup core point set according to all the first makeup core points;
the virtual makeup module acquires an average value of all the first makeup core points in the first makeup core point set and takes the average value as a first core average value;
the virtual makeup module acquires all second makeup core points of the makeup model, and obtains a second makeup core point set according to all the second makeup core points;
the virtual makeup module obtains an average value of all the second cosmetic core points in the second cosmetic core point set and takes it as a second core average value.
In one embodiment, the virtual makeup module performing virtual makeup based on the user human body model and the makeup model to obtain a first virtual model includes:
the virtual makeup module constructs a second makeup matrix according to the first core average value, the second core average value, the first makeup core point set and the second makeup core point set;
and the virtual makeup module performs rotation, translation and scaling on the initial matching model according to the second makeup matrix to obtain the first virtual modeling model.
Optionally, the makeup model is a makeup model corresponding to makeup selection data acquired from a database according to the makeup selection data of the user. The first makeup key point set comprises a plurality of first makeup key points, and the first makeup key points are key points of the face of the user when virtual rough makeup is performed on the human body model of the user, such as relevant point positions of the nose, the lips and the eyes. The second set of cosmetic keypoints comprises a number of second cosmetic keypoints. The second cosmetic keypoints are the points in the cosmetic model that correspond to the first cosmetic keypoints.
The first coordinate system is a coordinate system based on the user human body model. The second coordinate system is a coordinate system based on the makeup model. The standard coordinate system is a coordinate system when the first cosmetic keypoint set and the second cosmetic keypoint set are mapped to the same coordinate system or a coordinate system based on the earth.
The first cosmetic matrix is used to rotate, translate and scale the user mannequin and the makeup model to obtain an initial matching model. The second cosmetic matrix is used for rotating, translating and scaling the initial matching model to obtain the first virtual modeling model. The initial matching model is the user human body model after virtual rough makeup is carried out on the user human body model.
The first makeup core point set comprises a plurality of first makeup core points. The first makeup core points are the facial core points of the user human body model after virtual rough makeup, that is, of the initial matching model, used when the initial matching model undergoes virtual makeup correction; they include the points of the eyes, nose and lips.
The second makeup core point set comprises a plurality of second makeup core points, and the second makeup core points are the points in the makeup model corresponding to the first makeup core points. The first virtual modeling model is the user human body model after virtual makeup, that is, the user human body model obtained by performing makeup correction on the initial matching model.
In one embodiment, the virtual makeup module constructs the second makeup matrix W from the first core average, the second core average, the first makeup core point set and the second makeup core point set according to a formula of the form:

W = \sum_{i=1}^{n} (g_i - \bar{g})(h_i - \bar{h})^{T}

wherein n is the number of first makeup core points in the first makeup core point set, i is the index over the first makeup core points, g_i is the value of the i-th first makeup core point, \bar{g} is the first core average, h_i is the value of the i-th second makeup core point, and \bar{h} is the second core average.
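Constructed this way, the second makeup matrix is the cross-covariance of the centred core-point sets, which is the quantity used in SVD-based rigid registration (the Kabsch/Umeyama approach). A hedged sketch of how a rotation and translation correcting the initial matching model could be recovered from it; the function name and the reflection guard are illustrative, not taken from the patent:

```python
import numpy as np

def refine_with_second_makeup_matrix(body_core, makeup_core):
    """Estimate a rotation R and translation t mapping the makeup core points
    onto the body core points via the cross-covariance (second makeup) matrix."""
    g_bar = body_core.mean(axis=0)                        # first core average
    h_bar = makeup_core.mean(axis=0)                      # second core average
    W = (body_core - g_bar).T @ (makeup_core - h_bar)     # sum_i (g_i - g_bar)(h_i - h_bar)^T
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    t = g_bar - R @ h_bar
    return R, t        # apply as: corrected = (R @ makeup_core.T).T + t
```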
S5, the virtual hair style module obtains a corresponding hair style model from the database according to the hair style selection data, and performs virtual haircutting according to the first virtual modeling model and the hair style model to obtain a second virtual modeling model.
Optionally, the second virtual modeling model is the user human body model after virtual makeup and virtual haircut. The hair style model is the hair style model corresponding to the hair style selection data, acquired from the database according to the hair style selection data of the user.
S6, the virtual clothes module obtains a corresponding clothes model from the database according to the clothes selection data, performs virtual fitting according to the second virtual modeling model and the clothes model to obtain a user virtual modeling model, and then sends the user virtual modeling model to the user terminal.
Optionally, the user virtual modeling model is the user human body model after virtual makeup, virtual haircut and virtual fitting. The clothes model is the clothes model corresponding to the clothes selection data, acquired from the database according to the clothes selection data of the user.
The user human body model is used for accurately displaying the current appearance and figure of the user. The first virtual modeling model is the user human body model after virtual makeup; the second virtual modeling model is the user human body model after virtual makeup and virtual haircut; the user virtual modeling model is the user human body model after virtual makeup, virtual haircut and virtual fitting, and is used for displaying the overall modeling effect of the user after virtual makeup, virtual haircut and virtual fitting.
According to the invention, the user human body model is created through the user human body image set and the human body size data, so that the constructed user human body model is more fit with the figure and the appearance of the user. In addition, the invention respectively fuses the user human body model with the makeup model, the hair style model and the clothing model selected by the user so as to visually see the whole effect of the makeup, the hair style and the clothing selected by the user on the user, thereby improving the user experience.
In one embodiment, an e-commerce virtual build simulation system for performing the method of the present invention may include user terminals and a virtual build management platform, wherein the virtual build management platform has a communication link with each of the user terminals. The user terminal is equipment with data transmission and image acquisition functions, and comprises a smart phone, a tablet computer, a notebook computer and a desktop computer.
The virtual modeling management platform includes: the system comprises a human body construction module, a virtual makeup module, a virtual hair style module, a virtual clothes module and a database, wherein the modules are in communication connection.
The user terminal sends a virtual modeling request to the virtual modeling management platform. The virtual modeling request includes: a user number, a user human body image set, human body size data and modeling selection data;
the human body construction module acquires a first model point set according to the user human body image set, and denoises all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point, obtains a radiation model point according to the central model point, and then performs plane fitting on the central model point and the radiation model point to obtain a model sub-plane;
the human body construction module projects the central model point in the model sub-plane to the tangent plane of the central model point, and projects the radiation model point to the tangent plane of the radiation model point to obtain the connection relation between each radiation model point and the central model point in the model sub-plane;
the human body construction module maps the model sub-plane to a three-dimensional space according to the connection relation between each radial model point and the central model point in the model sub-plane to obtain a human body sub-model corresponding to the model sub-plane;
the human body construction module carries out model merging treatment on each human body sub-model to obtain an initial human body model, and carries out size proportion correction on the initial human body model according to human body size data to obtain a user human body model;
the virtual makeup module acquires a corresponding makeup model from the database according to the makeup selection data, and performs virtual makeup according to the human body model and the makeup model of the user to obtain a first virtual modeling model;
the virtual hair style module acquires a corresponding hair style model from the database according to the hair style selection data, and performs virtual haircutting according to the first virtual modeling model and the hair style model to obtain a second virtual modeling model;
the virtual clothes module obtains a corresponding clothes model from the database according to the clothes selection data, performs virtual fitting according to the second virtual modeling model and the clothes model to obtain a user virtual modeling model, and then sends the user virtual modeling model to the user terminal.
It should be noted that although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be separated into multiple modules and/or at least some of the functions of multiple modules may be combined into a single module. Additionally, a particular module performing an action discussed herein includes the particular module itself performing the action, or alternatively the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module that performs an action can include the particular module that performs the action itself and/or another module that the particular module that performs the action calls or otherwise accesses.
The above are merely examples of the present invention, and are not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A virtual modeling generation method for electronic commerce, characterized in that a user terminal sends a virtual modeling request to a virtual modeling management platform; the virtual modeling request includes: a user number, a user human body image set, human body size data and modeling selection data;
a human body construction module of the virtual modeling management platform acquires a first model point set according to a user human body image set, and denoises all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point, obtains a radiation model point according to the central model point, and then performs plane fitting on the central model point and the radiation model point to obtain a model sub-plane;
the human body construction module projects the central model point in the model sub-plane to the tangent plane of the central model point, and projects the radiation model point to the tangent plane of the radiation model point to obtain the connection relation between each radiation model point and the central model point in the model sub-plane;
the human body construction module maps the model sub-plane to a three-dimensional space according to the connection relation between each radial model point and the central model point in the model sub-plane to obtain a human body sub-model corresponding to the model sub-plane;
the human body construction module carries out model merging treatment on each human body sub-model to obtain an initial human body model, and carries out size proportion correction on the initial human body model according to human body size data to obtain a user human body model;
the virtual makeup module acquires a corresponding makeup model from the database according to the makeup selection data, and performs virtual makeup according to the human body model and the makeup model of the user to obtain a first virtual modeling model;
the virtual hair style module acquires a corresponding hair style model from the database according to the hair style selection data, and performs virtual haircutting according to the first virtual modeling model and the hair style model to obtain a second virtual modeling model;
the virtual clothes module obtains a corresponding clothes model from the database according to the clothes selection data, performs virtual fitting according to the second virtual modeling model and the clothes model to obtain a user virtual modeling model, and then sends the user virtual modeling model to the user terminal.
2. The method of claim 1, wherein the body dimension data comprises: height, weight, waist, hip and chest;
the style selection data comprises makeup selection data, hair style selection data and clothing selection data;
the user human body image set comprises a plurality of user human body images shot from different angles; the user number is used for uniquely identifying the user.
3. The method of claim 2, wherein the body construction module acquiring the radiological model points comprises:
the human body construction module acquires all first model points according to the user human body image set and acquires a first model point set according to all the first model points;
the human body construction module carries out denoising processing on all first model points in the first model point set to obtain a second model point set;
the human body construction module randomly selects a second model point from the second model point set as a central model point and obtains the distance between the central model point and the first number of surrounding second model points;
and the human body construction module sorts the distances between the first number of second model points and the central model point in ascending order, and selects the second number of second model points with the smallest distances as the radiation model points.
4. The method of claim 3, wherein the human body construction module performing plane fitting on the central model points and the radial model points to obtain a model sub-plane comprises:
the human body construction module connects the central model point with each radial model point to obtain a model subregion taking the central model point as a center;
the human body construction module performs plane fitting on the central model point and the radiation model point in the model sub-region to obtain a model sub-plane;
the body construction module maps the central model point and the radial model point to corresponding model sub-planes.
5. The method of claim 4, wherein the human body model building module performing size scale correction on the initial human body model according to the human body size data to obtain the user human body model comprises:
the human body model building module obtains size matching points in the initial human body model and determines the projection direction of each size matching point;
the human body model building module traverses all size matching points, takes the size matching points which are being traversed as central matching points, and then connects the central matching points with a plurality of size matching points around the central matching points respectively to generate a size matching plane of the central matching points;
the human body model building module obtains the normal vector projection direction of each size matching plane, and generates a size correction matrix according to the normal vector projection direction of each size matching plane and corresponding human body size data, wherein the size matching plane comprises: a body length plane, a waist plane, a hip plane and a chest plane;
and the human body model building module corrects the size proportion of the initial human body model according to the size correction matrix to obtain the user human body model.
6. The method of claim 5, wherein the virtual makeup module performing virtual makeup based on the user's manikin and the makeup model to obtain a first virtual model comprises:
the virtual makeup module acquires all first makeup key points of the human body model of the user and obtains a first makeup key point set according to all the first makeup key points;
the virtual makeup module acquires all second makeup key points of the makeup model, and obtains a second makeup key point set according to all the second makeup key points;
the virtual makeup module acquires the coordinate average value of all the first makeup key points in the first makeup key point set under the first coordinate system, and takes the coordinate average value as the first coordinate average value;
and the virtual makeup module acquires the coordinate average value of all the second makeup key points in the second makeup key point set in the second coordinate system, and takes the coordinate average value as the second coordinate average value.
7. The method of claim 6, wherein the virtual makeup module performing virtual makeup based on the user's manikin and the makeup model to obtain a first virtual model comprises:
the virtual makeup module acquires a difference value between a coordinate value of each first makeup key point in the user human body model under the first coordinate system and a first coordinate average value;
the virtual makeup module acquires a difference value between a coordinate value of each second makeup key point in the makeup model in a second coordinate system and a second coordinate average value;
the virtual makeup module obtains a first makeup matrix according to the difference value between the coordinate value of each first makeup key point in the user human body model in the first coordinate system and the average value of the first coordinates and the difference value between the coordinate value of each second makeup key point in the makeup model in the second coordinate system and the average value of the second coordinates;
and the virtual makeup module maps the human body model and the makeup model of the user into a standard coordinate system according to the first makeup matrix, and rotates, translates and scales the human body model and the makeup model to obtain an initial matching model.
8. The method of claim 7, wherein the virtual makeup module performing virtual makeup based on the user mannequin and the makeup model to obtain a first virtual model comprises:
the virtual makeup module acquires all first makeup core points of the user human body model, and obtains a first makeup core point set according to all the first makeup core points;
the virtual makeup module acquires an average value of all the first makeup core points in the first makeup core point set and takes the average value as a first core average value;
the virtual makeup module acquires all second makeup core points of the makeup model, and obtains a second makeup core point set according to all the second makeup core points;
the virtual makeup module obtains an average value of all the second cosmetic core points in the second cosmetic core point set and takes it as a second core average value.
9. The method according to any one of claims 1 to 8, wherein the virtual makeup module performing virtual makeup based on the user's manikin and the makeup model to obtain a first virtual model comprises:
the virtual makeup module constructs a second makeup matrix according to the first core average value, the second core average value, the first makeup core point set and the second makeup core point set;
and the virtual makeup module performs rotation, translation and scaling on the initial matching model according to the second makeup matrix to obtain the first virtual modeling model.
10. The method according to any one of claims 1 to 9, wherein the first virtual modeling model is the user human body model after virtual makeup;
the second virtual modeling model is the user human body model after virtual makeup and virtual haircut;
the user virtual modeling model is the user human body model after virtual makeup, virtual haircut and virtual fitting, and is used for accurately displaying the current appearance and figure of the user.
CN202110015172.6A 2021-01-06 2021-01-06 Virtual modeling generation method for electronic commerce Active CN112700306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110015172.6A CN112700306B (en) 2021-01-06 2021-01-06 Virtual modeling generation method for electronic commerce

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110015172.6A CN112700306B (en) 2021-01-06 2021-01-06 Virtual modeling generation method for electronic commerce

Publications (2)

Publication Number Publication Date
CN112700306A true CN112700306A (en) 2021-04-23
CN112700306B CN112700306B (en) 2022-11-11

Family

ID=75514918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110015172.6A Active CN112700306B (en) 2021-01-06 2021-01-06 Virtual modeling generation method for electronic commerce

Country Status (1)

Country Link
CN (1) CN112700306B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992464A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Virtual garment try-on system and garment try-on method
US20160092605A1 (en) * 2014-09-25 2016-03-31 Flatfab Inc. System and method for generating planar section 3d shape representations
CN108305312A (en) * 2017-01-23 2018-07-20 腾讯科技(深圳)有限公司 The generation method and device of 3D virtual images
CN109919735A (en) * 2019-03-19 2019-06-21 江苏皓之睿数字科技有限公司 A kind of 3D simulation dress ornament matching method and system based on mobile terminal

Also Published As

Publication number Publication date
CN112700306B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
KR102346320B1 (en) Fast 3d model fitting and anthropometrics
CN110662484B (en) System and method for whole body measurement extraction
US11321769B2 (en) System and method for automatically generating three-dimensional virtual garment model using product description
JP2019102068A (en) Recommendation system based on user's physical features
Verwulgen et al. A new data structure and workflow for using 3D anthropometry in the design of wearable products
CN112102480B (en) Image data processing method, apparatus, device and medium
CN114067057A (en) Human body reconstruction method, model and device based on attention mechanism
CN111815768B (en) Three-dimensional face reconstruction method and device
CN108537887A (en) Sketch based on 3D printing and model library 3-D view matching process
CN111653175B (en) Virtual sand table display method and device
CN112686733B (en) E-commerce product simulation system based on big data
CN111651055A (en) City virtual sand table display method and device, computer equipment and storage medium
CN114202597A (en) Image processing method and apparatus, device, medium, and product
CN112700306B (en) Virtual modeling generation method for electronic commerce
Liu et al. Three-dimensional cartoon facial animation based on art rules
CN115393487B (en) Virtual character model processing method and device, electronic equipment and storage medium
CN116402676A (en) Modeling method, device, equipment and storage medium for game character skin
Alemany et al. Three-dimensional body shape modeling and posturography
CN113610958A (en) 3D image construction method and device based on style migration and terminal
CN114581288A (en) Image generation method and device, electronic equipment and storage medium
CN107194980A (en) Faceform's construction method, device and electronic equipment
Alemany et al. 3D body modelling and applications
Scataglini et al. Using 3D statistical shape models for designing smart clothing
JPWO2020208836A1 (en) 3D model generation method and 3D model generation program
KR102467295B1 (en) Apparel wearing system based on face application, and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220928

Address after: No. 20, Group 3, Jinxing Wumianshan Village, Nanlong Town, Nanbu County, Nanchong City, Sichuan Province, 637300

Applicant after: Li Yang

Address before: 611130 room 14, 16 / F, unit 1, building 12, 1818, section 3, Guanghua Avenue, Wenjiang District, Chengdu City, Sichuan Province

Applicant before: Chengdu Gaoqiao Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20221026

Address after: 2800 Wanyuan Road, Minhang District, Shanghai 201103

Applicant after: Dingqu (Shanghai) Technology Co.,Ltd.

Address before: No. 20, Group 3, Jinxing Wumianshan Village, Nanlong Town, Nanbu County, Nanchong City, Sichuan Province, 637300

Applicant before: Li Yang

GR01 Patent grant