CN114004669A - Data processing method, device and computer readable storage medium - Google Patents

Data processing method, device and computer readable storage medium

Info

Publication number
CN114004669A
CN114004669A
Authority
CN
China
Prior art keywords
human body
user
target
data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111171902.8A
Other languages
Chinese (zh)
Inventor
Wang Fang (王芳)
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd
Priority to CN202111171902.8A
Publication of CN114004669A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0641: Shopping interfaces
    • G06Q30/0643: Graphical representation of items or shoppers
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The embodiment of the application discloses a data processing method, a data processing device and a computer-readable storage medium. Human body image data of a user are acquired; human body coordinate data of the user are obtained based on the human body image data; a three-dimensional human body model of the user is constructed according to the human body coordinate data; a target image is segmented out of the human body image data and rendered into the three-dimensional human body model; target clothes selected by the user and the corresponding size data are received; and the target clothes are overlaid on the rendered three-dimensional human body model according to the size data to obtain a target human body model. By constructing a three-dimensional human body model of the user, rendering it with the user's own target image, and overlaying the target clothes on the model according to the size data the user selected, the wearing effect of the target clothes on the user is displayed realistically, data processing efficiency is improved, and the image display effect is further improved.

Description

Data processing method, device and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data processing method, apparatus, and computer-readable storage medium.
Background
With the development of internet technology, convenient and fast online shopping has become a popular shopping mode. When a user selects clothes online, the most important question is whether the clothes will suit them; however, the user only finds out after purchasing and receiving the goods, and an unsuitable item triggers a return-and-refund process. This inconveniences both the user and the merchant and costs time and money. A smart fitting mirror can alleviate this problem.
At present, no mature intelligent fitting mirror solution has been realized on the market. Most target users of existing intelligent fitting mirrors are customers of venues such as shopping malls, high-end hotels and private tailoring studios; the few solutions aimed at ordinary consumers are limited to simulated cartoon characters, so the image display effect is poor.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device and a computer-readable storage medium, which can realistically display the effect of clothing worn by a user, improve data processing efficiency, and further improve the image display effect.
An embodiment of the present application provides a data processing method, including:
collecting human body image data of a user;
acquiring human body coordinate data of a user based on the human body image data;
constructing a three-dimensional human body model of the user according to the human body coordinate data;
segmenting a target image in the human body image data and rendering the target image into the three-dimensional human body model;
receiving target clothes selected by a user and corresponding size data;
and covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
Correspondingly, an embodiment of the present application provides a data processing apparatus, including:
the acquisition unit is used for acquiring human body image data of a user;
the obtaining unit is used for obtaining human body coordinate data of the user based on the human body image data;
the building unit is used for building a three-dimensional human body model of the user according to the human body coordinate data;
a segmentation unit, configured to segment a target image in the human body image data and render the target image into the three-dimensional human body model;
the receiving unit is used for receiving the target clothes selected by the user and the corresponding size data;
and the covering unit is used for covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
In one embodiment, the covering unit includes:
the first acquiring subunit is used for acquiring target clothes image data of the target clothes corresponding to the size data;
the segmentation subunit is used for segmenting the target clothing image data according to a preset segmentation granularity to obtain clothing image sub-data;
and the covering subunit is used for covering the clothing image sub-data on the three-dimensional human body model to obtain a target human body model.
In one embodiment, the partitioning subunit includes:
the transverse segmentation module is used for segmenting the target clothing image data according to a preset transverse segmentation granularity to obtain transverse clothing image sub-data;
and the longitudinal segmentation module is used for segmenting the transverse clothing image sub-data according to a preset longitudinal segmentation granularity to obtain the clothing image sub-data.
In one embodiment, the overlay subunit includes:
the acquisition module is used for acquiring human skeleton key point data of the user;
the covering module is used for covering the clothing image sub-data on the three-dimensional human body model according to the correspondence between the clothing image sub-data and the human skeleton key point data of the user;
and the display module is used for displaying any uncovered area in a distinguishing manner when the three-dimensional human body model has an area that cannot be covered by the clothing image sub-data.
In one embodiment, the acquisition unit includes:
the detection subunit is used for detecting the actual distance between the user and the image acquisition device and comparing the actual distance with the target distance;
the first generating subunit is used for generating distance prompt information when the actual distance is not equal to the target distance, where the distance prompt information instructs the user to move until the actual distance at the user's position equals the target distance;
the second generating subunit is used for generating rotation prompt information when the actual distance is equal to the target distance;
and the first acquisition subunit is used for synchronously acquiring the human body image data of the user according to the user's rotation rate when it is detected that the user rotates in place in response to the rotation prompt information.
In an embodiment, the obtaining unit includes:
the identification subunit is used for identifying each human body image in the human body image data to obtain human body contour data of the user;
and the calculating subunit is used for calculating the human body coordinate data of the user according to the human body contour data.
In an embodiment, the data processing apparatus further includes:
the switching unit is used for switching the current mode into a makeup trial mode when a user selects a makeup product and acquiring face image information of the user;
the color value acquisition unit is used for acquiring the face color value of each pixel point in the face image information;
the color value receiving unit is used for receiving the color value of the color number of the makeup product selected by the user;
the color value calculating unit is used for calculating a target color value according to the face color value and the color number color value;
and the adjusting unit is used for adjusting the color value of the human face area in the target human body model according to the target color value so that the user obtains a makeup trial effect according to the target human body model after the color value is adjusted.
In one embodiment, the color value calculating unit includes:
a transparency value obtaining subunit, configured to obtain a target transparency value preset by a user;
the adjusting subunit is used for adjusting the color number color value according to the target transparency value to obtain a target color number color value;
and the accumulation subunit is used for accumulating the face color value and the target color number color value to obtain the target color value.
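The two subunits above amount to a standard alpha blend: the shade color is scaled by a user-chosen transparency and accumulated with the face color. A minimal sketch of this calculation, assuming the conventional interpretation that "accumulating" means a weighted sum and that transparency 1.0 means the shade is fully opaque (the function name and conventions are illustrative, not from the patent):

```python
def blend_makeup(face_rgb, shade_rgb, transparency):
    """Blend a makeup shade onto a face pixel.

    face_rgb, shade_rgb: (r, g, b) tuples with components in 0..255.
    transparency: 0.0 (shade invisible) .. 1.0 (shade fully opaque).
    Returns the blended target color value, clamped to 0..255.
    """
    return tuple(
        min(255, round(f * (1.0 - transparency) + s * transparency))
        for f, s in zip(face_rgb, shade_rgb)
    )
```

Applied per pixel over the face region of the target human body model, this yields the adjusted color values used for the makeup-trial display.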
In addition, a computer-readable storage medium is provided, where the computer-readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in any one of the data processing methods provided in the embodiments of the present application.
In addition, the embodiment of the present application further provides a computer device, which includes a processor and a memory, where the memory stores an application program, and the processor is configured to run the application program in the memory to implement the data processing method provided in the embodiment of the present application.
Embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the steps in the data processing method provided by the embodiment of the application.
The embodiment of the application acquires human body image data of a user; obtains human body coordinate data of the user based on the human body image data; constructs a three-dimensional human body model of the user according to the human body coordinate data; segments a target image out of the human body image data and renders it into the three-dimensional human body model; receives target clothes selected by the user and the corresponding size data; and overlays the target clothes on the rendered three-dimensional human body model according to the size data to obtain a target human body model. The three-dimensional human body model is thus constructed from the user's human body coordinate data and rendered with the user's target image to improve the realism of the fitting display; at the same time, the target clothes are overlaid on the model according to the size data selected by the user, so that the wearing effect of the target clothes is displayed realistically with the user's figure taken into account. This improves the realism of the dressing-effect display, the data processing efficiency, and the image display effect.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an implementation scenario of a data processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 4 is a schematic view of a specific implementation scenario of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a data processing method, a data processing device and a computer readable storage medium. The data processing apparatus may be integrated into a computer device, and the computer device may be a server or a terminal.
With the development of internet technology, convenient and fast online shopping has become a popular shopping mode. When a user selects clothes online, the most important question is whether the clothes will suit them; however, the user only finds out after purchasing and receiving the goods, and an unsuitable item triggers a return-and-refund process. This inconveniences both the user and the merchant and costs time and money. A smart fitting mirror can alleviate this problem.
However, no mature intelligent fitting mirror solution has yet been realized on the market. Most target users of existing intelligent fitting mirrors are customers of venues such as shopping malls, high-end hotels and private tailoring studios; the few solutions aimed at ordinary consumers are limited to simulated cartoon characters, do not realistically display the user's fitting effect, and thus give a poor image display effect.
To solve the above problems, an embodiment of the present application provides a data processing method that may be integrated in a smart television. A three-dimensional human body model is constructed from the user's human body coordinate data and rendered with the user's target image to improve the realism of the fitting display; the target clothes are then overlaid on the model according to the size data selected by the user, so that the wearing effect of the target clothes is displayed realistically with the user's figure taken into account. This improves the realism of the dressing-effect display, the data processing efficiency, and the image display effect.
Referring to fig. 1, taking as an example a data processing device integrated in a terminal, fig. 1 is a schematic view of an implementation scenario of the data processing method provided in an embodiment of the present application, which includes a server A and a terminal B. The server A may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), and big data and artificial intelligence platforms.
The terminal B may be, but is not limited to, a smart television, a smart phone, a tablet computer, a notebook computer, a desktop computer, or other computer devices capable of performing image acquisition. The terminal B can acquire human body image data of a user; acquiring human body coordinate data of a user based on the human body image data; constructing a three-dimensional human body model of the user according to the human body coordinate data; segmenting a target image in the human body image data and rendering the target image into the three-dimensional human body model; receiving target clothes selected by a user and corresponding size data; and covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
The terminal B and the server a may be directly or indirectly connected through a wired or wireless communication manner, and the server a may obtain data uploaded by the terminal B to perform a corresponding data processing operation, which is not limited herein.
It should be noted that the schematic diagram of the implementation environment scenario of the data processing method shown in fig. 1 is only an example, and the implementation environment scenario of the data processing method described in the embodiment of the present application is for more clearly explaining the technical solution of the embodiment of the present application, and does not constitute a limitation to the technical solution provided by the embodiment of the present application. As will be appreciated by those skilled in the art, with the evolution of data processing and the emergence of new business scenarios, the technical solutions provided in the present application are equally applicable to similar technical problems.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The present embodiment will be described from the perspective of a data processing apparatus, which may be specifically integrated in a computer device, and the computer device may be a terminal, and the present application is not limited herein.
Referring to fig. 2, fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the present disclosure. The data processing method comprises the following steps:
in step 101, human body image data of a user is acquired.
Specifically, the human body image data of the user may be acquired by an image acquisition device. The human body image data may be an image sequence: a series of images of the user captured continuously at different times and from different directions. In addition, the user may be prompted to rotate so that images of the body can be captured from different directions.
To acquire accurate human body image data, prompt information can be generated to guide the user to wear tight-fitting clothes. The user can also be instructed to stand at a target position during image acquisition, which ensures a suitable acquisition angle, reduces the subsequent computation on the human body image data, and avoids the deformation or distortion that an unsuitable acquisition angle would cause, thereby improving acquisition accuracy.
In step 102, human body coordinate data of the user is acquired based on the human body image data.
The human body coordinate data of the user can be obtained from the acquired human body image data. Specifically, each human body image in the acquired data can be identified by a binocular stereo vision system, the user's body contours in different directions extracted, and the human body coordinate data then calculated from the extracted contour data.
Binocular Stereo Vision is an important form of machine vision. Based on the parallax principle, it acquires two images of the object to be measured from different positions with imaging equipment and recovers the three-dimensional geometric information of the object by computing the positional deviation (disparity) between corresponding points in the two images.
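Under the parallax principle just described, the depth of a point follows directly from the disparity between its positions in the two rectified images, via Z = f · B / d. A minimal sketch of this relation (the focal length, baseline, and pixel values below are illustrative assumptions, not figures from the patent):

```python
def depth_from_disparity(x_left, x_right, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinates of the same scene point
                     in the left and right images.
    focal_length_px: camera focal length, in pixels.
    baseline_m:      distance between the two camera centres, in metres.
    Returns the depth of the point in metres.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length_px * baseline_m / disparity
```

Repeating this over matched contour points yields the three-dimensional human body coordinate data used to build the model.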
In step 103, a three-dimensional human model of the user is constructed from the body coordinate data.
Specifically, the binocular stereo vision system can capture the user's human body image data; the human body contour is extracted from the binocular image sequence, the human body coordinate data is calculated from the extracted contour, the three-dimensional deformation and motion parameters of the body are estimated from the coordinate data under constraints such as constant volume, and the model is finally drawn using spheres and rotated conical surfaces, yielding the three-dimensional human body model of the user.
In step 104, a target image in the human body image data is segmented and rendered into the three-dimensional human body model.
To obtain a three-dimensional human body model that more closely resembles the user, and thus a more realistic display when the user tries on clothes, each human body image in the image data can be identified to obtain the target image it contains; the target image is then segmented out of the human body image data and rendered into the three-dimensional human body model. The target image may be the user's face image within the human body image data, or an image containing the user's whole body.
When rendering the target image into the three-dimensional human body model, a skeleton detection interface (Skeleton Detect API) can be used, together with the curve characteristics of different parts of the human body model, to accurately identify the position coordinates of human skeleton key nodes at key locations such as the shoulders, chest, waist, hips, and facial features. The target image can then be segmented out of the human body image and overlaid at the corresponding position in the three-dimensional model according to the coordinates of these key nodes. In an embodiment, the corresponding position in the model may also be rendered from the target image: for example, OpenGL (Open Graphics Library) can be used to compute the mapping between the 2D coordinates of any point in the target image and the 3D coordinates of the model, based on the target image and the key-node coordinates, and to render the three-dimensional human body model. OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.
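The 2D-to-3D mapping anchored on skeleton key nodes can be illustrated in a much-simplified form: an image point is normalised against the bounding box of the detected key points to obtain a texture (UV) coordinate that a renderer such as OpenGL could sample from. The real texture pipeline involves mesh unwrapping beyond this fragment; every name here is an illustrative assumption:

```python
def image_point_to_uv(point, keypoints):
    """Map a 2D image point to a normalised (u, v) texture coordinate.

    point:     (x, y) pixel coordinate inside the body image.
    keypoints: list of (x, y) skeleton key points detected in the same image.
    Returns (u, v) in [0, 1], relative to the key-point bounding box.
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    u = (point[0] - min_x) / (max_x - min_x)
    v = (point[1] - min_y) / (max_y - min_y)
    return u, v
```

A production system would instead use per-vertex UVs from the model's mesh, but the normalisation step above captures how image coordinates and model coordinates are tied together through shared key points.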
In step 105, a target apparel selected by a user and corresponding dimensional data are received.
Different users have different heights, weights and body shapes. To adapt to the body features of different users, a user can select clothes in different sizes to try on according to their actual situation and determine the most suitable size and garment with the data processing method provided in this embodiment. Specifically, the target clothes selected by the user and the corresponding size data can be received. The target clothes are the clothes the user has chosen to try on, and the size data may be the size of the target clothes selected by the user, such as S (small), M (medium), L (large) or XL (extra large), or size data representing other size information, which is not limited here.
In step 106, the target apparel is overlaid on the rendered three-dimensional human model according to the size data to obtain a target human model.
For example, when the size data is M, target clothing image data of the target clothes in size M can be obtained. The target clothing image data may be a two-dimensional planar picture or a three-dimensional stereoscopic picture, and the clothing image data for different sizes can be scaled proportionally with the size. To show the effect of the user trying on the target clothes in that size, the target clothing image data can be overlaid on the rendered three-dimensional human body model to obtain the target human body model.
Specifically, the target clothing image data can be segmented into a plurality of pieces of clothing image sub-data according to a preset segmentation granularity. Each piece of sub-data obtained by the segmentation is overlaid at the corresponding position in the three-dimensional human body model according to the correspondence between the target clothes and the human skeleton key nodes in the model and the segmentation rule of the target clothes. When there is an area of the model that the clothing image sub-data cannot cover, the uncovered area is displayed in a distinguishing manner, for example in a deepened color or highlighted, to indicate visually that the target clothes do not fit the user; the target human body model is thereby obtained. The preset segmentation granularity can be set in advance according to the number of pieces of sub-data into which the target clothing image data is to be cut. For example, a granularity of 12 × 13 means the target clothing image data is cut transversely into 12 parts and then longitudinally into 13 parts; the specific value can be set according to the actual situation and is not limited here. From the target human body model, the user can see the fitting effect of the target clothes in different sizes, and so determine whether the target clothes are suitable and which size fits best. This further improves the realism of the clothes-fitting display and the image display effect.
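The 12 × 13 segmentation granularity amounts to cutting the garment image into a grid of tiles that can be attached to the model independently. A minimal sketch of the grid split (how a non-divisible image size is handled is an assumption; the patent does not specify edge behavior):

```python
def split_into_tiles(width, height, cols=12, rows=13):
    """Split an image of the given size into a cols x rows grid.

    Returns a list of (left, top, right, bottom) tile rectangles that
    exactly cover the image; integer division lets the later rows and
    columns absorb any remainder pixels.
    """
    tiles = []
    for r in range(rows):
        top = r * height // rows
        bottom = (r + 1) * height // rows
        for c in range(cols):
            left = c * width // cols
            right = (c + 1) * width // cols
            tiles.append((left, top, right, bottom))
    return tiles
```

Each rectangle would then be cropped from the clothing image and mapped to its position on the model; tiles that find no corresponding body region are the "uncovered areas" the method highlights.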
According to the method, human body image data of the user are collected; human body coordinate data of the user are obtained based on the image data; a three-dimensional human body model of the user is constructed according to the coordinate data; a target image is segmented out of the human body image data and rendered into the three-dimensional human body model; the target clothes selected by the user and the corresponding size data are received; and the target clothes are overlaid on the rendered three-dimensional human body model according to the size data to obtain the target human body model. The model is thus constructed from the user's coordinate data and rendered with the user's target image to improve the realism of the fitting display; at the same time, the target clothes are overlaid on the model according to the size data selected by the user, so that the wearing effect is displayed realistically with the user's figure taken into account. This improves the realism of the dressing-effect display, the data processing efficiency, and the image display effect.
The method described in the above examples is further illustrated in detail below by way of example.
In this embodiment, the data processing apparatus is specifically described by taking an example in which the data processing apparatus is specifically integrated in a terminal.
For a better description of the embodiments of the present application, please refer to fig. 3. As shown in fig. 3, fig. 3 is another schematic flow chart of the data processing method according to the embodiment of the present application. The specific process is as follows:
in step 201, the terminal detects an actual distance between the user and the image capture device, compares the actual distance with a target distance, and generates distance prompt information when the actual distance is not equal to the target distance.
The terminal may be a smart television. The actual distance may be the horizontal distance between the user and the terminal, and the target distance is a preset ideal distance: when the actual distance equals the ideal distance, the terminal can conveniently collect human body images and the subsequent calculation is simplified. The image acquisition device can capture the user's image, identify the user, and move its camera according to the user's position so that the user is directly in front of it. In one embodiment, when the camera of the image acquisition device cannot be moved, prompt information can be generated instructing the user to move until the user's position is directly in front of the camera. The image acquisition device may be built into the terminal or associated with it.
The distance prompt information is used to instruct the user to move until the actual distance at the user's current position equals the target distance. The terminal can detect the actual distance between the user and the image capture device, compare it with the target distance, and generate the distance prompt information when the two are not equal, so that the user adjusts the current position until the actual distance equals the target distance.
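The distance comparison and prompt generation of step 201 can be sketched as follows. This is a minimal illustration only: the function name, the target distance, and the tolerance band are assumptions, since the patent does not fix concrete values or an equality tolerance.

```python
# Hypothetical sketch of the step 201 distance check; TARGET_DISTANCE_M and
# TOLERANCE_M are assumed values, not taken from the disclosure.
TARGET_DISTANCE_M = 2.0   # preset ideal distance
TOLERANCE_M = 0.05        # distances within this band count as "equal"

def distance_prompt(actual_distance_m):
    """Compare the measured distance with the target and build a prompt,
    or return None when no prompt is needed."""
    delta = actual_distance_m - TARGET_DISTANCE_M
    if abs(delta) <= TOLERANCE_M:
        return None  # actual distance equals the target distance
    direction = "back" if delta < 0 else "forward"
    return f"Please move {direction} about {abs(delta):.2f} m"

prompt = distance_prompt(1.6)  # user is too close, so a prompt is produced
```

In practice the terminal would re-run this check as the user moves, until `None` is returned and capture can begin.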
Referring to fig. 4, fig. 4 is a schematic view of a specific implementation scenario of the data processing method according to the embodiment of the present application: a user stands in front of a smart television, and the smart television can detect the user's actual distance and collect the user's human body image data.
In step 202, when the actual distance is equal to the target distance, the terminal generates rotation prompting information, and when the user is detected to rotate at the current position based on the rotation prompting information, the human body image data of the user is synchronously acquired according to the rotation rate of the user.
When the actual distance between the user and the terminal equals the target distance, the terminal generates rotation prompt information, which instructs the user to rotate in place at the current position. When it is detected that the user is rotating based on the rotation prompt information, the human body image data of the user can be collected in synchronization with the user's rotation rate. Specifically, when the user rotates at a speed at which the terminal has difficulty accurately collecting the human body image data, the terminal can prompt the user to slow down or speed up, so that the user rotates at a suitable rate.
In an embodiment, assume the terminal is a smart television. Because smart televisions are placed at different heights in different homes, the height of the image capture device may vary, so the user needs to input his or her own height and the height of the image capture device above the floor. From these two heights and the target distance, the terminal can calculate the relative angle between the image capture device and the user's body; with this angle, the clothing can subsequently be overlaid on the three-dimensional human body model more accurately.
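The relative angle described above can be computed with simple trigonometry. The sketch below is one possible interpretation, assuming the camera looks horizontally at a user standing the target distance away and the angle is taken to the midpoint of the user's body; the function name and the midpoint approximation are illustrative assumptions.

```python
import math

# Hedged right-triangle model of the camera-to-body relative angle: the two
# input heights and the target distance are the quantities the patent says
# the user provides; halving the height for the body midpoint is an assumption.
def camera_body_angle(user_height_m, camera_height_m, target_distance_m):
    """Angle in degrees between the camera's horizontal axis and the line
    from the camera to the midpoint of the user's body."""
    body_mid_m = user_height_m / 2.0            # approximate body midpoint
    vertical_offset = body_mid_m - camera_height_m
    return math.degrees(math.atan2(vertical_offset, target_distance_m))

angle = camera_body_angle(user_height_m=1.7, camera_height_m=0.6,
                          target_distance_m=2.0)
```

A positive result means the camera looks slightly upward at the user, a negative result slightly downward; this sign can then inform how the clothing is projected onto the model.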
In step 203, the terminal identifies each human body image in the human body image data to obtain human body contour data of the user.
In order to construct the three-dimensional human body model of the user more accurately, the terminal can identify each human body image in the human body image data to obtain the user's human body contour data. Specifically, the terminal can identify each human body image in the collected human body image data through a binocular stereo vision system and extract the user's human body contours in different orientations.
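Contour extraction can be illustrated with a toy example. The patent relies on a binocular stereo vision system; the sketch below only shows the final boundary-finding step on an already-segmented silhouette mask, using plain NumPy, and is an assumption rather than the disclosed pipeline.

```python
import numpy as np

# Illustrative boundary extraction: a foreground pixel belongs to the contour
# when at least one of its four neighbours is background. The binocular
# segmentation that would produce `mask` is outside this sketch.
def contour_points(mask):
    """Return the (row, col) foreground pixels of a binary mask that lie on
    its boundary."""
    padded = np.pad(mask.astype(bool), 1)       # pad with background
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask.astype(bool) & ~interior
    return [tuple(p) for p in np.argwhere(boundary)]

points = contour_points(np.ones((3, 3), dtype=int))  # ring of 8 edge pixels
```

A real implementation would run an equivalent step per camera view and per rotation angle to collect contours "in different orientations".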
In one embodiment, since there is a difference in physical characteristics between different genders, the user can input gender information into the terminal to assist the terminal in more accurately constructing a three-dimensional human body model of the user.
In an embodiment, in order to collect the user's human body contour data accurately, prompt information can be generated to guide the user to wear tight-fitting clothes. In addition, the user can be instructed to stand at a target position before the human body image is collected, so that the capture angle is appropriate. This reduces the subsequent amount of contour computation and avoids deformation or distortion of the collected human body image caused by an unsuitable capture angle, thereby improving capture accuracy.
In step 204, the terminal calculates body coordinate data of the user according to the body contour data, and constructs a three-dimensional body model of the user according to the body coordinate data.
Using a human body three-dimensional modeling method, the terminal can capture human body image data of the user with a binocular stereo vision system, extract the human body contour from the binocular image sequence, and calculate the user's human body coordinate data from the extracted contour. It can then estimate the three-dimensional deformation and motion parameters of the human body under constraint conditions such as the human body coordinate data and constant volume, and finally draw the human body model using spheres and surfaces of revolution, thereby obtaining the three-dimensional human body model of the user.
In step 205, the terminal segments a target image in the human body image data and renders the target image into the three-dimensional human body model.
In order to obtain a three-dimensional human body model that resembles the user more closely and to display a more realistic fitting effect, the terminal can identify each human body image in the human body image data to obtain a target image from each, and can then segment the target image out of the human body image data and render it into the three-dimensional human body model. The target image can be the user's face image in the human body image data, or an image including the user's whole body.
When rendering the target image into the three-dimensional human body model, the terminal can use a skeleton detection interface (Skeleton Detect API) to accurately identify, according to the curve characteristics of different parts of the human body model, the position coordinates of the key skeleton nodes at key points of the human body such as the shoulders, chest, waist, hips, and facial features. It can then segment the target image from the human body image according to these coordinates and overlay it at the corresponding position in the three-dimensional human body model. In an embodiment, the corresponding position in the three-dimensional human body model can also be rendered according to the target image; for example, the mapping between the two-dimensional coordinates of any point in the target image and the three-dimensional coordinates in the human body model can be calculated from the target image and the key skeleton node coordinates, and OpenGL (Open Graphics Library) can be used to render the three-dimensional human body model.
In step 206, the terminal receives the target clothes selected by the user and the corresponding size data, and obtains the target clothes image data of the target clothes corresponding to the size data.
Different users have different heights, weights, and body shapes. To adapt to the figures of different users, a user can select clothes of different sizes for fitting according to his or her actual situation, and can determine the most suitable size and garment with the data processing method provided by the embodiment of the application. Specifically, the terminal can receive the target clothes selected by the user and the corresponding size data, where the target clothes can be the garment the user has selected to try on, and the size data can be the size of the target clothes selected by the user, for example S (Small), M (Medium), L (Large), or XL (Extra Large), or size data representing other size information, which is not limited herein.
For example, when the size data is M, the target clothes image data of size M can be obtained. The target clothes image data can be a two-dimensional planar picture or a three-dimensional stereoscopic picture provided by the merchant of the target clothes, and the target clothes image data for different size data can be scaled in equal proportion according to the size.
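The equal-proportion scaling across sizes can be sketched as a lookup of a per-size scale factor applied to the base picture dimensions. The size-to-factor table below is an illustrative assumption; the patent does not specify concrete ratios.

```python
# Hedged sketch: scale factors per size relative to the M-size base picture.
# These numeric values are assumptions for illustration only.
SIZE_SCALE = {"S": 0.90, "M": 1.00, "L": 1.10, "XL": 1.20}

def scaled_dimensions(width_px, height_px, size):
    """Scale the base (size M) picture dimensions in equal proportion."""
    factor = SIZE_SCALE[size]
    return round(width_px * factor), round(height_px * factor)

dims = scaled_dimensions(400, 600, "L")  # both axes scaled by the same factor
```

Because both axes use one factor, the garment's aspect ratio is preserved, which is what "scaled in equal proportion" requires.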
In step 207, the terminal segments the target apparel image data according to a preset transverse segmentation granularity to obtain transverse apparel image subdata, and segments the transverse apparel image subdata according to a preset longitudinal segmentation granularity to obtain apparel image subdata.
In order to present the target clothes on the three-dimensional human body model as worn by the user, the target clothes image data can be segmented and the segmented pieces overlaid on the model. Specifically, the terminal can segment the target clothes image data according to a preset transverse segmentation granularity to obtain transverse clothes image sub-data, and then segment the transverse clothes image sub-data according to a preset longitudinal segmentation granularity to obtain the clothes image sub-data. The values of the preset transverse and longitudinal segmentation granularities can be determined in advance according to the number of pieces into which the target clothes image data is to be divided. For example, with a preset segmentation granularity of 12 × 13, the target clothes image data is cut transversely into 12 parts to obtain the transverse clothes image sub-data, and each part is then cut longitudinally into 13 parts to obtain the clothes image sub-data. The specific values can be set according to the actual situation and are not limited herein.
Specifically, assume the target clothes image data is a two-dimensional planar picture of the target clothes, the preset transverse segmentation granularity is M, and the preset longitudinal segmentation granularity is N. The terminal can cut the front of the picture transversely into M strip pictures, i.e. the transverse clothes image sub-data, and then cut each strip longitudinally into N pictures, i.e. the clothes image sub-data. Thus the 1st strip is divided into N small pictures numbered in sequence P1_1, P1_2, …, P1_N; likewise the 2nd strip is divided into N small pictures numbered P2_1, P2_2, …, P2_N; and so on, until the Mth strip is divided into N small pictures numbered PM_1, PM_2, …, PM_N. In total, N × M small pictures, i.e. the clothes image sub-data, are obtained.
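The M × N segmentation above can be sketched in a few lines, assuming the apparel picture is held as a NumPy array; the tile keys follow the P&lt;m&gt;_&lt;n&gt; numbering described in the text.

```python
import numpy as np

# Minimal sketch of the step 207 segmentation: m transverse strips, each
# cut longitudinally into n tiles, keyed by the P<m>_<n> numbering scheme.
def segment_apparel(picture, m, n):
    """Cut the picture into m horizontal strips, then each strip into n tiles."""
    tiles = {}
    strips = np.array_split(picture, m, axis=0)   # transverse sub-data
    for i, strip in enumerate(strips, start=1):
        for j, tile in enumerate(np.array_split(strip, n, axis=1), start=1):
            tiles[f"P{i}_{j}"] = tile
    return tiles

# 12 x 13 granularity as in the example; a 120 x 130 picture yields 156 tiles.
tiles = segment_apparel(np.zeros((120, 130, 3)), m=12, n=13)
```

`np.array_split` is used rather than `np.split` so that picture dimensions not exactly divisible by the granularity still segment cleanly.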
In an embodiment, the values of M and N can be determined according to the terminal's memory occupancy, the strength of its computing power, and the length and size of the target clothes. For example, when the target clothes are longer, M can be set to a larger value: the larger M is, the finer the granularity of the resulting clothes image sub-data, the better the target clothes fit when overlaid on the three-dimensional human body model, and the more realistic and accurate the fitting display. As another example, when the terminal's memory occupancy is low, the terminal has more free memory and can process more data, so M and N can be set to larger values to obtain finer-grained clothes image sub-data. Likewise, when the terminal's computing power is strong, it can process more data, and M and N can again be set to larger values.
In an embodiment, the user can set the values of the preset transverse and longitudinal segmentation granularities through the terminal, and can adjust those values according to the final display effect.
In step 208, the terminal obtains the human skeleton key point data of the user, and overlays the clothing image subdata on the three-dimensional human body model according to the corresponding relationship between the clothing image subdata and the human skeleton key point data of the user.
The terminal can obtain the user's human skeleton key point data and overlay the clothes image sub-data on the three-dimensional human body model according to the correspondence between the sub-data and the key point data. Specifically, the terminal can overlay each segmented piece of clothes image sub-data at the corresponding position in the three-dimensional human body model according to the target clothes, the correspondence of the key skeleton nodes between the target clothes and the three-dimensional human body model, and the segmentation rule of the target clothes.
In an embodiment, the terminal can obtain the center point of each piece of clothes image sub-data, find the first center coordinate corresponding to that point in the target clothes, find the second center coordinate corresponding to the first center coordinate in the three-dimensional human body model according to the correspondence of the human skeleton key points between the target clothes and the model, and overlay the clothes image sub-data at the position corresponding to the second center coordinate. The terminal can process the front and back of the target clothes in the same way to obtain the effect of the target clothes covering the three-dimensional human body model.
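The center-point mapping can be illustrated with a toy coordinate transform. The patent derives the mapping from the skeleton key-point correspondence; here that correspondence is stood in for by a simple per-axis scale and offset, which is purely an illustrative assumption.

```python
# Hedged sketch of step 208's centre mapping: a scale-and-offset transform
# standing in for the keypoint-derived garment-to-model correspondence.
def map_tile_center(first_center, scale, offset):
    """Map a tile's first centre coordinate (garment space) to its second
    centre coordinate (model-surface space)."""
    x, y = first_center
    sx, sy = scale
    ox, oy = offset
    return (x * sx + ox, y * sy + oy)

# Centre of tile P1_1 at (5, 5) in garment space, mapped onto the model.
second_center = map_tile_center((5.0, 5.0), scale=(2.0, 2.0), offset=(10.0, 40.0))
```

In a full implementation, `scale` and `offset` would be fitted per body region from matched skeleton key points rather than fixed constants.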
In step 209, when there is an area in the three-dimensional human body model that cannot be covered by the clothing image sub-data, the terminal displays the uncovered area differently to obtain the target human body model.
When the three-dimensional human body model has an area that the clothes image sub-data cannot cover, the uncovered area is displayed distinctly, for example with a deepened color or with highlighting, so that the mismatch between the target clothes and the user's body is visually represented, thereby obtaining the target human body model. From the distinctly displayed area, the user can tell whether the target clothes suit his or her figure.
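The "color deepening" variant of the distinct display can be sketched on a rendered image plus a coverage mask; the 0.5 darkening factor below is an assumed value, not specified in the disclosure.

```python
import numpy as np

# Sketch of step 209's distinct display: model pixels that no apparel tile
# covers are darkened so the unsuitable area stands out visually.
def highlight_uncovered(render, covered):
    """Darken every pixel of the rendered model whose coverage flag is False."""
    out = render.astype(np.float32)
    out[~covered] *= 0.5              # deepen the color of uncovered pixels
    return out.astype(render.dtype)

render = np.full((2, 2, 3), 200, dtype=np.uint8)   # toy 2x2 render
covered = np.array([[True, True], [True, False]])  # bottom-right uncovered
shown = highlight_uncovered(render, covered)
```

Highlighting instead of darkening would simply use a factor above 1.0 (with clipping), or an overlay color.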
Therefore, the user can obtain from the target human body model the fitting effect of the target clothes in each of the different size data, so as to determine whether the target clothes are suitable and which size fits best, which further improves the realism of the clothes-fitting display and the image display effect.
In step 210, when the user selects a makeup product, the terminal switches the current mode to a makeup trial mode, collects face image information of the user, and obtains a face color value of each pixel point in the face image information.
When the user selects a makeup product through the terminal, the terminal can switch the current mode from the fitting mode to the makeup trial mode, collect the user's face image information, and obtain the face color value of each pixel in that information. The face image information can be the user's face image; it can be obtained from the human body images in the collected human body image data, or it can be collected in real time. The face color value can be the color value of each pixel in the user's face image, for example an RGB (red, green, blue) color value.
In an embodiment, to make the makeup trial effect more realistic, the terminal can turn off any beautifying effect in the makeup trial mode to obtain the user's real face image information, enlarge the face area in a reasonable proportion so that the result is displayed clearly, and read the face image information to obtain the face color value of each pixel.
In step 211, the terminal receives the color value of the makeup product selected by the user, obtains a target transparency value preset by the user, and adjusts the color value according to the target transparency value to obtain a target color value.
The coverage effect formed after a makeup product is applied is not determined entirely by the product's color number color value, but by the product's color number, the heaviness of application, and the user's skin color. Therefore, to improve the realism of the makeup trial, the color number color value of the selected product can be modified under the control of a transparency value. Specifically, the terminal can receive the color number color value of the makeup product selected by the user, obtain a target transparency value preset by the user, and then adjust the color number color value according to the target transparency value to obtain the target color number color value. The color number color value can be the color value of the color number corresponding to the makeup product; the target transparency value can be set in advance by the user according to how heavily the makeup is actually applied, and is used to adjust the color number color value to the color that the makeup finally presents on the user's face, further improving the realism of the trial. The target color number color value is the adjusted color value, derived from the product's color number color value and the set makeup heaviness, that influences the user's final trial skin color.
In step 212, the terminal accumulates the face color value and the target color number color value to obtain a target color value, and adjusts a color value of a face region in the target human body model according to the target color value.
The terminal accumulates the face color value and the target color number color value, that is, it superimposes the user's real skin color and the influence of the makeup product on that skin color, to obtain the final effect of the user trying on the selected makeup product, namely the target color value. The color value of the face area in the target human body model is then adjusted according to the target color value, so that the user sees the makeup trial effect on the adjusted target human body model.
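Steps 211 and 212 together can be read as an alpha-blend of the makeup shade over the skin color; the blend formula and the meaning assigned to the transparency value below are one possible interpretation, not the exact computation disclosed.

```python
# Hedged alpha-blend sketch of steps 211-212: the target transparency value
# controls how strongly the color number color value shows over the skin.
def trial_makeup_color(face_rgb, shade_rgb, transparency):
    """Blend the makeup color number over the face color; transparency=1.0
    leaves the bare skin color unchanged."""
    alpha = 1.0 - transparency        # weight of the makeup shade
    return tuple(round(f * (1 - alpha) + s * alpha)
                 for f, s in zip(face_rgb, shade_rgb))

# Light skin tone, red lipstick shade, fairly sheer application.
target = trial_makeup_color((224, 188, 160), (200, 60, 80), transparency=0.7)
```

Applied per pixel of the face area, this yields the target color value with which the face region of the target human body model is repainted.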
Therefore, in the embodiment of the application, the terminal detects the actual distance between the user and the image capture device, compares it with the target distance, and generates distance prompt information when the two are not equal; when the actual distance equals the target distance, the terminal generates rotation prompt information and, on detecting that the user rotates at the current position based on that information, collects the user's human body image data in synchronization with the user's rotation rate; the terminal identifies each human body image in the human body image data to obtain the user's human body contour data; the terminal calculates the user's human body coordinate data from the contour data and constructs the user's three-dimensional human body model from the coordinate data; the terminal segments the target image from the human body image data and renders it into the three-dimensional human body model; the terminal receives the target clothes selected by the user and the corresponding size data, and obtains the target clothes image data corresponding to the size data; the terminal segments the target clothes image data according to the preset transverse segmentation granularity to obtain transverse clothes image sub-data, and segments the transverse sub-data according to the preset longitudinal segmentation granularity to obtain the clothes image sub-data; the terminal obtains the user's human skeleton key point data and overlays the clothes image sub-data on the three-dimensional human body model according to the correspondence between the sub-data and the key point data; when the three-dimensional human body model has an area that the clothes image sub-data cannot cover, the terminal displays the uncovered area distinctly to obtain the target human body model; when the user selects a makeup product, the terminal switches the current mode to the makeup trial mode, collects the user's face image information, and obtains the face color value of each pixel in it; the terminal receives the color number color value of the selected makeup product, obtains the target transparency value preset by the user, and adjusts the color number color value according to the target transparency value to obtain the target color number color value; the terminal accumulates the face color value and the target color number color value to obtain the target color value, and adjusts the color value of the face area in the target human body model according to the target color value. In this way, the three-dimensional human body model is built from the user's human body coordinate data and rendered with the user's target image, improving the realism of the fitting display; meanwhile, the target clothes are overlaid on the user's three-dimensional human body model according to the size data selected by the user, so that the wearing effect of the target clothes is displayed with the user's figure taken into account. This improves the realism of the dressing-effect display, increases data processing efficiency, and further improves the image display effect.
In order to better implement the above method, an embodiment of the present invention further provides a data processing apparatus, which may be integrated in a terminal.
For example, fig. 5 shows a schematic structural diagram of a data processing apparatus provided in an embodiment of the present application. The data processing apparatus may include an acquisition unit 301, an obtaining unit 302, a construction unit 303, a segmentation unit 304, a receiving unit 305, and a covering unit 306, as follows:
an acquisition unit 301, configured to acquire human body image data of a user;
an obtaining unit 302 configured to obtain human body coordinate data of a user based on the human body image data;
a building unit 303, configured to build a three-dimensional human body model of the user according to the human body coordinate data;
a segmentation unit 304, configured to segment a target image in the human body image data and render the target image into the three-dimensional human body model;
a receiving unit 305, configured to receive a target garment selected by a user and corresponding size data;
and the covering unit 306 is configured to cover the target garment on the rendered three-dimensional human body model according to the size data to obtain a target human body model.
In one embodiment, the covering unit 306 includes:
the first acquiring subunit is used for acquiring target clothes image data of the target clothes corresponding to the size data;
the segmentation subunit is used for segmenting the target clothing image data according to a preset segmentation granularity to obtain clothing image sub-data;
and the covering subunit is used for covering the clothing image sub-data on the three-dimensional human body model to obtain a target human body model.
In one embodiment, the partitioning subunit includes:
the transverse segmentation module is used for segmenting the target clothes image data according to a preset transverse segmentation granularity to obtain transverse clothes image subdata;
and the longitudinal segmentation module is used for segmenting the transverse clothing image subdata according to a preset longitudinal segmentation granularity to obtain clothing image subdata.
In one embodiment, the overlay subunit includes:
the acquisition module is used for acquiring human skeleton key point data of a user;
the covering module is used for covering the clothes image subdata on the three-dimensional human body model according to the corresponding relation between the clothes image subdata and the human body skeleton key point data of the user;
and the display module is used for distinguishing and displaying the uncovered area when the three-dimensional human body model has the area which cannot be covered by the clothing image subdata.
In an embodiment, the acquisition unit 301 includes:
the detection subunit is used for detecting the actual distance between the user and the image acquisition device and comparing the actual distance with the target distance;
the first generating subunit is configured to generate distance prompt information when the actual distance is not equal to the target distance, where the distance prompt information is used to instruct a user to move a current location until an actual distance corresponding to the current location is equal to the target distance;
the second generation subunit is used for generating rotation prompt information when the actual distance is equal to the target distance;
and the first acquisition subunit is used for synchronously acquiring the human body image data of the user according to the rotation rate of the user when detecting that the user rotates at the current position based on the rotation prompt information.
In an embodiment, the obtaining unit 302 includes:
the identification subunit is used for identifying each human body image in the human body image data to obtain human body contour data of the user;
and the calculating subunit is used for calculating the human body coordinate data of the user according to the human body contour data.
In an embodiment, the data processing apparatus further includes:
the switching unit is used for switching the current mode into a makeup trial mode when a user selects a makeup product and acquiring face image information of the user;
the color value acquisition unit is used for acquiring the face color value of each pixel point in the face image information;
the color value receiving unit is used for receiving the color value of the color number of the makeup product selected by the user;
the color value calculating unit is used for calculating a target color value according to the face color value and the color number color value;
and the adjusting unit is used for adjusting the color value of the human face area in the target human body model according to the target color value so that the user obtains a makeup trial effect according to the target human body model after the color value is adjusted.
In one embodiment, the color value calculating unit includes:
a transparency value obtaining subunit, configured to obtain a target transparency value preset by a user;
the adjusting subunit is used for adjusting the color value of the color number according to the target transparency value to obtain a target color value;
and the accumulation subunit is used for accumulating the face color value and the target color number color value to obtain a target color value.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the embodiment of the present application, the acquisition unit 301 collects the human body image data of the user; the obtaining unit 302 acquires the user's human body coordinate data based on the human body image data; the construction unit 303 constructs the user's three-dimensional human body model according to the human body coordinate data; the segmentation unit 304 segments the target image from the human body image data and renders it into the three-dimensional human body model; the receiving unit 305 receives the target clothes selected by the user and the corresponding size data; and the covering unit 306 overlays the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model. In this way, the three-dimensional human body model is built from the user's human body coordinate data and rendered with the user's target image, improving the realism of the fitting display; meanwhile, the target clothes are overlaid on the user's three-dimensional human body model according to the size data selected by the user, so that the wearing effect of the target clothes is displayed with the user's figure taken into account. This improves the realism of the dressing-effect display, increases data processing efficiency, and further improves the image display effect.
An embodiment of the present application further provides a computer device, as shown in fig. 6, which shows a schematic structural diagram of a computer device according to an embodiment of the present application, where the computer device may be a terminal, and specifically:
the computer device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 6 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the computer device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, power-failure detection circuitry, a power converter or inverter, power status indicators, and other such components.
The computer device may also include an input unit 404 operable to receive numeric or character input and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
collecting human body image data of a user; acquiring human body coordinate data of a user based on the human body image data; constructing a three-dimensional human body model of the user according to the human body coordinate data; segmenting a target image in the human body image data and rendering the target image into the three-dimensional human body model; receiving target clothes selected by a user and corresponding size data; and covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
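The six steps listed above form a single pipeline. As a minimal illustrative sketch (all function names and data structures below are hypothetical; the patent does not specify an implementation):

```python
# Minimal sketch of the claimed pipeline; function names and the
# dictionary-based "model" are illustrative assumptions only.

def collect_body_images(camera_frames):
    # Step 1: keep only frames in which a person was detected.
    return [f for f in camera_frames if f.get("has_person")]

def extract_body_coordinates(images):
    # Step 2: gather the body coordinate points reported for each frame.
    return [p for img in images for p in img["points"]]

def build_model(points):
    # Step 3: the "model" here is simply the collected point cloud.
    return {"points": points, "texture": None, "garment": None}

def render_texture(model, texture):
    # Step 4: attach the segmented body texture to the model.
    model["texture"] = texture
    return model

def overlay_garment(model, garment, size):
    # Steps 5-6: cover the model with the selected garment and size data.
    model["garment"] = (garment, size)
    return model

def build_target_model(frames, texture, garment, size):
    # Run the six claimed steps in order and return the target model.
    images = collect_body_images(frames)
    model = build_model(extract_body_coordinates(images))
    return overlay_garment(render_texture(model, texture), garment, size)
```

The sketch only shows the order and data flow of the claimed steps; a real implementation would replace each stub with image capture, pose estimation, mesh reconstruction, and rendering.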
The above operations can be implemented as described in the foregoing embodiments and are not detailed again here. It should be noted that the computer device provided in this embodiment of the present application and the data processing method in the foregoing embodiments belong to the same concept; their specific implementation processes are described in the foregoing method embodiments and are not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any data processing method provided by the embodiments of the present application. For example, the instructions may perform the steps of:
collecting human body image data of a user; acquiring human body coordinate data of a user based on the human body image data; constructing a three-dimensional human body model of the user according to the human body coordinate data; segmenting a target image in the human body image data and rendering the target image into the three-dimensional human body model; receiving target clothes selected by a user and corresponding size data; and covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in any data processing method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; these are detailed in the foregoing embodiments and are not described again here.
According to an aspect of the application, a computer program product or computer program is provided that comprises computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method provided in the various alternative implementations of the embodiments described above.
The foregoing has described in detail a data processing method, apparatus, and computer-readable storage medium provided in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the above description of the embodiments is intended only to help in understanding the method and its core ideas. Meanwhile, those skilled in the art may, following the ideas of the present application, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A data processing method, comprising:
collecting human body image data of a user;
acquiring human body coordinate data of a user based on the human body image data;
constructing a three-dimensional human body model of the user according to the human body coordinate data;
segmenting a target image in the human body image data and rendering the target image into the three-dimensional human body model;
receiving target clothes selected by a user and corresponding size data;
and covering the target clothes on the rendered three-dimensional human body model according to the size data to obtain the target human body model.
2. The data processing method of claim 1, wherein overlaying the target apparel onto the rendered three-dimensional human model according to the size data to obtain a target human model comprises:
acquiring target clothes image data of the target clothes corresponding to the size data;
segmenting the target clothes image data according to preset segmentation granularity to obtain clothes image subdata;
and covering the clothing image subdata on the three-dimensional human body model to obtain a target human body model.
3. The data processing method of claim 2, wherein the segmenting the target apparel image data according to a preset segmentation granularity to obtain apparel image sub-data comprises:
segmenting the target clothes image data according to a preset transverse segmentation granularity to obtain transverse clothes image subdata;
and segmenting the transverse clothing image subdata according to a preset longitudinal segmentation granularity to obtain clothing image subdata.
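The two-pass segmentation of claim 3 can be sketched as follows; the strip/tile granularities and the nested-list image format are illustrative assumptions:

```python
# Sketch of claim 3: the garment image is first cut into transverse
# (horizontal) strips, then each strip is cut longitudinally into tiles.
# The 2-D-list pixel format and the granularity values are assumptions.

def split_garment(image, row_step, col_step):
    """image: 2-D list of pixels; returns a flat list of rectangular tiles."""
    # First pass: transverse segmentation into horizontal strips of rows.
    strips = [image[r:r + row_step] for r in range(0, len(image), row_step)]
    tiles = []
    for strip in strips:
        # Second pass: longitudinal segmentation of each strip into tiles.
        width = len(strip[0])
        for c in range(0, width, col_step):
            tiles.append([row[c:c + col_step] for row in strip])
    return tiles
```

A 4x4 image split with granularity 2 in both directions yields four 2x2 tiles, each of which can then be mapped onto the model independently.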
4. The data processing method of claim 2, wherein said overlaying said apparel image sub-data onto said three-dimensional human model comprises:
acquiring human skeleton key point data of a user;
covering the clothing image subdata on the three-dimensional human body model according to the correspondence between the clothing image subdata and the human body skeleton key point data of the user;
and when the three-dimensional human body model has an area that the clothing image subdata cannot cover, displaying the uncovered area in a distinguishing manner.
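The coverage check in claim 4 amounts to a set difference between the model's regions and the regions reached by some garment tile. A minimal sketch, in which the region names and the tile-to-keypoint mapping are assumptions:

```python
# Sketch of claim 4: each garment tile is mapped to a skeleton keypoint
# region; model regions with no mapped tile are flagged so they can be
# displayed in a distinguishing manner. All names are hypothetical.

def cover_model(model_regions, tile_to_keypoint):
    """Return (covered regions, uncovered regions to highlight)."""
    covered = set(tile_to_keypoint.values())
    uncovered = [r for r in model_regions if r not in covered]
    return covered, uncovered
```

The `uncovered` list is what a renderer would draw in a contrasting color to show the user where the garment does not fit.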
5. The data processing method of claim 1, wherein the acquiring human body image data of the user comprises:
detecting the actual distance between a user and an image acquisition device, and comparing the actual distance with a target distance;
when the actual distance is not equal to the target distance, generating distance prompt information, wherein the distance prompt information is used for instructing the user to change position until the actual distance corresponding to the current position is equal to the target distance;
when the actual distance is equal to the target distance, generating rotation prompt information, wherein the rotation prompt information is used for instructing the user to rotate at the current position;
and when it is detected that the user is rotating at the current position based on the rotation prompt information, synchronously collecting the human body image data of the user according to the rotation rate of the user.
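The capture flow of claim 5 reduces to a small prompt-selection step. A hedged sketch, where the distance tolerance and the prompt strings are illustrative assumptions (the claim only requires that some prompt be generated):

```python
# Sketch of claim 5: compare the measured user-to-camera distance with
# the target distance and emit the corresponding prompt. The tolerance
# value and the prompt wording are assumptions, not from the patent.

def capture_prompt(actual_cm, target_cm, tolerance_cm=2.0):
    if abs(actual_cm - target_cm) > tolerance_cm:
        # Distance prompt: tell the user which way to move.
        direction = "closer" if actual_cm > target_cm else "further back"
        return f"move {direction} until you are {target_cm:.0f} cm away"
    # Rotation prompt: distance is correct, start capturing while turning.
    return "hold the position and rotate slowly"
```

Once the rotation prompt is active, frames would be collected at a rate synchronized with the user's measured rotation speed so that the body is sampled evenly from all sides.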
6. The data processing method of claim 5, wherein the obtaining of the body coordinate data of the user based on the body image data comprises:
identifying each human body image in the human body image data to obtain human body contour data of a user;
and calculating the body coordinate data of the user according to the body contour data.
7. The data processing method of claim 1, wherein the method further comprises:
when a user selects a makeup product, switching the current mode into a makeup trial mode, and acquiring face image information of the user;
acquiring a face color value of each pixel point in the face image information;
receiving a color value of a makeup product selected by a user;
calculating a target color value according to the face color value and the color value of the color number;
and adjusting the color value of the face area in the target human body model according to the target color value so that the user obtains a makeup trial effect according to the target human body model after the color value is adjusted.
8. The data processing method of claim 7, wherein the calculating a target color value according to the face color value and the color number color value comprises:
acquiring a target transparency value preset by a user;
adjusting the color value of the color number according to the target transparency value to obtain a target color value of the color number;
and accumulating the face color value and the target color value of the color number to obtain the target color value.
9. A data processing apparatus, comprising:
a collecting unit, configured to collect human body image data of a user;
an acquiring unit, configured to acquire human body coordinate data of the user based on the human body image data;
a building unit, configured to build a three-dimensional human body model of the user according to the human body coordinate data;
a segmentation unit, configured to segment a target image in the human body image data and render the target image into the three-dimensional human body model;
a receiving unit, configured to receive target clothes selected by the user and corresponding size data;
and a covering unit, configured to cover the target clothes on the rendered three-dimensional human body model according to the size data to obtain a target human body model.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the data processing method according to any one of claims 1 to 8.
CN202111171902.8A 2021-10-08 2021-10-08 Data processing method, device and computer readable storage medium Pending CN114004669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111171902.8A CN114004669A (en) 2021-10-08 2021-10-08 Data processing method, device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114004669A true CN114004669A (en) 2022-02-01

Family

ID=79922372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111171902.8A Pending CN114004669A (en) 2021-10-08 2021-10-08 Data processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114004669A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116797723A (en) * 2023-05-09 2023-09-22 阿里巴巴达摩院(杭州)科技有限公司 Three-dimensional modeling method for clothing, three-dimensional changing method and corresponding device
CN116797723B (en) * 2023-05-09 2024-03-26 阿里巴巴达摩院(杭州)科技有限公司 Three-dimensional modeling method for clothing, three-dimensional changing method and corresponding device
CN117235200A (en) * 2023-09-12 2023-12-15 杭州湘云信息技术有限公司 Data integration method and device based on AI technology, computer equipment and storage medium
CN117235200B (en) * 2023-09-12 2024-05-10 杭州湘云信息技术有限公司 Data integration method and device based on AI technology, computer equipment and storage medium
CN117523142A (en) * 2023-11-13 2024-02-06 书行科技(北京)有限公司 Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN114004669A (en) Data processing method, device and computer readable storage medium
CN105354876B (en) A kind of real-time volume fitting method based on mobile terminal
GB2564745B (en) Methods for generating a 3D garment image, and related devices, systems and computer program products
JP6392756B2 (en) System and method for obtaining accurate body size measurements from a two-dimensional image sequence
KR102346320B1 (en) Fast 3d model fitting and anthropometrics
US9984409B2 (en) Systems and methods for generating virtual contexts
US11375922B2 (en) Body measurement device and method for controlling the same
US10311508B2 (en) Garment modeling simulation system and process
CN102509349B (en) Fitting method based on mobile terminal, fitting device based on mobile terminal and mobile terminal
Boulay et al. Applying 3d human model in a posture recognition system
CN105404392A (en) Monocular camera based virtual wearing method and system
US20130173226A1 (en) Garment modeling simulation system and process
CN102982581A (en) Virtual try-on system and method based on images
WO2019032982A1 (en) Devices and methods for extracting body measurements from 2d images
JP7342366B2 (en) Avatar generation system, avatar generation method, and program
CN110298917B (en) Face reconstruction method and system
CN112274926A (en) Virtual character reloading method and device
CN114638929A (en) Online virtual fitting method and device, electronic equipment and storage medium
CN113763440A (en) Image processing method, device, equipment and storage medium
Liu et al. Real-time 3D virtual dressing based on users' skeletons
CN112102018A (en) Intelligent fitting mirror implementation method and related device
Dayik et al. Real-time virtual clothes try-on system
CN111369651A (en) Three-dimensional expression animation generation method and system
WO2014028714A2 (en) Garment modeling simulation system and process
Chen et al. Two‐dimensional virtual try‐on algorithm and application research for personalized dressing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination