CN116703507A - Image processing method, display method and computing device


Info

Publication number: CN116703507A
Application number: CN202310608672.XA
Authority: CN (China)
Prior art keywords: image, human body, target, face, texture map
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 庄亦村, 詹鹏鑫
Current Assignee: Alibaba China Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd, with priority to CN202310608672.XA


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Abstract

The embodiments of the present application provide an image processing method, a display method, and a computing device. The image processing method includes: generating a face geometric model and a face texture map based on at least one first image including a face region of a target user; generating a human body geometric model based on at least one second image including a human body region of the target user; fusing the face geometric model into the human body geometric model to generate a target human body geometric model; fusing the face texture map into a human body texture map to generate a target texture map; and obtaining a target human body model based on the target texture map and the target human body geometric model. The target human body model is used for rendering a three-dimensional human body image of the target user. The technical solution provided by the embodiments of the present application improves the display effect of the three-dimensional human body image.

Description

Image processing method, display method and computing device
Technical Field
The embodiments of the present application relate to the technical field of computer applications, and in particular to an image processing method, a display method, and a computing device.
Background
In some application scenarios, a user may enter his or her body data online, or the body data may be obtained through intelligent detection, and the body data may then be displayed to the user for viewing. For example, in an online shopping scenario, size information may be provided for wearable objects such as clothing products, and the user may select a suitable clothing product by comparing his or her body data against the size information. To enhance the display effect, a three-dimensional human body image may be shown; however, such a three-dimensional human body image has no correlation with the user, and its visual display effect is poor.
Disclosure of Invention
The embodiments of the present application provide an image processing method, a display method, and a computing device, which are used to solve the prior-art problem that three-dimensional human body images have a poor visual display effect.
In a first aspect, an embodiment of the present application provides an image processing method, including:
generating a face geometric model and a face texture map based on at least one first image including a face region of a target user;
generating a human body geometric model based on at least one second image comprising a human body region of the target user;
fusing the human face geometric model into the human body geometric model to generate a target human body geometric model;
fusing the face texture map to a human texture map to generate a target texture map;
obtaining a target human body model based on the target texture map and the target human body geometric model; the target human body model is used for rendering and generating a three-dimensional human body image of the target user.
Optionally, generating the face geometric model and the face texture map based on the at least one first image includes:
performing three-dimensional reconstruction on the at least one first image to obtain a face geometric model;
based on the at least one first image and the face geometric model, performing texture sampling to obtain a first texture map of the corresponding face region;
and fusing the first texture map with a second texture map corresponding to the non-face area to obtain the face texture map.
Optionally, the three-dimensional reconstructing the at least one first image to obtain a face geometric model includes:
and carrying out three-dimensional reconstruction on the at least one first image by using the parameterized model to obtain a face geometric model and the second texture map.
Optionally, generating the face geometric model and the face texture map based on the at least one first image includes:
carrying out cartoon processing on the at least one first image to generate at least one cartoon face image;
and generating a face geometric model and a face texture map based on the at least one cartoon face image.
Optionally, the method further comprises:
identifying a hair style category corresponding to the target user based on the at least one first image;
determining a hairstyle texture map corresponding to the hairstyle category and a hairstyle geometric model;
the fusing the face geometric model into the human geometric model, and generating a target human geometric model comprises the following steps:
fusing the hairstyle geometric model and the face geometric model into the human body geometric model to generate a target human body geometric model;
the fusing the face texture map into a human texture map, and generating a target texture map comprises:
and fusing the hairstyle texture map and the face texture map into the human body texture map to generate a target texture map.
Optionally, after obtaining the target human body model, the method further includes:
determining a bone animation corresponding to the target human body model; the target human body model is used for generating a dynamically-changed three-dimensional human body image by combining the skeletal animation rendering.
Optionally, the method further comprises:
the human texture map is selected from at least one preset human texture map template.
Optionally, the method further comprises:
a second texture map corresponding to the non-facial region is determined from at least one preset non-facial texture map template.
Optionally, the identifying, based on the at least one first image, a hair style category corresponding to the target user includes:
identifying a hairstyle category corresponding to the target user based on the at least one first image by using an identification model;
the recognition model is trained in advance based on sample first images and their corresponding hairstyle categories.
Optionally, the method further comprises:
at least one first image including a face region provided by a user and at least one second image including a body region are acquired.
In a second aspect, an embodiment of the present application provides a display method, including:
displaying a three-dimensional human body image generated based on the rendering of the target human body model;
the target human body model is generated based on the target texture map and the target human body geometric model, and the target human body geometric model is obtained by fusing the human face geometric model into the human body geometric model; the target texture map is obtained by fusing a face texture map to a human texture map; the face geometric model and the face texture map are generated based on at least one first image comprising a face region of a target user; the human body geometric model is generated based on at least one second image comprising a human body region of the target user.
Optionally, the method further comprises:
acquiring the target human body model sent by a server;
the three-dimensional human body image is generated based on the target human body model rendering.
Optionally, the method further comprises:
acquiring a face frontal area of a target user from at least one angle to obtain at least one first image;
acquiring a human body area of the target user from at least one angle to obtain at least one second image;
and sending the at least one first image and the at least one second image to a server.
Optionally, the method further comprises:
determining at least one object matching the target user;
displaying object prompt information of the at least one object;
responding to the triggering operation of the target user for the object prompt information, and determining a corresponding target object;
and displaying a fusion image of the object image of the target object and the three-dimensional human body image.
Optionally, the method comprises:
displaying trial prompt information in an object detail page of a target object;
responding to the triggering operation aiming at the trial prompt information, and acquiring the target human body model;
the three-dimensional human body image is generated based on the object image of the target object and the target human body model.
Optionally, the method further comprises:
image acquisition is carried out on a target user from at least one angle, and at least one human body image is obtained;
identifying a face region and a body region from the at least one body image;
taking at least one human body image comprising the human face area as a first image;
and taking at least one human body image comprising the human body area as a second image.
Optionally, the method further comprises:
acquiring human body data of the target user identified from the at least one second image;
and displaying the human body data at the corresponding characteristic parts in the three-dimensional human body image.
Optionally, the method further comprises:
detecting a first interaction operation for the three-dimensional human body image;
and rotating the three-dimensional human body image and displaying the rotated three-dimensional human body image.
Optionally, the rotating the three-dimensional human body image and displaying the rotated three-dimensional human body image includes:
and rotating the three-dimensional human body image according to a preset rotation direction by taking the vertical direction as a rotation axis and following the change of the contact position, and displaying the rotated three-dimensional human body image after the first interactive operation is finished.
Optionally, the method further comprises:
detecting a second interaction operation for the three-dimensional human body image;
and switching and displaying the three-dimensional human body image as a local human body image, and displaying a corresponding dynamic change image in the switching process.
Optionally, obtaining the target human body model in response to the triggering operation for the trial prompt information includes:
responding to the triggering operation aiming at the trial prompt information, and acquiring images to obtain at least one first image and at least one second image;
transmitting the at least one first image and the at least one second image to a server;
and acquiring the target human body model sent by the server.
Optionally, the generating the three-dimensional human body image based on the object image of the target object and the target human body model includes:
displaying the selection prompt information of at least one single item corresponding to the target object;
determining a target single item in response to a single item selection operation of the target user;
and displaying a fusion image of the single-item image of the target single-item and the three-dimensional human body image.
The embodiments of the present application generate a face geometric model and a face texture map based on at least one first image including a face region of a target user, generate a human body geometric model based on at least one second image including a human body region of the target user, fuse the face geometric model into the human body geometric model to generate a target human body geometric model, fuse the face texture map into a human body texture map to generate a target texture map, and obtain a target human body model based on the target texture map and the target human body geometric model. Because the face region and the human body region are generated separately and then recombined, the user's face and figure can be effectively restored, which improves the realism and accuracy of the three-dimensional human body image and thus its display effect.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram illustrating an embodiment of an information handling system according to the present application;
FIG. 2 is a flow chart illustrating one embodiment of an image processing method provided by the present application;
FIG. 3 is a flow chart illustrating one embodiment of a display method provided by the present application;
FIG. 4 is a schematic view of scene interaction in a practical application of an embodiment of the present application;
FIGS. 5a-5d are schematic diagrams of models in a model generation process in a practical application according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an image processing apparatus provided by the present application;
FIG. 7 is a schematic diagram showing a structure of an embodiment of a display device according to the present application;
FIG. 8 illustrates a schematic diagram of one embodiment of a computing device provided by the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
In some of the flows described in the specification and claims of the present application and in the above figures, a plurality of operations occurring in a particular order are included. It should be understood, however, that these operations may be performed out of the order in which they appear herein or in parallel. Sequence numbers of operations, such as 101 and 102, are merely used to distinguish different operations and do not themselves represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent a sequence, nor do they require that the "first" and the "second" be of different types.
The technical solution of the embodiments of the present application can be applied to application scenarios of human body data processing. A user may enter his or her body data online, or the body data may be obtained through intelligent detection, and the body data may be displayed to the user for viewing. For example, in an online shopping scenario, size information may be provided for wearable objects such as clothing products; the user may view the body data recorded on an online shopping platform, or body data obtained by intelligently detecting photos of the user through a detection service provided by the platform, and select a suitable clothing product according to the size information of the clothing products. To improve the display effect, three-dimensional human body images can currently be displayed; however, these three-dimensional human body images are all preconfigured and have no correlation with the user, so different users see the same three-dimensional human body image and the visual display effect is poor.
In order to improve the visual effect of the three-dimensional human body image, the inventors arrived at the technical solution of the present application through a series of studies. In the embodiments of the present application, a face geometric model and a face texture map can be generated based on at least one first image including a face region of a target user, and a human body geometric model can be generated based on at least one second image including a human body region of the target user. The face geometric model is fused into the human body geometric model to generate a target human body geometric model, and the face texture map is fused into a human body texture map to generate a target texture map. A target human body model is then obtained based on the target texture map and the target human body geometric model, and can be used to render a three-dimensional human body image of the target user. Because the face region and the human body region are generated separately and then recombined, the user's face and figure can be effectively restored, which improves the realism and accuracy of the three-dimensional human body image and thus its display effect.
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
The technical solution of the embodiments of the present application may be applied to the processing system shown in fig. 1, where the processing system may include a client 101 and a server 102.
The connection between the client 101 and the server 102 may be established through a network. The network provides a medium for a communication link between the client 101 and the server 102. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The client 101 may be a browser, an APP (Application), a web application such as an H5 (HTML5, HyperText Markup Language version 5) application, a light application (also called an applet, a lightweight application program), or a cloud application. The client 101 may be deployed in an electronic device and run depending on the device or on certain APPs in the device. The electronic device may have a display screen and support information browsing; for example, it may be a personal mobile terminal such as a mobile phone, a tablet computer, or a personal computer. For ease of understanding, fig. 1 mainly depicts the client as a device image. Various other types of applications, such as search applications and instant messaging applications, may also be configured in the electronic device.
The server 102 may include one or more servers that provide various services. For example, it may be implemented as a distributed server cluster formed by a plurality of servers or as a single server; it may be a server of a distributed system or a server combined with a blockchain; it may also be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology.
In the application scenario of the embodiments of the present application, for example, the server may generate a target human body model, and the client may display to the user a three-dimensional human body image rendered from the target human body model.
For example, in the embodiments of the present application, the client 101 may acquire at least one first image including the face region of the target user and at least one second image including the human body region of the target user, and send them to the server 102. The server 102 generates a face geometric model and a face texture map based on the at least one first image, generates a human body geometric model based on the at least one second image, fuses the face geometric model into the human body geometric model to generate a target human body geometric model, fuses the face texture map into a human body texture map to generate a target texture map, generates a target human body model based on the target texture map and the target human body geometric model, and sends the target human body model to the client 101. The client 101 renders a three-dimensional human body image based on the target human body model and displays it to the user.
It should be noted that, in the embodiments of the present application, the display method is generally executed by the user side and the image processing method by the server side. However, in other embodiments of the present application, the user side may have functions similar to those of the server side, so that it may also execute the image processing method provided by the embodiments of the present application.
It should be noted that, the technical solution of the embodiment of the present application is applicable to a network virtual environment, where the described user is generally referred to as a "virtual user", and a real user may register a user account in a server through a registration manner to obtain a user identity in the network environment.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the solutions described herein within the scope permitted by the applicable laws and regulations of the country, and under conditions that comply with those laws and regulations (for example, with the user's explicit consent and with practical notification to the user).
Implementation details of the technical solution of the embodiment of the present application are set forth in detail below.
Fig. 2 is a flowchart of an embodiment of an image processing method provided by the present application, where the technical solution of the present embodiment may be executed by a server, and the method may include the following steps:
201: a face geometry model and a face texture map are generated based on at least one first image including a face region of a target user.
Wherein, to further improve the visual display effect and the user's look and feel, optionally, generating the face geometric model and the face texture map based on the at least one first image may further include: performing cartoonization processing on the at least one first image to generate at least one cartoon face image; and generating the face geometric model and the face texture map based on the at least one cartoon face image.
The cartoonization processing model can be utilized to carry out the cartoonization processing on the at least one first image to generate at least one cartoon face image. The cartoonization processing model may be obtained by training in advance using training data, which may include a sample image as input data and a cartoonization image corresponding to the sample image as a training tag. The cartoonization processing model may be implemented by using any machine learning model, which is not limited in the present application.
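For illustration only, a supervised training setup of the kind just described, where a sample photo is the input and its cartoonized counterpart is the label, might look like the following sketch; the tiny network, loss, and tensors here are hypothetical placeholders, since the application does not limit the specific machine learning model.

```python
import torch
import torch.nn as nn

# Hypothetical minimal image-to-image generator; the application does not
# specify an architecture, so this tiny conv net is a placeholder.
class CartoonGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = CartoonGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise loss between output and cartoon label

# photos: sample images (input data); cartoons: cartoonized images (labels).
photos = torch.rand(4, 3, 128, 128)    # stand-in for a real training batch
cartoons = torch.rand(4, 3, 128, 128)

for step in range(100):
    pred = model(photos)
    loss = loss_fn(pred, cartoons)     # supervise with the cartoon label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```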
The face geometric model and the face texture map may be generated based on the at least one first image by a three-dimensional reconstruction method, and the specific implementation manner will be described in detail in the corresponding embodiments below.
The at least one first image may include images of the target user's face captured from different angles, thereby improving the accuracy of the face geometric model and the face texture map.
202: a human body geometric model is generated based on at least one second image including a human body region of the target user.
Optionally, the at least one first image and the at least one second image may be provided by a user, and thus the method may further comprise:
at least one first image including a face region provided by a user and at least one second image including a body region are acquired.
Optionally, the user may extract the at least one first image and the at least one second image through the user side.
The second image and the first image may be the same image. The user terminal may acquire an image of a target user, obtain at least one human body image, send the at least one human body image to the server terminal, identify a human face region and a human body region from the at least one human body image by the server terminal, take the at least one human body image including the human face region as a first image, and take the at least one human body image including the human body region as a second image.
Optionally, the second image and the first image may be different images, and the user side may acquire the face front area of the target user from at least one angle to obtain at least one first image; acquiring a human body area of a target user from at least one angle to obtain at least one second image; and sending the at least one first image and the at least one second image to the server.
In addition, the at least one second image and the at least one first image may also be uploaded locally by the user to the user terminal, and sent by the user terminal to the server terminal.
The at least one second image and the at least one first image may be two-dimensional images.
Wherein, the human body geometric model can be generated by a three-dimensional reconstruction mode.
203: and fusing the human face geometric model into the human body geometric model to generate a target human body geometric model.
The face region of the human body geometric model can be replaced with the face geometric model to generate the target human body geometric model.
204: and fusing the human face texture map into the human body texture map to generate the target texture map.
The target texture map may be generated by stitching the face texture map with the body texture map. In the case that the human texture map includes a human face region, the human face region of the human texture map may be replaced with the human face texture map, so as to obtain the target texture map.
Wherein the human texture map may be selected from at least one preset human texture map.
205: and obtaining a target human body model based on the target texture map and the target human body geometric model.
The target texture map may be attached to the target human body geometric model to obtain the target human body model. The target human body model is used for rendering a three-dimensional human body image of the target user.
The three-dimensional human body image may be generated by rendering the target human body model according to first camera parameters.
The first camera parameters may include a first camera position and a first camera height. For example, the first camera position may be a horizontal orientation relative to the target human body model, such as directly in front of, directly behind, directly to the left of, directly to the right of, or diagonally behind the target human body model. The first camera height may be a height relative to a part of the target human body model, so as to achieve looking-down, level, and looking-up shot effects.
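As a rough sketch of how a first camera position and first camera height could drive the rendering, the following builds a standard look-at view matrix from an assumed camera position in front of the model; the numeric values and the function interface are illustrative assumptions, not part of the application.

```python
import numpy as np

def look_at(camera_pos, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix looking from camera_pos at target."""
    forward = target - camera_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ camera_pos
    return view

# "Directly in front" of the model at chest height gives a level shot;
# raising camera_height above the model yields a looking-down shot.
model_center = np.array([0.0, 1.4, 0.0])          # approx. chest height (m)
camera_height = 1.4                               # equal height -> level view
camera_pos = np.array([0.0, camera_height, 2.5])  # 2.5 m in front of the model
view_matrix = look_at(camera_pos, model_center)
```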
In this embodiment, a face geometric model and a face texture map are generated based on at least one first image including a face region of a target user, a human body geometric model is generated based on at least one second image including a human body region of the target user, and the face geometric model is fused into the human body geometric model to generate a target human body geometric model. The face texture map is fused into a human body texture map to generate a target texture map, and a target human body model is obtained based on the target texture map and the target human body geometric model; the target human body model is used for rendering a three-dimensional human body image of the target user. Because the face region and the human body region are generated separately and then recombined, the user's face and figure can be effectively restored, improving the display effect of the three-dimensional human body image.
In some embodiments, generating the face geometric model and the face texture map based on the at least one first image may include:
three-dimensional reconstruction is carried out on at least one first image, and a face geometric model is obtained;
based on at least one first image and a face geometric model, performing texture sampling to obtain a first texture map of a corresponding face area;
and fusing the first texture map with the second texture map corresponding to the non-face area to obtain the face texture map.
Optionally, in the case where the cartoonization processing is performed on the at least one first image, three-dimensional reconstruction may specifically be performed on the at least one cartoonized image to obtain the face geometric model, and texture sampling may be performed based on the at least one cartoonized image and the face geometric model to obtain the first texture map of the corresponding face region. In this way, the finally restored face is a cartoon image, which improves the look and feel.
The texture sampling may be performed by projecting the face geometric model into each of the at least one first image. Through projection, the three-dimensional vertex coordinates in the face geometric model obtain corresponding two-dimensional plane coordinates; pixel sampling may then be performed on the at least one first image based on the two-dimensional plane coordinates. The pixel colors at the two-dimensional plane coordinates corresponding to each three-dimensional vertex in the at least one first image may be fused, for example by weighting, to obtain the texture corresponding to that vertex, and finally the first texture map corresponding to the face geometric model.
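A simplified sketch of this projection-and-sampling step is given below: each three-dimensional vertex is projected into every first image with that view's camera matrix, the pixel color at the projected two-dimensional coordinate is sampled, and the samples are fused by weighting. A real implementation would also handle visibility and occlusion and would sample per texel rather than per vertex; the 3x4 projection-matrix format is an assumption.

```python
import numpy as np

def sample_vertex_colors(vertices, images, projections, weights):
    """
    vertices:    (N, 3) 3D vertex coordinates of the face geometric model.
    images:      list of HxWx3 uint8 first images.
    projections: list of 3x4 camera projection matrices, one per image (assumed).
    weights:     per-view fusion weights, summing to 1.
    Returns (N, 3) fused vertex colors.
    """
    n = vertices.shape[0]
    colors = np.zeros((n, 3), dtype=np.float64)
    homo = np.hstack([vertices, np.ones((n, 1))])          # (N, 4) homogeneous
    for img, P, w in zip(images, projections, weights):
        uvw = (P @ homo.T).T                               # (N, 3) projected
        uv = uvw[:, :2] / uvw[:, 2:3]                      # 2D plane coordinates
        h, wd = img.shape[:2]
        u = np.clip(uv[:, 0].round().astype(int), 0, wd - 1)
        v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
        colors += w * img[v, u].astype(np.float64)         # sample and weight
    return colors.astype(np.uint8)
```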
The determining of the second texture map may have multiple implementation manners, and as an optional implementation manner, performing three-dimensional reconstruction on at least one first image, and obtaining the face geometric model may include:
and carrying out three-dimensional reconstruction on at least one first image by using the parameterized model to obtain a face geometric model and a second texture map.
The parameterized model may be, for example, 3DMM (3D Morphable Model), BFM (Basel Face Model), or FLAME (a parameterized three-dimensional face model). Based on the at least one input first image, the parameterized model obtains the corresponding face geometric model and second texture map by weighted linear addition of orthogonal bases derived from a certain number of face images in a face database.
Since the second texture map is derived from face images in a face database and may have a relatively low resolution, the second texture map obtained by the parameterized model may be used as the texture map of the non-face region to complement the texture map of the face region, thereby obtaining the face texture map.
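The weighted linear addition of orthogonal bases performed by a parameterized model can be sketched as follows. The mean shape and basis would come from a face database (for example, PCA over registered scans), and the coefficients would in practice be fitted to the input first images; the random data below is a placeholder.

```python
import numpy as np

# Mean shape and orthogonal shape basis, as would be produced by e.g. PCA
# over a face database of registered 3D scans (placeholder random data here).
num_vertices, num_basis = 5000, 80
mean_shape = np.random.rand(num_vertices * 3)
shape_basis = np.linalg.qr(np.random.rand(num_vertices * 3, num_basis))[0]  # orthonormal columns

# Coefficients that would be fitted to the input images (placeholders here).
coeffs = np.random.randn(num_basis) * 0.1

# Weighted linear addition of orthogonal bases -> reconstructed geometry.
face_shape = mean_shape + shape_basis @ coeffs
face_vertices = face_shape.reshape(num_vertices, 3)

# The second texture map is obtained the same way from a texture mean and
# texture basis: tex = mean_tex + tex_basis @ tex_coeffs.
```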
As another alternative implementation, the second texture map may be obtained in a preconfigured manner, and optionally, the method may further include:
A second texture map corresponding to the non-facial region is determined from at least one preset non-facial texture map template.
There may be one or more preset non-facial texture map templates, and a plurality of preset non-facial texture map templates may correspond to different face types, such as different skin colors or shapes.
The second texture map corresponding to the non-facial region may be determined from the at least one preset non-facial texture map template by inputting the at least one first image into a matching model to determine a corresponding target face type, such that the preset non-facial texture map template corresponding to the target face type may be used as the second texture map.
The matching model may be obtained by training using the sample image and the face type corresponding to the sample image as a training sample.
The fusing of the first texture map and the second texture map may be performed by stitching the texture map of the face region extracted from the first texture map with the texture map of the non-face region extracted from the second texture map to obtain the face texture map.
In some embodiments, the fusion of the first texture map and the second texture map corresponding to the non-facial region may be implemented using an image mask technique. In this embodiment, a face mask may be set, with the frontal face region, that is, the facial region, inside the mask, and the non-facial regions, such as the two sides of the face, the ears, and the neck, outside the mask. The texture inside the mask may be taken from the first texture map obtained based on the at least one first image, and the texture outside the mask may be taken from the second texture map, thereby obtaining the face texture map.
In addition, when fusing the first texture map corresponding to the facial region with the second texture map corresponding to the non-facial region, the colors at the boundary between the facial region and the non-facial region may be smoothed using a Gaussian smoothing algorithm, so that the boundary region transitions naturally.
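A minimal sketch of this mask-based fusion with boundary smoothing, assuming OpenCV as the image library: the binary face mask is feathered with a Gaussian blur so that the first texture map blends smoothly into the second texture map at the boundary.

```python
import cv2
import numpy as np

def fuse_textures(first_tex, second_tex, face_mask, feather=15):
    """
    first_tex:  HxWx3 texture sampled from the first images (facial region).
    second_tex: HxWx3 texture for the non-facial region.
    face_mask:  HxW uint8 mask, 255 inside the frontal face region.
    feather:    Gaussian kernel size for smoothing the boundary (odd number).
    """
    # Feather the hard mask so colors transition naturally at the boundary.
    alpha = cv2.GaussianBlur(face_mask, (feather, feather), 0) / 255.0
    alpha = alpha[..., None]  # broadcast the weight over the color channels
    fused = (alpha * first_tex.astype(np.float64)
             + (1.0 - alpha) * second_tex.astype(np.float64))
    return fused.astype(np.uint8)
```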
In some embodiments, the face texture map may be further whitened. For example, the skin area in the face texture map may be detected based on a face detection algorithm, and the pixels in the skin area may be subjected to a color mapping transformation based on a pre-designed whitening color mapping table to achieve a whitening effect. Of course, the present application does not limit the specific whitening algorithm.
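The color mapping transformation mentioned above amounts to applying a per-channel lookup table to the detected skin pixels. In the sketch below, the brightening curve is a hypothetical stand-in for a pre-designed whitening color mapping table, and skin detection is reduced to a given mask.

```python
import numpy as np

def whiten_skin(texture, skin_mask, strength=0.3):
    """
    texture:   HxWx3 uint8 face texture map.
    skin_mask: HxW bool array marking detected skin pixels.
    strength:  0..1, how strongly to pull colors toward the whitened curve.
    """
    # Hypothetical whitening LUT: a gamma-like curve that lifts midtones.
    x = np.arange(256, dtype=np.float64)
    lut = np.clip(255.0 * (x / 255.0) ** 0.8, 0, 255).astype(np.uint8)

    whitened = lut[texture]  # apply the color mapping table per channel
    out = texture.copy()
    blend = (1 - strength) * texture[skin_mask] + strength * whitened[skin_mask]
    out[skin_mask] = blend.astype(np.uint8)
    return out
```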
In addition, other beautifying treatments may be performed on the face texture map, which is not limited in the present application.
In some embodiments, to further enhance the visual effect, the target human body model may include a hairstyle model to make it more vivid. The method may further include: determining a hairstyle geometric model and a hairstyle texture map corresponding to the target user.
accordingly, fusing the face geometry model into the body geometry model, generating the target body geometry model may include: and fusing the hairstyle geometric model and the human face geometric model into the human body geometric model to generate a target human body geometric model.
Accordingly, fusing the face texture map into the body texture map, generating the target texture map may include: and fusing the hairstyle texture map and the face texture map into the human body texture map to generate a target texture map.
Thus, a target human body model including a hairstyle model may be generated based on the target human body geometric model fused with the hairstyle geometric model and the target texture map fused with the hairstyle texture map.
Wherein the hairstyle geometric model and hairstyle texture map may be preconfigured.
In addition, in order to further improve the visual effect, the hairstyle geometric model and the hairstyle texture map of different hairstyle categories may be preconfigured, so in some embodiments, determining the hairstyle geometric model and the hairstyle texture map corresponding to the target user may include:
identifying a hairstyle category corresponding to the target user based on the at least one first image; and determining a hairstyle texture map corresponding to the hairstyle category and a hairstyle geometric model.
Wherein identifying the hair style category corresponding to the target user based on the at least one first image may include: and identifying the hairstyle category corresponding to the target user based on the at least one first image by using the identification model. The recognition model is obtained by training in advance based on the sample image and the corresponding hair style category.
In some embodiments, to further enhance the visual effect, after obtaining the target human body model, the method may further include:
and determining a skeleton animation corresponding to the target human body model, wherein the target human body model is used for generating a dynamically-changed three-dimensional human body image by combining skeleton animation rendering.
One or more skeletal animation templates may be provided, and any skeletal animation template may be selected at random. The corresponding skeletal points of the target human body model are bound to the corresponding positions in the skeletal animation template, and the target human body model can then be driven according to the skeletal animation in the template, so that a dynamically changing three-dimensional human body image is rendered, enhancing the visual effect.
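Driving a bound model with a skeletal animation is commonly implemented with linear blend skinning; the sketch below shows such a generic skinning step, under the assumption that the binding step has produced per-vertex bone indices and weights. It is an illustration of the general technique, not the application's specific driving method.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_ids, bone_weights, bone_matrices):
    """
    rest_vertices: (N, 3) model vertices in the rest (bind) pose.
    bone_ids:      (N, K) indices of the K bones bound to each vertex.
    bone_weights:  (N, K) weights per bound bone, rows summing to 1.
    bone_matrices: (B, 4, 4) current-frame transforms of each bone
                   (already multiplied by the inverse bind matrices).
    Returns (N, 3) deformed vertices for the current animation frame.
    """
    n = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((n, 1))])        # (N, 4)
    deformed = np.zeros((n, 3))
    for k in range(bone_ids.shape[1]):
        mats = bone_matrices[bone_ids[:, k]]                  # (N, 4, 4)
        moved = np.einsum('nij,nj->ni', mats, homo)[:, :3]    # per-vertex transform
        deformed += bone_weights[:, k:k+1] * moved            # blend by weight
    return deformed
```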
The human body texture map may be preconfigured. In some embodiments, the method may further include: selecting the human body texture map from at least one preset human body texture map template.
Alternatively, in the case where there are a plurality of preset human texture map templates, one preset human texture map template may be randomly selected as the human texture map.
A parameterized model may be used to perform three-dimensional reconstruction on the at least one second image to obtain the human body geometric model; other, non-parameterized models may also be used for the three-dimensional reconstruction.
Fig. 3 is a flowchart of an embodiment of a display method provided by the present application, where the technical solution of the present embodiment may be executed by a user terminal, and the method may include the following steps:
301: and displaying the three-dimensional human body image generated based on the target human body model rendering.
The target human body model may be generated based on a target texture map and a target human body geometric model, the target human body geometric model being obtained by fusing a face geometric model into a human body geometric model; the target texture map is obtained by fusing a face texture map into a human body texture map.
The face geometry model and the face texture map are generated based on at least one first image including a face region of a target user; the body geometry model is generated based on at least one second image comprising a body region of the target user.
The target human body model acquired by the user side can be generated and sent by the server side.
Thus, optionally, the method may further comprise: acquiring a target human body model sent by a server; a three-dimensional human body image is generated based on the target human body model rendering.
The specific manner of generating the target human body model is described in detail in the embodiment shown in fig. 2 and is not repeated here.
In this embodiment, the user side displays a three-dimensional human body image rendered from a target human body model. The target human body model is generated by the server side based on a target texture map and a target human body geometric model; the target human body geometric model is obtained by fusing a face geometric model into a human body geometric model, and the target texture map is obtained by fusing a face texture map into a human body texture map. The face geometric model and the face texture map are generated based on at least one first image including the face region of a target user, and the human body geometric model is generated based on at least one second image including the human body region of the target user. Since the three-dimensional human body image is generated from images including the user's face region and human body region, and the face region and the human body region are generated separately and then recombined, the user's face and figure can be effectively restored, which improves the realism and accuracy of the three-dimensional human body image and thus its display effect.
Wherein, as an alternative, the method may further comprise:
acquiring a face frontal area of a target user from at least one angle to obtain at least one first image;
acquiring a human body region of the target user from at least one angle to obtain at least one second image;
and sending the at least one first image and the at least one second image to the server.
The server side may thus generate the target human body model based on the at least one first image and the at least one second image.
Optionally, an acquisition prompt may be displayed first to prompt the user on how to capture the images. Based on the user's acquisition operation, the frontal face region of the target user is captured from at least one angle to obtain at least one first image, and the human body region of the target user is captured from at least one angle to obtain at least one second image.
The server side may generate the target human body model based on the at least one first image and the at least one second image; the specific generation manner is described in the foregoing embodiments and is not repeated here.
In some embodiments, as another alternative, the method may further include:
image acquisition is carried out on a target user from at least one angle, and at least one human body image is obtained;
identifying a face region and a body region from at least one body image;
taking at least one human body image including the face region as a first image;
at least one human body image including a human body region is taken as a second image.
In some embodiments, to enrich the form of interaction with the user, the method may further comprise:
detecting a first interactive operation for a three-dimensional human body image;
and rotating the three-dimensional human body image and displaying the rotated three-dimensional human body image.
The first interaction operation may be, for example, a left-right sliding operation on the three-dimensional human body image. Rotating the three-dimensional human body image and displaying the rotated three-dimensional human body image may include: rotating the three-dimensional human body image in a preset rotation direction about a vertical rotation axis, following the change of the contact position, and displaying the rotated three-dimensional human body image after the first interaction operation ends.
When the three-dimensional human body image is rotated about the vertical axis in a preset direction, a plurality of camera parameters corresponding to the preset direction may be determined, and a plurality of corresponding three-dimensional human body images may be generated.
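A sketch of determining the plurality of camera parameters for this rotation: sweep the camera's yaw angle in the preset direction and compute, for each angle, a position on a horizontal circle around the model's vertical axis. The radius, height, and step count are illustrative.

```python
import numpy as np

def orbit_camera_positions(center, radius=2.5, height=1.4, steps=36, clockwise=True):
    """Camera positions on a horizontal circle around the model's vertical axis."""
    sign = -1.0 if clockwise else 1.0  # preset rotation direction
    positions = []
    for i in range(steps):
        yaw = sign * 2.0 * np.pi * i / steps
        pos = np.array([center[0] + radius * np.sin(yaw),
                        height,
                        center[2] + radius * np.cos(yaw)])
        positions.append(pos)
    return positions

# One rendered three-dimensional human body image per camera position;
# the displayed frame follows the contact position during the slide gesture.
cams = orbit_camera_positions(center=np.array([0.0, 0.0, 0.0]))
```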
In addition, after the rotated three-dimensional human body image is displayed, the original three-dimensional human body image can be restored if no interaction operation is detected within a predetermined time.
In some embodiments, the method may further comprise:
detecting a second interactive operation for the three-dimensional human body image;
and switching and displaying the three-dimensional human body image as a local human body image, and displaying a corresponding dynamic change image in the switching process.
Specifically, the dynamic change image corresponding to the switch from the three-dimensional human body image to the local human body image may be displayed first; after the dynamic change image has been displayed, the local human body image is displayed. The dynamic change image and the local human body image are displayed in succession, so that the three-dimensional human body image, the dynamic change image, and the local human body image are smoothly connected.
The local human body image may be determined according to the interaction position of the second interaction operation; it may be an image of the area where the interaction position is located, or an image of the feature part corresponding to the interaction position.
Displaying the corresponding dynamic change image during the switching process may include: displaying a dynamic change image formed from the target images respectively corresponding to a plurality of camera parameters. The target images corresponding to the plurality of camera parameters may be combined according to the arrangement order of the camera parameters to generate the dynamic change image, that is, a video generated from the plurality of target images.
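Combining the target images in the arrangement order of their camera parameters into a video can be sketched with OpenCV's VideoWriter; the codec, frame rate, and placeholder frames below are illustrative choices.

```python
import cv2
import numpy as np

def frames_to_video(frames, path="transition.mp4", fps=30):
    """frames: list of HxWx3 uint8 target images, already in display order."""
    h, w = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # illustrative codec choice
    writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
    for frame in frames:
        writer.write(frame)  # frames follow the camera-parameter order
    writer.release()

# 24 placeholder frames standing in for the rendered target images.
frames = [np.zeros((480, 360, 3), dtype=np.uint8) for _ in range(24)]
frames_to_video(frames)
```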
This embodiment realizes switching between the whole image and the partial image, connected through the dynamic change image, which improves the visual effect, enriches the interaction forms, and realizes effective interaction with the user.
In some embodiments, the method may further comprise:
acquiring human body data of the target user identified from the at least one second image;
and displaying the human body data at the corresponding characteristic parts in the three-dimensional human body image.
Wherein, the server side can identify the human body data of the target user from at least one second image and send the human body data to the user side.
Corresponding feature tags may also be displayed at a plurality of feature locations in the three-dimensional human body image.
Accordingly, the detection of the second interaction operation for the three-dimensional human body image may be the detection of the trigger operation for any one of the feature tags, and the switching of the three-dimensional human body image to the local human body image may be the switching of the three-dimensional human body image to the local human body image of the feature location corresponding to the feature tag.
A feature tag can display the feature data of the corresponding feature part, and the feature tag can be implemented as a control for user interaction.
In some embodiments, the method may further comprise:
Determining at least one object matching the target user;
displaying object prompt information of at least one object;
responding to the triggering operation of the target user on the object prompt information, and determining a corresponding target object;
and displaying a fusion image of the object image of the target object and the three-dimensional human body image.
An object may be a single item of a product; one product is often a combination of a plurality of single items. In an e-commerce scenario, a single item may be represented as a stock keeping unit (SKU), and a product may be represented as a standard product unit (SPU). The object prompt information may include trial prompt information for each object. Further, in response to a triggering operation on any trial prompt information, the object image of the corresponding target object may be determined; the object image is fused with the three-dimensional human body image to generate a fused image, and the fused image is displayed.
In addition, the object prompt information may further include processing prompt information, and then the target object may be processed correspondingly in response to a triggering operation for any processing prompt information.
Wherein, the processing prompt information may include a processing control corresponding to at least one processing type, and responding to the triggering operation for any processing prompt information to perform corresponding processing on the target object may include:
and responding to the triggering operation of the processing control for any processing type, and correspondingly processing the target object according to the processing type.
Wherein the processing controls may include a purchase control. Accordingly, the target object may be added to the shopping cart in response to a triggering operation for the purchase control.
The processing controls may also include transaction controls. Accordingly, an order request may be generated based on the target object in response to a triggering operation for the transaction control.
This embodiment can be applied to an object transaction scenario: by triggering the trial prompt information of different objects, the user can view the fused images of different objects with the three-dimensional human body image, feel the effect of different objects on his or her own body, and perform a purchase or transaction operation on a desired object.
In some embodiments, the method may further comprise:
displaying trial prompt information in an object detail page of a target object;
responding to triggering operation aiming at trial prompt information to acquire a target human body model;
A three-dimensional human body image is generated based on the object image of the target object and the target human body model.
The target object may be a target product comprising a plurality of single items. The target product may have attribute parameters such as size, style, and performance parameters, and a plurality of single items can be obtained by combining different attribute parameters of the target product.
Accordingly, generating the three-dimensional human body image based on the object image of the target object and the target human body model may include: and generating a three-dimensional human body image based on the single-article image of the target single article and the target human body model.
The target single item can be obtained by matching against the human body data of the target user identified from the at least one second image. For example, the target product is a style of jeans available in sizes S, M, and L: size S fits a waistline of 66-68 cm, size M fits a waistline of 68-70 cm, and size L fits a waistline of 70-72 cm. If the identified waistline of the target user is 69 cm, the matched target single item is the size M single item.
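The waistline-to-size matching in this example reduces to a range lookup, sketched below with the sizes from the text; the data structure and function are illustrative.

```python
# Size chart from the example above: (size, min waistline cm, max waistline cm).
SIZE_CHART = [("S", 66, 68), ("M", 68, 70), ("L", 70, 72)]

def match_size(waistline_cm):
    """Return the single-item size whose waistline range covers the user."""
    for size, lo, hi in SIZE_CHART:
        if lo <= waistline_cm <= hi:
            return size  # boundary values go to the first matching range
    return None

print(match_size(69))  # -> "M", matching the 69 cm waistline in the example
```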
Further, as yet another alternative, the generating the three-dimensional human body image based on the object image of the target object and the target human body model may include: displaying the selection prompt information of at least one single item corresponding to the target object; determining a target single item in response to a single item selection operation of the target user; and displaying a fusion image of the single-item image of the target single-item and the three-dimensional human body image.
I.e. the target individual may also be determined by the user selection, etc.
The embodiment can be applied to an object exchange scene, and a user can check a three-dimensional human body image generated by the target object and human body data of the user by triggering trial prompt information, so that the recommendation effect of the target object is improved, and the conversion rate of the target object is further improved.
In some embodiments, obtaining the target human body model in response to the triggering operation for the trial prompt information may include: in response to the triggering operation for the trial prompt information, performing image acquisition to obtain at least one first image and at least one second image; sending the at least one first image and the at least one second image to the server side; and acquiring the target human body model sent by the server side.
the image acquisition operation can be triggered according to the triggering operation of the trial prompt information, and the trial prompt information can comprise acquisition prompt information for prompting a user to acquire at least one first image, at least one second image and the like according to the user acquisition operation.
The server side may generate the target human body model based on the at least one first image and the at least one second image; the specific generation manner is described in the foregoing embodiments and is not repeated here.
For ease of understanding, the technical solution of the embodiments of the present application is described below with reference to the scene interaction diagram shown in fig. 4.
The user side 401 may perform image acquisition on the target user: it captures the frontal face region of the target user from at least one angle to obtain at least one first image, captures the human body region of the target user from at least one angle to obtain at least one second image, and sends the at least one first image and the at least one second image to the server side 402.
The server side 402 may generate the human body geometric model 500 shown in fig. 5a based on the at least one second image, generate the face geometric model 501 shown in fig. 5a and the face texture map 502 shown in fig. 5b based on the at least one first image, and determine the hairstyle geometric model 503 shown in fig. 5a and the hairstyle texture map 504 shown in fig. 5b corresponding to the target user.
The face geometric model 501 is used to replace the face region of the human body geometric model 500, and the hairstyle geometric model 503 is added to the human body geometric model to generate the target human body geometric model 505.
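As a minimal sketch of this geometry fusion step, assuming both meshes follow one template topology so that the face region is a fixed set of vertex indices (an assumption of the sketch; boundary stitching and alignment are omitted):

```python
import numpy as np

def fuse_face_into_body(body_verts: np.ndarray,   # (V, 3) human body geometric model
                        face_verts: np.ndarray,   # (F, 3) reconstructed face vertices
                        face_idx: np.ndarray) -> np.ndarray:  # (F,) face-region indices
    """Replace the face-region vertices of the body mesh with the personalized face."""
    fused = body_verts.copy()
    fused[face_idx] = face_verts  # swap in the reconstructed face region
    return fused
```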
The server side 402 may randomly select one preset human texture map template from a plurality of preset human texture map templates as the human texture map 506, and fuse the hairstyle texture map 504 and the face texture map 502 into the human texture map 506 to generate the target texture map 507. The target texture map and the target human body geometric model form the target human body model, from which the three-dimensional human body image shown in fig. 5c can be rendered.
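A minimal sketch of this texture fusion step follows, assuming the face and hairstyle textures are already in the UV layout of the human texture map and that soft masks mark their regions; both are assumptions of the sketch.

```python
import numpy as np

def fuse_textures(body_tex: np.ndarray,    # (H, W, 3) human texture map, e.g. 506
                  face_tex: np.ndarray,    # (H, W, 3) face texture in the same UV layout
                  face_mask: np.ndarray,   # (H, W, 1) soft mask in [0, 1]
                  hair_tex: np.ndarray,    # (H, W, 3) hairstyle texture
                  hair_mask: np.ndarray) -> np.ndarray:
    """Alpha-blend the face and hairstyle textures into the body texture."""
    out = body_tex.astype(np.float32)
    out = face_mask * face_tex + (1.0 - face_mask) * out  # paste the face region
    out = hair_mask * hair_tex + (1.0 - hair_mask) * out  # then the hairstyle
    return out.clip(0, 255).astype(np.uint8)              # target texture map, e.g. 507
```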
After generating the target human body model, the server side 402 may also randomly select a skeletal animation template as shown in fig. 5d and bind the target human body model to the corresponding positions of the corresponding skeletal points in the template. The bound model can then be driven by the skeletal animation in the template to obtain a dynamic target human body model, which is sent to the user side 401; the user side 401 renders a dynamically changing three-dimensional human body image based on it.
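The driving step can be illustrated with standard linear blend skinning, sketched below under the assumption that binding produced per-vertex bone weights and that each animation frame supplies one 4x4 matrix per bone; the embodiments do not mandate this particular skinning scheme.

```python
import numpy as np

def skin_vertices(rest_verts: np.ndarray,  # (V, 3) target human body model at rest
                  weights: np.ndarray,     # (V, B) binding weights, rows sum to 1
                  bone_mats: np.ndarray) -> np.ndarray:  # (B, 4, 4) one animation frame
    """Pose the model for one frame of the skeletal animation via linear blend skinning."""
    hom = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)  # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_mats, hom)    # each bone's transform of each vertex
    blended = (weights[..., None] * per_bone).sum(axis=1)  # weight-blend across bones
    return blended[:, :3]
```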
Fig. 6 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present application, where the apparatus includes:
a first generation module 601, configured to generate a face geometric model and a face texture map based on at least one first image including a face region of a target user;
a second generation module 602 for generating a human body geometric model based on at least one second image comprising a human body region of the target user;
a third generating module 603, configured to fuse the face geometric model into the human body geometric model to generate a target human body geometric model;
a fourth generating module 604, configured to fuse the face texture map into the human texture map to generate a target texture map;
a fifth generating module 605, configured to obtain a target human body model based on the target texture map and the target human body geometric model.
The target human body model is used for rendering and generating a three-dimensional human body image of a target user.
The human body texture map may be pre-configured and may be selected from at least one pre-set human body texture map template.
Alternatively, in the case where there are a plurality of preset human texture map templates, one preset human texture map template may be randomly selected as the human texture map.
In some embodiments, the first generation module generating the face geometric model and the face texture map based on the at least one first image may include: performing three-dimensional reconstruction on the at least one first image to obtain the face geometric model; performing texture sampling based on the at least one first image and the face geometric model to obtain a first texture map corresponding to the face region; and fusing the first texture map with a second texture map corresponding to the non-face region to obtain the face texture map.
The second texture map may be determined in multiple ways. As an optional implementation, performing three-dimensional reconstruction on the at least one first image to obtain the face geometric model may include:
performing three-dimensional reconstruction on the at least one first image by using a parameterized model to obtain the face geometric model and the second texture map.
As another alternative implementation, the second texture map may be obtained in a preconfigured manner; optionally, the method may further include: determining the second texture map corresponding to the non-face region from at least one preset non-face texture map template.
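The texture-sampling step that produces the first texture map can be sketched as follows. The pinhole projection with no extrinsics, the per-vertex UV coordinates and the per-vertex splatting (rather than per-texel rasterization) are all simplifying assumptions of the sketch.

```python
import numpy as np

def sample_face_texture(image: np.ndarray,   # (H, W, 3) one first image
                        verts: np.ndarray,   # (V, 3) face geometric model, camera frame
                        K: np.ndarray,       # (3, 3) camera intrinsics
                        uvs: np.ndarray,     # (V, 2) per-vertex UVs in [0, 1]
                        tex_size: int = 512) -> np.ndarray:
    """Project face vertices into the photo and copy their colors into a UV texture."""
    proj = (K @ verts.T).T
    px = (proj[:, :2] / proj[:, 2:3]).round().astype(int)  # pixel coordinates
    h, w = image.shape[:2]
    px[:, 0] = px[:, 0].clip(0, w - 1)
    px[:, 1] = px[:, 1].clip(0, h - 1)
    tex = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    tuv = (uvs * (tex_size - 1)).round().astype(int)
    tex[tuv[:, 1], tuv[:, 0]] = image[px[:, 1], px[:, 0]]  # sparse per-vertex splat
    return tex  # first texture map of the face region
```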
In some embodiments, the apparatus may further include a beautifying processing module configured to whiten the face texture map. For example, the skin area in the face texture map may be detected based on a face detection algorithm, and a color mapping transformation may be applied to the pixels of the skin area based on a pre-designed whitening color mapping table to achieve a whitening effect. Of course, the present application is not limited to a specific whitening algorithm.
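As an illustrative sketch of such a whitening transform (the brightness curve below is a toy stand-in for a pre-designed whitening color mapping table, and the skin mask is assumed to come from the upstream detection step):

```python
import numpy as np

def whiten(face_tex: np.ndarray, skin_mask: np.ndarray) -> np.ndarray:
    """Apply a whitening color lookup table to the skin pixels only."""
    # toy LUT: brighten each channel; a real table would be designed offline
    lut = np.clip(np.arange(256) * 1.15 + 10, 0, 255).astype(np.uint8)
    out = face_tex.copy()
    out[skin_mask] = lut[face_tex[skin_mask]]  # per-channel table lookup
    return out
```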
In addition, other beautifying treatments may be performed on the face texture map, which is not limited in the present application.
In some embodiments, to further enhance the visual effect and make the target human body model more vivid, the target human body model may include a hairstyle model; the apparatus may then further include a hairstyle recognition module for determining a hairstyle geometric model and a hairstyle texture map corresponding to the target user.
Accordingly, the third generating module fusing the face geometric model into the human body geometric model to generate the target human body geometric model may include: fusing the hairstyle geometric model and the face geometric model into the human body geometric model to generate the target human body geometric model.
Accordingly, the fourth generating module fusing the face texture map into the human texture map to generate the target texture map may include: fusing the hairstyle texture map and the face texture map into the human texture map to generate the target texture map.
Thus, the fifth generating module may generate a target human body model that includes a hairstyle model, based on the target human body geometric model fused with the hairstyle geometric model and the target texture map fused with the hairstyle texture map.
Wherein the hairstyle geometric model and hairstyle texture map may be preconfigured.
In some embodiments, to further improve the visual effect, the apparatus may further include an animation generation module configured to determine, after the target human body model is obtained, a skeletal animation corresponding to the target human body model; the target human body model is used to generate a dynamically changing three-dimensional human body image in combination with rendering of the skeletal animation.
The processing apparatus shown in fig. 6 may perform the image processing method described in the embodiment shown in fig. 2; its implementation principle and technical effects are not repeated here. The specific manner in which the respective modules and units of the processing apparatus perform operations has been described in detail in the method embodiments and will not be detailed again.
Fig. 7 is a schematic structural diagram of an embodiment of a display device according to an embodiment of the present application, where the device includes:
a display module 701, configured to display a three-dimensional human body image generated based on the rendering of the target human body model.
The target human body model is generated based on the target texture map and the target human body geometric model; the target human body geometric model is obtained by fusing the face geometric model into the human body geometric model, and the target texture map is obtained by fusing the face texture map into the human texture map. The face geometric model and the face texture map are generated based on at least one first image comprising a face region of the target user; the human body geometric model is generated based on at least one second image comprising a human body region of the target user.
In some embodiments, to enrich the forms of interaction with the user, the apparatus may further include a first interaction module for detecting a first interaction operation on the three-dimensional human body image, rotating the three-dimensional human body image, and displaying the rotated three-dimensional human body image.
Wherein rotating the three-dimensional human body image and displaying the rotated image may include: rotating the three-dimensional human body image in a preset rotation direction about the vertical direction as the rotation axis, following the change of the contact position, and displaying the rotated three-dimensional human body image after the first interaction operation ends.
In addition, after the rotated three-dimensional human body image is displayed, the original three-dimensional human body image can be restored if no interaction operation is detected within a predetermined time, as in the sketch below.
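The following sketch illustrates this drag-to-rotate behaviour; the event callbacks, sensitivity and idle timeout are placeholders rather than a real UI framework.

```python
DEGREES_PER_PIXEL = 0.5    # assumed drag sensitivity
IDLE_RESET_SECONDS = 3.0   # assumed "predetermined time"; caller schedules on_idle_timeout

class BodyImageView:
    def __init__(self, render_fn):
        self.render_fn = render_fn  # callback: yaw in degrees -> redraw the image
        self.yaw = 0.0
        self.last_x = None

    def on_touch_move(self, x: float) -> None:
        if self.last_x is not None:
            # rotate about the vertical axis, following the contact position
            self.yaw += (x - self.last_x) * DEGREES_PER_PIXEL
        self.last_x = x
        self.render_fn(self.yaw)

    def on_touch_end(self) -> None:
        self.last_x = None          # keep showing the rotated image

    def on_idle_timeout(self) -> None:
        self.yaw = 0.0              # restore the original three-dimensional image
        self.render_fn(self.yaw)
```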
In some embodiments, the apparatus may further include a second interaction module for detecting a second interaction operation on the three-dimensional human body image, switching the display from the three-dimensional human body image to a local human body image, and displaying a corresponding dynamically changing image during the switching.
Specifically, the dynamically changing image corresponding to the switch from the three-dimensional human body image to the local human body image may be displayed first, and the local human body image is displayed once it finishes. The dynamically changing image and the local human body image are displayed in succession, so that the three-dimensional human body image, the dynamically changing image and the local human body image connect smoothly.
The local human body image may be determined according to the interaction position of the second interaction operation: it may be an image of the area where the interaction position is located, or an image of the feature part corresponding to the interaction position.
Displaying the corresponding dynamically changing image during the switching may include: displaying a dynamically changing image composed of target images respectively corresponding to a plurality of camera parameters. The target images corresponding to the plurality of camera parameters can be combined in the arrangement order of those camera parameters to generate the dynamically changing image, i.e. a video formed from the plurality of target images.
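A sketch of assembling that dynamically changing image follows, assuming the camera parameters can be linearly interpolated and that `render(cam)` stands in for the engine's render call; both are assumptions of the sketch.

```python
import numpy as np

def transition_frames(cam_full: np.ndarray, cam_close: np.ndarray,
                      render, steps: int = 24) -> list:
    """Interpolate from the full-body camera to the close-up and render each step."""
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        cam = (1.0 - t) * cam_full + t * cam_close  # linear blend, an assumption
        frames.append(render(cam))
    return frames  # played in order, these form the dynamically changing image
```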
In some embodiments, the apparatus may further include a second display module for acquiring the human body data of the target user identified from the at least one second image, and displaying the human body data at the corresponding feature parts in the three-dimensional human body image.
Wherein, the server side can identify the human body data of the target user from at least one second image and send the human body data to the user side.
Corresponding feature tags can also be displayed at a plurality of feature parts in the three-dimensional human body image.
Accordingly, detecting the second interaction operation on the three-dimensional human body image may be detecting a trigger operation on any one of the feature tags, and switching the three-dimensional human body image to the local human body image may be switching it to the local human body image of the feature part corresponding to that feature tag.
A feature tag can display the feature data of its corresponding feature part and may be implemented as a control for user interaction.
In some embodiments, the apparatus may further include a third display module for determining at least one object matching the target user; displaying object prompt information of the at least one object; determining the corresponding target object in response to a triggering operation by the target user on the object prompt information; and displaying a fusion image of the object image of the target object and the three-dimensional human body image.
Wherein the object may be a single item of a certain product. The object prompt information may include trial prompt information; further, the object image corresponding to the target object may be determined in response to a triggering operation for any trial prompt information, and the object image is fused with the three-dimensional human body image to generate and display the fusion image.
In addition, the object prompt information may further include processing prompt information, so that the target object can be processed correspondingly in response to a triggering operation for any processing prompt information.
Wherein the processing prompt information may include a processing control corresponding to each of at least one processing type, and performing corresponding processing on the target object in response to a triggering operation for any processing prompt information may include: in response to a triggering operation on the processing control of any processing type, processing the target object correspondingly according to that processing type.
Wherein the processing controls may include a purchase control; accordingly, the target object may be added to the shopping cart in response to a triggering operation on the purchase control.
The processing controls may also include a transaction control; accordingly, an order request may be generated based on the target object in response to a triggering operation on the transaction control, as sketched below.
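A minimal sketch of dispatching these controls by processing type follows; the handler names and type keys are illustrative only, not part of the embodiment.

```python
def add_to_cart(object_id: str) -> None:
    print(f"target object {object_id} added to shopping cart")

def create_order(object_id: str) -> None:
    print(f"order request generated for target object {object_id}")

HANDLERS = {
    "purchase": add_to_cart,      # purchase control
    "transaction": create_order,  # transaction control
}

def on_control_triggered(processing_type: str, target_object_id: str) -> None:
    """Process the target object according to the triggered control's type."""
    HANDLERS[processing_type](target_object_id)
```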
In some embodiments, the apparatus may further include a fourth display module configured to display prompt information in an object detail page, acquire the target human body model in response to a triggering operation for the prompt information, and generate the three-dimensional human body image based on the object image of the target object and the target human body model.
The display device shown in fig. 7 may perform the display method described in the embodiment shown in fig. 3; its implementation principle and technical effects are not repeated here. The specific manner in which the respective modules and units of the display device perform operations has been described in detail in the method embodiments and will not be detailed again.
Embodiments of the present application also provide a computing device. As shown in fig. 8, the computing device may include a storage component and a processing component.
The storage component stores one or more computer instructions, which are invoked and executed by the processing component to implement the image processing method described in the embodiment shown in fig. 2 or the display method described in the embodiment shown in fig. 3.
Of course, the computing device may also include other components, such as input/output interfaces, communication components, and the like. In the case where the computing device implements the display method described in the embodiment shown in fig. 3, it may further include a display component or the like.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
Wherein the processing component 802 may include one or more processors to execute computer instructions to perform all or part of the steps in the methods described above. Of course, the processing component may also be implemented as one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements for executing the methods described above.
The storage component 801 is configured to store various types of data to support operation of the device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The display component may be an electroluminescent (EL) element, a liquid crystal display or a micro display with a similar structure, or a retinal direct-projection display or similar laser scanning type display.
It should be noted that, in the case where the above-mentioned computing device implements the display method described in the embodiment of fig. 3, the computing device may be specifically implemented as an electronic device, where the electronic device may be a device that is used by a user and has functions of computing, surfing the internet, communication, and the like, and may be, for example, a mobile phone, a tablet computer, a personal computer, a wearable device, and the like.
In the case where the above-mentioned computing device implements the image processing method described in the embodiment shown in fig. 2, the computing device may be a physical device or an elastic computing host provided by a cloud computing platform, and it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or single terminal device.
The embodiment of the application also provides a computer readable storage medium storing a computer program which, when executed by a computer, can implement the image processing method described in the embodiment shown in fig. 2 or the display method described in the embodiment shown in fig. 3. The computer-readable medium may be contained in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device.
The embodiment of the present application further provides a computer program product, which includes a computer program loaded on a computer readable storage medium, where the computer program when executed by a computer can implement an image processing method as described in the embodiment shown in fig. 2 or a display method as described in the embodiment shown in fig. 3.
In such embodiments, the computer program may be downloaded and installed from a network, and/or installed from a removable medium. The computer program, when executed by a processor, performs the various functions defined in the system of the application.
The computer readable storage medium in the foregoing embodiments may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solution, or the part of it contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. An image processing method, comprising:
generating a face geometric model and a face texture map based on at least one first image including a face region of a target user;
generating a human body geometric model based on at least one second image comprising a human body region of the target user;
fusing the human face geometric model into the human body geometric model to generate a target human body geometric model;
fusing the face texture map to a human texture map to generate a target texture map;
obtaining a target human body model based on the target texture map and the target human body geometric model; the target human body model is used for rendering and generating a three-dimensional human body image of the target user.
2. The method of claim 1, wherein generating a face geometry model and a face texture map based on the at least one first image comprises:
performing three-dimensional reconstruction on the at least one first image to obtain a face geometric model;
based on the at least one first image and the face geometric model, performing texture sampling to obtain a first texture map of a corresponding face area;
and fusing the first texture map with a second texture map corresponding to the non-face area to obtain the face texture map.
3. The method of claim 2, wherein the three-dimensionally reconstructing the at least one first image to obtain a face geometry model comprises:
and carrying out three-dimensional reconstruction on the at least one first image by using the parameterized model to obtain a face geometric model and the second texture map.
4. The method of claim 1, wherein generating a face geometry model and a face texture map based on the at least one first image comprises:
carrying out cartoon processing on the at least one first image to generate at least one cartoon face image;
and generating a face geometric model and a face texture map based on the at least one cartoon face image.
5. The method as recited in claim 1, further comprising:
identifying a hair style category corresponding to the target user based on the at least one first image;
determining a hairstyle texture map corresponding to the hairstyle category and a hairstyle geometric model;
the fusing the face geometric model into the human geometric model, and generating a target human geometric model comprises the following steps:
fusing the hairstyle geometric model and the face geometric model into the human body geometric model to generate a target human body geometric model;
the fusing the face texture map into a human texture map, and generating a target texture map comprises:
and fusing the hairstyle texture map and the face texture map into the human body texture map to generate a target texture map.
6. The method of claim 1, wherein after obtaining the target mannequin, further comprising:
determining a bone animation corresponding to the target human body model; the target human body model is used for generating a dynamically-changed three-dimensional human body image by combining the skeletal animation rendering.
7. The method as recited in claim 1, further comprising:
selecting the human texture map from at least one preset human texture map template.
8. A display method, comprising:
displaying a three-dimensional human body image generated based on the rendering of the target human body model;
the target human body model is generated based on the target texture map and the target human body geometric model, and the target human body geometric model is obtained by fusing a face geometric model into a human body geometric model; the target texture map is obtained by fusing a face texture map into a human texture map; the face geometric model and the face texture map are generated based on at least one first image comprising a face region of a target user; the human body geometric model is generated based on at least one second image comprising a human body region of the target user.
9. The method as recited in claim 8, further comprising:
acquiring the target human body model sent by a server;
the three-dimensional human body image is generated based on the target human body model rendering.
10. The method as recited in claim 9, further comprising:
acquiring a frontal face region of a target user from at least one angle to obtain at least one first image;
acquiring a human body area of the target user from at least one angle to obtain at least one second image;
and sending the at least one first image and the at least one second image to a server.
11. The method as recited in claim 8, further comprising:
determining at least one object matching the target user;
displaying object prompt information of the at least one object;
responding to the triggering operation of the target user for the object prompt information, and determining a corresponding target object;
and displaying a fusion image of the object image of the target object and the three-dimensional human body image.
12. The method as recited in claim 8, further comprising:
displaying trial prompt information in an object detail page of a target object;
responding to the triggering operation aiming at the trial prompt information, and acquiring the target human body model;
the three-dimensional human body image is generated based on the object image of the target object and the target human body model.
13. A computing device comprising a processing component, a storage component; the storage component stores one or more computer instructions; the one or more computer instructions are for invocation and execution by the processing component to implement the image processing method of any one of claims 1 to 7 or the display method of any one of claims 8 to 12.
14. A computer storage medium storing a computer program which, when executed by a computer, implements the image processing method according to any one of claims 1 to 7 or the display method according to any one of claims 8 to 12.
CN202310608672.XA 2023-05-23 2023-05-23 Image processing method, display method and computing device Pending CN116703507A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310608672.XA | 2023-05-23 | 2023-05-23 | Image processing method, display method and computing device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310608672.XA | 2023-05-23 | 2023-05-23 | Image processing method, display method and computing device

Publications (1)

Publication Number | Publication Date
CN116703507A | 2023-09-05

Family

ID=87824918

Family Applications (1)

Application Number | Priority Date | Filing Date | Title | Status
CN202310608672.XA | 2023-05-23 | 2023-05-23 | Image processing method, display method and computing device | Pending

Country Status (1)

Country | Link
CN | CN116703507A (en)

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination