CN115953561A - 3D model hairstyle processing method and device and electronic equipment - Google Patents

3D model hairstyle processing method and device and electronic equipment Download PDF

Info

Publication number
CN115953561A
CN115953561A
Authority
CN
China
Prior art keywords
model
contour point
hairstyle
hair style
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211605021.7A
Other languages
Chinese (zh)
Inventor
肖萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd filed Critical Guangzhou Huya Information Technology Co Ltd
Priority to CN202211605021.7A priority Critical patent/CN115953561A/en
Publication of CN115953561A publication Critical patent/CN115953561A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a 3D model hairstyle processing method and device and an electronic device. Matching point pairs are determined on the hairstyle mask images of a reference image and of a model plane rendering image of a 3D model to be processed, and a change constraint condition is determined according to the relative position relationship of the matching point pairs, so that the hairstyle part of the 3D model to be processed is adjusted and its hairstyle area approaches the reference image. In this way, the hairstyle area of the 3D model to be processed can be adjusted automatically while the user only needs to provide a 2D reference image, meeting diversified generation requirements for the hairstyle areas of 3D models.

Description

3D model hairstyle processing method and device and electronic equipment
Technical Field
The application relates to the field of image processing, in particular to a 3D model hair style processing method and device and electronic equipment.
Background
With the continuous development of technologies related to online social networking, digital avatars are applied more and more widely, and the demand for diverse, personalized avatars keeps growing. In the traditional avatar generation scheme, a user can only select a preset hairstyle type for the hairstyle area of an avatar, which can hardly meet the demand for massive, diversified avatars; alternatively, the user may manually adjust a selected preset hairstyle model, but this requires cumbersome operations from the user.
Disclosure of Invention
In order to overcome the above-mentioned deficiencies in the prior art, the present application aims to provide a 3D model hair style processing method, comprising:
acquiring a reference image and a to-be-processed 3D model, and acquiring a model plane rendering image of the to-be-processed 3D model under a preset visual angle;
respectively inputting the model plane rendering image and the reference image into an image semantic segmentation model to obtain a first hair style mask image indicating a hair style area in the model plane rendering image and a second hair style mask image indicating a hair style area in the reference image;
determining, by a dynamic programming algorithm, first matched point pairs matched on the first hairstyle mask image and the second hairstyle mask image, the first matched point pairs including first hairstyle plane contour points on the first hairstyle mask image and second hairstyle plane contour points on the second hairstyle mask image;
determining a first hair style space contour point and a second hair style space contour point which correspond to the first hair style plane contour point and the second hair style plane contour point in the space where the 3D model to be processed is located;
determining a change constraint condition according to the first hair style plane contour point, the second hair style plane contour point, the first hair style space contour point and the position relation among points on the 3D model to be processed;
and adjusting each point in the hairstyle part of the 3D model to be processed according to the change constraint condition.
In a possible implementation manner, before the step of inputting the model plane rendering image and the reference image into the image semantic segmentation model respectively, the method further includes:
and performing face key point alignment adjustment processing on the model plane rendering image and the reference image.
In a possible implementation manner, the step of acquiring the reference image and the to-be-processed 3D model includes:
acquiring a reference image, and performing hairstyle type identification on the reference image;
and determining a corresponding 3D model to be processed according to the identification result of the hair style class identification.
In one possible implementation, the hairstyle part of the 3D model to be processed comprises a plurality of patches; before the step of determining a change constraint according to the position relationship between the first hair style spatial contour point and the second hair style spatial contour point and the position relationship between points on the 3D model to be processed, the method further includes:
removing repetition points formed by overlapping of different patches in the hairstyle part of the 3D model to be processed;
and/or adding connection constraint patches among different patches in the hairstyle part of the 3D model to be processed.
In one possible implementation, the change constraint includes a first constraint and a second constraint; the step of determining a change constraint condition according to the position relationship between the first hair style space contour point and the second hair style space contour point and the position relationship between points on the to-be-processed 3D model includes:
determining a target space contour point after performing thin plate spline interpolation change on the first hairstyle space contour point according to the position relationship between the first hairstyle plane contour point and the second hairstyle plane contour point and the position relationship between the first hairstyle space contour point and the second hairstyle space contour point, and taking the first hairstyle space contour point as close to the target space contour point as possible as a first constraint condition;
and according to the connection relation between each adjacent point in the 3D model to be processed, maximizing the number of points keeping the connection relation unchanged as a second constraint condition.
In one possible implementation, the 3D model to be treated comprises a head part and a hairstyle part; the constraints further comprise a third constraint; the step of determining a change constraint condition according to the position relationship between the first hair style space contour point and the second hair style space contour point and the position relationship between points on the 3D model to be processed further includes:
determining a plurality of points in the hairstyle part, which are within a preset range of distance from the head part, as fixing points to keep the positions of the fixing points unchanged as the third constraint condition.
In a possible implementation manner, the step of determining a target spatial contour point after performing a thin plate spline interpolation change on the first hairstyle spatial contour point according to a positional relationship between the first hairstyle planar contour point and the second hairstyle planar contour point and a positional relationship between the first hairstyle spatial contour point and the second hairstyle spatial contour point includes:
determining a point which is invisible in the preset visual angle from the 3D model to be processed and is within a set range with the first hair-style space contour point as a third hair-style space contour point;
determining a first position relationship between the first hairstyle plane contour point and the second hairstyle plane contour point, and determining a second position relationship between the first hairstyle space contour point, the third hairstyle space contour point and the second hairstyle space contour point;
and determining a target space contour point after the thin-plate spline interpolation change is executed on the first hairstyle space contour point according to the first position relation and the second position relation.
Another object of the present application is to provide a 3D model hairstyle processing device, the 3D model hairstyle processing device comprising:
the image acquisition module is used for acquiring a reference image and a to-be-processed 3D model and acquiring a model plane rendering image of the to-be-processed 3D model under a preset visual angle;
a semantic segmentation module, configured to input the model plane rendering image and the reference image into an image semantic segmentation model respectively, and obtain a first hair style mask image indicating a hair style region in the model plane rendering image and a second hair style mask image indicating a hair style region in the reference image;
a point location matching module, configured to determine, through a dynamic programming algorithm, a first matching point pair that is matched with the first hair style mask image and the second hair style mask image, where the first matching point pair includes a first hair style plane contour point on the first hair style mask image and a second hair style plane contour point on the second hair style mask image;
the point location mapping module is used for determining a first hair-style spatial contour point and a second hair-style spatial contour point which correspond to the first hair-style planar contour point and the second hair-style planar contour point in the space where the 3D model to be processed is located;
a constraint determining module, configured to determine a change constraint condition according to the first hair-style plane contour point, the second hair-style plane contour point, the first hair-style space contour point, and a position relationship between points on the to-be-processed 3D model;
and the model adjusting module is used for adjusting each point in the hair style part of the 3D model to be processed according to the change constraint condition.
Another object of the present application is to provide an electronic device, which includes a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are executed by the processor, the 3D model hair style processing method provided by the present application is implemented.
Another object of the present application is to provide a machine-readable storage medium, wherein the machine-readable storage medium stores machine executable instructions, which when executed by one or more processors, implement the 3D model hair style processing method provided by the present application.
Compared with the prior art, the method has the following beneficial effects:
according to the 3D model hair style processing method, the device and the electronic equipment, the matching point pairs on the hair style mask image of the reference image and the model plane rendering image of the 3D model to be processed are determined, the change constraint condition is determined according to the relative position relation of the matching point pairs so as to adjust the hair style part of the 3D model to be processed, and the hair style area of the 3D model to be processed approaches to the reference image. Therefore, the adjustment of the hairstyle area of the 3D model to be processed can be automatically realized under the condition that the user only needs to provide the 2D reference image, and the diversified generation requirements of the hairstyle area of the 3D model are met.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating steps of a 3D model hair style processing method according to an embodiment of the present application;
fig. 2 is a schematic view of an application scenario of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 4 is a functional block diagram of a 3D model hair style processing device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it is noted that the terms "first", "second", "third", and the like are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance.
In the description of the present application, it is further noted that, unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical connection or an electrical connection; a direct connection, an indirect connection through an intervening medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating steps of a 3D model hair style processing method provided in this embodiment, and each step of the method is described in detail below.
Step S110, obtaining a reference image and a to-be-processed 3D model, and obtaining a model plane rendering image of the to-be-processed 3D model under a preset visual angle.
In this embodiment, the 3D model to be processed may be a 3D model that needs to be adjusted in a hair style part, the reference image may include a human face part and a hair style part, and the reference image is used as a reference target in adjusting the 3D model to be processed.
Optionally, different hairstyle types differ greatly in the contour of the 3D model; for example, a double-ponytail hairstyle and a single-ponytail hairstyle have very different contours. Therefore, in this embodiment, after the reference image is obtained, hairstyle class identification may be performed on the reference image, and the corresponding to-be-processed 3D model may then be determined according to the identification result. In this way, the hairstyle types of the 3D model to be processed and of the reference image can be kept substantially consistent, which reduces the amount of data processing in the subsequent adjustment of the 3D model to be processed and reduces adjustment distortion.
After the reference image and the to-be-processed 3D model are acquired, a model plane rendering image of the to-be-processed 3D model at a preset viewing angle may be acquired, for example, the to-be-processed 3D model is rendered at a viewing angle of a face of the to-be-processed 3D model in front view, so as to obtain the model plane rendering image.
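As an illustration of this step, the following is a minimal sketch of rendering a front-view plane image of the 3D model to be processed; the mesh file path, camera convention and resolution are illustrative assumptions rather than values given by this application, and the trimesh library is assumed to be available with an offscreen rendering backend.

```python
import trimesh  # assumed available, with an offscreen rendering backend (e.g. pyglet)

def render_front_view(mesh_path: str, resolution=(512, 512)) -> bytes:
    """Render the model at a preset front viewing angle; returns PNG bytes."""
    mesh = trimesh.load(mesh_path)
    scene = mesh.scene()
    # Preset view: camera looking straight at the face (assumed axis convention).
    scene.set_camera(angles=(0.0, 0.0, 0.0), distance=2.5 * mesh.scale)
    return scene.save_image(resolution=resolution)

# Hypothetical usage: img_m_png = render_front_view("model_to_process.glb")
```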
Step S120, the model plane rendering image and the reference image are respectively input into an image semantic segmentation model, and a first hairstyle mask image indicating a hairstyle area in the model plane rendering image and a second hairstyle mask image indicating the hairstyle area in the reference image are obtained.
In this embodiment, the model plane rendering image and the reference image may be respectively input to an image semantic segmentation model for processing, and the image semantic segmentation model may be trained to identify a hair style region in the input image and output a corresponding hair style mask image, for example, in the hair style mask image, a region pixel value of the hair style region may be 1, and a region pixel value of a non-hair style region may be 0. A first hair style mask image indicating a hair style area in the model plane rendering image and a second hair style mask image indicating the hair style area in the reference image can be obtained through the image semantic segmentation model.
Specifically, the semantic segmentation model may adopt a deep-learning-based DeepLab V3+ model, and data augmentation may be used to train the semantic segmentation model to convergence. For the model plane rendering image, since the hairstyle part of the 3D model to be processed is usually composed of a large number of patches rather than fine hair strands, the segmentation result is relatively stable. The reference image, however, is usually a real portrait photograph containing a large number of fine hair strands, so its segmentation result needs post-processing operations such as edge smoothing and removal of free-floating blocks before being passed to the next module. The second hairstyle mask image mask_r is finally computed as follows:

mask_r = Pre(E_seg(img_r_aligned))

where img_r_aligned is the reference image, E_seg is the semantic segmentation network, and Pre is a post-processing function applied to the semantic segmentation output to smooth the edges and remove free blocks.
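A minimal sketch of such a post-processing function Pre is given below, assuming the segmentation network outputs a binary 0/1 hair mask; the morphological kernel size and the minimum-area ratio used to drop free blocks are illustrative assumptions.

```python
import cv2
import numpy as np

def pre(mask: np.ndarray, kernel_size: int = 7, min_area_ratio: float = 0.01) -> np.ndarray:
    """Smooth the mask edge and remove free-floating blocks (binary 0/1 uint8 mask)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Morphological close/open smooths the ragged edges left by fine hair strands.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep only connected components above an area threshold, dropping free blocks.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    out = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area_ratio * mask.size:
            out[labels == i] = 1
    return out

# Hypothetical usage: mask_r = pre(seg_output), with seg_output = E_seg(img_r_aligned)
```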
In some possible implementations, in order to improve the accuracy of the subsequent processing, a face key point alignment adjustment process may be performed on the model plane rendering image and the reference image before step S120. In particular, the reference image may be aligned to the model plane rendered image based on keypoints and triangulation algorithms.
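As a simplified sketch of this alignment, the reference image below is warped with a single similarity transform estimated from matched face key points; the per-triangle piecewise warp implied by the triangulation algorithm is omitted, and the landmark arrays are assumed to come from an external face landmark detector.

```python
import cv2
import numpy as np

def align_reference(img_r: np.ndarray, pts_r: np.ndarray, pts_m: np.ndarray,
                    out_size: tuple) -> np.ndarray:
    """Warp the reference image so its face key points land on the rendering's key points."""
    M, _ = cv2.estimateAffinePartial2D(pts_r.astype(np.float32),
                                       pts_m.astype(np.float32))
    return cv2.warpAffine(img_r, M, out_size)  # out_size = (width, height)
```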
Step S130, determining a first matching point pair matched with the first hair style mask image and the second hair style mask image through a dynamic programming algorithm, where the first matching point pair includes a first hair style plane contour point on the first hair style mask image and a second hair style plane contour point on the second hair style mask image.
In this embodiment, in order to make the hair style part of the to-be-processed 3D model similar to the reference image, the point location change that needs to be generated between the model plane rendering image of the to-be-processed 3D model and the reference image is determined at a 2D level, so as to constrain the point location change at the 3D level. In this embodiment, a first matching point pair matched on the first hair-style mask image and the second hair-style mask image may be determined by a dynamic programming algorithm, where in the first matching point pair, a first hair-style plane contour point on the first hair-style mask image is a point position to be adjusted, and a second hair-style plane contour point on the second hair-style mask image is a target point position to be reached or approached.
Specifically, in this embodiment, the problem of finding the matching point pairs may be converted into the optimization problem of minimizing an energy function. First, boundary point sampling may be performed on the first and second hairstyle mask images; for example, the first boundary points P_m on the first hairstyle mask image and the second boundary points P_r on the second hairstyle mask image are obtained by the following formulas:

P_m = Sample(FC(mask_m))
P_r = Sample(FC(mask_r))

where mask_m is the first hairstyle mask image, mask_r is the second hairstyle mask image, FC() is a boundary-finding function, and Sample() is a function that samples boundary points.

The first boundary points P_m may then be taken as the first hairstyle plane contour points, and the second hairstyle plane contour points points_r matching P_m are searched for among the second boundary points P_r. Specifically, the second hairstyle plane contour points points_r may be determined by the following formula:

points_r = argmin_q Σ_i [ EP(P_m(i), q(i)) + EE(P_m(i), P_m(i+1), q(i), q(i+1)) ], with q(i) ∈ P_r

where EP() is the point energy, formed by the Euclidean distance and the normal-vector difference between a matching point pair, and EE() is the edge energy of matching points, formed by the direction difference of the boundary segments formed by the matched points. By minimizing the energy function, that is, by making the distances, normals and edge directions between corresponding points of the two matched point sets as consistent as possible, the second hairstyle plane contour points points_r corresponding to the first hairstyle plane contour points P_m are finally obtained.
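A minimal sketch of the boundary sampling and the dynamic-programming match is given below; for simplicity the point energy uses only the Euclidean-distance term of EP(), the matching is assumed to be order-preserving, and all function names are illustrative.

```python
import cv2
import numpy as np

def sample_boundary(mask: np.ndarray, n: int = 128) -> np.ndarray:
    """Sample() over FC(): find the outer boundary and resample n points on it."""
    cnts, _ = cv2.findContours(mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(cnts, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    return cnt[np.linspace(0, len(cnt) - 1, n).astype(int)]

def match_contours(P_m: np.ndarray, P_r: np.ndarray) -> np.ndarray:
    """Order-preserving DP match; returns, for each model point, its matched P_r index."""
    n, m = len(P_m), len(P_r)
    cost = np.linalg.norm(P_m[:, None, :] - P_r[None, :, :], axis=-1)  # EP distance term
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    back = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = (dp[i - 1, j - 1], dp[i - 1, j], dp[i, j - 1])
            back[i, j] = int(np.argmin(choices))
            dp[i, j] = cost[i - 1, j - 1] + min(choices)
    match = np.zeros(n, dtype=int)  # unmatched leading points default to index 0
    i, j = n, m
    while i > 0 and j > 0:          # backtrack through the DP table
        match[i - 1] = j - 1
        if back[i, j] == 0:
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return match  # points_r = P_r[match]
```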
Step S140, determining a first hair style spatial contour point and a second hair style spatial contour point corresponding to the first hair style planar contour point and the second hair style planar contour point in the space where the to-be-processed 3D model is located.
In this embodiment, after the first matching point pairs are determined at the 2D level, the first matching point pairs need to be mapped into the 3D space where the to-be-processed 3D model is located, so as to obtain second matching point pairs, where the second matching point pairs include the first hairstyle space contour points corresponding to the first hairstyle plane contour points and the second hairstyle space contour points corresponding to the second hairstyle plane contour points.
Specifically, in this embodiment, according to the rendering mapping matrix adopted when the to-be-processed 3D model is rendered into the model plane rendering image, inverse mapping may be performed on the first hairstyle plane contour points P_m so as to obtain the first hairstyle space contour points P'_m. Similarly, the second hairstyle space contour points points'_r corresponding to the second hairstyle plane contour points points_r can be obtained. It is understood that, in the present embodiment, the first hairstyle space contour points P'_m are contour points of the 3D model to be processed.
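One simple way to realize this inverse mapping, sketched below under the assumption that a 3x4 rendering projection matrix proj is available, is to project every mesh vertex with the same matrix and assign each 2D contour point the 3D vertex whose projection lands nearest to it.

```python
import numpy as np

def lift_to_3d(pts_2d: np.ndarray, vertices: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Map 2D contour points back to 3D via the rendering projection (brute force)."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # Nx4 homogeneous coords
    uvw = homo @ proj.T                                        # project all vertices
    uv = uvw[:, :2] / uvw[:, 2:3]                              # perspective divide
    d = np.linalg.norm(uv[None, :, :] - pts_2d[:, None, :], axis=-1)
    return vertices[np.argmin(d, axis=1)]  # nearest projected vertex per 2D point
```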
Step S150, determining a change constraint condition according to the first hair style plane contour point, the second hair style plane contour point, the first hair style space contour point and the position relation among the points on the 3D model to be processed.
In this embodiment, some variation constraints may be determined according to the first hair-style plane contour point, the second hair-style plane contour point, and the first hair-style spatial contour point, so that the hair-style part of the 3D model to be processed is as similar as possible to the reference image. In order to avoid excessive distortion in the adjustment process, some variation constraints need to be determined according to the positional relationship between the first hairstyle spatial contour point and the second hairstyle spatial contour point and the positional relationship between points on the 3D model to be processed.
Step S160, adjusting the point positions of the hairstyle part of the 3D model to be processed according to the change constraint conditions.
In this embodiment, besides the first hairstyle space contour points, there are many other points in the hairstyle part of the 3D model to be processed; these points can be changed while satisfying the change constraint conditions determined in step S150.
Specifically, in this embodiment, non-rigid deformation constraints such as ASAP (as-similar-as-possible) may be used to operate on the point structure. The operators commonly used in non-rigid deformation, such as the Laplacian and the cotangent Laplacian, all assume manifold or watertight structures, but the local structure constraints added here for the non-watertight structure can still guarantee the stability of the local structure to a certain extent.
Based on the above design, in the 3D model hairstyle processing method provided by the application, matching point pairs are determined on the hairstyle mask images of the reference image and of the model plane rendering image of the 3D model to be processed, and a change constraint condition is determined according to the relative position relationship of the matching point pairs to adjust the hairstyle part of the 3D model to be processed, so that the hairstyle area of the 3D model to be processed approaches the reference image. In this way, the hairstyle area of the 3D model to be processed can be adjusted automatically while the user only needs to provide a 2D reference image, meeting diversified generation requirements for the hairstyle areas of 3D models.
In one possible implementation, the change constraint includes a first constraint and a second constraint.
In step S150, a target space contour point P'_tps after a change (such as a TPS, thin plate spline, change) is performed on the first hairstyle space contour points may be determined according to the position relationship between the first hairstyle plane contour points P_m and the second hairstyle plane contour points points_r and the position relationship between the first hairstyle space contour points P'_m and the second hairstyle space contour points points'_r, and keeping the first hairstyle space contour points P'_m as close as possible to the target space contour points P'_tps is taken as the first constraint condition.
And maximizing the number of points for which the connection relationship is kept unchanged according to the connection relationship between each adjacent point in the to-be-processed 3D model as a second constraint condition. Specifically, in the Mesh network structure of the to-be-processed 3D model, the connection relationship between each point is recorded, and in the process of performing the transformation in step S160, it is necessary to keep the connection relationship between each point as unchanged as possible.
Further, the 3D model to be processed includes a head part and a hairstyle part, and the constraints further include a third constraint. In step S150, a plurality of points of the hairstyle part whose distance from the head part is within a preset range may be determined as fixed points, and keeping the positions of the fixed points unchanged is the third constraint condition. In this way, the hairstyle part can be prevented from clipping through the head part after the hairstyle part is adjusted.
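A minimal sketch of selecting these fixed points is shown below; the preset distance threshold is an illustrative assumption, and a KD-tree is used only for efficiency.

```python
import numpy as np
from scipy.spatial import cKDTree

def fixed_point_ids(hair_verts: np.ndarray, head_verts: np.ndarray,
                    preset_range: float = 0.01) -> np.ndarray:
    """Return indices of hair vertices lying within preset_range of the head part."""
    dists, _ = cKDTree(head_verts).query(hair_verts)  # nearest head vertex per hair vertex
    return np.where(dists <= preset_range)[0]
```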
In a possible implementation manner, the hairstyle part of the 3D model to be processed includes a plurality of patches, there may be overlapping repeated points between the patches, and there may be no topological constraint on the positional relationship between the patches, which may cause the hair patches to be scattered or structurally distorted during the adjustment process for the hairstyle part of the 3D model to be processed.
Therefore, before step S160, the method may further remove a repetition point formed by overlapping of different patches in the hairstyle part of the 3D model to be processed, and/or add a connection constraint patch between different patches in the hairstyle part of the 3D model to be processed.
Specifically, in this embodiment, according to the Mesh structure data of the 3D model to be processed, the repetition points formed by overlapping of different patches may be removed and the index mapping stored, so that after the hairstyle adjustment is completed the corresponding points can be restored according to the mapping relationship. An operation of adding face-structure constraints is then performed on the de-duplicated points, adding constraints among the patches according to the following formula:

mesh' = Struc(Dedup(mesh), k, perc)

where mesh is the original Mesh data of the 3D model to be processed, Dedup() is the de-duplication operation, Struc() is the operation of adding the face constraint function, k is the parameter for computing KNN (K-Nearest Neighbor) points, and perc is the parameter controlling the proportion of points to which face-structure constraints are added; too large a perc introduces too many constraints, so that the deformation operation has too little effect on the structure.
In a possible implementation manner, in step S150, a point of the 3D model to be processed which is invisible within the preset viewing angle and lies within a set range of the first hairstyle space contour points may also be determined as a third hairstyle space contour point.
Then, a first position relation between the first hair style plane contour point and the second hair style plane contour point is determined, and a second position relation between the first hair style space contour point, the third hair style space contour point and the second hair style space contour point is determined.
And determining a target space contour point after the thin-plate spline interpolation change is performed on the first hairstyle space contour points according to the first position relation and the second position relation.
In this way, during the TPS adjustment of the first hairstyle space contour points, the invisible points at the back of the head of the 3D model to be processed are also influenced by the TPS change, and the positions in the invisible area are not over-constrained.
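A minimal sketch of computing the target space contour points via a thin-plate-spline change is given below, using scipy's RBF interpolator with a thin-plate-spline kernel as a stand-in TPS solver; all variable names are assumptions, and extra_pts stands for the invisible back-of-head points within the set range.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_targets(P_m_3d: np.ndarray, points_r_3d: np.ndarray,
                extra_pts: np.ndarray) -> np.ndarray:
    """Fit a 3D thin-plate spline carrying P'_m onto points'_r and evaluate it."""
    tps = RBFInterpolator(P_m_3d, points_r_3d, kernel='thin_plate_spline')
    # Evaluating on the contour points plus the invisible back-of-head points
    # yields the target space contour points P'_tps for the first constraint.
    return tps(np.vstack([P_m_3d, extra_pts]))
```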
The embodiment also provides an electronic device which can use the 3D model hair style processing method, and the electronic device may include a server, a personal computer, a notebook computer, and other devices with image processing capability.
Referring to fig. 2, in a possible implementation manner, the electronic device 100 may communicate with a user terminal 200 through a network, and the electronic device 100 may obtain the reference image from the user terminal 200 and then adjust the to-be-processed 3D model according to the method steps shown in fig. 1.
In a possible implementation manner, the electronic device 100 may be a server of a live streaming platform, the user terminal 200 may be a user terminal of a viewer or an anchor, and the to-be-processed 3D model may be a 3D model for configuring an avatar of the viewer or the anchor.
Referring to fig. 3, fig. 3 is a block diagram of the electronic device 100. The electronic device 100 comprises a 3D model hair style processing apparatus 110, a machine readable storage medium 120, and a processor 130.
The elements of the machine-readable storage medium 120 and the processor 130 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The 3D model hair style processing device 110 includes at least one software function module which can be stored in the form of software or firmware (firmware) in the machine readable storage medium 120 or solidified in an Operating System (OS) of the electronic device 100. The processor 130 is used to execute executable modules stored in the machine-readable storage medium 120, such as software functional modules and computer programs included in the 3D model hair style processing device 110.
The machine-readable storage medium 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The machine-readable storage medium 120 is used for storing a program, and the processor 130, after receiving an execution instruction, executes the program to carry out the 3D model hairstyle processing method provided by this embodiment.
The processor 130 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 4, the embodiment further provides a 3D model hairstyle processing device 110, which includes at least one functional module capable of being stored in the machine-readable storage medium 120 in software form. Functionally, the 3D model hairstyle processing device 110 may include an image obtaining module 111, a semantic segmentation module 112, a point location matching module 113, a point location mapping module 114, a constraint determining module 115, and a model adjusting module 116.
The image obtaining module 111 is configured to obtain a reference image and a to-be-processed 3D model, and obtain a model plane rendering image of the to-be-processed 3D model at a preset view angle.
In this embodiment, the image obtaining module 111 may be configured to execute step S110 shown in fig. 1, and reference may be made to the description of step S110 for a detailed description of the image obtaining module 111.
The semantic segmentation module 112 is configured to input the model plane rendering image and the reference image into an image semantic segmentation model respectively, and obtain a first hair style mask image indicating a hair style region in the model plane rendering image and a second hair style mask image indicating a hair style region in the reference image.
In this embodiment, the semantic segmentation module 112 may be configured to execute step S120 shown in fig. 1, and the detailed description about the semantic segmentation module 112 may refer to the description about step S120.
The point location matching module 113 is configured to determine, through a dynamic programming algorithm, a first matching point pair that is matched with the first hair style mask image and the second hair style mask image, where the first matching point pair includes a first hair style plane contour point on the first hair style mask image and a second hair style plane contour point on the second hair style mask image.
In this embodiment, the point matching module 113 may be configured to execute step S130 shown in fig. 1, and the detailed description about the point matching module 113 may refer to the description about the step S130.
The point location mapping module 114 is configured to determine a first hair-style spatial contour point and a second hair-style spatial contour point, which correspond to the first hair-style planar contour point and the second hair-style planar contour point in the space where the to-be-processed 3D model is located.
In this embodiment, the point location mapping module 114 may be configured to execute step S140 shown in fig. 1, and the detailed description about the point location mapping module 114 may refer to the description about step S140.
The constraint determining module 115 is configured to determine a change constraint according to the first hair style plane contour point, the second hair style plane contour point, the first hair style space contour point, and a position relationship between points on the to-be-processed 3D model.
In this embodiment, the constraint determining module 115 may be configured to execute step S150 shown in fig. 1, and reference may be made to the description of step S150 for a detailed description of the constraint determining module 115.
The model adjusting module 116 is configured to adjust each point in the hairstyle part of the 3D model to be processed according to the change constraint condition.
In this embodiment, the model adjustment module 116 may be configured to execute step S160 shown in fig. 1, and the detailed description about the model adjustment module 116 may refer to the description about step S160.
In summary, according to the 3D model hairstyle processing method and device and the electronic device provided by the application, matching point pairs are determined on the hairstyle mask images of the reference image and of the model plane rendering image of the 3D model to be processed, and a change constraint condition is determined according to the relative position relationship of the matching point pairs to adjust the hairstyle part of the 3D model to be processed, so that the hairstyle area of the 3D model to be processed approaches the reference image. In this way, the hairstyle area of the 3D model to be processed can be adjusted automatically while the user only needs to provide a 2D reference image, meeting diversified generation requirements for the hairstyle areas of 3D models.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A 3D model hair style processing method, characterized in that the method comprises:
acquiring a reference image and a to-be-processed 3D model, and acquiring a model plane rendering image of the to-be-processed 3D model under a preset visual angle;
respectively inputting the model plane rendering image and the reference image into an image semantic segmentation model to obtain a first hair style mask image indicating a hair style area in the model plane rendering image and a second hair style mask image indicating the hair style area in the reference image;
determining, by a dynamic programming algorithm, first matched point pairs matched on the first hairstyle mask image and the second hairstyle mask image, the first matched point pairs including first hairstyle plane contour points on the first hairstyle mask image and second hairstyle plane contour points on the second hairstyle mask image;
determining a first hair style space contour point and a second hair style space contour point which correspond to the first hair style plane contour point and the second hair style plane contour point in the space where the 3D model to be processed is located;
determining a change constraint condition according to the first hair style plane contour point, the second hair style plane contour point, the first hair style space contour point and the position relation among points on the 3D model to be processed;
and adjusting each point in the hairstyle part of the 3D model to be processed according to the change constraint condition.
2. The method according to claim 1, wherein before the step of inputting the model plane rendering image and the reference image into the image semantic segmentation model respectively, the method further comprises:
and performing face key point alignment adjustment processing on the model plane rendering image and the reference image.
3. The method according to claim 1, wherein the step of acquiring the reference image and the 3D model to be processed comprises:
acquiring a reference image, and performing hairstyle type identification on the reference image;
and determining a corresponding 3D model to be processed according to the recognition result of the hair style class recognition.
4. The method according to claim 1, wherein the hairstyle portion of the 3D model to be processed comprises a plurality of patches; before the step of determining a change constraint according to the position relationship between the first hair style spatial contour point and the second hair style spatial contour point and the position relationship between points on the 3D model to be processed, the method further includes:
removing repetition points formed by overlapping of different patches in the hairstyle part of the 3D model to be processed;
and/or adding connection constraint patches among different patches in the hairstyle part of the 3D model to be processed.
5. The method of claim 1, wherein the change constraint comprises a first constraint and a second constraint; the step of determining a change constraint condition according to the position relationship between the first hair style space contour point and the second hair style space contour point and the position relationship between points on the to-be-processed 3D model includes:
determining a target space contour point after performing thin-plate spline interpolation change on the first hair-style space contour point according to the position relationship between the first hair-style plane contour point and the second hair-style plane contour point and the position relationship between the first hair-style space contour point and the second hair-style space contour point, and taking the first hair-style space contour point as close as possible to the target space contour point as a first constraint condition;
and according to the connection relation between each adjacent point in the 3D model to be processed, maximizing the number of points keeping the connection relation unchanged as a second constraint condition.
6. The method according to claim 5, wherein the 3D model to be treated comprises a head part and a hairstyle part; the constraints further comprise a third constraint; the step of determining a change constraint condition according to the position relationship between the first hair style space contour point and the second hair style space contour point and the position relationship between points on the to-be-processed 3D model further includes:
determining a plurality of points in the hairstyle part, which are within a preset range of distance from the head part, as fixing points to keep the positions of the fixing points unchanged as the third constraint condition.
7. The method according to claim 5, wherein the step of determining the target spatial contour point after performing the thin plate spline interpolation change on the first hairstyle spatial contour point according to the positional relationship between the first hairstyle planar contour point and the second hairstyle planar contour point and the positional relationship between the first hairstyle spatial contour point and the second hairstyle spatial contour point comprises:
determining a point which is invisible in the preset visual angle from the 3D model to be processed and is within a set range with the first hairstyle space contour point as a third hairstyle space contour point;
determining a first positional relationship between the first hairstyle plane contour point and the second hairstyle plane contour point, and determining a second positional relationship between the first hairstyle spatial contour point, a third hairstyle spatial contour point and the second hairstyle spatial contour point;
and determining a target space contour point after the thin-plate spline interpolation change is executed on the first hairstyle space contour point according to the first position relation and the second position relation.
8. A3D model hairstyle processing device, characterized in that, the 3D model hairstyle processing device comprises:
the image acquisition module is used for acquiring a reference image and a to-be-processed 3D model and acquiring a model plane rendering image of the to-be-processed 3D model under a preset visual angle;
a semantic segmentation module, configured to input the model plane rendering image and the reference image into an image semantic segmentation model respectively, and obtain a first hair style mask image indicating a hair style region in the model plane rendering image and a second hair style mask image indicating a hair style region in the reference image;
a point location matching module, configured to determine, through a dynamic programming algorithm, a first matching point pair that is matched on the first hair style mask image and the second hair style mask image, where the first matching point pair includes a first hair style plane contour point on the first hair style mask image and a second hair style plane contour point on the second hair style mask image;
the point location mapping module is used for determining a first hair-style space contour point and a second hair-style space contour point which correspond to the first hair-style plane contour point and the second hair-style plane contour point in the space where the 3D model to be processed is located;
a constraint determining module, configured to determine a change constraint condition according to a position relationship among the first hairstyle plane contour point, the second hairstyle plane contour point, the first hairstyle space contour point, and each point on the to-be-processed 3D model;
and the model adjusting module is used for adjusting each point in the hair style part of the 3D model to be processed according to the change constraint condition.
9. An electronic device comprising a processor and a machine-readable storage medium having stored thereon machine-executable instructions that, when executed by the processor, implement the method of any of claims 1-7.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when executed by one or more processors, perform the method of any one of claims 1-7.
CN202211605021.7A 2022-12-14 2022-12-14 3D model hairstyle processing method and device and electronic equipment Pending CN115953561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211605021.7A CN115953561A (en) 2022-12-14 2022-12-14 3D model hairstyle processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211605021.7A CN115953561A (en) 2022-12-14 2022-12-14 3D model hairstyle processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115953561A true CN115953561A (en) 2023-04-11

Family

ID=87285288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211605021.7A Pending CN115953561A (en) 2022-12-14 2022-12-14 3D model hairstyle processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115953561A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117793479A (en) * 2023-12-26 2024-03-29 北京中科大洋科技发展股份有限公司 Rapid generation method of smooth transition video mask
CN117793479B (en) * 2023-12-26 2024-05-14 北京中科大洋科技发展股份有限公司 Rapid generation method of smooth transition video mask

Similar Documents

Publication Publication Date Title
CN111126125B (en) Method, device, equipment and readable storage medium for extracting target text in certificate
US9679192B2 (en) 3-dimensional portrait reconstruction from a single photo
CN110675487B (en) Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face
CN111428579A (en) Face image acquisition method and system
EP3074945B1 (en) Content-aware image rotation
US10957062B2 (en) Structure depth-aware weighting in bundle adjustment
CN110023989B (en) Sketch image generation method and device
US20130236068A1 (en) Calculating facial image similarity
CN107944324A (en) A kind of Quick Response Code distortion correction method and device
CN112966725B (en) Method and device for matching template images and terminal equipment
CN112085033A (en) Template matching method and device, electronic equipment and storage medium
CN114758093A (en) Three-dimensional model generation method, device, equipment and medium based on image sequence
CN115953561A (en) 3D model hairstyle processing method and device and electronic equipment
CN110930503A (en) Method and system for establishing three-dimensional model of clothing, storage medium and electronic equipment
CN115239861A (en) Face data enhancement method and device, computer equipment and storage medium
Yan et al. Flower reconstruction from a single photo
Yung et al. Efficient feature-based image registration by mapping sparsified surfaces
CN110473281B (en) Method and device for processing edges of three-dimensional model, processor and terminal
Giachetti Effective characterization of relief patterns
CN113570725A (en) Three-dimensional surface reconstruction method and device based on clustering, server and storage medium
CN110660091A (en) Image registration processing method and device and photographing correction operation system
JP2013196225A (en) Program, information processing method and information processor
CN115330803A (en) Surface defect data enhancement method and device, electronic equipment and storage medium
CN114723973A (en) Image feature matching method and device for large-scale change robustness
CN112232143B (en) Face point cloud optimization method and device, machine readable medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination